Monday, August 19, 2024

Kubernetes 1.31 || Testing the Image Volume mount feature using Minikube

With the new Kubernetes 1.31 release (https://kubernetes.io/blog/2024/08/13/kubernetes-v1-31-release/) there are many new features landing in alpha, beta, and stable state.



One of the new features that can be really useful in the world of AI and other use cases is "Support for image volumes". 

I am not going to focus on the different use cases for now; instead, I tried testing this feature locally using Minikube.

Env Setup 

  • Mac (14.1.2)
  • Minikube (v1.33.1)
    • Driver - docker
    • Container Runtime - cri-o v1.31 (a hard requirement for now)

Minikube Setup

$minikube start --feature-gates=ImageVolume=true --driver=docker     --nodes 1     --cni calico     --cpus=2     --memory=4g     --kubernetes-version=v1.31.0     --container-runtime=cri-o     --profile crio

😄  [crio] minikube v1.33.1 on Darwin 14.1.2 (arm64)
    ▪ KUBECONFIG=/Users/kulsharm2/OSB/aks/staaks
❗  Specified Kubernetes version 1.31.0 is newer than the newest supported version: v1.30.0. Use `minikube config defaults kubernetes-version` for details.
❗  Specified Kubernetes version 1.31.0 not found in Kubernetes version list
🤔  Searching the internet for Kubernetes version...
✅  Kubernetes version 1.31.0 found in GitHub version list
✨  Using the docker driver based on user configuration
📌  Using Docker Desktop driver with root privileges
👍  Starting "crio" primary control-plane node in "crio" cluster
🚜  Pulling base image v0.0.44 ...
🔥  Creating docker container (CPUs=2, Memory=2048MB) ...
🎁  Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
❌  Unable to load cached images: loading cached images: stat /Users/kulsharm2/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1: no such file or directory
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring Calico (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "crio" cluster and "default" namespace by default

Check the Kubernetes and container runtime versions. You will notice that Kubernetes is v1.31.0, which is what we need, but the cri-o version is still v1.24.6:

$ kubectl get node -o wide

NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME

crio   Ready    control-plane   15m   v1.31.0   192.168.49.2   <none>        Ubuntu 22.04.4 LTS   6.6.41-0-virt    cri-o://1.24.6
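
Optionally, you can sanity-check that the ImageVolume feature gate was actually propagated to the control plane. This is just a quick sketch; the pod name follows the usual kube-apiserver-<node-name> convention and may differ in your setup.

$kubectl -n kube-system get pod kube-apiserver-crio -o yaml | grep feature-gates
# you should see --feature-gates=ImageVolume=true among the container args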


Now, if you try to test the image volume mount feature, it is going to fail as below:

Test the Image Volume 

Pod Manifest

$cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume
spec:
  containers:
  - name: shell
    command: ["sleep", "100m"]
    image: quay.io/crio/alpine:3.9
    volumeMounts:
    - name: volume
      mountPath: /volume
  volumes:
  - name: volume
    image:
      reference: quay.io/crio/artifact:v1
      pullPolicy: IfNotPresent

$kubectl get po

NAME           READY   STATUS                 RESTARTS   AGE
image-volume   0/1     CreateContainerError   0          72s

If you describe the pod, you will see the error below:
    Warning  Failed     2s (x2 over 15s)   kubelet            Error: mount.HostPath is empty

Now, since Minikube doesn't provide an out-of-the-box option to install a specific container runtime version, I am going to do a bit of manual setup to get cri-o v1.31. You can go through https://github.com/cri-o/cri-o/blob/main/install.md for more details.

Build the cri-o runtime from source

  • SSH to the Minikube node
    • $minikube ssh -p crio (Profile I am using)
    • Check for the installed crio version
      • root@k8-1:~# crio version
        • INFO[2024-08-18 18:10:59.678939615Z] Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)
        • Version:          1.24.6
        • GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
        • GitTreeState:     clean
        • BuildDate:        2023-06-14T14:44:50Z
        • GoVersion:        go1.18.2
        • Compiler:         gc
        • Platform:         linux/arm64
        • Linkmode:         dynamic
        • BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
        • SeccompEnabled:   true
        • AppArmorEnabled:  false
    • Install Go 1.23, which is required (go1.23.0.linux-arm64.tar.gz for this arm64 node)
      • root@k8-1:~# go version
      • -bash: go: command not found
      • root@k8-1:~# export GOPATH=/root/go/bin/
      • root@k8-1:~# export GOROOT=/root/go/
      • root@k8-1:~# export PATH=$PATH:$GOPATH
      • root@k8-1:~# go version
        • go version go1.23.0 linux/arm64
    • Install all the required dependencies
      • $apt-get update -qq && apt-get install -y \
      •   libbtrfs-dev \
      •   containers-common \
      •   git \
      •   libassuan-dev \
      •   libglib2.0-dev \
      •   libc6-dev \
      •   libgpgme-dev \
      •   libgpg-error-dev \
      •   libseccomp-dev \
      •   libsystemd-dev \
      •   libselinux1-dev \
      •   pkg-config \
      •   go-md2man \
      •   cri-o-runc \
      •   libudev-dev \
      •   software-properties-common \
      •   gcc \
      •   make
    • Clone the cri-o source code
      • $git clone https://github.com/cri-o/cri-o (if you hit SSL issues, you can skip verification with git -c http.sslVerify=false clone ...)
      • $cd cri-o
      • $make
      • $make install
    • Verify the cri-o version
      • docker@k8-1:~/cri-o$ crio version
        • INFO[2024-08-18 19:33:20.276411929Z] Updating config from single file: /etc/crio/crio.conf
        • INFO[2024-08-18 19:33:20.276446721Z] Updating config from drop-in file: /etc/crio/crio.conf
        • INFO[2024-08-18 19:33:20.276917513Z] Updating config from path: /etc/crio/crio.conf.d
        • INFO[2024-08-18 19:33:20.277628471Z] Updating config from drop-in file: /etc/crio/crio.conf.d/01-crio-runc.conf
        • INFO[2024-08-18 19:33:20.278858429Z] Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf
        • INFO[2024-08-18 19:33:20.279481513Z] Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf
        • Version:        1.31.0
        • GitCommit:      67290a12649b37b45c3d4de343dbda7668308afb
        • GitCommitDate:  2024-08-16T14:35:55Z
        • GitTreeState:   clean
        • BuildDate:      2024-08-18T19:32:19Z
        • GoVersion:      go1.23.0
        • Compiler:       gc
        • Platform:       linux/arm64
        • Linkmode:       dynamic
        • BuildTags:
        •   containers_image_ostree_stub
        •   containers_image_openpgp
        •   seccomp
        •   selinux
        •   exclude_graphdriver_devicemapper
        • LDFlags:          unknown
        • SeccompEnabled:   true
        • AppArmorEnabled:  false
    • Restart Minikube for the changes to take effect
      • $minikube stop --profile crio
        • ✋  Stopping node "crio"  ...
        • 🛑  Powering off "crio" via SSH ...
        • 🛑  1 node stopped.
      • $ minikube start -p crio
        • 😄  [crio] minikube v1.33.1 on Darwin 14.1.2 (arm64)
        • ❗  Specified Kubernetes version 1.31.0 is newer than the newest supported version: v1.30.0. Use `minikube config defaults kubernetes-version` for details.
        • ❗  Specified Kubernetes version 1.31.0 not found in Kubernetes version list
        • 🤔  Searching the internet for Kubernetes version...
        • ✅  Kubernetes version 1.31.0 found in GitHub version list
        • ✨  Using the docker driver based on existing profile
        • 👍  Starting "crio" primary control-plane node in "crio" cluster
        • 🚜  Pulling base image v0.0.44 ...
        • 🏃  Updating the running docker "crio" container ...
        • 🎁  Preparing Kubernetes v1.31.0 on CRI-O 1.31.0 ...
        • ❌  Unable to load cached images: loading cached images: stat /Users/kulsharm2/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9: no such file or directory
        • 🔎  Verifying Kubernetes components...
        •     ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
        • 🌟  Enabled addons: storage-provisioner, default-storageclass
        • 🏄  Done! kubectl is now configured to use "crio" cluster and "default" namespace by default
      • Verify the Kubernetes version and Container runtime version
        • $ kubectl get node -o wide
        • NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
        • crio   Ready    control-plane   15m   v1.31.0   192.168.49.2   <none>        Ubuntu 22.04.4 LTS   6.6.41-0-virt    cri-o://1.31.0

Test the Image Volume

pod manifest 

$cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume
spec:
  containers:
  - name: shell
    command: ["sleep", "infinity"]
    image: quay.io/crio/alpine:3.9
    volumeMounts:
    - name: volume
      mountPath: /volume
  volumes:
  - name: volume
    image:
      reference: quay.io/crio/artifact:v1
      pullPolicy: IfNotPresent

$kubectl apply -f pod.yaml
pod/image-volume created

 Check the pod status

$kubectl get po
NAME           READY   STATUS    RESTARTS   AGE
image-volume   1/1     Running   0          15s

 Verify the volume attached to the container

$kubectl exec -it image-volume -- df -h /volume
Filesystem                Size      Used Available Use% Mounted on
overlay                  97.9G     30.8G     62.1G  33% /volume
$kubectl exec -it image-volume -- ls -l /volume
total 8
drwxr-xr-x    2 1000     users         4096 Jun 18 09:02 dir
-rw-r--r--    1 1000     users            2 Jun 18 09:02 file
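
You can also read the artifact content directly from the mount. Note that image volumes are mounted read-only, so a write attempt from the container should fail (a quick sketch):

$kubectl exec -it image-volume -- cat /volume/file
$kubectl exec -it image-volume -- touch /volume/test   # expected to fail: read-only file system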

 


Wednesday, April 1, 2020

Integrate Jenkins with Azure Key Vault


Jenkins has been one of the most used CI/CD tools. For every tool we use in our daily work, handling secret information becomes a real challenge. I know there are lots of tools available, offered as PaaS or as in-house hosted solutions, but we need those tools to integrate with different toolsets without much effort. 

In this particular blog, we will be discussing the integration of Jenkins with Azure Key Vault. Thanks to everyone who keeps working in the different communities and spends time making products more flexible and enhancing their capabilities.


We are going to use the Azure Key Vault plugin for this. There are multiple ways to use it, but in this post we'll go through the integration and then test it using a declarative pipeline.

Pre-Requisites-

  • Make sure you have a running Jenkins setup
  • You have a valid Azure subscription
Implementation Steps-

     1. Create an Azure Key Vault using the below steps:


kulsharm2@WKMIN5257929:~$ ⚙️  $az login
You have logged in. Now let us find all the subscriptions to which you have access...
[
  {
    "cloudName": "AzureCloud",
    "id": "dd019fb5-db8a-4e4f-96ec-fc8decd2db8b",
    "isDefault": true,
    "name": "<>",
    "state": "Enabled",
    "tenantId": "d52c9ea1-7c21-47b1-82a3-33a74b1f74b8",
    "user": {
      "name": "<>",
      "type": "user"
    }
  }
]



kulsharm2@WKMIN5257929:~$ ⚙️  $az ad sp create-for-rbac --name http://local-jenkins
Found an existing application instance of "7e575c9b-b902-4510-8a06-8cbe1639aba3". We will patch it
Creating a role assignment under the scope of "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b"
  Role assignment already exits.

{
  "appId": "7e575c9b-b902-4510-8a06-8cbe1639aba3",
  "displayName": "local-jenkins",
  "name": "http://local-jenkins",
  "password": "e7157115-6e35-46f9-a811-c856ba9bb5c0",
  "tenant": "d52c9ea1-7c21-47b1-82a3-33a74b1f74b8"
}
kulsharm2@WKMIN5257929:~$ ⚙️  $RESOURCE_GROUP_NAME=my-resource-group
kulsharm2@WKMIN5257929:~$ ⚙️  $az group create  --name $RESOURCE_GROUP_NAME -l "East US"
{
  "id": "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b/resourceGroups/my-resource-group",
  "location": "eastus",
  "managedBy": null,
  "name": "my-resource-group",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
kulsharm2@WKMIN5257929:~$ ⚙️  $az group show --name $RESOURCE_GROUP_NAME -o table
Location    Name
----------  -----------------
eastus      my-resource-group

kulsharm2@WKMIN5257929:~$ ⚙️  $VAULT=jenkins-local
kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault create --resource-group $RESOURCE_GROUP_NAME --name $VAULT
{
  "id": "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b/resourceGroups/my-resource-group/providers/Microsoft.KeyVault/vaults/jenkins-local",
  "location": "eastus",
  "name": "jenkins-local",
  "properties": {
    "accessPolicies": [
      {
        "applicationId": null,
        "objectId": "fd5bcd48-13d1-40c5-98a3-d46442c5194e",
        "permissions": {
          "certificates": [
  .          
  .       
  <>

kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault list -o table
Location    Name           ResourceGroup
----------  -------------  -----------------
eastus      jenkins-local  my-resource-group
kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault set-policy --resource-group $RESOURCE_GROUP_NAME --name $VAULT    --secret-permissions get list --spn http://local-jenkins
{
  "id": "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b/resourceGroups/my-resource-group/providers/Microsoft.KeyVault/vaults/jenkins-local",
  "location": "eastus",
  "name": "jenkins-local",
  "properties": {
    "accessPolicies": [
      {
        "applicationId": null,
        "objectId": "fd5bcd48-13d1-40c5-98a3-d46442c5194e",
        "permissions": {
          "certificates": [
            "get",
            "list",
            "delete",
            "create",
            "import",
            "update",
            "managecontacts",
            "getissuers",
            "listissuers",
            "setissuers",
            "deleteissuers",
            "manageissuers",
 <>
      2. Create one secret in the Azure Key Vault :

kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault secret set --vault-name $VAULT --name secret-key --value my-super-secret
{
  "attributes": {
    "created": "2020-04-01T05:18:37+00:00",
    "enabled": true,
    "expires": null,
    "notBefore": null,
    "recoveryLevel": "Purgeable",
    "updated": "2020-04-01T05:18:37+00:00"
  },
  "contentType": null,
  "id": "https://jenkins-local.vault.azure.net/secrets/secret-key/85a36fe61ba34f53b60217c5e08f1774",
  "kid": null,
  "managed": null,
  "tags": {
    "file-encoding": "utf-8"
  },
  "value": "my-super-secret"
}
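
As an optional sanity check, you can read the secret back using the Azure CLI before wiring it into Jenkins:

$az keyvault secret show --vault-name $VAULT --name secret-key --query value -o tsv
# should print: my-super-secret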

      3. Let's make changes on the Jenkins side to complete the integration:
          1. Install the Azure Key Vault plugin.
          2. Add the Azure Key Vault URL to the Jenkins configuration under "Manage Jenkins --> Configure System".
          3. Add credentials by going through "Credentials --> System --> Global Credentials (unrestricted)".
          4. Create the new credential.
          5. Now, let's create a pipeline and try to fetch the secret we stored in AKV:


*** Pipeline Code ***
pipeline {
  agent any
  environment {
    SECRET_KEY = credentials('secret-key')
  }
  stages {
    stage('Foo') {
      steps {
        echo SECRET_KEY
        echo SECRET_KEY.substring(0, SECRET_KEY.size() -1) // shows the right secret was loaded, don't do this for real secrets unless you're debugging 
      }
    }
  }
}






Happy Learning!!

Sunday, March 22, 2020

How to handle packaging in python using __init__.py


Keeping in mind the current situation across the world, I hope everyone is doing well. Please take precautions, stay at home, and keep yourself busy in whatever way you like.

I was reading the book "Python for DevOps" and came across the topic "Packaging". In every business, packaging plays a big role when it comes to product distribution. 

When it comes to IT software, below are a few things that should be taken care of:

  • Descriptive Versioning 
    • In Python packages, the following two variants are used:
      • major.minor
      • major.minor.micro
    • major - for backward-incompatible changes
    • minor - adds features that are also backward compatible
    • micro - adds backward-compatible bug fixes.

  • The Changelog
    • This is a simple file that keeps track of all the changes we will be doing for each version upgrade.

I am not going into detail here; let's come directly to the implementation of how we can handle packaging in Python using the "__init__.py" file. 

The tool used here for packaging is the "setuptools" Python module.
Now we'll create a Python virtual environment and install "setuptools" there as below - 

$ python3 -m venv /tmp/packaging
$ source /tmp/packaging/bin/activate
$ pip3 install setuptools

Tip - you can cross-check the list of installed modules using pip3, as below.
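
For reference, a quick sketch of what that looks like inside the fresh virtualenv (the versions shown here are taken from the listing later in this post):

$ pip3 list --format=columns
Package    Version
---------- -------
pip        20.0.2
setuptools 41.2.0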

Now, let's see the code. I have a simple hello-world example, as below:

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $tree .
.
├── README
├── hello_world
│   ├── __init__.py
│   ├── hello_python.py
│   └── hello_world.py
└── setup.py

1 directory, 5 files
Note -
  • README - simple instructions
  • hello_world (directory) - the module name
  • __init__.py - organizes the modules kept in the directory
  • hello_*.py - two different modules with different functionality
  • setup.py - required by "setuptools" to build a package.

The source code is available on my GitHub page as well.
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_world.py 
def helloworld():
    return "HELLO WORLD"
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_python.py 
def hellopython():
    return "HELLO PYTHON3" 
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/__init__.py 
from .hello_python import hellopython
from .hello_world import helloworld
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat setup.py 
from setuptools import setup, find_packages

setup(
    name="hello_example",
    version="0.0.1",
    author="Example Author",
    author_email="author@example.com",
    url="example.com",
    description="A hello-world example package",
    packages=find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
)
Now, let's package our hello-world program:
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $python3 setup.py sdist
running sdist
running egg_info
creating hello_example.egg-info
writing hello_example.egg-info/PKG-INFO
writing dependency_links to hello_example.egg-info/dependency_links.txt
writing top-level names to hello_example.egg-info/top_level.txt
writing manifest file 'hello_example.egg-info/SOURCES.txt'
reading manifest file 'hello_example.egg-info/SOURCES.txt'
writing manifest file 'hello_example.egg-info/SOURCES.txt'
running check
creating hello_example-0.0.1
creating hello_example-0.0.1/hello_example.egg-info
creating hello_example-0.0.1/hello_world
copying files to hello_example-0.0.1...
copying README -> hello_example-0.0.1
copying setup.py -> hello_example-0.0.1
copying hello_example.egg-info/PKG-INFO -> hello_example-0.0.1/hello_example.egg-info
copying hello_example.egg-info/SOURCES.txt -> hello_example-0.0.1/hello_example.egg-info
copying hello_example.egg-info/dependency_links.txt -> hello_example-0.0.1/hello_example.egg-info
copying hello_example.egg-info/top_level.txt -> hello_example-0.0.1/hello_example.egg-info
copying hello_world/__init__.py -> hello_example-0.0.1/hello_world
copying hello_world/hello_python.py -> hello_example-0.0.1/hello_world
copying hello_world/hello_world.py -> hello_example-0.0.1/hello_world
Writing hello_example-0.0.1/setup.cfg
creating dist
Creating tar archive
removing 'hello_example-0.0.1' (and everything under it)

After this, you will see that the above command has created some additional folders, and our packaged module has been stored in the "dist" folder.

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $tree .
.
├── README
├── dist
│   └── hello_example-0.0.1.tar.gz
├── hello_example.egg-info
│   ├── PKG-INFO
│   ├── SOURCES.txt
│   ├── dependency_links.txt
│   └── top_level.txt
├── hello_world
│   ├── __init__.py
│   ├── hello_python.py
│   └── hello_world.py
└── setup.py

3 directories, 10 files
Now let's install the package and list the installed modules using "pip3".


(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $pip3 install dist/hello_example-0.0.1.tar.gz 
Processing ./dist/hello_example-0.0.1.tar.gz
Installing collected packages: hello-example
    Running setup.py install for hello-example ... done
Successfully installed hello-example-0.0.1
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ pip3 list --format=columns
Package       Version
------------- -------
hello-example 0.0.1  
pip           20.0.2 
setuptools    41.2.0 
Now that the module has been installed, let's test it first using the "ipython" console and then using a Python program.

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ipython3
/usr/local/lib/python3.7/site-packages/IPython/core/interactiveshell.py:931: UserWarning: Attempting to work in a virtualenv. If you encounter problems, please install IPython inside the virtualenv.
  warn("Attempting to work in a virtualenv. If you encounter problems, please "
Python 3.7.7 (default, Mar 10 2020, 15:43:03) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import hello_world as hw                                                                                       

In [2]: hw.hellopython()                                                                                               
Out[2]: 'HELLO PYTHON3'

In [3]: hw.helloworld()                                                                                                
Out[3]: 'HELLO WORLD'

In [4]:
Here you can see that I can call the functions using my custom module "hello_world". 

Using this module in the program-

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat example.py 
import hello_world as hw

print("Calling Hello Python Function: "+hw.hellopython())
print("Calling Hello World Function: "+hw.helloworld())
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $python3 example.py 
Calling Hello Python Function: HELLO PYTHON3
Calling Hello World Function: HELLO WORLD
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $

Now, suppose you want to upgrade to version "0.0.2"; we will follow the steps below -

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_python.py 
def hellopython():
    return "HELLO PYTHON3 with version **0.0.2**"
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_world.py 
def helloworld():
    return "HELLO WORLD with version ** 0.0.2 **"
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat setup.py 
from setuptools import setup, find_packages

setup(
    name="hello_example",
    version="0.0.2",
    author="Example Author",
    author_email="author@example.com",
    url="example.com",
    description="A hello-world example package",
    packages=find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
)
Package it again - 
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ python3 setup.py sdist
running sdist
running egg_info
writing hello_example.egg-info/PKG-INFO
writing dependency_links to hello_example.egg-info/dependency_links.txt
writing top-level names to hello_example.egg-info/top_level.txt
reading manifest file 'hello_example.egg-info/SOURCES.txt'
writing manifest file 'hello_example.egg-info/SOURCES.txt'
running check
creating hello_example-0.0.2
creating hello_example-0.0.2/hello_example.egg-info
creating hello_example-0.0.2/hello_world
copying files to hello_example-0.0.2...
copying README -> hello_example-0.0.2
copying setup.py -> hello_example-0.0.2
copying hello_example.egg-info/PKG-INFO -> hello_example-0.0.2/hello_example.egg-info
copying hello_example.egg-info/SOURCES.txt -> hello_example-0.0.2/hello_example.egg-info
copying hello_example.egg-info/dependency_links.txt -> hello_example-0.0.2/hello_example.egg-info
copying hello_example.egg-info/top_level.txt -> hello_example-0.0.2/hello_example.egg-info
copying hello_world/__init__.py -> hello_example-0.0.2/hello_world
copying hello_world/hello_python.py -> hello_example-0.0.2/hello_world
copying hello_world/hello_world.py -> hello_example-0.0.2/hello_world
Writing hello_example-0.0.2/setup.cfg
Creating tar archive
removing 'hello_example-0.0.2' (and everything under it)
Install new version -

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $pip3 install dist/hello_example-0.0.2.tar.gz 
Processing ./dist/hello_example-0.0.2.tar.gz
Installing collected packages: hello-example
  Attempting uninstall: hello-example
    Found existing installation: hello-example 0.0.1
    Uninstalling hello-example-0.0.1:
      Successfully uninstalled hello-example-0.0.1
    Running setup.py install for hello-example ... done
Successfully installed hello-example-0.0.2
Verify the installation - 
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ pip3 list --format=columns
Package       Version
------------- -------
hello-example 0.0.2  
pip           20.0.2 
setuptools    41.2.0 
Test again using the Python program -

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $python3 example.py 
Calling Hello Python Function: HELLO PYTHON3 with version **0.0.2**
Calling Hello World Function: HELLO WORLD with version ** 0.0.2 **
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $


Wednesday, November 27, 2019

Deploy and Scale Kubernetes Application using Spinnaker



In my last post, Getting started with Spinnaker, we completed the installation and setup of Spinnaker. In this post, I'll be going through deploying and scaling an application on Kubernetes using Spinnaker.

In this particular exercise, we'll create a simple "nginx" deployment on Kubernetes and expose it as a service. After that, we'll see how we can easily scale the deployment up and down from the Spinnaker dashboard itself.

For this, first make sure we have done the port-forwarding for the required pods and are able to access the Spinnaker dashboard. 

Note - Before moving ahead, please make sure that the "kubernetes" provider is enabled. You can check this in the "Halyard" configuration as below.

$ kubectl exec -it  spinnaker-local-spinnake-halyard-0 /bin/bash -n spinnaker
$ hal config list | grep -A 37 kubernetes
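
If the provider happens to be disabled, it can be enabled from the same Halyard pod; a minimal sketch (the config change only takes effect after re-applying the deployment):

$ hal config provider kubernetes enable
$ hal deploy apply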




After that, click on "Create Application" in the Applications tab to open the popup below.



After filling in the required information and hitting the Create button, you'll land on the screen below.

There are a few other terms which you can check in the docs, like "Clusters", "Load Balancers", "Server Groups", etc.

Here, we'll create a Server Group, which will basically contain our deployment manifest (below).

---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells the deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
  selector:
    app: nginx
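
Before pasting the manifest into Spinnaker, you can optionally validate it locally with kubectl; a quick sketch, assuming you saved the above YAML as nginx.yaml:

$ kubectl apply --dry-run -f nginx.yaml   # on newer kubectl versions use --dry-run=client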

Once you click on "Create Server Group", You'll see below screen where we need to paste above yaml and hit create.



Below will be the end state, if everything goes well.


Now, let's inspect our deployment. 
On the Clusters tab, we can see the deployment and the number of replicas (pods) available in it, as below -


Please verify the same from the CLI using kubectl -
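
A sketch of those commands (the names match the manifest above):

$ kubectl get deployment nginx-deployment
$ kubectl get pods -l app=nginx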


Check for the service in the Load Balancers section -

Verify the same using the CLI -
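
For example:

$ kubectl get svc nginx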


Access the service -
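
Since the service is exposed as a NodePort (30000), it can also be reached directly via the node IP; for example, on minikube:

$ curl http://$(minikube ip):30000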




Scale up the Deployment from 1 to 4 pods and verify the results-
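
After scaling from the Spinnaker UI, the same can be confirmed from the CLI; the deployment should now report 4 replicas:

$ kubectl get deployment nginx-deployment
$ kubectl get pods -l app=nginx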






So, this was a simple how-to for managing K8s manifests. In the next post, I will try to explore the integration of Jenkins with Spinnaker and auto-triggering Spinnaker deployments based on Jenkins events.



Tuesday, November 26, 2019

Getting Started with Spinnaker locally using minikube(local Kubernetes)

Before jumping to the installation and setup part, let's first briefly summarize what Spinnaker is.



Spinnaker : 

            Spinnaker is an open-source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.

          I am not going into much detail about the functionality here, but I would like to highlight the main architectural components, which I think we should know at least before starting to play with this. This will help you with troubleshooting if you get stuck in between.

So, Spinnaker is composed of multiple components. You will be able to see all of these after we complete the setup. The list of the different components is below (currently just copy-pasting from the official site) -
  1. Deck is the browser-based UI.
  2. Gate is the API gateway.
    The Spinnaker UI and all api callers communicate with Spinnaker via Gate.
  3. Orca is the orchestration engine. It handles all ad-hoc operations and pipelines. Read more on the Orca Service Overview.
  4. Clouddriver is responsible for all mutating calls to the cloud providers and for indexing/caching all deployed resources.
  5. Front50 is used to persist the metadata of applications, pipelines, projects and notifications.
  6. Rosco is the bakery. It produces immutable VM images (or image templates) for various cloud providers.
    It is used to produce machine images (for example GCE images, AWS AMIs, Azure VM images). It currently wraps Packer, but will be expanded to support additional mechanisms for producing images.
  7. Igor is used to trigger pipelines via continuous integration jobs in systems like Jenkins and Travis CI, and it allows Jenkins/Travis stages to be used in pipelines.
  8. Echo is Spinnaker’s eventing bus.
    It supports sending notifications (e.g. Slack, email, SMS), and acts on incoming webhooks from services like Github.
  9. Fiat is Spinnaker’s authorization service. 
    It is used to query a user’s access permissions for accounts, applications and service accounts.
  10. Kayenta provides automated canary analysis for Spinnaker.
  11. Halyard is Spinnaker’s configuration service.
Halyard manages the lifecycle of each of the above services. It only interacts with these services during Spinnaker startup, updates, and rollbacks.

Note - In our setup "Fiat and Kayenta" will not be present as this is not available in the helm chart that we have installed on minikube.
Along with the architecture, I guess we should know the port mappings as well.


Minikube - 

        Minikube provides a way to set up Kubernetes locally for development purposes. I am not going into details about the installation; please go through my previous blog post if you want to install Minikube.

After installation, let's start the Minikube cluster. I am starting with a custom configuration so that it is able to handle the load; a sketch of the command is below.
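
A sketch of the kind of start command I mean; the exact CPU/memory values are up to you, but Spinnaker needs a reasonably large node:

$ minikube start --cpus 4 --memory 8192 --disk-size 40g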



Other Tools -

     Apart from Minikube, below are the other tools that we need; I am assuming these are already installed.
  1. helm
  2. kubectl



Install Spinnaker -

        Now we have Minikube running with Helm installed, and we are ready to install Spinnaker. We will install Spinnaker using its Helm chart.
Helm is a templating engine for Kubernetes deployments; we need to provide values to those templates. To start with, the chart ships a default set of values, which we are going to use.


Download the default values file from the above Helm repo.
$ curl -Lo values.yaml https://raw.githubusercontent.com/kubernetes/charts/master/stable/spinnaker/values.yaml



Now, let's install Spinnaker to the Kubernetes cluster.
$ helm install -n spinnaker-local stable/spinnaker -f values.yaml --timeout 300   --namespace spinnaker


Tip - In case you get a timed-out exception on the first run (like below), please delete the Helm installation using "helm del --purge {release-name}" and re-run the same command.


After successful installation, check for the pods in the "spinnaker" namespace; all should be in a Running state. 
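
For example:

$ kubectl get pods -n spinnaker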



P.S. - Please ignore the hal pod status in the above output; it takes some time to start :).

To access the Spinnaker UI, we do a port-forward for two pods (commands below). As per the architecture, these two components are responsible for the following:
  • The first one is "deck", which provides the UI dashboard.
  • The second one is "gate", which is responsible for accessing the APIs.
#export DECK_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
#export GATE_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")
#echo $DECK_POD
#echo $GATE_POD
#alias ui='kubectl port-forward --namespace spinnaker $DECK_POD 9000'
#alias api='kubectl port-forward --namespace spinnaker     $GATE_POD 8084'
#ui & api &




Access the Spinnaker Dashboard




In the next post, we'll try to create pipelines which will deploy entities on Kubernetes. Also, in later posts we'll explore more of the integration with different providers, e.g. Jenkins and cloud vendors.

