Friday, May 30, 2025

Quickstart Guide for Kagent Setup with Local LM and Azure OpenAI

 


LM Studio overview, running on the local system and exposing an OpenAI-compatible endpoint (used as the baseUrl in the ModelConfig below).


To upgrade and install the kagent custom resource definitions (CRDs), you can execute the following command in your terminal:

> helm upgrade --install kagent-crds oci://ghcr.io/kagent-dev/kagent/helm/kagent-crds \
--namespace kagent --create-namespace


Next, to install kagent itself, run the following command:

> helm upgrade --install kagent oci://ghcr.io/kagent-dev/kagent/helm/kagent --namespace kagent


To verify that the installation was successful, you can check the model configurations by executing the following command:

> kubectl get mc -n kagent

To get more detailed information about the model configuration, you can run the following command:

> kubectl get mc -n kagent -oyaml

This will return output similar to the following:

apiVersion: v1  
items:
  - apiVersion: kagent.dev/v1alpha1
    kind: ModelConfig  
    metadata:  
      annotations:  
        meta.helm.sh/release-name: kagent  
        meta.helm.sh/release-namespace: kagent  
      creationTimestamp: "2025-05-29T15:08:00Z"  
      generation: 8  
      labels:  
        app.kubernetes.io/instance: kagent  
        app.kubernetes.io/managed-by: Helm  
        app.kubernetes.io/name: kagent  
        app.kubernetes.io/version: 0.3.11  
        helm.sh/chart: kagent-0.3.11  
      name: default-model-config  
      namespace: kagent  
      resourceVersion: "26472"  
      uid: a2c8db4e-88a5-4ecd-8e1b-73dba9c93ac2  
    spec:  
      apiKeySecretKey: OPENAI_API_KEY  
      apiKeySecretRef: kagent-openai  
      model: gemma-3-1b-it-qat  
      modelInfo:  
        family: unknown  
        functionCalling: true  
      openAI:  
        baseUrl: http://192.168.1.33:1234/v1  
      provider: OpenAI


Once you have verified the configurations, you can proceed to browse the dashboard.
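If the UI is not already exposed, a quick way to reach it is a port-forward. The service name and port below are assumptions (the chart may name them differently); check with kubectl get svc -n kagent first:

> kubectl get svc -n kagent
> kubectl port-forward svc/kagent-ui 8080:80 -n kagent    # assumed service name; then open http://localhost:8080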


Here, you will see the default dashboard that comes with some built-in agents, providing you with a quick overview of the system's status and functionality.


If you wish to create a new agent, you can do so using the interface provided in the dashboard.


Try out the new agent and, in parallel, watch the logs to confirm that requests are reaching the local LLM setup.
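A quick way to tail the logs on the cluster side (the deployment name is an assumption; list what the chart created first). On the LM Studio side, the local server log shows each incoming request against the /v1 endpoint:

> kubectl get deploy -n kagent
> kubectl logs -n kagent deploy/kagent -f    # adjust the deployment name to whatever the chart created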



Now, let's transition from the local setup to Azure OpenAI, which is a significant step forward in our development process. To make this transition work, we need to update the model configuration so that it points at an Azure OpenAI endpoint and deployment.
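As a sketch, the updated ModelConfig could look roughly like the following. The azureOpenAI field names and values here are assumptions, not the verified schema; run kubectl explain modelconfig.spec (or check the kagent docs) for the exact fields, and store the key in a secret just like the OpenAI one above:

apiVersion: kagent.dev/v1alpha1
kind: ModelConfig
metadata:
  name: azure-openai-config
  namespace: kagent
spec:
  provider: AzureOpenAI                      # assumed provider value
  model: gpt-4o                              # the model behind your Azure deployment
  apiKeySecretRef: kagent-azure-openai       # secret created with the Azure OpenAI key
  apiKeySecretKey: AZURE_OPENAI_API_KEY
  azureOpenAI:                               # assumed block, mirroring the openAI block above
    endpoint: https://<your-resource>.openai.azure.com/
    apiVersion: "2024-06-01"
    deploymentName: gpt-4o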



After updating the configuration, we will create a new agent for this environment and use it to extract some information. We will then check the accuracy of what it retrieves; we expect it to be noticeably better than the small local model used for testing, since we are now working with a fully trained hosted model. 😆






Monday, August 19, 2024

Kubernetes 1.31 || Testing the Image Volume mount feature using Minikube

With the new Kubernetes 1.31 release (https://kubernetes.io/blog/2024/08/13/kubernetes-v1-31-release/) there are many new features in alpha, beta, and stable state.



One of the new features that can be really useful in the world of AI and other use cases is "Support for image volumes".

I am not going to focus on the different use cases for now; instead, I tried to test this feature locally using Minikube.

Env Setup 

  • Mac (14.1.2)
  • Minikube (v1.33.1)
    • Driver - docker
    • Container Runtime - CRI-O (v1.31), a hard requirement for now

Minikube Setup

$minikube start --feature-gates=ImageVolume=true --driver=docker --nodes 1 --cni calico --cpus=2 --memory=4g --kubernetes-version=v1.31.0 --container-runtime=cri-o --profile crio

😄  [crio] minikube v1.33.1 on Darwin 14.1.2 (arm64)
    ▪ KUBECONFIG=/Users/kulsharm2/OSB/aks/staaks
❗  Specified Kubernetes version 1.31.0 is newer than the newest supported version: v1.30.0. Use `minikube config defaults kubernetes-version` for details.
❗  Specified Kubernetes version 1.31.0 not found in Kubernetes version list
🤔  Searching the internet for Kubernetes version...
✅  Kubernetes version 1.31.0 found in GitHub version list
✨  Using the docker driver based on user configuration
📌  Using Docker Desktop driver with root privileges
👍  Starting "crio" primary control-plane node in "crio" cluster
🚜  Pulling base image v0.0.44 ...
🔥  Creating docker container (CPUs=2, Memory=2048MB) ...
🎁  Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
❌  Unable to load cached images: loading cached images: stat /Users/kulsharm2/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1: no such file or directory
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring Calico (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "crio" cluster and "default" namespace by default

Check the Kubernetes version and the container runtime version. You will notice that Kubernetes is v1.31.0, which is what we need, but the CRI-O version is still v1.24.6:

$ kubectl get node -o wide

NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME

crio   Ready    control-plane   15m   v1.31.0   192.168.49.2   <none>        Ubuntu 22.04.4 LTS   6.6.41-0-virt    cri-o://1.24.6


Now, if you try to test the image volume mounting feature, it fails as shown below.

Test the Image Volume 

Pod Manifest

$cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume
spec:
  containers:
  - name: shell
    command: ["sleep", "100m"]
    image: quay.io/crio/alpine:3.9
    volumeMounts:
    - name: volume
      mountPath: /volume
  volumes:
  - name: volume
    image:
      reference: quay.io/crio/artifact:v1
      pullPolicy: IfNotPresent

$kubectl get po

NAME           READY   STATUS                 RESTARTS   AGE
image-volume   0/1     CreateContainerError   0          72s

If you describe the pod (kubectl describe pod image-volume), you will see the error below:
    Warning  Failed     2s (x2 over 15s)   kubelet            Error: mount.HostPath is empty

Since Minikube doesn't provide an out-of-the-box option to install a specific container runtime version, I am going to do a bit of manual setup to get CRI-O v1.31. You can go through https://github.com/cri-o/cri-o/blob/main/install.md for more details.

Build the CRI-O runtime from source code

  • SSH to the Minikube node
    • $minikube ssh -p crio (the profile I am using)
    • Check for the installed crio version
      • root@k8-1:~# crio version
        • INFO[2024-08-18 18:10:59.678939615Z] Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)
        • Version:          1.24.6
        • GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
        • GitTreeState:     clean
        • BuildDate:        2023-06-14T14:44:50Z
        • GoVersion:        go1.18.2
        • Compiler:         gc
        • Platform:         linux/arm64
        • Linkmode:         dynamic
        • BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
        • SeccompEnabled:   true
        • AppArmorEnabled:  false
    • Install Go 1.23, which is required (go1.23.0.linux-arm64.tar.gz for this arm64 node); a download/extract sketch follows after this list
      • root@k8-1:~# go version
      • -bash: go: command not found
      • root@k8-1:~# export GOPATH=/root/go/bin/
      • root@k8-1:~# export GOROOT=/root/go/
      • root@k8-1:~# export PATH=$PATH:$GOPATH
      • root@k8-1:~# go version
        • go version go1.23.0 linux/arm64
    • Install all the required dependencies
      • $apt-get update -qq && apt-get install -y \
      •   libbtrfs-dev \
      •   containers-common \
      •   git \
      •   libassuan-dev \
      •   libglib2.0-dev \
      •   libc6-dev \
      •   libgpgme-dev \
      •   libgpg-error-dev \
      •   libseccomp-dev \
      •   libsystemd-dev \
      •   libselinux1-dev \
      •   pkg-config \
      •   go-md2man \
      •   cri-o-runc \
      •   libudev-dev \
      •   software-properties-common \
      •   gcc \
      •   make
    • Clone the cri-o source code
      • $git clone https://github.com/cri-o/cri-o (skip SSL verification with -c http.sslVerify=false in case you hit certificate issues)
      • $cd cri-o
      • $make
      • $make install
    • Verify the cri-o version
      • docker@k8-1:~/cri-o$ crio version
        • INFO[2024-08-18 19:33:20.276411929Z] Updating config from single file: /etc/crio/crio.conf
        • INFO[2024-08-18 19:33:20.276446721Z] Updating config from drop-in file: /etc/crio/crio.conf
        • INFO[2024-08-18 19:33:20.276917513Z] Updating config from path: /etc/crio/crio.conf.d
        • INFO[2024-08-18 19:33:20.277628471Z] Updating config from drop-in file: /etc/crio/crio.conf.d/01-crio-runc.conf
        • INFO[2024-08-18 19:33:20.278858429Z] Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf
        • INFO[2024-08-18 19:33:20.279481513Z] Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf
        • Version:        1.31.0
        • GitCommit:      67290a12649b37b45c3d4de343dbda7668308afb
        • GitCommitDate:  2024-08-16T14:35:55Z
        • GitTreeState:   clean
        • BuildDate:      2024-08-18T19:32:19Z
        • GoVersion:      go1.23.0
        • Compiler:       gc
        • Platform:       linux/arm64
        • Linkmode:       dynamic
        • BuildTags:
        •   containers_image_ostree_stub
        •   containers_image_openpgp
        •   seccomp
        •   selinux
        •   exclude_graphdriver_devicemapper
        • LDFlags:          unknown
        • SeccompEnabled:   true
        • AppArmorEnabled:  false
    • Restart Minikube to bring the changes into effect
      • $minikube stop --profile crio
        • ✋  Stopping node "crio"  ...
        • 🛑  Powering off "crio" via SSH ...
        • 🛑  1 node stopped.
      • $ minikube start -p crio
        • 😄  [crio] minikube v1.33.1 on Darwin 14.1.2 (arm64)
        • ❗  Specified Kubernetes version 1.31.0 is newer than the newest supported version: v1.30.0. Use `minikube config defaults kubernetes-version` for details.
        • ❗  Specified Kubernetes version 1.31.0 not found in Kubernetes version list
        • 🤔  Searching the internet for Kubernetes version...
        • ✅  Kubernetes version 1.31.0 found in GitHub version list
        • ✨  Using the docker driver based on existing profile
        • 👍  Starting "crio" primary control-plane node in "crio" cluster
        • 🚜  Pulling base image v0.0.44 ...
        • 🏃  Updating the running docker "crio" container ...
        • 🎁  Preparing Kubernetes v1.31.0 on CRI-O 1.31.0 ...
        • ❌  Unable to load cached images: loading cached images: stat /Users/kulsharm2/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9: no such file or directory
        • 🔎  Verifying Kubernetes components...
        •     ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
        • 🌟  Enabled addons: storage-provisioner, default-storageclass
        • 🏄  Done! kubectl is now configured to use "crio" cluster and "default" namespace by default
      • Verify the Kubernetes version and Container runtime version
        • $ kubectl get node -o wide
        • NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
        • crio   Ready    control-plane   15m   v1.31.0   192.168.49.2   <none>        Ubuntu 22.04.4 LTS   6.6.41-0-virt    cri-o://1.31.0
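For completeness, here is roughly how Go 1.23 ends up on the node for the "Install golang" step above. The download URL is the standard one from go.dev and the paths match the exports used earlier, but treat this as a sketch and adjust for your own node:

$ curl -LO https://go.dev/dl/go1.23.0.linux-arm64.tar.gz   # arm64 build, matching this node
$ tar -C /root -xzf go1.23.0.linux-arm64.tar.gz            # unpacks to /root/go, the GOROOT used above
$ export GOROOT=/root/go
$ export PATH=$PATH:$GOROOT/bin
$ go version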

Test the Image Volume

pod manifest 

$cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume
spec:
  containers:
  - name: shell
    command: ["sleep", "infinity"]
    image: quay.io/crio/alpine:3.9
    volumeMounts:
    - name: volume
      mountPath: /volume
  volumes:
  - name: volume
    image:
      reference: quay.io/crio/artifact:v1
      pullPolicy: IfNotPresent

$kubectl apply -f pod.yaml
pod/image-volume created

 Check the pod status

$kubectl get po
NAME           READY   STATUS    RESTARTS   AGE
image-volume   1/1     Running   0          15s

 Verify the volume attached to the container

$kubectl exec -it image-volume -- df -h /volume
Filesystem                Size      Used Available Use% Mounted on
overlay                  97.9G     30.8G     62.1G  33% /volume
$kubectl exec -it image-volume -- ls -l /volume
total 8
drwxr-xr-x    2 1000     users         4096 Jun 18 09:02 dir
-rw-r--r--    1 1000     users            2 Jun 18 09:02 file

 


Wednesday, April 1, 2020

Integrate Jenkins with Azure Key Vault


Jenkins has been one of the most used CI/CD tools. For every tool we use in our daily work, handling secret information becomes a real challenge. I know there are lots of tools available, either as PaaS offerings or as in-house hosted solutions, but we need those tools to integrate with different toolsets without much effort.

In this particular blog, we will discuss integrating Jenkins with Azure Key Vault. Thanks to everyone who keeps working across the different communities and spends time making these products more flexible and capable.


We are going to use the Azure Key Vault plugin for this. There are multiple ways to use it, but in this post we'll go through the integration and then test it using a declarative pipeline.

Pre-Requisites-

  • Make sure you have a running Jenkins setup
  • You have a valid Azure subscription
Implementation Steps-

     1. Create an Azure Key Vault using the below steps:


kulsharm2@WKMIN5257929:~$ ⚙️  $az login
You have logged in. Now let us find all the subscriptions to which you have access...
[
  {
    "cloudName": "AzureCloud",
    "id": "dd019fb5-db8a-4e4f-96ec-fc8decd2db8b",
    "isDefault": true,
    "name": "<>",
    "state": "Enabled",
    "tenantId": "d52c9ea1-7c21-47b1-82a3-33a74b1f74b8",
    "user": {
      "name": "<>",
      "type": "user"
    }
  }
]



kulsharm2@WKMIN5257929:~$ ⚙️  $az ad sp create-for-rbac --name http://local-jenkins
Found an existing application instance of "7e575c9b-b902-4510-8a06-8cbe1639aba3". We will patch it
Creating a role assignment under the scope of "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b"
  Role assignment already exits.

{
  "appId": "7e575c9b-b902-4510-8a06-8cbe1639aba3",
  "displayName": "local-jenkins",
  "name": "http://local-jenkins",
  "password": "e7157115-6e35-46f9-a811-c856ba9bb5c0",
  "tenant": "d52c9ea1-7c21-47b1-82a3-33a74b1f74b8"
}
kulsharm2@WKMIN5257929:~$ ⚙️  $RESOURCE_GROUP_NAME=my-resource-group
kulsharm2@WKMIN5257929:~$ ⚙️  $az group create  --name $RESOURCE_GROUP_NAME -l "East US"
{
  "id": "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b/resourceGroups/my-resource-group",
  "location": "eastus",
  "managedBy": null,
  "name": "my-resource-group",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
kulsharm2@WKMIN5257929:~$ ⚙️  $az group show --name $RESOURCE_GROUP_NAME -o table
Location    Name
----------  -----------------
eastus      my-resource-group

kulsharm2@WKMIN5257929:~$ ⚙️  $VAULT=jenkins-local
kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault create --resource-group $RESOURCE_GROUP_NAME --name $VAULT
{
  "id": "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b/resourceGroups/my-resource-group/providers/Microsoft.KeyVault/vaults/jenkins-local",
  "location": "eastus",
  "name": "jenkins-local",
  "properties": {
    "accessPolicies": [
      {
        "applicationId": null,
        "objectId": "fd5bcd48-13d1-40c5-98a3-d46442c5194e",
        "permissions": {
          "certificates": [
  .          
  .       
  <>

kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault list -o table
Location    Name           ResourceGroup
----------  -------------  -----------------
eastus      jenkins-local  my-resource-group
kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault set-policy --resource-group $RESOURCE_GROUP_NAME --name $VAULT    --secret-permissions get list --spn http://local-jenkins
{
  "id": "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b/resourceGroups/my-resource-group/providers/Microsoft.KeyVault/vaults/jenkins-local",
  "location": "eastus",
  "name": "jenkins-local",
  "properties": {
    "accessPolicies": [
      {
        "applicationId": null,
        "objectId": "fd5bcd48-13d1-40c5-98a3-d46442c5194e",
        "permissions": {
          "certificates": [
            "get",
            "list",
            "delete",
            "create",
            "import",
            "update",
            "managecontacts",
            "getissuers",
            "listissuers",
            "setissuers",
            "deleteissuers",
            "manageissuers",
 <>
      2. Create a secret in the Azure Key Vault:

kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault secret set --vault-name $VAULT --name secret-key --value my-super-secret
{
  "attributes": {
    "created": "2020-04-01T05:18:37+00:00",
    "enabled": true,
    "expires": null,
    "notBefore": null,
    "recoveryLevel": "Purgeable",
    "updated": "2020-04-01T05:18:37+00:00"
  },
  "contentType": null,
  "id": "https://jenkins-local.vault.azure.net/secrets/secret-key/85a36fe61ba34f53b60217c5e08f1774",
  "kid": null,
  "managed": null,
  "tags": {
    "file-encoding": "utf-8"
  },
  "value": "my-super-secret"
}
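Optionally, before touching Jenkins, you can sanity-check that the service principal can actually read the secret. This is a sketch, not a captured transcript; it logs in as the SP created in step 1 and reads the value back:

$ az login --service-principal -u 7e575c9b-b902-4510-8a06-8cbe1639aba3 -p <sp-password> --tenant d52c9ea1-7c21-47b1-82a3-33a74b1f74b8
$ az keyvault secret show --vault-name $VAULT --name secret-key --query value -o tsv    # should print my-super-secret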

      3. Let's make the changes on the Jenkins side to complete the integration:
          1. Install the plugin as below:



        2. Add the Azure Key Vault URL to the Jenkins configuration via "Manage Jenkins --> Configure System" as below:


       
        3. Add credentials by going through "Credentials --> System --> Global Credentials (unrestricted)" as below:

       
        4. Create a new credential as below:
 

      4. Now, let's create a pipeline and try to fetch the secret we stored in AKV:


*** Pipeline Code ***
pipeline {
  agent any
  environment {
    SECRET_KEY = credentials('secret-key')
  }
  stages {
    stage('Foo') {
      steps {
        echo SECRET_KEY
        echo SECRET_KEY.substring(0, SECRET_KEY.size() -1) // shows the right secret was loaded, don't do this for real secrets unless you're debugging 
      }
    }
  }
}
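If you manage Jenkins with Configuration as Code, the plugin's global settings (the vault URL and credential from step 3 above) can usually be expressed in the JCasC YAML instead of clicking through the UI. The keys below are an unverified sketch; export your instance's config via "Manage Jenkins --> Configuration as Code --> View Configuration" for the authoritative names:

unclassified:
  azureKeyVault:
    keyVaultURL: "https://jenkins-local.vault.azure.net"
    credentialID: "azure-sp-for-keyvault"   # ID of the service-principal credential added above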






Happy Learning!!

Sunday, March 22, 2020

How to handle packaging in python using __init__.py


Keeping in mind the current situation across the world, I hope everyone is doing well. Please take precautions, stay at home, and keep yourself busy in whatever way works for you.

I was reading the book "Python for DevOps" and came across the topic of "Packaging". In every business, packaging plays a big role when it comes to product distribution.

When it comes to IT software, below are a few things that should be taken care of:

  • Descriptive Versioning 
    • In Python packages, the following two variants are used:
      • major.minor
      • major.minor.micro
    • major - for backward-incompatible changes
    • minor - adds features that are also backward compatible
    • micro - adds backward-compatible bug fixes.

  • The Changelog
    • This is a simple file that keeps track of all the changes we will be doing for each version upgrade.

I am not going into detail here; let's come directly to the implementation of how we can handle packaging in Python using the "__init__.py" file.

The tool used here for packaging is the "setuptools" Python module.
Now we'll create a Python virtual environment and install "setuptools" there as below:

$ python3 -m venv /tmp/packaging
$ source /tmp/packaging/bin/activate
$ pip3 install setuptools

Tip - you can cross-check the list of installed modules using pip3, for example:
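$ pip3 list --format=columns    # at this point you should only see pip and setuptools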

Now, let's look at the code. I have a simple hello-world example as below:

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $tree .
.
├── README
├── hello_world
│   ├── __init__.py
│   ├── hello_python.py
│   └── hello_world.py
└── setup.py

1 directory, 5 files
Note -
  • README - simple instructions
  • hello_world (directory) - the module name
  • __init__.py - organizes the modules kept in the directory
  • hello_*.py - two different modules with different functionality
  • setup.py - required by "setuptools" to build a package.

The source code is available on my GitHub page as well.
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_world.py 
def helloworld():
    return "HELLO WORLD"
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_python.py 
def hellopython():
    return "HELLO PYTHON3" 
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/__init__.py 
from .hello_python import hellopython
from .hello_world import helloworld
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat setup.py 
from setuptools import setup, find_packages

setup(
    name="hello_example",
    version="0.0.1",
    author="Example Author",
    author_email="author@example.com",
    url="example.com",
    description="A hello-world example package",
    packages=find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
)
Now, let's start packaging our hello-world program:
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $python3 setup.py sdist
running sdist
running egg_info
creating hello_example.egg-info
writing hello_example.egg-info/PKG-INFO
writing dependency_links to hello_example.egg-info/dependency_links.txt
writing top-level names to hello_example.egg-info/top_level.txt
writing manifest file 'hello_example.egg-info/SOURCES.txt'
reading manifest file 'hello_example.egg-info/SOURCES.txt'
writing manifest file 'hello_example.egg-info/SOURCES.txt'
running check
creating hello_example-0.0.1
creating hello_example-0.0.1/hello_example.egg-info
creating hello_example-0.0.1/hello_world
copying files to hello_example-0.0.1...
copying README -> hello_example-0.0.1
copying setup.py -> hello_example-0.0.1
copying hello_example.egg-info/PKG-INFO -> hello_example-0.0.1/hello_example.egg-info
copying hello_example.egg-info/SOURCES.txt -> hello_example-0.0.1/hello_example.egg-info
copying hello_example.egg-info/dependency_links.txt -> hello_example-0.0.1/hello_example.egg-info
copying hello_example.egg-info/top_level.txt -> hello_example-0.0.1/hello_example.egg-info
copying hello_world/__init__.py -> hello_example-0.0.1/hello_world
copying hello_world/hello_python.py -> hello_example-0.0.1/hello_world
copying hello_world/hello_world.py -> hello_example-0.0.1/hello_world
Writing hello_example-0.0.1/setup.cfg
creating dist
Creating tar archive
removing 'hello_example-0.0.1' (and everything under it)

After this, you will see that the above command has created a few other folders as well, and our packaged module has been stored in the "dist" folder.

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $tree .
.
├── README
├── dist
│   └── hello_example-0.0.1.tar.gz
├── hello_example.egg-info
│   ├── PKG-INFO
│   ├── SOURCES.txt
│   ├── dependency_links.txt
│   └── top_level.txt
├── hello_world
│   ├── __init__.py
│   ├── hello_python.py
│   └── hello_world.py
└── setup.py

3 directories, 10 files
Now let's install the package and list the installed modules using "pip3".


(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $pip3 install dist/hello_example-0.0.1.tar.gz 
Processing ./dist/hello_example-0.0.1.tar.gz
Installing collected packages: hello-example
    Running setup.py install for hello-example ... done
Successfully installed hello-example-0.0.1
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ pip3 list --format=columns
Package       Version
------------- -------
hello-example 0.0.1  
pip           20.0.2 
setuptools    41.2.0 
As the module has been installed, let's test it first using the "ipython" console and then using a Python program.

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ipython3
/usr/local/lib/python3.7/site-packages/IPython/core/interactiveshell.py:931: UserWarning: Attempting to work in a virtualenv. If you encounter problems, please install IPython inside the virtualenv.
  warn("Attempting to work in a virtualenv. If you encounter problems, please "
Python 3.7.7 (default, Mar 10 2020, 15:43:03) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import hello_world as hw                                                                                       

In [2]: hw.hellopython()                                                                                               
Out[2]: 'HELLO PYTHON3'

In [3]: hw.helloworld()                                                                                                
Out[3]: 'HELLO WORLD'

In [4]:
Here you can see that I can call the functions using my custom module "hello_world".

Using this module in a program:

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat example.py 
import hello_world as hw

print("Calling Hello Python Function: "+hw.hellopython())
print("Calling Hello World Function: "+hw.helloworld())
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $python3 example.py 
Calling Hello Python Function: HELLO PYTHON3
Calling Hello World Function: HELLO WORLD
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $

Now, suppose you want to upgrade to version "0.0.2". We will follow the steps below:

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_python.py 
def hellopython():
    return "HELLO PYTHON3 with version **0.0.2**"
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_world.py 
def helloworld():
    return "HELLO WORLD with version ** 0.0.2 **"
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat setup.py 
from setuptools import setup, find_packages

setup(
    name="hello_example",
    version="0.0.2",
    author="Example Author",
    author_email="author@example.com",
    url="example.com",
    description="A hello-world example package",
    packages=find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
)
Package it again - 
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ python3 setup.py sdist
running sdist
running egg_info
writing hello_example.egg-info/PKG-INFO
writing dependency_links to hello_example.egg-info/dependency_links.txt
writing top-level names to hello_example.egg-info/top_level.txt
reading manifest file 'hello_example.egg-info/SOURCES.txt'
writing manifest file 'hello_example.egg-info/SOURCES.txt'
running check
creating hello_example-0.0.2
creating hello_example-0.0.2/hello_example.egg-info
creating hello_example-0.0.2/hello_world
copying files to hello_example-0.0.2...
copying README -> hello_example-0.0.2
copying setup.py -> hello_example-0.0.2
copying hello_example.egg-info/PKG-INFO -> hello_example-0.0.2/hello_example.egg-info
copying hello_example.egg-info/SOURCES.txt -> hello_example-0.0.2/hello_example.egg-info
copying hello_example.egg-info/dependency_links.txt -> hello_example-0.0.2/hello_example.egg-info
copying hello_example.egg-info/top_level.txt -> hello_example-0.0.2/hello_example.egg-info
copying hello_world/__init__.py -> hello_example-0.0.2/hello_world
copying hello_world/hello_python.py -> hello_example-0.0.2/hello_world
copying hello_world/hello_world.py -> hello_example-0.0.2/hello_world
Writing hello_example-0.0.2/setup.cfg
Creating tar archive
removing 'hello_example-0.0.2' (and everything under it)
Install the new version -

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $pip3 install dist/hello_example-0.0.2.tar.gz 
Processing ./dist/hello_example-0.0.2.tar.gz
Installing collected packages: hello-example
  Attempting uninstall: hello-example
    Found existing installation: hello-example 0.0.1
    Uninstalling hello-example-0.0.1:
      Successfully uninstalled hello-example-0.0.1
    Running setup.py install for hello-example ... done
Successfully installed hello-example-0.0.2
Verify the installation - 
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ pip3 list --format=columns
Package       Version
------------- -------
hello-example 0.0.2  
pip           20.0.2 
setuptools    41.2.0 
Test again using the Python program -

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $python3 example.py 
Calling Hello Python Function: HELLO PYTHON3 with version **0.0.2**
Calling Hello World Function: HELLO WORLD with version ** 0.0.2 **
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $


Wednesday, November 27, 2019

Deploy and Scale Kubernetes Application using Spinnaker



In my last post, Getting started with Spinnaker, we completed the installation and setup of Spinnaker. In this post, I'll go through deploying and scaling an application on Kubernetes using Spinnaker.

In this particular exercise, we'll create a simple "nginx" deployment on Kubernetes and expose it as a service. After that, we'll see how we can scale the deployment up and down easily from the Spinnaker dashboard itself.

For this, first make sure we have done the port-forwarding for the required pods and are able to access the Spinnaker dashboard.

Note - Before moving ahead, please make sure that the "kubernetes" provider is enabled. You can check this in the "Halyard" configuration as below.

$ kubectl exec -it  spinnaker-local-spinnake-halyard-0 /bin/bash -n spinnaker
$ hal config list | grep -A 37 kubernetes




After that, click on Create Application in the Applications tab to get the popup below.



After filling in the required information and hitting the Create button, you'll land on the screen below.

There are a few other terms, like Clusters, Load Balancers, and Server Groups, which you can check in the documentation.

Here, we'll create a Server Group, which will basically contain our deployment manifest (below).

---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells the deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
  selector:
    app: nginx

Once you click on "Create Server Group", you'll see the screen below, where we need to paste the above YAML and hit Create.



Below will be the end state, if everything goes well.


Now, let's inspect our deployment.
On the Clusters tab, we can see the deployment and the number of replicas (pods) available in this deployment as below:


Please verify the same from the CLI using kubectl:
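For example (assuming the manifest above; your output will differ):

$ kubectl get deployment nginx-deployment
$ kubectl get pods -l app=nginx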


Check for the services in the Load Balancers section:

Verify the same using the CLI:
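For example:

$ kubectl get svc nginx    # the NodePort service on port 30000 defined in the manifest above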


Access the service:
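The node IP and reachability depend on your cluster; a typical way to hit the NodePort service is:

$ curl http://<node-ip>:30000    # or "minikube service nginx --url" if you are running on Minikube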




Scale up the deployment from 1 to 4 pods and verify the results:
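The scaling here is done from the Spinnaker dashboard; the CLI equivalent, shown only as a cross-check, would be:

$ kubectl scale deployment nginx-deployment --replicas=4    # CLI equivalent of the dashboard action
$ kubectl get deployment nginx-deployment                   # should report 4/4 ready once the scale-up completes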






So, this was a simple how-to for managing K8s manifests with Spinnaker. In the next post, I will try to explore the integration of Jenkins with Spinnaker and auto-triggering Spinnaker deployments based on Jenkins events.


