Sunday, December 15, 2013

Linux Swap Size and How to Free Up Used Swap Space

What is Swapping?



In modern operating systems, memory belonging to a program that is accessed less frequently can be temporarily moved out to disk or other media, so that the freed space can be used by another program that really needs it. This is called "swapping", because an area of memory can be handed over to others and its contents swapped or exchanged easily. Swapping is also commonly referred to as "paging".
 

Features of Swapping:

   Mainly, swapping has two important uses:
    1. It allows programs to run that need more memory than the physically available RAM. In these scenarios, the kernel/OS moves less-used pages out of main memory and moves in the pages of the process that needs memory immediately.
    2. Many applications load a number of pages during startup and never use them again. Such unused pages can be kept in swap space so that main memory is freed for other programs.

Swap Space in Linux-

        In Linux, we normally create a swap partition during installation (it is used for swapping). Traditionally, the swap partition/file is sized at about twice the amount of RAM, because roughly double the space may be needed while swapping programs out (you probably already know this, so I won't go into the details).

Note : On a system that has enough RAM to handle the memory requirements of all processes, there is no need to create a swap partition/file twice the size of RAM; it will never be fully utilized and will simply waste disk space.

How to free/clear the swap partition/file when it gets filled up?

Now, let us come to the main topic. Suppose that, due to a number of swap-ins/outs, all of your swap space gets used up and you want to free the swap partition/file. In that case you can follow the steps below.

        You may be wondering why I am using the two terms partition and file. A swap partition is a logical portion of the hard disk, while a swap file is a special file created on an existing partition that can be used for swapping.

To check what swap space you have, run "swapon -s" as root.
# swapon -s
Filename                                Type            Size    Used    Priority
/dev/sda3                               partition       4096564 308     -1

Here, the Type field indicates that the swap space is a partition rather than a file, and that the partition is /dev/sda3. The Size column shows the total size in KB, the Used column shows how much space is in use in KB, and finally the Priority column tells the kernel which swap space to use first (higher priority is used first).

Currently 308 KB of swap space is used and I want to free up all of it. But before clearing the used swap space, make sure you have enough free main memory to hold the pages that currently reside in swap. In my case I have enough free memory, so I am moving ahead with this :).

#free -o
                    total           used              free        shared     buffers     cached
Mem:      49453924   49002192     451732          0       1547144   24390976
Swap:      4096564        308          4096256

# swapoff -a && swapon -a
          # free -o
                            total          used               free        shared    buffers     cached
         Mem:      49453924   49002572     451352          0    1547816   24391608
         Swap:      4096564          0             4096564

Note : The time taken to free your swap space depends on the size of your swap partition/file: the larger the used swap, the longer it will take.
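If you clear swap regularly, a small script can do the free-memory check for you first. Here is a minimal bash sketch (conservative: it compares used swap only against MemFree, and it must run as root):

#!/bin/bash
# Clear swap only if there is enough free RAM to hold the pages currently swapped out.
used_swap=$(awk '/SwapTotal/{t=$2} /SwapFree/{f=$2} END{print t-f}' /proc/meminfo)  # KB of swap in use
free_mem=$(awk '/MemFree:/{print $2}' /proc/meminfo)                                # KB of free RAM
if [ "$free_mem" -gt "$used_swap" ]; then
    swapoff -a && swapon -a
    echo "Swap cleared."
else
    echo "Not enough free RAM (${free_mem} KB) to hold used swap (${used_swap} KB); aborting." >&2
    exit 1
fi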

Hope this will help you!!

Thanks!!




Sunday, November 3, 2013

Simple HTTP Web Server using Python


Basic HTTP Web Server using Python


Python ships with SimpleHTTPServer, a simple HTTP request handler that can be used to start a new web server serving files relative to the current directory. We can set up this simple web server within seconds, with almost no effort.

Note : Before moving ahead, I just want to let you know that this module is available only with Python 2.4 or later. On Python 3 it has been renamed to http.server, so use "python -m http.server [port]" instead.

Installation and Verification :


If you don't have Python installed already, install it first.

//Redhat/Centos/Fedora
# yum install python    OR    # rpm -ivh python-version.rpm

//On Ubuntu
#sudo apt-get install python

//Verify the Installation(Redhat/Centos)
#yum list python                                                                                                   
Loaded plugins: downloadonly, rhnplugin, security
This system is receiving updates from RHN Classic or RHN Satellite.
Installed Packages
python.x86_64                                                      2.4.3-56.el5                                                      installed

Start Basic Web Server :

You can start the basic web server with a single command, as shown below. By default it listens on port 8000 and binds to IP 0.0.0.0.
$ python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...

Now you can access it using any IP address of your system and port 8000, and you will be able to browse all the contents of the current directory.
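For example, from another shell on the same machine (the localhost URL is just one option):

$ curl http://localhost:8000/

This returns an HTML listing of the files in the directory where the server was started.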





Now, what if another service is already using the default port, i.e. 8000? In that case you'll get an exception:

$ python -m SimpleHTTPServer                                                                                              
Traceback (most recent call last):
  File "/usr/lib64/python2.4/SimpleHTTPServer.py", line 206, in ?
    test()
  File "/usr/lib64/python2.4/SimpleHTTPServer.py", line 202, in test
    BaseHTTPServer.test(HandlerClass, ServerClass)
  File "/usr/lib64/python2.4/BaseHTTPServer.py", line 569, in test
    httpd = ServerClass(server_address, HandlerClass)
  File "/usr/lib64/python2.4/SocketServer.py", line 330, in __init__
    self.server_bind()
  File "/usr/lib64/python2.4/BaseHTTPServer.py", line 101, in server_bind
    SocketServer.TCPServer.server_bind(self)
  File "/usr/lib64/python2.4/SocketServer.py", line 341, in server_bind
    self.socket.bind(self.server_address)
  File "", line 1, in bind
socket.error: (98, 'Address already in use')

 

Start Basic Web Server on different Port :

We can start the web server on a port other than the default by passing the port as an argument.
$ python -m SimpleHTTPServer 1234
Serving HTTP on 0.0.0.0 port 1234 ...            
The above command starts the server on port 1234.

Start Basic Web Server on different IP and Port :

We can use the code snippet below to start the web server on any specified IP and port.
$cat webshare.py
import sys
import BaseHTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler

HandlerClass = SimpleHTTPRequestHandler
ServerClass  = BaseHTTPServer.HTTPServer
Protocol     = "HTTP/1.0"

if sys.argv[1:]:
    port = int(sys.argv[1])
else:
    port = 1234
server_address = ('192.168.2.145', port)  # bind address; change to your system's IP

HandlerClass.protocol_version = Protocol
httpd = ServerClass(server_address, HandlerClass)

sa = httpd.socket.getsockname()
print "Serving HTTP on", sa[0], "port", sa[1], "..."
httpd.serve_forever()

Run this python script:
$ python webshare.py
Serving HTTP on 192.168.2.145 port 1234 ...
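If you are on Python 3 (3.4 or later), the bundled http.server module accepts a --bind option, so you don't even need a script for this; a rough equivalent of the above (same IP and port, adjust for your system) is:

$ python3 -m http.server 1234 --bind 192.168.2.145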


Advantages :
- comes with Python (>= 2.4), no need to install anything.
- no configuration needed.




Saturday, September 14, 2013

screenFetch: The BASH Screenshot Information Tool


screenFetch
The title pretty much explains it. screenFetch is a screenshot information tool.


Features:
  - Detects Desktop Environment, Kernel, Uptime, Window Manager, Window Manager Theme, GTK Theme, GTK Icon Theme, and GTK Font. It also attempts to detect the running distribution.

Install screenFetch On Linux
You can either install it by directly downloading the source package from the project page or by cloning the screenFetch git repository.
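If you prefer the git route, a clone along the following lines should work (the GitHub mirror URL is an assumption on my part; the snapshot download used below is the route I actually took):

# Clone the repository and run the script directly
git clone https://github.com/KittyKatt/screenFetch.git
cd screenFetch && ./screenfetch-dev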
Install From Source:
Download the latest version here. I downloaded it into the /home/sk/Downloads directory and extracted it with the following commands:

[root@linux bin]# wget http://git.silverirc.com/cgit.cgi/screenfetch.git/snapshot/screenfetch-3.2.2.zip
--2014-03-14 20:25:21--  http://git.silverirc.com/cgit.cgi/screenfetch.git/snapshot/screenfetch-3.2.2.zip
Resolving git.silverirc.com (git.silverirc.com)... 75.132.7.177
Connecting to git.silverirc.com (git.silverirc.com)|75.132.7.177|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-zip]
Saving to: ‘screenfetch-3.2.2.zip’

    [   <=>                                                                                                       ] 47,945      86.3KB/s   in 0.5s   

2014-03-14 20:25:22 (86.3 KB/s) - ‘screenfetch-3.2.2.zip’ saved [47945]

[root@linux bin]# unzip screenfetch-3.2.2.zip 
Archive:  screenfetch-3.2.2.zip
   creating: screenfetch-3.2.2/
  inflating: screenfetch-3.2.2/CHANGELOG  
  inflating: screenfetch-3.2.2/COPYING  
  inflating: screenfetch-3.2.2/README.mkdn  
  inflating: screenfetch-3.2.2/TODO  
  inflating: screenfetch-3.2.2/screenfetch-dev  
[root@linux bin]# cd screenfetch-3.2.2/
[root@linux screenfetch-3.2.2]# 
[root@linux screenfetch-3.2.2]# ll
total 188
-rw-r--r-- 1 root root   9699 Oct 28 15:07 CHANGELOG
-rw-r--r-- 1 root root  35147 Oct 28 15:07 COPYING
-rw-r--r-- 1 root root   2920 Oct 28 15:07 README.mkdn
-rwxr-xr-x 1 root root 131701 Oct 28 15:07 screenfetch-dev
-rw-r--r-- 1 root root   1082 Oct 28 15:07 TODO
[root@linux screenfetch-3.2.2]# ./screenfetch-dev
Sample output:




Thanks! 
Kuldeep

Tuesday, July 23, 2013

DROP all MySQL Database tables from the command line!!


DROP all MySQL tables from the command line

I had a situation today where I wanted to drop all the tables in a database, but didn't have access to a UI like phpMyAdmin. The easiest thing to do would have been to drop the entire database then re-create it, but I didn't have permissions to drop an entire database.
After searching around online, this is the best solution I found:
mysqldump -u username -p --no-data dbname | grep ^DROP > drop.sql
mysql -u username -p dbname < drop.sql
rm drop.sql
Simple, and only requires 2 lines (ok, a 3rd line for cleanup).
What these commands do is basically generate a mysqldump (which includes DROP statements by default) and filter the output so that only the lines containing DROP statements remain. The arrow (>) redirects that output to a file. The second line then runs those SQL statements against the database.
Another option is to use the information_schema database to generate an SQL script, like below:

Select concat('DROP TABLE database_name.', table_name,';') from information_schema.TABLES where table_schema='database_name';
Which gives:
DROP TABLE database_name.table_name1;
...
DROP TABLE database_name.table_nameN;

You can pipe the generated statements straight back into mysql to execute them directly. I prefer this approach to typing DROP TABLE IF EXISTS table_name1; by hand for each table.
Note : Replace database_name with your DB name. Also, if tables have foreign-key dependencies on each other, executing the generated script may throw errors; in that case drop the dependent (child) tables before the tables they reference, or temporarily disable foreign key checks.
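Putting it together, here is a sketch of the second approach that also sidesteps foreign-key dependency errors (username and database_name are placeholders; you will be prompted for the password twice):

mysql -u username -p -N -B -e "SELECT CONCAT('DROP TABLE database_name.', table_name, ';') FROM information_schema.TABLES WHERE table_schema='database_name';" > drop.sql
(echo "SET FOREIGN_KEY_CHECKS=0;"; cat drop.sql; echo "SET FOREIGN_KEY_CHECKS=1;") | mysql -u username -p database_name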

Sunday, July 21, 2013

MariaDB Drop in Replacement of MySQL !!


MariaDB 

                                           

MariaDB is a community-developed fork of the MySQL relational database management system, the impetus being community maintenance of its free status under the GNU GPL. As a fork of a leading open-source software system, it is notable for being led by its original developers, and it was triggered by concerns over the direction taken by the acquiring commercial company, Oracle. Contributors are required to share their copyright with Monty Program AB.
MariaDB is a fork of the MySQL database project that provides a drop-in replacement for MySQL. It preserves API/ABI compatibility with MySQL and adds some new features.
The original company behind MySQL, MySQL AB, was bought out by Sun, which was then bought by Oracle. Recent changes made by Oracle indicate that they are moving the MySQL project in a more closed direction: they are no longer publishing useful information about security issues (CVEs), they are no longer providing complete regression tests, and a very large fraction of the MySQL bug database is no longer public.
MariaDB, which was founded by some of the original MySQL developers, has a more open-source attitude and an active community. We have found them to be much easier to work with, especially in regards to security matters.

Its lead developer is Michael "Monty" Widenius, the founder of MySQL and Monty Program AB. He had previously sold his company, MySQL AB, to Sun Microsystems for 1 billion USD. MariaDB is named after Monty's younger daughter, Maria.

Note : Fedora 19 now ships with MariaDB as the default database instead of MySQL.
[from Fedora 19 Release ]: We would like to replace MySQL with MariaDB in early development cycle for Fedora 19. Original MySQL implementation packaged as mysql will be renamed to community-mysql and package mysql will retire as of Fedora 19. It means the name "mysql" will become only a virtual provider for compatibility reasons. MariaDB will become the default and will include Provides/Obsoletes for mysql, while community-mysql will only include Provides for functional packages (not -devel packages) in order to avoid ambiguity when choosing one from two packages providing the same symbol. The community-mysql packages will be installed after users explicitly request so. Also, we do not intend to support concurrent installation of both packages on the same machine; pick one or the other. The only packages, that will be able to be installed in parallel will be client libraries (mariadb-libs and community-mysql-libs).
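So on Fedora 19 and later, installing and starting the default database looks something like this (package and service names as per the release notes quoted above; treat this as a sketch):

# yum install mariadb mariadb-server
# systemctl start mariadb.service
# systemctl enable mariadb.service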

Benefit to Fedora

Fedora will have a truly open-source MySQL implementation and won't depend on what Oracle decides to do with MySQL in the future. Compared to MySQL, MariaDB is faster in some cases and includes some new engines and features, while the existing engines and features are enhanced and remain compatible.

More Interesting One from ZDNet : 


Thanks!!

Wednesday, July 17, 2013

Linux Server Build Date


Find the Installation date for any Linux System

Below are the steps for checking the install date of a Linux operating system (Red Hat). Not official, but useful.
There are a few ways to find out the build date of a Linux system:
1.) Using the basesystem package installed on the system.
2.) Using the passwd command to check the status of a user that was created by default at system install time.

1.) Using the basesystem package installed on the system:

The basesystem package itself contains no files:
# rpm -ql basesystem
(contains no files)

Basesystem defines the components of a basic Red Hat Linux system (for example, the package installation order to use during bootstrapping). Basesystem should be the first package installed on a system and it should never be removed.

You can check the Install Date from this package:

$ sudo rpm -qi basesystem-8.0-5.1.1
Name        : basesystem                   Relocations: (not relocatable)
Version     : 8.0                          Vendor: Red Hat, Inc.
Release     : 5.1.1                        Build Date: Wed 12 Jul 2006 12:08:04 AM PDT
Install Date: Thu 03 Nov 2011 09:40:36 PM PDT      Build Host: ls20-bc2-14.build.redhat.com
Group       : System Environment/Base      Source RPM: basesystem-8.0-5.1.1.src.rpm
Size        : 0                            License: public domain
Signature   : DSA/SHA1, Thu 18 Jan 2007 07:33:57 AM PST, Key ID 5326810137017186
Packager    : Red Hat, Inc.
Summary     : The skeleton package which defines a simple Red Hat Linux system.
Description :
Basesystem defines the components of a basic Red Hat Linux system (for
example, the package installation order to use during bootstrapping).
Basesystem should be the first package installed on a system, and it
should never be removed.


2.) Using the passwd command to check the status of a user created by default at system install time:


A good way to find the installation date of a Linux machine is to use the passwd command with the -S option. This option displays account status information, which consists of 7 fields according to "man passwd". The third field gives the date of the last password change. So, by finding that date for users who never changed their password, we can make a good guess at the installation date.
These users can be system users like bin, daemon, etc.
example:

$ sudo passwd -S daemon
daemon LK 2011-11-03 0 99999 7 -1 (Alternate authentication scheme in use.)
$ sudo passwd -S bin
bin LK 2011-11-03 0 99999 7 -1 (Alternate authentication scheme in use.)
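Putting the two methods together, a quick way to check (on an RPM-based system; the basesystem trick is Red Hat specific) is:

# Install date of the basesystem package is a good proxy for the OS install date
rpm -qi basesystem | grep "Install Date"
# Password-change date (3rd field of passwd -S) of a default system account that never changed it
sudo passwd -S bin | awk '{print $3}'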


Thanks!!



Sunday, July 14, 2013

UNIX Load Average

UNIX Load Average:


Today I suddenly got an alert from Nagios like "**Current Load is CRITICAL**". After that I verified the load average (LA) on the system as well as the Nagios configuration.

After doing some further research I learned a few interesting things, namely what load average actually is and how it is calculated. So I am sharing what I found.

Actually, load average is not a UNIX command in the conventional sense. Rather, it's an embedded metric that appears in the output of other UNIX commands like uptime and procinfo. These commands are commonly used by UNIX sysadmins to observe system resource consumption.

Uptime :
The uptime shell command produces the following output:

[pax:~]% uptime
9:40am up 9 days, 10:36, 4 users, load average: 0.02, 0.01, 0.00

It shows the time since the system was last booted, the number of active user processes, and something called the load average.

W
The w(ho) command produces the following output:

[pax:~]% w
9:40am up 9 days, 10:35, 4 users, load average: 0.02, 0.01, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
mir ttyp0 :0.0 Fri10pm 3days 0.09s 0.09s bash
neil ttyp2 12-35-86-1.ea.co 9:40am 0.00s 0.29s 0.15s w
...
Notice that the first line of the output is identical to the output of the uptime command.

Top
The top command is another UNIX command that ranks processes according to the amount of CPU time they consume. It produces the following output:

4:09am up 12:48, 1 user, load average: 0.02, 0.27, 0.17
58 processes: 57 sleeping, 1 running, 0 zombie, 0 stopped
CPU states: 0.5% user, 0.9% system, 0.0% nice, 98.5% idle
Mem: 95564K av, 78704K used, 16860K free, 32836K shrd, 40132K buff
Swap: 68508K av, 0K used, 68508K free, 14508K cached
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
5909 neil 13 0 720 720 552 R 0 1.5 0.7 0.01 top
1 root 0 0 396 396 328 S 0 0.0 0.4 0.02 init
2 root 0 0 0 0 0 SW 0 0.0 0.0 0.00 kflushd
3 root -12 -12 0 0 0 SW< 0 0.0 0.0 0.00 kswapd

The standard Nagios plugins include a "check_load" command which will raise a warning or critical alert if the load averages on the target machine exceed some thresholds. Today, after getting the alert, I started wondering what those thresholds should be.

The usage for the check_load command is as follows:
Usage: check_load -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
Without looking at the source I'm pretty sure that the program is either just opening a pipe to uptime or using the /proc file system to read the load averages for the past 1, 5 and 15 minutes. It should be safe to assume, then, that Nagios' concept of load is exactly the same as uptime's and that the figures ultimately come from the kernel scheduler. (Note: Yup. :-) Just checked the source.)
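For reference, the kernel exposes those raw figures in /proc/loadavg: the three load averages, the number of currently runnable scheduling entities over the total, and the last PID used. The numbers below are just illustrative:

$ cat /proc/loadavg
0.02 0.01 0.00 1/123 4567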

So the first question is: what does "load" actually measure?

UNIX AND LINUX LOAD

Briefly, when Unix machines report their "load" (usually through uptime, top or who) they are reporting a weighted average of the number of processes either running or waiting for the CPU (Linux will also count processes that may be blocked waiting on I/O). This average is calculated over 1, 5 and 15 minutes (hence the three values) based on values that are sampled every 5 seconds (on Linux at least). Dr Neil Gunther has written more than you might ever want to know about how those load averages are calculated and what they mean. It's an excellent series of articles (see also the inevitable Wikipedia article).

So assuming we have a single-core CPU, a load value of "1.0" would suggest that the CPU has been 100% utilised over whatever reporting period that figure was calculated for. A load of "2.0" would mean that whenever one process had the CPU there was another that was forced to wait. However, if we have 2 cores, the same "2.0" load value would suggest that both processes got the CPU time they needed, while a load of "1.0" would suggest the CPU had only been at 50% capacity.

On a simple web server, running a single 2-core CPU a load average of "2.0, 1.0, 0.5" suggests that, over the last minute, the CPU has been 100% utilised; over the last 5 minutes it's been 50% utilised; and over the last 15 minutes, it's been 25% utilised. Halve those values if 4 cores are available and double them if only one is in the system.

You can see then that sensible threshold values for warning and critical states requires you to consider how many CPUs and CPU cores your system has. You're therefore probably going to want to set your thresholds per machine or at least set them differently for each different type of configuration.

For example, one of our Solaris boxes has 12 cores so a load of "6.0" is nothing to be concerned about. However that same load figure on another, single-core box might be worthy of a warning or even critical alert, depending on how sensitive we were to process queue lengths on that box. Except if that box is a Linux box with a lot of I/O and slow devices (like a tape drive) and is counting processes that are sitting idle and waiting for an I/O operation to finish. And what is the application running on it? Is it threaded? How is your kernel counting threads in that total -- or is it just counting processes?

SETTING THE CHECK_LOAD THRESHOLDS

So determining an appropriate warning and critical set of threshold values for check_load will depend on what you think a reasonable process queue length will be; how your specific system treats threads; how your applications on that system behave (and their expected responsiveness levels); and how many CPUs / cores your system has. Oh -- and your performance targets or SLAs.

This is why experienced admins use a time honoured, complicated heuristic process to set an initial value and then continually adjust that value based on the correlation of alerts raised and actual performance and hence user impact.

In other words: we rub our bellies and take a guess and then change the values if we get too many or too few alerts. We're experienced sysadmins -- how much time do you think we have? :-)

In our case, for web servers, we decided that over 5 and 15 minute periods we expect spare capacity on the box -- but we only want to be alerted if the box is basically maxing out on CPU over a significant period. Over 1 minute we expect the occasional spike and don't really want an alert unless it's way beyond expectations. We're using Apache with no threading so 1 load point = 1 process using or waiting for CPU.

We've set warning levels for 15 minute load average at number of CPU cores times 2 (plus one!). For 5 minutes increase the threshold by 5. For one minute, increase it by 5 again. Critical threshold starts at number of CPU cores times 4 and then follows the same pattern for the 5 and 1 minute warning.

Here's a sample nrpe.cfg config file for a web server with 2 cores:
command[check_load]=/path/check_load -w 15,10,5 -c 30,25,20

It's important to actually test this setup. Use ApacheBench, JMeter, or a similar tool to push the load average up and test performance around those thresholds to see whether it's acceptable. If your application is unacceptably slow from a user's perspective at lower load values, then lower your thresholds.
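For example, a quick way to generate sustained load and watch the effect (the URL and request counts are purely illustrative) is:

$ ab -n 100000 -c 50 http://your-web-server/
$ watch -n 5 uptime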

Special Thanks to Hisso Hathair!!


Wednesday, July 3, 2013

Installing MongoDB on Linux


In the last posts, we went through an overview and the features of MongoDB. This tutorial includes: an overview of the available packages, instructions for configuring the package manager, the process of installing packages from the 10gen repository, and preliminary MongoDB configuration and operation.

Actually, we have two ways of installing:
1.) Download the .tgz/tar package and then compile and install it manually.
2.) Configure a repository, then use yum to download and install it.

Here, I am going with the 2nd way:

Package Options

The 10gen repository contains two packages:
  • mongo-10gen-server
    This package contains the mongod and mongos daemons from the latest stable release and associated configuration and init scripts. Additionally, you can use this package to install daemons from a previous release of MongoDB.
  • mongo-10gen
    This package contains all MongoDB tools from the latest stable release. Additionally, you can use this package to install tools from a previous release of MongoDB. Install this package on all production MongoDB hosts and optionally on other systems from which you may need to administer MongoDB systems.

    Install MongoDB

    Configure Package Management System (YUM)

    Create a /etc/yum.repos.d/10gen.repo file to hold information about your repository. If you are running a 64-bit system (recommended), place the following configuration in the /etc/yum.repos.d/10gen.repo file:
    [10gen]
    name=10gen Repository
    baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
    gpgcheck=0
    enabled=1
    
    If you are running a 32-bit system, which isn’t recommended for production deployments, place the following configuration in /etc/yum.repos.d/10gen.repo file:
    [10gen]
    name=10gen Repository
    baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/i686
    gpgcheck=0
    enabled=1
    

    Install Packages

    Issue the following command (as root or with sudo) to install the latest stable version of MongoDB and the associated tools:
    yum install mongo-10gen mongo-10gen-server
    
    When this command completes, you have successfully installed MongoDB!

    Manage Installed Versions

    You can use the mongo-10gen and mongo-10gen-server packages to install previous releases of MongoDB. To install a specific release, append the version number, as in the following example:
    yum install mongo-10gen-2.2.3 mongo-10gen-server-2.2.3
    
    This installs the mongo-10gen and mongo-10gen-server packages with the 2.2.3 release. You can specify any available version of MongoDB; however, yum will upgrade the mongo-10gen and mongo-10gen-server packages when a newer version becomes available. Use the following pinning procedure to prevent unintended upgrades.
    To pin a package, add the following line to your /etc/yum.conf file:
    exclude=mongo-10gen,mongo-10gen-server

Configure MongoDB

These packages configure MongoDB using the /etc/mongod.conf file in conjunction with the control script. You can find the init script at /etc/rc.d/init.d/mongod.
This MongoDB instance will store its data files in the /var/lib/mongo and its log files in /var/log/mongo, and run using the mongod user account.
Note
 
If you change the user that runs the MongoDB process, you will need to modify the access-control rights on the /var/lib/mongo and /var/log/mongo directories. There are also a number of other parameters in this file that can be changed for further customisation; see the sketch below.
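For reference, the packaged config file looks roughly like the following (the paths match the packaging described above; treat the exact option set as illustrative, since it varies between releases):

# cat /etc/mongod.conf
# Where mongod writes its log
logpath=/var/log/mongo/mongod.log
logappend=true
# Run as a daemon
fork=true
# Data files live here
dbpath=/var/lib/mongo
# Uncomment to change the listening port or restrict to localhost
#port=27017
#bind_ip=127.0.0.1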

Control MongoDB

Warning
 
With the introduction of systemd in Fedora 15, the control scripts included in the packages available in the 10gen repository are not compatible with Fedora systems. 

Start MongoDB

Start the mongod process by issuing the following command (as root, or with sudo):
service mongod start
You can verify that the mongod process has started successfully by checking the contents of the log file at /var/log/mongo/mongod.log.
You may, optionally, ensure that MongoDB starts after a system reboot by issuing the following command (with root privileges):
chkconfig mongod on

Stop MongoDB

Stop the mongod process by issuing the following command (as root, or with sudo):
service mongod stop

Restart MongoDB

You can restart the mongod process by issuing the following command (as root, or with sudo):
service mongod restart
Follow the state of this process by watching the /var/log/mongo/mongod.log file for errors or important messages from the server.

Control mongos

As of the current release, there are no control scripts for mongos. mongos is only used in sharding deployments and typically does not run on the same systems where mongod runs. You can use the mongod script referenced above to derive your own mongos control script.

Using MongoDB

Among the tools included in the mongo-10gen package is the mongo shell. You can connect to your MongoDB instance by issuing the following command at the system prompt:
mongo
This will connect to the database running on the localhost interface by default. At the mongo prompt, issue the following two commands to insert a record in the “test” collection of the (default) “test” database and then retrieve that document.
db.test.save( { a: 1 } )
db.test.find()
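The find() call should return the saved document, looking something like this (the ObjectId value will differ on your system):
{ "_id" : ObjectId("..."), "a" : 1 }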


Thanks!!
Kuldeep
