Wednesday, July 7, 2010

Heartbeat Clustering

In this post I will cover the basics of clustering, the advantages of clustering, and the configuration of a simple fail-over cluster.
Let's start.
What is a cluster anyway?
Ans :
A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability
www.wikipedia.org.
Cluster terminology.

Node : One of the systems/computers that participates with other systems to form a cluster.

Heartbeat : This is a pulse-like signal sent from every node at regular intervals (as a UDP packet) so that each node knows whether the other nodes are available. It is a kind of door-knocking activity, like pinging a system: through these packets, every node participating in the cluster learns the availability status of the other nodes.

Floating IP or Virtual IP : This is the IP address assigned to the cluster, through which users access its services. Whenever clients request a service, the request arrives at this IP, so clients never need to know the actual back-end IP addresses of the nodes. The virtual IP is used to hide the effect of a node going down.

Master node : In a high-availability cluster, this is the node on which the services run most of the time.

Slave node : This is the node that takes over in a high-availability cluster when the master node is down. It starts serving users as soon as it stops receiving heartbeat pulses from the master, and it automatically gives control back once the master is up and running again. The slave learns about the master's status through the heartbeat pulses/signals.

Types of Clusters:
Cluster types can be divided into two main types:

1. High availability :
These clusters are configured where downtime cannot be tolerated. If one node in the cluster goes down, the second node takes over and serves users without interruption, giving an availability of five nines, i.e. 99.999%.

2. Load balancing :
These clusters are configured where the load from users is high. The advantage of load balancing is that users do not experience delays, because the load that would otherwise hit a single system is shared by two or more nodes in the cluster.

Advantages of a Cluster :
1. Reduced cost : It is cheaper to buy ten ordinary servers and cluster them than to buy a single high-end server such as a blade server, and the cluster will do more work than that single machine with more processing power.
2. Processing power
3. Scalability
4. Availability

Configuration file details :
Three main configuration files :

· /etc/ha.d/authkeys
· /etc/ha.d/ha.cf
· /etc/ha.d/haresources

Some other configuration files/folders to know :
/etc/ha.d/resource.d : The files in this directory are very important; they contain the scripts used to start/stop/restart a service managed by this Heartbeat cluster.

Before configuring the Heartbeat cluster, note the points below.

Note 1 : The contents of the ha.cf file are the same on all nodes in the cluster, except for the ucast and bcast directives.

Note 2 : The contents of the authkeys and haresources files are exact replicas on all nodes in the cluster.

Note 3 : A cluster is used to provide a service with high availability/high performance; that service may be a web server, a reverse proxy, or a database.

Test scenario setup:
1. The cluster configuration I am going to show is a two-node cluster with failover capability for a Squid reverse proxy.
2. For the Squid reverse proxy configuration, please click here.
3. Node details are as follows:

Node1 :
IpAddress(eth0):10.77.225.21
Subnetmask(eth0):255.0.0.0
Default Gateway(eth0):10.0.0.1
IpAddress(eth1):192.168.0.1(To send heartbeat signals to other nodes)
Sub net mask (eth1):255.255.255.0
Default Gateway (eth1):None(don’t specify any thing, leave blank for this interface default gateway).

Node2 :
IpAddress(eth0):10.77.225.22
Subnetmask(eth0):255.0.0.0
Default Gateway (eth0):10.0.0.1
IpAddress(eth1):192.168.0.2(To send heartbeat signals to other nodes)
Sub net mask (eth1):255.255.255.0
Default Gateway(eth1):None(don’t specify any thing, leave blank for this interface default gateway).

4. Floating Ip address:10.77.225.20

Let's start the configuration of the Heartbeat cluster. Note that, for better understanding, every step in this configuration is divided into two parts:

1. (configuration on node1)
2. (configuration on node2)

Step1 :
Install the following packages in the order shown.

Step1(a) : Install the following packages on node1
#rpm -ivh heartbeat-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-ldirectord-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-pils-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-stonith-2.1.2-2.i386.rpm

Step1(b) : Install the following packages on node2
#rpm -ivh heartbeat-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-ldirectord-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-pils-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-stonith-2.1.2-2.i386.rpm
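
If these packages are available from a configured repository, an alternative (assuming a yum-based setup whose repository carries the same packages) is to let yum resolve the dependencies on each node:

#yum install heartbeat heartbeat-pils heartbeat-stonith heartbeat-ldirectord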



Step2 : By default the main configuration files (ha.cf, haresources and authkeys) are not present in the /etc/ha.d/ folder, so we have to copy these three files from /usr/share/doc/heartbeat-2.1.2 to /etc/ha.d/.

Step2(a) : Copy main configuration files from /usr/share/doc/heartbeat-2.1.2 to /etc/ha.d/ on node 1
#cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
#cp /usr/share/doc/heartbeat-2.1.2/haresources /etc/ha.d/
#cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/

Step2(b) : Copy main configuration files from /usr/share/doc/heartbeat-2.1.2 to /etc/ha.d/ on node 2
#cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
#cp /usr/share/doc/heartbeat-2.1.2/haresources /etc/ha.d/
#cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/



Step3 : Edit ha.cf file
#vi /etc/ha.d/ha.cf

Step3(a) : Edit ha.cf file as follows on node1
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 25
warntime 10
initdead 50
udpport 694
bcast eth1
ucast eth1 192.168.0.2
auto_failback on
node rp1.linuxnix.com
node rp2.linuxnix.com

Step3(b) : Edit ha.cf file as follows on node2
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 25
warntime 10
initdead 50
udpport 694
bcast eth1
ucast eth1 192.168.0.1
auto_failback on
node rp1.linuxnix.com
node rp2.linuxnix.com


Let me explain each entry in detail:

Debugfile : This is the file where detailed debug information for your Heartbeat cluster is stored; it is very useful for any kind of troubleshooting.

Logfile : This is the file where general logging of the Heartbeat cluster takes place.

Logfacility : This directive specifies where to log your Heartbeat messages (local means store the logs locally, syslog means send them to a remote syslog server, and none disables logging). There are many other options; please explore them yourself.

Keepalive : This directive sets the time interval between the heartbeat packets the nodes exchange to check each other's availability. In this example I set it to two seconds (keepalive 2).

Deadtime : A node is declared dead if the other node receives no heartbeat from it for this many seconds (25 in this example).

Warntime : Time in seconds before issuing a "late heartbeat" warning in the logs.

Initdead : With some configurations, the network takes some time to start working after a reboot. This is a separate "deadtime" to handle that case. It should be at least twice the normal deadtime.

Udpport : This is the UDP port Heartbeat uses to send heartbeat packets/signals to the other nodes to check availability (in this example I used the default port, 694).

Bcast : Specifies the device/interface on which to broadcast the heartbeat packets.

Ucast : Specifies the device/interface and the peer node's IP address to which heartbeat packets are unicast.

auto_failback : This option determines whether a resource automatically fails back to its "primary" node, or remains on whichever node is serving it until that node fails or an administrator intervenes. In my example it is set to on, which means that when the failed node comes back online, control is handed back to it automatically. Put another way: I have two nodes, node1 and node2. Node1 is a high-end machine, and node2 serves only as a temporary stand-in while node1 is down. If node1 goes down, node2 takes over and runs the service while periodically checking whether node1 is back; once it finds node1 up, control is given back to node1.

Node : This directive lists the nodes participating in the cluster. In my cluster only two nodes participate (rp1 and rp2), so only those entries are specified. If more nodes participate in your implementation, list all of them.


Step4 : Edit haresources file
#vi /etc/ha.d/haresources

Step4(a) : Just specify the below entry in the last line of this file on node1
rp1.linuxnix.com 10.77.225.20 squid

Step4(b) : Just specify the below entry in the last line of this file on node2
rp1.linuxnix.com 10.77.225.20 squid

Explanation of each entry :
rp1.linuxnix.com is the main node in the cluster.
10.77.225.20 is the floating IP address of this cluster.

squid : This is the service offered by the cluster. Note that this is the name of the script file located in /etc/ha.d/resource.d/.

Note : By default the squid script file is not present in that folder; I created it according to my Squid configuration.

What does this script file actually contain?
Ans :
It is just a start/stop/restart script for the particular service, so that the Heartbeat cluster can take care of starting/stopping/restarting the service (here Squid).
Here is what my squid script file contains:
http://sites.google.com/site/surendra/Home/squid.txt.txt?attredirects=0&d=1
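
In case that link is unavailable, here is a rough sketch of what such a resource script typically looks like; it simply wraps the service's own init script (the /etc/init.d/squid path is an assumption about this setup, not part of the original post):

#!/bin/sh
# /etc/ha.d/resource.d/squid -- minimal sketch of a Heartbeat resource script
# Heartbeat invokes this script with start, stop or status.
case "$1" in
  start)
    /etc/init.d/squid start
    ;;
  stop)
    /etc/init.d/squid stop
    ;;
  status)
    /etc/init.d/squid status
    ;;
  *)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac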

Step5 : Edit the authkeys file. The authkeys configuration file contains the information Heartbeat uses to authenticate cluster members. It must not be readable or writable by anyone other than root, so change the permissions of the file to 600 on both nodes.

Two lines are required in the authkeys file:
A line which says which key to use in signing outgoing packets.
One or more lines defining how incoming packets might be being signed.

Step5 (a) : Edit authkeys file on node1
#vi /etc/ha.d/authkeys
auth 2
#1 crc
2 sha1 HI!
#3 md5 Hello!
Now save and exit the file

Step5 (b) : Edit authkeys file on node2
#vi /etc/ha.d/authkeys
auth 2
#1 crc
2 sha1 HI!
#3 md5 Hello!
Now save and exit the file
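
As mentioned above, the file must be readable and writable only by root, so tighten the permissions on both nodes:

#chmod 600 /etc/ha.d/authkeys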



Step6 : Edit /etc/hosts file to give entries of hostnames for the nodes


Step6(a) : Edit /etc/hosts file on node1 as below

 
10.77.225.21 rp1.linuxnix.com rp1
10.77.225.22 rp2.linuxnix.com rp2


Step6(b) : Edit /etc/hosts file on node2 as below
 
10.77.225.21 rp1.linuxnix.com rp1
10.77.225.22 rp2.linuxnix.com rp2

Step7 : Start Heartbeat cluster

Step7(a) : Start heartbeat cluster on node1
#service heartbeat start

Step7(b) : Start heartbeat cluster on node2
#service heartbeat start
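
You will probably also want Heartbeat to start automatically at boot on both nodes (assuming a RHEL-style init setup, which these rpm packages suggest):

#chkconfig heartbeat on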

Checking your Heartbeat cluster:
If your Heartbeat cluster is running fine, a virtual Ethernet interface (eth0:0) carrying the floating IP 10.77.225.20 is created on node1.
Clipped output of my first node
# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:02:A5:4C:AF:8E
inet addr:10.77.225.21 Bcast:10.77.231.255 Mask:255.255.248.0
inet6 addr: fe80::202:a5ff:fe4c:af8e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5714248 errors:0 dropped:0 overruns:0 frame:0
TX packets:19796 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1533278899 (1.4 GiB) TX bytes:4275200 (4.0 MiB)
Base address:0x5000 Memory:f7fe0000-f8000000

eth0:0    Link encap:Ethernet  HWaddr 00:02:A5:4C:AF:8E
inet addr:10.77.225.20 Bcast:10.77.231.255 Mask:255.255.248.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Base address:0x5000 Memory:f7fe0000-f8000000

eth1      Link encap:Ethernet  HWaddr 00:02:A5:4C:AF:8F
inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::202:a5ff:fe4c:af8f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:145979 errors:0 dropped:0 overruns:0 frame:0
TX packets:103753 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:38966724 (37.1 MiB) TX bytes:27640765 (26.3 MiB)
Base address:0x5040 Memory:f7f60000-f7f80000

Now try accessing the floating IP from a browser to check whether Squid is serving requests properly.
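
A quick command-line check from any client (assuming curl is installed) is to request the floating IP and look for Squid's headers in the response:

#curl -I http://10.77.225.20/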

SVN_APACHE Repository Configuration

1.)Setting Up A Subversion Repository Using Apache, With Auto Updatable Working Copy:

What is Subversion?
Subversion is a free/open-source version control system. That is, Subversion manages files and directories over time. A tree of files is placed into a central repository. The repository is much like an ordinary file server, except that it remembers every change ever made to your files and directories. This allows you to recover older versions of your data, or examine the history of how your data changed. In this regard, many people think of a version control system as a sort of “time machine”.

SVN has a few methods to serve its users. Below are some examples:
1. svn+ssh
2. SVN + Apache
3. svnserve
In this case we are using the Apache method.
Apache should run as a normal user, not as nobody.
I won't cover how to install Apache in this how-to.


Install subversion using yum
#yum install subversion
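
On a RHEL/CentOS-style system you will most likely also need the mod_dav_svn package, which provides the Apache module used to serve repositories over HTTP (an assumption about this setup; skip it if the module is already installed):

#yum install mod_dav_svn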

Apache Configuration:

Edit the default configuration file, or whatever configuration file Apache actually runs with.
I am going to assume the configuration file is /etc/httpd/conf/httpd.conf.

[root@test ~]# vi /etc/httpd/conf/httpd.conf


Locate the lines that read something like
                         User nobody
                         Group #-1
and change them to
                         User apache
                         Group apache

Creating a repository:

Suppose I want to create a repository at /usr/local/subversion/repository using the fsfs backend; execute the following commands:

mkdir -v /usr/local/subversion/

svnadmin create --fs-type fsfs /usr/local/subversion/repository

That should create a subversion repository under /usr/local/subversion/repository.

ls /usr/local/subversion/repository
  
conf/  dav/  db/  format  hooks/  locks/  README.txt 

Setting up httpd.conf to serve the created repository:

Add the following lines to the end of the httpd.conf or the appropriate apache configuration file.


    ServerName 192.168.1.13

<Location /subversion>
  DAV svn
  SVNPath /usr/local/subversion/repository/
  AuthType Basic
  AuthName "Subversion repository"
  AuthUserFile /usr/local/subversion/repository/conf/svn-auth-file
  Require valid-user
</Location>
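
After saving the configuration, restart Apache so the new Location block takes effect (assuming the standard RHEL-style init script):

service httpd restart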


Adding SVN users:

Since we are serving SVN through Apache with basic authentication, we need to create a password file with the htpasswd binary provided by a standard Apache installation.

htpasswd -cmd /usr/local/subversion/repository/conf/svn-auth-file {user-name}

-c option creates a new htpasswd file.
-m encrypts the password with an MD5 algorithm.
-d encrypts the password with a CRYPT algorithm.
Where {user-name} stands for an actual user name that will be used for authentication.

Warning: Do not use the -c option once the first user has been added. Doing so recreates the file and removes all existing users.

htpasswd -md /usr/local/subversion/repository/conf/svn-auth-file {user-name}

Setting up the initial repository layout:

A repository mostly contains 3 standard folders.
branches
tags
trunk
To create those standard folders in the repository, create a temporary layout folder anywhere you want (/tmp is a good choice). Only branches and tags need to be created here; trunk will be created by the move in the next step.

mkdir -pv /tmp/subversion-layout/{branches,tags}

Once the layout folders are in place, move your project into the trunk folder (here the existing htdocs directory becomes trunk).

mv -v /usr/local/apache2/htdocs /tmp/subversion-layout/trunk

Then make an initial import of the temporarily created directory.

svn import /tmp/subversion-layout/ http://127.0.0.1/subversion/


This will set you up with a default repository layout and create the first revision.
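
You can verify the import by listing the repository over HTTP; it should show branches/, tags/ and trunk/ (you will be prompted for the user created with htpasswd above):

svn list http://127.0.0.1/subversion/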


Setting up a working copy:

Now we need to create a working copy of all the files in the repository under /usr/local/apache2/htdocs, so that whenever a developer updates the PHP code, the changes take effect in the running environment. Setting up a working copy alone will not accomplish this; we also need hook scripts that keep the working copy updated.
Thus, whenever a developer commits to the repository, the hook script runs automatically and updates the working copy. Make sure that an htdocs folder does not already exist under /usr/local/apache2/; if it does, you can rename it to htdocs_old.

To setup a working copy, do the following.

cd /usr/local/apache2/

 svn checkout http://127.0.0.1/subversion/trunk/ htdocs
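
Because the post-commit hook set up below will run as the apache user (commits arrive through Apache), the working copy itself should be writable by that user; an assumption about this layout, but typically:

chown -R apache.apache /usr/local/apache2/htdocs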

Setting up the hook scripts:

A hook is a program triggered by some repository event, such as the creation of a new revision or the modification of an unversioned property. Each hook is handed enough information to tell what that event is, what target(s) it's operating on, and the username of the person who triggered the event. Depending on the hook's output or return status, the hook program may continue the action, stop it, or suspend it in some way. The hooks subdirectory is, by default, filled with templates for various repository hooks.

post-commit.tmpl          post-unlock.tmpl          pre-revprop-change.tmpl
post-lock.tmpl            pre-commit.tmpl           pre-unlock.tmpl
post-revprop-change.tmpl  pre-lock.tmpl             start-commit.tmpl

For now, I will only discuss the post-commit hook script, since that is what we need in our case. Copy the post-commit.tmpl file to post-commit in the same hooks directory, and give post-commit execute permission.


cp -v /usr/local/subversion/repository/hooks/post-commit.tmpl /usr/local/subversion/repository/hooks/post-commit

 chmod +x /usr/local/subversion/repository/hooks/post-commit

Now edit the post-commit script, comment out the two existing lines at the bottom, and add the following line to it.

/usr/bin/svn update /usr/local/apache2/htdocs/ >> /usr/local/subversion/repository/logs/post-commit.log
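
After those edits, the whole hook can be as small as the sketch below (REPOS and REV are the arguments Subversion passes to every post-commit hook; they are unused here and kept only for reference):

#!/bin/sh
# post-commit -- update the live working copy after every commit
REPOS="$1"   # repository path passed by Subversion (unused here)
REV="$2"     # revision number just committed (unused here)

/usr/bin/svn update /usr/local/apache2/htdocs/ >> /usr/local/subversion/repository/logs/post-commit.log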
 
After doing that, create a new logs folder under /usr/local/subversion/repository/ so that we can enable logging, and create an empty post-commit.log file.

mkdir -v /usr/local/subversion/repository/logs/
touch /usr/local/subversion/repository/logs/post-commit.log

Once again, we need to make sure the repository folder has the proper ownership; it is advised to set the ownership of /usr/local/subversion/repository/ to the apache user.

chown -Rv apache.apache /usr/local/subversion/repository/

If all goes well, that should be it.
You now have a working Subversion repository server, ready for further imports, as soon as you start the Apache server.


Adding and Deleting Files in SVN Repositories
1.) For adding:
     > Copy the file into the working directory, then run svn add and svn commit (see the sketch below).
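
A typical add/delete cycle from the working copy set up earlier might look like this (the file names are purely illustrative):

cd /usr/local/apache2/htdocs
cp /path/to/newfile.php .
svn add newfile.php
svn commit -m "Add newfile.php"

svn delete oldfile.php
svn commit -m "Remove oldfile.php"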


Tuesday, July 6, 2010

PXE(Preboot eXecution Environment) Installation and Configuration

Red Hat Enterprise Linux supports network installation using the NFS, FTP, or HTTP protocols. A network installation can be started from a boot CD-ROM, Red Hat Enterprise Linux CD #1, or a bootable flash memory drive. Alternatively, the Preboot Execution Environment (PXE) can be used to install RHEL, provided the system to be installed has a network interface card (NIC) with PXE support.

How PXE works:

The client's PXE-capable NIC sends out a broadcast request for DHCP information, and the DHCP server provides the client with an IP address, name server information, the hostname or IP address of the tftp server, and the location of the installation files on the tftp server.

Basic steps required for preparation of PXE server

  1. Configure the network server (NFS, FTP, or HTTP) to export the installation tree.
  2. Configure the files on the tftp server necessary for PXE booting.
  3. Configure DHCP.

Note:
  • We can configure our NFS, FTP, or HTTP installation server as a PXE installation server.
  • Here we will use an FTP server for our PXE installation server.

configuring vsftpd for PXE

Setting up vsftpd for basic use requires just two steps: installing the vsftpd rpm and starting the vsftpd service. We also need to chkconfig vsftpd on so that vsftpd starts automatically when the system reboots.

Step 1:
Install vsftpd, if it is not already installed.
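
If it is missing and a yum repository is configured (an assumption about this setup), it can be installed with:

yum install vsftpd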

Step 2:
Starting vsftpd service as:

service vsftpd start
chkconfig --level 345 vsftpd on
Our next step is to copy all the CDs onto /var/ftp/pub/.

PXE Installation - Copying CDs to /var/ftp/pub

The /var/ftp/pub directory is created automatically when we install the vsftpd rpm. So now we need to copy all the CDs into /var/ftp/pub/rhel5.3.

There are two scenarios for copying the files into /var/ftp/pub/rhel5.3:
1) I have ISO images of the CDs.
2) The X server is not running and the CD fails to automount.

Here I will explain how to work in CLI mode in both conditions.

I have iso images of the CDs

Step 1:
Move to the /media/ directory and create the directories iso-1, iso-2, iso-3, iso-4, and iso-5:

cd /media
mkdir iso-1 iso-2 iso-3 iso-4 iso-5
Step 2:
All the ISO images are present on one DVD. If your DVD fails to mount automatically, you first need to mount it. I assume here that the DVD does not auto-mount, so first create a directory named dvd in /media/ and move into it:
mkdir /media/dvd
mount /dev/hdc /media/dvd
cd /media/dvd

Note:
If you are not sure whether your device is hdc, type mount /dev/h and press Tab; the completion will give you a clue.
The /media/dvd directory now contains all five ISOs. Next, mount each ISO image into its respective directory:
mount -o loop rhel-server-5.3-i386-disc1.iso /media/iso-1/
mount -o loop rhel-server-5.3-i386-disc2.iso /media/iso-2/
mount -o loop rhel-server-5.3-i386-disc3.iso /media/iso-3/
mount -o loop rhel-server-5.3-i386-disc4.iso /media/iso-4/
mount -o loop rhel-server-5.3-i386-disc5.iso /media/iso-5/
Here rhel-server-5.3-i386-disc<n>.iso represents the names of my ISO files.

At this point iso-1, iso-2, iso- 3, iso-4, iso-5 will contain data of respective CDs.
Step 3:
Copy the contents of all the RHEL-5 CDs:
cp -rvf /media/iso-1/* /var/ftp/pub/rhel5.3/
cp -rvf /media/iso-2/* /var/ftp/pub/rhel5.3/
cp -rvf /media/iso-3/* /var/ftp/pub/rhel5.3/
cp -rvf /media/iso-4/* /var/ftp/pub/rhel5.3/
cp -rvf /media/iso-5/* /var/ftp/pub/rhel5.3/
Step 4:
You also need to copy the following two hidden files from iso-1:
cp -rvf /media/iso-1/.diskinfo /var/ftp/pub/rhel5.3/
cp -rvf /media/iso-1/.treeinfo /var/ftp/pub/rhel5.3/

At this stage, all the CDs have been dumped into the /var/ftp/pub/rhel5.3/ directory.
 

X server is not running and cd fails to automount

Step 1:
Create a directory named rom in /media, where you will mount all the CDs one by one, and make sure the target directory exists:

mkdir /media/rom
mkdir -p /var/ftp/pub/rhel5.3

Step 2:

Insert CD1 into the CD-ROM drive and execute the following commands:
mount /dev/hdc /media/rom
cp -rvf /media/rom/* /var/ftp/pub/rhel5.3/
eject
Repeat step 2 for the remaining 4 CDs.

At this stage, all the CDs have been dumped into the /var/ftp/pub/rhel5.3/ directory.

Our next step is to install and configure our tftp-server.

tftp Server Configuration:
  Step 1:
You need to install tftp-server, as it is not installed when you install your Linux box with the "Customize Later" option selected. On RHEL5 it is present on CD-2 or CD-3, depending on the version of RHEL5 you are using:

tftp-server-0.42-3.1.i386.rpm
 
NOTE:
tftp-server depends on xinetd; if xinetd is not installed, you need to install the xinetd rpm (which is present on CD-2) before you can successfully install tftp-server.
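
If the machine has a configured yum repository, both can be pulled in together instead of installing the rpms from the CDs (again, an assumption about this setup):

yum install xinetd tftp-server
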
Step 2:
tftp is an xinetd-based service and needs to be enabled manually:
chkconfig --level 345 tftp on
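
If the change does not seem to take effect immediately, reloading xinetd (assuming it is already running) makes it re-read its service configuration:

service xinetd reload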

Our next step is to install and configure dhcp Server.

DHCP Server Configuration:

DHCP (Dynamic Host Configuration Protocol) is a network protocol for automatically assigning TCP/IP information to client machines, including the IP address, gateway, and DNS server information.

You need to install the dhcp rpm, as it is not installed when you install your system with the "Customize Later" option selected.

Configuration File:

By default the dhcpd.conf file is not present in /etc/. A sample file is provided at /usr/share/doc/dhcp-<version>/dhcpd.conf.sample, which can be copied to /etc/ for use.

/etc/dhcpd.conf                ;main configuration file

/var/lib/dhcp/dhcpd.leases     ;store the client lease database
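
Copying the sample into place might look like this (the versioned directory name differs between releases, hence the wildcard):

cp /usr/share/doc/dhcp-*/dhcpd.conf.sample /etc/dhcpd.conf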
 

Server Side configuration:

Your dhcpd.conf should look something like the file below:

[root@server2 pxelinux.cfg]# cat /etc/dhcpd.conf
ddns-update-style interim;
ignore client-updates;

allow booting;
allow bootp;
class "pxeclients" {
      match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
      next-server 192.168.1.12;
      filename "linux-install/pxelinux.0";
}

subnet 192.168.1.0 netmask 255.255.255.0 {

# --- default gateway
      option routers            192.168.1.1;
      option subnet-mask        255.255.255.0;

     range dynamic-bootp 192.168.1.10 192.168.1.200; #assign IP address from the range
      default-lease-time 21600;                        #seconds till expire
      max-lease-time 43200;                            #maximum lease time
}

where 192.168.1.12 is my PXE server's IP address.

# service dhcpd restart

#chkconfig dhcpd on

The next step is to set up the PXE boot configuration so that the tftp server can serve client requests. This can be done either through the GUI tool system-config-netboot or on the CLI using the pxeos command.

  • system-config-netboot is available if system-config-netboot-0.1.45.1-1.el5.noarch.rpm is installed.
  • pxeos and pxeboot commands are available if system-config-netboot-cmd-0.1.45.1-1.el5.noarch.rpm is installed.
  • Both system-config-netboot-0.1.45.1-1.el5.noarch.rpm and system-config-netboot-cmd-0.1.45.1-1.el5.noarch.rpm are present on CD-2.
  • In this post I have mentioned how to use cmd tool to configure PXE boot, so, installation of system-config-netboot-0.1.45.1-1.el5.noarch.rpm is not required. Installation of system-config-netboot-cmd-0.1.45.1-1.el5.noarch.rpm will serve our purpose.

PXE Boot configuration

We need to execute the following command to create the PXE configuration:

# pxeos -a -i "Redhat 5.3" -p FTP -D 0 -s 192.168.1.12 -L /pub/rhel5.3/ redhat-el5.3

Where:

-a
add a new Operating System description

-i 
set short description associated with the Operating System.

-p
specify protocol used to access the Operating System

-D <1|0>
specify whether the configuration is diskless or not, zero specifies that it is not a diskless configuration

-s
specify the machine containing the Operating System

-L
specify the directory on the server machine containing the Operating System.

redhat-el5.3 (the final argument)
the unique Operating System identifier, which is used as the directory name under the /tftpboot/linux-install/ directory.

At this stage, you can boot your PXE-enabled system to install Red Hat 5.3. But we would like to add two more operating systems, CentOS 5.3 and Fedora 11, so that the user can select which one to install. For this we follow the same procedure as for Red Hat 5.3: create two directories, centos5.3 and fedora11, in /var/ftp/pub and copy the respective OS into its directory. Then execute the following commands, which add these two OSes to the list:

pxeos -a -i "centOS 5.3" -p FTP -D 0 -s 192.168.1.100 -L /pub/centos5.3/ centos5.3
pxeos -a -i "Fedora 11" -p FTP -D 0 -s 192.168.1.100 -L /pub/fedora11/ fedora11

At this stage our PXE server is ready to serve any PXE clients, and we can now install Linux on any machine that supports PXE booting. You need to connect your PXE server and client through a network cable.

 
 
The available operating systems will be displayed in the PXE boot menu as configured above. You will have to press 1, 2, or 3 depending on which one you want to select.
Note :-
The pxeos -l command can be used on the PXE server to see the list of OSes configured for PXE installation.
pxeos -d can be used to delete an OS entry from the list.
