Kmaiti


Tuesday, 17 May 2011

Configuring iSCSI initiator with multipathing

Posted on 09:56 by Unknown
For configurations where the two paths to the iSCSI target travel over different networks or subnets:

1. Configure the first path through one of your network interfaces (eth0, for example):

# service iscsid start
# chkconfig iscsid on
# iscsiadm -m discovery -t st -p <target_IP> -P 1
# iscsiadm -m discovery -t st -p <target_IP> -l

2. After logging into the target you should see new SCSI block devices; verify this by executing fdisk -l:

# partprobe
# fdisk -l
3. Configure the second path through eth1, using the portal address on the second network:

# iscsiadm -m discovery -t st -p <target_IP> -P 1
# iscsiadm -m discovery -t st -p <target_IP> -l
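
After logging in on both paths, you can confirm that two sessions exist, one per path (a quick check):

# iscsiadm -m session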

For configurations where both paths to the iSCSI target travel over the same network and subnet:
1. Configure the iSCSI interfaces by creating iSCSI iface bindings for all interfaces and binding by network device name (eth0, alias, VLAN name, etc.) or MAC address:

# service iscsid start
# chkconfig iscsid on
# iscsiadm -m iface -I iscsi-eth0 -o new
# iscsiadm -m iface -I iscsi-eth0 -o update -n iface.net_ifacename -v eth0
# iscsiadm -m iface -I iscsi-eth1 -o new
# iscsiadm -m iface -I iscsi-eth1 -o update -n iface.net_ifacename -v eth1
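
You can verify that the bindings were created as expected by listing the ifaces (illustrative output; the exact fields vary by open-iscsi version):

# iscsiadm -m iface
iscsi-eth0 tcp,<empty>,<empty>,eth0,<empty>
iscsi-eth1 tcp,<empty>,<empty>,eth1,<empty>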

2. Next, verify your targets are available and log in:

# iscsiadm -m discovery -t st -p <target_IP> -I iscsi-eth0 -I iscsi-eth1 -P 1
# iscsiadm -m discovery -t st -p <target_IP> -I iscsi-eth0 -I iscsi-eth1 -l

3. After logging into the target you should see new SCSI block devices; verify this by executing fdisk -l:

# partprobe
# fdisk -l


Each LUN has a different World Wide Identifier (WWID). Every SCSI block device with the same WWID is a different path to the same LUN. To verify the WWIDs, run the following for each device (replace sdX with the device name):

# scsi_id -gus /block/sdX
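
For example, if sdf and sdh are two paths to the same LUN, both should return the same identifier (illustrative output, matching the multipath example further down):

# scsi_id -gus /block/sdf
1IET_00010001
# scsi_id -gus /block/sdh
1IET_00010001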

Configuring Multipath:

After configuring the iSCSI layer, Multipath must be configured via /etc/multipath.conf. Please note that different SAN vendors will have their own recommendations for configuring the multipath.conf file; their recommendations should be used if they are provided. For more information on the specific settings for your storage, please contact your hardware vendor.

1. Make the following changes to /etc/multipath.conf to set up a simple Multipath configuration with default settings:



* Un-comment the "defaults" stanza by removing the hash symbols on the following lines:

defaults {
        user_friendly_names yes
}

* Comment-out the "blacklist" stanza by putting hash symbols on the following lines:

# blacklist {
# devnode "*"
# }


For more information on device mapper multipath please refer to: Using Device-Mapper Multipath

2. Save the changes to multipath.conf. Start multipath and ensure that it is configured to start at boot time:

# service multipathd start
# chkconfig multipathd on

3. After starting the multipath daemon, the multipath command can be used to view your multipath devices. Example output is as follows:

mpath0 (1IET_00010001) dm-4 IET,VIRTUAL-DISK
[size=10G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 6:0:0:1 sdf 8:80 [active][ready]
\_ 7:0:0:1 sdh 8:112 [active][ready]

4. Using the mpath pseudo-device for the multipathed storage, create a partition and inform the kernel of the change:

# fdisk /dev/mapper/mpath0
# partprobe

5. Use the kpartx command to inform multipath of the new partition:

# kpartx -a /dev/mapper/mpath0

6. Device mapper will then create a new mpath pseudo-device for the partition. Example:

/dev/mapper/mpath0p1

7. Create a file system on the multipathed storage and mount it:

# mkfs.ext3 /dev/mapper/mpath0p1
# mount /dev/mapper/mpath0p1 /mnt
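
To make the mount persistent across reboots, it can be added to /etc/fstab with the _netdev option, so mounting is deferred until networking and the iSCSI service are up (a sketch; the /mnt mount point is just the example used above):

/dev/mapper/mpath0p1   /mnt   ext3   _netdev   0 0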

8. With the storage mounted, begin failover testing. The following is an example of failover testing via a cable pull on eth1:

* Use the multipath command to verify that all paths are up. Example output:

mpath0 (1IET_00010001) dm-4 IET,VIRTUAL-DISK
[size=10G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 6:0:0:1 sdf 8:80 [active][ready]
\_ 7:0:0:1 sdh 8:112 [active][ready]

* Pull the cable on eth1. Verify the path is failed with multipath -ll. Example output:

mpath0 (1IET_00010001) dm-4 IET,VIRTUAL-DISK
[size=10G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 6:0:0:1 sdf 8:80 [active][ready]
\_ 7:0:0:1 sdh 8:112 [faulty][failed]


9. The final step in the process is tuning failover timing.

o With the default timeouts in /etc/iscsi/iscsid.conf, multipath failover takes about 1.5 minutes.
o Some users of multipath and iSCSI want lower timeouts so that I/O doesn't remain queued for long periods of time; see the example after this list.
o For more information on lowering multipathed iSCSI failover time refer to: How can I improve the failover time of a faulty path when using device-mapper-multipath over iSCSI?
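
As a hedged example, the main knob is the session replacement timeout in /etc/iscsi/iscsid.conf; lowering it hands a failed path to multipath sooner (the value below is illustrative, not a recommendation):

# /etc/iscsi/iscsid.conf
# Seconds to wait for a broken path to recover before failing I/O up to multipath.
# The default is 120; multipath setups often lower it considerably.
node.session.timeo.replacement_timeout = 15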

What is Fibre Channel?

Posted on 09:42 by Unknown
What is Fibre Channel?

Fibre Channel (FC) is a transport protocol commonly used in storage networks. A common misunderstanding is that FC and fiber optic infrastructure such as host bus adaptor cards and fiber optic cables are one and the same. This is incorrect. FC is a protocol, like TCP/IP, and can be used over fiber optic cables or over copper. FC is commonly used to transport SCSI commands over fiber optic or copper cables in a Storage Area Network (SAN).

A SAN is simply that: a storage network. A traditional LAN connects computers and devices via a switched infrastructure (typically over copper cables) and employs TCP/IP as a transport protocol for passing data between devices and services. In a SAN computers and storage devices are connected via copper or fiber optic cables and Fibre Channel is employed as a transport protocol for passing data between storage devices and computers.

Fibre Channel SAN Topologies :

The way that devices are connected to each other in a SAN is referred to as its topology. There are 3 topologies available in Fibre Channel:
* Point to Point
* Arbitrated Loop
* Switched Fabric

P2P: the simplest topology; for example, a workstation with an HBA in it is hooked by a fiber optic cable directly into a tape array.
Arbitrated Loop connects devices in a shared loop (similar in spirit to token ring) and is rarely used in new deployments.
Switched Fabric is the most complex and most common topology for Fibre Channel storage networks.

Common Fibre Channel, SCSI, and SAN Terms :

HBA : Host Bus Adaptor. An HBA can be likened to a NIC for a SAN. An HBA is a card with ports, typically fiber optic ports, that allows the system it is housed in to connect to the SAN infrastructure. HBAs typically have multiple ports (2 or 4) to allow for multiple paths to the storage. Common HBA speeds are 2 gigabit, 4 gigabit, and 8 gigabit, and the HBA speed must match the speed of the switch and fabric. Common HBA vendors are QLogic and Emulex.
Each port on an HBA appears to the system as its own SCSI host. A 2-port card, for example, will result in the system seeing 2 SCSI hosts. You can view the information for each port under /sys/class/fc_transport/

FCoE : Fibre Channel over Ethernet. As Fibre Channel is a protocol, not a class of hardware, it can pass over any suitable medium. FCoE uses copper Ethernet cables to transport FC data rather than fiber optic cables. FCoE cards are HBAs with Ethernet ports as opposed to fiber optic ports. Just as with fiber optic HBAs, FCoE ports appear to the system they are housed in as SCSI hosts. Though FCoE utilizes standard Ethernet cabling, FCoE switches and cards are still required, as standard Ethernet LAN equipment does not support SAN-specific functionality such as zoning, masking, logins, etc.

WWN : World Wide Name. The umbrella term for the identifiers below.
WWID is a device identifier. WWIDs are the preferred device identifiers within the kernel. You can view the device identifier of a SCSI device by running "scsi_id -gus /block/sdX" or "scsi_id -gus -p 83 /block/sdX" to retrieve the WWID, if one exists.
WWPN is a port identifier. The WWPN for a port can be viewed at /sys/class/fc_host/hostN/port_name
WWNN : World Wide Node Name. The WWNN is a unique identifier, like a MAC address, for a "node" (read: card) on a SAN. The WWNN for an HBA can be viewed at /sys/class/fc_host/hostN/node_name
JBOD : Just a Bunch Of Disks. A JBOD is just that, a bunch of disks. A JBOD is an array of disks with no intelligence. While full featured arrays have the intelligence to control RAID levels, striping, redundancy, replication, and carving out logical storage units a JBOD may have few or none of those features.
SP : Storage Processor. An SP can be thought of as the "brain" of an array. It is the control unit that houses the intelligence that allows modern arrays to do advanced operations like LUN allocation, striping, redundancy, etc.
Target : A target is a SCSI concept, not a fibre channel concept. A target is a device that allows for incoming connections and storage access from initiators. The target side would be the array side in a SAN.

Initiator : An initiator is a SCSI concept, not a fibre channel concept. An initiator is a device that connects to a target to access storage on that target. The initiator side would be the HBA side in a SAN.

LUN : Logical Unit. A LUN is a SCSI concept, not a fibre channel concept. A LUN is analogous to a Logical Volume in LVM. The array would be the Volume Group and the LUNs would be the Logical Volumes over that. A LUN is a logical storage device carved out of a larger storage pool of aggregated physical devices.

Path : A path is a single I/O connection back to a LUN or storage pool. A path typically maps to a physical connection; however, zoning and mapping must be taken into account too. A path defines the route between the system and a device and consists of 4 numbers, H:B:T:L: host, bus, target, and LUN.
SAN Issue Troubleshooting Tips :

# multipath -ll
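
A few more quick checks (a minimal sketch; the sysfs paths assume the fc_host layout described above):

# cat /sys/class/fc_host/host*/port_state    # Online / Linkdown per HBA port
# dmesg | grep -i scsi                       # recent SCSI and FC transport errors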

What is a cluster?

Posted on 09:37 by Unknown
What is a cluster?

Ans : A cluster is two or more interconnected computers that together provide higher availability, higher scalability, or both. The advantage of clustering computers for high availability is that if one of these computers fails, another computer in the cluster can assume the workload of the failed computer, and users of the system see no interruption of access.

Clustering software : Red Hat Cluster suite
Platform : Red Hat Enterprise Linux 4,5,6
Storage : SCSI, SAN, NAS
Storage Protocols : iSCSI (pronounced "eye-scuzzy") / FCP
iSCSI => protocol to connect a server to storage over an IP network. The iSCSI initiator utility runs on the source/server; the iSCSI target utility runs on the storage/target machine.
FCP => Fibre Channel protocol to connect a server to storage over an optical channel. This requires HBA (host bus adapter, analogous to a NIC) cards. A driver accesses the HBA, and the HBA communicates with the SAN switch/storage controller (drivers such as qla2xxx from QLogic, lpfc from Emulex, etc.).

Concepts : iSCSI is a protocol, whereas SCSI refers to the storage devices themselves. An iSCSI setup consists of an initiator (software and/or hardware) and a target. The initiator sends packets to the target. The target resides on storage such as an EqualLogic array, NetApp filer, EMC NS-series, or HDS HNAS appliance. The storage attaches LUNs to the drives; a LUN is a logical unit on the storage that is treated as a device or drive.

Storage System Connection Types :

a) active/active : all paths are active all the time
b) active-passive : one path is active and the other is a backup
c) virtual port storage system.

Multipathing and Path Failover : When transferring data between the host server and storage, the SAN uses a multipathing technique; the package "device-mapper-multipath" has to be installed on each server/node. The daemon "multipathd" periodically checks the connection paths to the storage. Multipathing allows you to have more than one physical path from the server host to a LUN (treated as a device) on a storage system. If a path or any component along it (HBA or NIC, cable, switch or switch port, or storage processor) fails, the server selects another of the available paths. The process of detecting a failed path and switching to another is called path failover.

Installation of Red Hat Cluster Suite on RHEL 5 :

1. Register the system to RHN (needs a subscription with Red Hat). Skip if the system is already registered :

---
#rhn_register
---

2. Use the following command :

----
#yum groupinstall clustering cluster-storage
----

To install the packages individually, do :

----
For Standard kernel :

#yum install cman cman-kernel dlm dlm-kernel magma magma-plugins system-config-cluster rgmanager ccs fence modcluster --force

For SMP kernel :

#yum install cman cman-kernel-smp dlm dlm-kernel-smp magma magma-plugins system-config-cluster rgmanager ccs fence modcluster --force
----

3. These steps should be followed on each node.

Configuring Red Hat Cluster Suite :

Configuration can be achieved in three ways :
a) Using the web interface (Conga tools), i.e. ricci and luci.
Conga — This is a comprehensive user interface for installing, configuring, and managing Red Hat clusters, computers, and storage attached to clusters and computers.

1. #yum install luci // Do this on one machine (say A) to manage the nodes. This machine may be outside the clustered nodes.

2. Now initialize luci :

#luci_admin init

3. Install ricci on each node :
#yum install ricci
4. Then access A (where luci is installed) at http://IP_of_A:port. Note that you'll get the URL when you execute #luci_admin init in the step above.

b) Using "system-config-cluster" GUI interface. This is a user interface for configuring and managing a Red Hat cluster. Just use this command. Sometime it may not work if the server/node doesn't have GUI package like gnome or KDE.
c) Using "Command line tools" — This is a set of command line tools for configuring and managing a Red Hat cluster.

4. The different clustered services, ordered as they must be started manually: ccsd, cman, fence, rgmanager.
If you use LVM with GFS : ccsd, cman, fence, clvmd, gfs, rgmanager.

5. Configuration files (the same on each node) : /etc/cluster/cluster.conf, /etc/sysconfig/cluster. When you configure via the web interface, the file is automatically copied to each node. Make sure you have enabled all the required ports in the firewall, or disabled the firewall, on all nodes as well as on the luci node (see the example below).
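
As a hedged example, the ports commonly opened for RHEL 5 clustering include 11111/tcp (ricci), 8084/tcp (luci) and 5404-5405/udp (cman/openais); check the Red Hat documentation for the complete list for your release:

# iptables -I INPUT -p tcp --dport 11111 -j ACCEPT     # ricci
# iptables -I INPUT -p tcp --dport 8084 -j ACCEPT      # luci web interface
# iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT # cman/openais
# service iptables save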

6. Now log into the luci web interface, create a new cluster and give it a name. Then add each node to this cluster one by one. In this cluster add one failover service such as httpd. (Make sure you have installed httpd on each node and that all the configuration files are the same.) I shall describe it later and show the result of a real failover test.

Shared disk configuration (a minimum disk size of 10 MB is enough) : Why is it needed?

AA) The shared partitions are used to hold cluster state information, including cluster lock states, service states, and configuration information. The shared disk may be on any node or on a storage array (connected through an HBA or a RAID controller; RAID 1, i.e. mirroring, is recommended). Two raw devices must be created on shared disk storage: the primary shared partition and the shadow shared partition, each with a minimum size of 10 MB. The amount of data in a shared partition is constant; it does not increase or decrease over time.

Periodically, each member writes the state of its services to shared storage. In addition, the shared partitions contain a version of the cluster configuration file, which ensures that each member has a common view of the cluster configuration. If the primary shared partition is corrupted, the cluster members read the information from the shadow (or backup) shared partition and simultaneously repair the primary partition. Data consistency is maintained through checksums, and any inconsistencies between the partitions are automatically corrected. If a member is unable to write to both shared partitions at start-up time, it is not allowed to join the cluster. In addition, if an active member can no longer write to both shared partitions, the member removes itself from the cluster by rebooting (and may be remotely power cycled by a healthy member).

BB) The following are shared partition requirements:
a)Both partitions must have a minimum size of 10 MB.
b) Shared partitions must be raw devices, since raw I/O bypasses the file cache. They cannot contain file systems.
c)Shared partitions can be used only for cluster state and configuration information.

CC) The following are Red Hat's recommended guidelines for configuring the shared partitions :

a)It is strongly recommended to set up a RAID subsystem for shared storage, and use RAID 1 (mirroring) to make the logical unit that contains the shared partitions highly available. Optionally, parity RAID can be used for high availability. Do not use RAID 0 (striping) alone for shared partitions.
b)Place both shared partitions on the same RAID set, or on the same disk if RAID is not employed, because both shared partitions must be available for the cluster to run.
c)Do not put the shared partitions on a disk that contains heavily-accessed service data. If possible, locate the shared partitions on disks that contain service data that is rarely accessed.

DD) Make the shared partitions and attach them to the cluster :
i) Initialise the quorum disk once, on any node :
#mkqdisk -c /dev/sdX -l myqdisk
ii) Add the quorum disk to the cluster at the backend by editing cluster.conf. (It can also be done in the web interface: log into luci, go to the cluster, click the "Quorum Partition" tab and proceed from there to configure it.) :
a)
-----
. . . . .
<cman expected_votes="5"/>
. . . . .
<quorumd interval="2" tko="5" votes="3" label="myqdisk">
<heuristic program="ping -c1 -t1 10.65.211.86" score="1" interval="2"/>
</quorumd>
. . . . .

# expected votes = (nodes' total votes + quorum disk votes)
# The health check result is written to the quorum disk every 2 secs (interval).
# If the health check fails for more than 5 tko intervals, i.e. 10 (2*5) secs, the node is rebooted by the quorum daemon.
# Each heuristic check is run every 2 secs and earns 1 score if the shell command returns 0.
-----
Note : You need to manually copy this file to each node. But if you do it in the web interface, you don't need to copy manually; it'll be done automatically.

b) Please increase the config_version by 1 and run ccs_tool update /etc/cluster/cluster.conf.
c) Verify that the quorum disk has been initialized correctly with #mkqdisk -L, and use clustat to check its availability.
d) Please note that total votes = quorum votes = 5 = 2 + 3; if the quorum disk vote were less than (node votes + 1), the cluster wouldn't have survived.
e) Typically, the heuristics should be snippets of shell code or commands which help determine a node's usefulness to the cluster or clients. Ideally, you want to add traces for all of your network paths (e.g. check links, or ping routers), and methods to detect availability of shared storage. Only one master is present at any one time in the cluster, regardless of how many partitions exist within the cluster itself. The master is elected by a simple voting scheme in which the lowest node which believes it is capable of running (i.e. scores high enough) bids for master status. If the other nodes agree, it becomes the master. This algorithm is run whenever no master is present. Here the heuristic is "ping -c1 -t1 10.65.211.86"; the IP may be the SAN IP, another node's IP, etc.
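
A minimal sketch of such a heuristic as a standalone script, assuming 10.65.211.1 is a router worth tracking (qdiskd awards the score when the program exits 0):

#!/bin/bash
# Hypothetical heuristic: pass only if the router answers one ping.
ping -c1 -t1 10.65.211.1 >/dev/null 2>&1
exit $?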


7. Configuring Cluster Daemons :
The Red Hat Cluster Manager provides the following daemons to monitor cluster operation:
cluquorumd — Quorum daemon
clusvcmgrd — Service manager daemon
clurmtabd — Synchronizes NFS mount entries in /var/lib/nfs/rmtab with a private copy on a service's mount point
clulockd — Global lock manager (the only client of this daemon is clusvcmgrd)
clumembd — Membership daemon

8. Configuring Storage : (either SAN/NAS, using multipath or NFS)
In the luci interface click on "add a system", then go to the storage tab and assign the storage to the cluster.


To start the cluster software on a member, type the following commands in this order:

1. service ccsd start
2. service lock_gulmd start or service cman start according to the type of lock manager used
3. service fenced start
4. service clvmd start
5. service gfs start, if you are using Red Hat GFS
6. service rgmanager start

To stop the cluster software on a member, type the following commands in this order:

1. service rgmanager stop
2. service gfs stop, if you are using Red Hat GFS
3. service clvmd stop
4. service fenced stop
5. service lock_gulmd stop or service cman stop according to the type of lock manager used
6. service ccsd stop

Stopping the cluster services on a member causes its services to fail over to an active member.
=================

Testing the failover domain (verifying availability) :

Pre-configuration : installed httpd on nodes 68 and 86.
Common document root : /var/www/html

Configure httpd as a failover service in the cluster (in luci) : add a failover domain > add resources > add a service, then allocate the failover domain and resource to this service.

1. First the httpd service was on 86 (the allotted resource is the floating IP ending in .67, assigned to the httpd service).

IP addresses on node 86 :

[root@vm86 ~]# ip add list|grep inet
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
inet 10.65.211.86/22 brd 10.65.211.255 scope global eth0
inet 10.65.211.67/22 scope global secondary eth0
inet6 fe80::216:3eff:fe74:8d56/64 scope link
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
inet6 fe80::200:ff:fe00:0/64 scope link
[root@vm86 ~]#

2. Crashed the 86 server, i.e. brought it down.

3. The httpd service stayed up: it relocated to 68, and the page was still accessible at http://10.65.211.67/

The IP floated to the 68 server; proof :

[root@vm68 ~]# ip add list | grep inet
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
inet 10.65.211.68/22 brd 10.65.211.255 scope global eth0
inet 10.65.211.67/22 scope global secondary eth0
inet6 fe80::216:3eff:fe74:8d44/64 scope link
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
inet6 fe80::20
================

Monday, 16 May 2011

"cluster is not quorate. refusing connection"

Posted on 18:50 by Unknown
Guys,

Environment : Red Hat Enterprise Linux 5.6, RHCS
Error : subject line
Issue : I am not sure why I got this error in the system log, since quorum was enabled and working fine on a non-firewalled machine where SELinux is also disabled. For the two-node cluster, the cluster.conf files are the same. One node connected to the cluster and the other didn't.

Resolution :

1. Make sure : chkconfig cman off; chkconfig clvmd off; chkconfig rgmanager off;
2. Make sure all cluster.conf files are the same.
3. Check with iptables temporarily turned off.
4. Start cman, clvmd and rgmanager manually, one by one.

or

I rebooted the whole node and it worked like a charm :)
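
Either way, you can confirm afterwards that the node rejoined and the cluster is quorate (a quick check):

# clustat              # member status; shows whether the cluster is quorate
# cman_tool status     # vote and quorum details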

Tuesday, 10 May 2011

How to add an FTP user from the backend in Linux?

Posted on 18:34 by Unknown
Use the following commands :

Environment : RHEL 6, vsftpd

[root@vm91 ~]# useradd -m testing -G users,ftp,wheel -s /bin/bash
[root@vm91 ~]# passwd testing
Changing password for user testing.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@vm91 ~]#

[root@vm91 ~]# ll -dr /home/*|grep testing
drwx------. 2 testing testing 4096 May 11 06:58 /home/testing
[root@vm91 ~]#

Test the settings :

You may get the following error :

[kmaiti@kmaiti ~]$ ftp IP_FTP_server
Connected to FTP_server (*****).
220 (vsFTPd 2.2.2)
Name (*****:kmaiti): testing
331 Please specify the password.
Password:
500 OOPS: cannot change directory:/home/testing

---

This is SELinux blocking vsftpd's access to home directories; check the boolean and enable it :

# getsebool ftp_home_dir
# setsebool -P ftp_home_dir on

Then retry to access the FTP server :

-----
[kmaiti@kmaiti ~]$ ftp FTP_IP
Connected to ser (ser).
220 (vsFTPd 2.2.2)
Name (ser:kmaiti): testing
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>
-----

Try :)

Thursday, 5 May 2011

How to make a persistent static route

Posted on 13:45 by Unknown
Environment : All RHEL

Steps :

1. vi /etc/sysconfig/network-scripts/route-ethX and add the following :

---
ADDRESS0=zzz.zzz.zzz.zzz
NETMASK0=yyy.yyy.yyy.yyy
GATEWAY0=xxx.xxx.xxx.xxx
---

NB: Replace the addresses here. The 0 suffix indexes the first route; use 1, 2, ... for additional routes.

2. service network restart
3. If you use a bond0 device : add the following entries in /etc/sysconfig/network-scripts/route-bond0

---
default via X.X.X.X dev bond0
10.10.10.0/24 via X.X.X.X dev bond0
---

NB: X.X.X.X is the gateway IP address
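
After restarting the network service, you can confirm the routes took effect (a quick check):

# ip route show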

try :)

How to create a network bonding device?

Posted on 13:43 by Unknown
Environment : RHEL 6

Steps :

1. vi /etc/modprobe.d/bonding.conf (RHEL 6 does not use /etc/modprobe.conf) and add : alias bond0 bonding
2. vi /etc/sysconfig/network-scripts/ifcfg-bond0 and add :

---
DEVICE=bond0
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS=""
---

NB : N -> 0,1, ...

3. Say there are two Ethernet cards, eth0 and eth1. Their config files (ifcfg-eth0 and ifcfg-eth1) will look like :

----
DEVICE=eth<0/1>
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
----

4. Make sure the "bonding" kernel module is present on the server : lsmod | grep bonding; modprobe bonding;
5. Restart the network service and bring bond0 up, e.g. : #service network restart
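
To verify the bond came up with both slaves attached, check its state in /proc (a quick check; names assume the bond0/eth0/eth1 example above):

# cat /proc/net/bonding/bond0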

Try :)