Kmaiti

Friday, 16 September 2011

How to check whether the currently running kernel is tainted (contaminated) or not?

Posted on 12:05 by Unknown
The Linux kernel maintains a "taint state" which is included in kernel error messages. The taint state indicates whether something has happened to the running kernel that affects whether a kernel error or hang can be troubleshot effectively by analysing the kernel source code. Some of the information in the taint state relates to whether the information provided by the kernel in an error message can be considered trustworthy.

The following command can be used:

# cat /proc/sys/kernel/tainted
536870912

Use the following to decipher the taint value (a small decoding sketch follows the list):

The value is non-zero if the kernel has been tainted. It is a bitmask of the following values, which can be ORed together:

1 - A module with a non-GPL license has been loaded; this includes modules with no license. Set by modutils >= 2.4.9 and module-init-tools.
2 - A module was force loaded by insmod -f. Set by modutils >= 2.4.9 and module-init-tools.
4 - Unsafe SMP processors: SMP with CPUs not designed for SMP.
8 - A module was forcibly unloaded from the system by rmmod -f.
16 - A hardware machine check error occurred on the system.
32 - A bad page was discovered on the system.
64 - The user has asked that the system be marked "tainted". This could be because they are running software that directly modifies the hardware, or for other reasons.
128 - The system has died.
256 - The ACPI DSDT has been overridden with one supplied by the user instead of using the one provided by the hardware.
512 - A kernel warning has occurred.
1024 - A module from drivers/staging was loaded.
268435456 - Unsupported hardware
536870912 - Technology Preview code was loaded
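
As a small decoding sketch (assuming bash 4 or later for associative arrays), the bitmask can be broken down into its individual reasons like this:

-----
#!/bin/bash
# Sketch: decode the bitmask in /proc/sys/kernel/tainted into individual taint reasons.
# Requires bash 4+ for associative arrays.
TAINT=$(cat /proc/sys/kernel/tainted)
declare -A REASONS=(
    [1]="Non-GPL module loaded"
    [2]="Module force loaded (insmod -f)"
    [4]="Unsafe SMP processors"
    [8]="Module force unloaded (rmmod -f)"
    [16]="Machine check error"
    [32]="Bad page discovered"
    [64]="User requested taint"
    [128]="System has died"
    [256]="ACPI DSDT overridden"
    [512]="Kernel warning occurred"
    [1024]="Module from drivers/staging loaded"
    [268435456]="Unsupported hardware"
    [536870912]="Technology Preview code loaded"
)
echo "Taint value: $TAINT"
for BIT in $(echo "${!REASONS[@]}" | tr ' ' '\n' | sort -n); do
    if (( TAINT & BIT )); then
        echo "  $BIT - ${REASONS[$BIT]}"
    fi
done
-----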

The taint status of the kernel not only indicates whether or not the kernel has been tainted but also indicates what type(s) of event caused the kernel to be marked as tainted. This information is encoded through single-character flags in the string following "Tainted:" in a kernel error message.

* P: Proprietary module has been loaded, i.e. a module that is not licensed under the GNU General Public License (GPL) or a compatible license. This may indicate that source code for this module is not available to the Linux kernel developers.
* G: The opposite of P: the kernel has been tainted (for a reason indicated by a different flag), but all modules loaded into it were licensed under the GPL or a license compatible with the GPL.
* F: Module has been forcibly loaded using the force option "-f" of insmod or modprobe, which caused a sanity check of the versioning information from the module (if present) to be skipped.
* S: SMP with CPUs not designed for SMP. The Linux kernel is running with Symmetric MultiProcessor support (SMP), but the CPUs in the system are not designed or certified for SMP use.
* R: User forced a module unload. A module which was in use or was not designed to be removed has been forcefully removed from the running kernel using the force option "-f" of rmmod.
* M: System experienced a machine check exception. A Machine Check Exception (MCE) has been raised while the kernel was running. MCEs are triggered by the hardware to indicate a hardware related problem, for example the CPU's temperature exceeding a threshold or a memory bank signaling an uncorrectable error.
* B: System has hit bad_page, indicating a corruption of the virtual memory subsystem, possibly caused by malfunctioning RAM or cache memory.
* U: Userspace-defined naughtiness.
* D: Kernel has oopsed before.
* A: ACPI table overridden.
* W: Taint on warning.
* C: Modules from drivers/staging are loaded.
* I: Working around severe firmware bug.

The taint flags above are implemented in the standard Linux kernel and indicate that the information provided in kernel error messages is not necessarily to be trusted. Additionally, the following flags are used by the RHEL kernel:

* H: Hardware is unsupported.
* T: Technology Preview code is loaded.

How to find out which process is using swap space?

Posted on 03:31 by Unknown
If we would like to sort the running or queued processes by swap usage, we can use top:

#top

Then press capital "o" (i.e. "O"), followed by "p", and press Enter. The processes should now be sorted by their swap usage.

We can also use a bash script to read per-process swap usage from the /proc file system:

-----
#!/bin/bash
# Get current swap usage for all running processes (values from smaps are in kB)
SUM=0
OVERALL=0
for DIR in `find /proc/ -maxdepth 1 -type d | egrep "^/proc/[0-9]"` ; do
    PID=`echo $DIR | cut -d / -f 3`
    PROGNAME=`ps -p $PID -o comm --no-headers`
    # Sum all "Swap:" lines from this process's smaps file
    for SWAP in `grep Swap $DIR/smaps 2>/dev/null | awk '{ print $2 }'`
    do
        let SUM=$SUM+$SWAP
    done
    echo "PID=$PID - Swap used: $SUM - ($PROGNAME)"
    let OVERALL=$OVERALL+$SUM
    SUM=0
done
echo "Overall swap used: $OVERALL"
-----

Save it as getswapusage.sh and make it executable:

#chmod 755 getswapusage.sh

Now run it like :

#./getswapusage.sh |sort -n -k 5

This shows the swap usage of each process at that particular moment.

We can also monitor total swap usage with the following command:

#watch cat /proc/meminfo
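
As an alternative sketch, on kernels that expose a VmSwap field in /proc/<pid>/status (roughly 2.6.34 and later), the same information can be read with a simple loop:

-----
#!/bin/bash
# Sketch: list per-process swap usage from the VmSwap field (in kB), sorted by usage.
for F in /proc/[0-9]*/status; do
    awk -v pid="$(basename "$(dirname "$F")")" \
        '/^Name:/ {name=$2} /^VmSwap:/ {print pid, name, $2, $3}' "$F" 2>/dev/null
done | sort -n -k 3
-----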

Wednesday, 14 September 2011

Ethernet Device firmware and Linux kernel

Posted on 22:21 by Unknown
Guys,

I would like to clarify the difference between the firmware on an Ethernet card (NIC) and the firmware that ships with the Linux kernel. The two are different, but their aim is the same. The hardware vendor deploys firmware (a small amount of code that lets software interact with the hardware) in NVRAM (non-volatile RAM, not normal RAM). Once we attach the NIC to the machine, this firmware is activated automatically. We can view its version like this:

#ethtool -i eth0

The kernel also ships firmware. This is loaded into RAM and overrides the vendor-provided firmware, so from then on this firmware takes care of the NIC. Most kernels contain such firmware for common NICs. The only difference is that it won't show up in the "ethtool -i eth0" output.
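
As a hedged sketch of how to check this (eth0 is an assumption, and whether the driver declares any firmware at all depends on the NIC), we can look up the driver behind the interface and ask which firmware blobs it ships with:

-----
#!/bin/bash
# Sketch: find the driver behind eth0 and any firmware files it declares.
DRIVER=$(basename "$(readlink /sys/class/net/eth0/device/driver)")
echo "Driver: $DRIVER"
modinfo -F firmware "$DRIVER" 2>/dev/null   # firmware blobs the module declares (may be empty)
dmesg | grep -i firmware | tail -5          # firmware-related kernel messages, if any
-----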

Take care.

Saturday, 10 September 2011

Concept about Linux Page Cache and pdflush

Posted on 12:32 by Unknown

When we write data, Linux caches the information in an area of memory called the page cache. We can check this cache memory using the free, vmstat or top commands. We can also get the information from /proc/meminfo.

[kmaiti@kmaiti ~]$ cat /proc/meminfo
MemTotal: 3848964 kB
MemFree: 2463928 kB
Buffers: 98976 kB
Cached: 408372 kB
SwapCached: 0 kB
Active: 616324 kB
Inactive: 380376 kB
Active(anon): 489800 kB
Inactive(anon): 58324 kB
Active(file): 126524 kB
Inactive(file): 322052 kB
Unevictable: 16 kB
Mlocked: 16 kB
SwapTotal: 4194296 kB
SwapFree: 4194296 kB
Dirty: 8 kB
Writeback: 0 kB
AnonPages: 489352 kB
Mapped: 86784 kB
Shmem: 58788 kB
Slab: 289468 kB
SReclaimable: 141444 kB
SUnreclaim: 148024 kB
KernelStack: 3248 kB
PageTables: 44776 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 6118776 kB
Committed_AS: 1445816 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 561012 kB
VmallocChunk: 34359070804 kB
HardwareCorrupted: 0 kB
AnonHugePages: 210944 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 10240 kB
DirectMap2M: 3979264 kB
[kmaiti@kmaiti ~]

Notice that the cached memory here is about 408 MB. As pages are written, the size of the "Dirty" section will increase. Once writes to disk have begun, you'll see the "Writeback" figure go up until the write is finished. It can be very hard to actually catch the Writeback value going high, as its value is very transient and only increases during the brief period when I/O is queued but not yet written.

pdflush (A kernel thread) :

Linux usually writes data out of the page cache using a process called pdflush. At any moment, between 2 and 8 pdflush threads are running on the system. You can monitor how many are active by looking at /proc/sys/vm/nr_pdflush_threads. Whenever all existing pdflush threads are busy for at least one second, an additional pdflush daemon is spawned. The new ones try to write back data to device queues that are not congested, aiming to have each device that's active get its own thread flushing data to that device. Each time a second has passed without any pdflush activity, one of the threads is removed. There are tunables for adjusting the minimum and maximum number of pdflush processes, but it's very rare they need to be adjusted.

Tune pdflush :

Exactly what each pdflush thread does is controlled by a series of parameters in /proc/sys/vm:

1. /proc/sys/vm/dirty_writeback_centisecs (default 500): In hundredths of a second, this is how often pdflush wakes up to write data to disk. The default wakes up the two (or more) active threads every five seconds.

2. /proc/sys/vm/dirty_expire_centisecs (default 3000): In hundredths of a second, how long data can be in the page cache before it's considered expired and must be written at the next opportunity. Note that this default is very long: a full 30 seconds. That means that under normal circumstances, unless you write enough to trigger the other pdflush method, Linux won't actually commit anything you write until 30 seconds later.

3. /proc/sys/vm/dirty_background_ratio (default 10): Maximum percentage of active memory that can be filled with dirty pages before pdflush begins to write them.

Note that some kernel versions may internally put a lower bound on this value at 5%. So on the system above, where this figure gives 2.5GB, with the default of 10% the system actually begins writing when the total for Dirty pages is slightly less than 250MB--not the 400MB you'd expect based on the total memory figure.

4. /proc/sys/vm/dirty_ratio (default 40): Maximum percentage of total memory that can be filled with dirty pages before processes are forced to write dirty buffers themselves during their time slice instead of being allowed to do more writes.

Note that all processes are blocked for writes when this happens, not just the one that filled the write buffers. This can cause what is perceived as an unfair behavior where one "write-hog" process can block all I/O on the system. The classic way to trigger this behavior is to execute a script that does "dd if=/dev/zero of=hog" and watch what happens.

To see this in action, run "dd if=/dev/zero of=hog" in one terminal and "watch cat /proc/meminfo" in another.
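
To view the current values of all four tunables together, a minimal sketch (using the paths listed above) is:

-----
#!/bin/bash
# Sketch: print the current page-cache writeback tunables.
for T in dirty_writeback_centisecs dirty_expire_centisecs \
         dirty_background_ratio dirty_ratio; do
    printf "%-28s %s\n" "$T" "$(cat /proc/sys/vm/$T)"
done
-----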

When does pdflush write?

Data written to disk will sit in memory until either a) they're more than 30 seconds old, or b) the dirty pages have consumed more than 10% of the active, working memory.

Tuning Recommendations for write-heavy operations :

Important : The usual issue that people who are writing heavily encounter is that Linux buffers too much information at once, in its attempt to improve efficiency. This is particularly troublesome for operations that require synchronizing the file-system using system calls like fsync. If there is a lot of data in the buffer cache when this call is made, the system can FREEZE for quite some time to process the sync.

dirty_background_ratio: Primary tunable to adjust, probably downward. If your goal is to reduce the amount of data Linux keeps cached in memory, so that it writes it more consistently to the disk rather than in a batch, lowering dirty_background_ratio is the most effective way to do that. It is more likely the default is too large in situations where the system has large amounts of memory and/or slow physical I/O.

dirty_ratio: Secondary tunable, to adjust only for some workloads. Applications that can cope with their writes being blocked altogether might benefit from substantially lowering this value. Note that the write-blocking behaviour described above is easier to encounter once dirty_ratio has been reduced below its default.

dirty_expire_centisecs: Test lowering, but not to extremely low levels. Reducing how long pages are allowed to sit dirty in memory can be done here, but it will considerably slow average I/O speed because the writes become much less efficient. This is particularly true on systems with slow physical I/O to disk. Because of the way the dirty page writing mechanism works, trying to lower this value to be very quick (less than a few seconds) is unlikely to work well. Constantly trying to write dirty pages out will just trigger the I/O congestion code more frequently.

dirty_writeback_centisecs: Leave alone. The timing of pdflush threads set by this parameter is so complicated by rules in the kernel code for things like write congestion that adjusting this tunable is unlikely to cause any real effect. It's generally advisable to keep it at the default so that this internal timing tuning matches the frequency at which pdflush runs.
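
As an illustrative sketch only (the value 5 is an example, not a recommendation for any particular workload), dirty_background_ratio can be lowered at runtime and made persistent like this:

# echo 5 > /proc/sys/vm/dirty_background_ratio
# sysctl -w vm.dirty_background_ratio=5

To persist the setting across reboots, add "vm.dirty_background_ratio = 5" to /etc/sysctl.conf.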

Statistical data :


$ free
             total       used       free     shared    buffers     cached
Mem:       4040360    4012200      28160          0     176628    3571348
-/+ buffers/cache:      264224    3776136
Swap:      4200956      12184    4188772
$

In this example the total amount of available memory is 4040360 KB. 264224 KB are used by processes and 3776136 KB are free for other applications. Don't be confused by the first line, which shows that only 28160 KB are free. Using available memory for buffers (file system metadata) and cache (pages with the actual contents of files or block devices) helps the system run faster because the disk information is already in memory, which saves I/O.

Swap memory : Additional memory taken from the hard disk, used in addition to RAM. Dirty data may reside here too and can be moved directly to disk for writing.

Its value can be viewed with:

grep SwapTotal /proc/meminfo
cat /proc/swaps
free


Shared Memory : A part of RAM which is used for sharing by processes. Shared memory allows processes to access common structures and data by placing them in shared memory segments. It's the fastest form of Interprocess Communication (IPC) available since no kernel involvement occurs when data is passed between the processes. In fact, data does not need to be copied between the processes.

Check shared memory settings : ipcs -lm
See all shared memory segments : ipcs -m
Details of a segment : ipcs -m -i <shmid>
Remove a segment : ipcrm shm <shmid>

Check semaphore value : ipcs -ls

Change its value : echo 250 32000 100 128 > /proc/sys/kernel/sem

Buffer cache : This is a subset of the page cache which stores file data in memory.

IO Request Queue Parameters:

nr_requests : This file sets the depth of the request queue. nr_requests sets the maximum number of disk I/O requests that can be queued up. The default value for this is dependent on the selected scheduler.

read_ahead_kb : This file sets the size of read-aheads, in kilobytes. The I/O subsystem enables read-aheads once it detects sequential disk block access. This file sets the amount of data to be “pre-fetched” for an application and cached in memory to improve read response time. An example of viewing and adjusting both parameters is shown below.
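
For example (sda and the values are assumptions used only for illustration):

# cat /sys/block/sda/queue/nr_requests
# cat /sys/block/sda/queue/read_ahead_kb
# echo 256 > /sys/block/sda/queue/nr_requests
# echo 512 > /sys/block/sda/queue/read_ahead_kb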

The tunable variables for the cfq scheduler are set in files found under /sys/block/<device>/queue/iosched/. These files are:

quantum : Total number of requests to be moved from internal queues to the dispatch queue in each cycle.

queued : Maximum number of requests allowed per internal queue.

Prioritizing I/O Bandwidth for Specific Processes : When the cfq scheduler is used, you can adjust the I/O throughput for a specific process using ionice. ionice allows you to assign any of the following scheduling classes to a program:

• idle (lowest priority)
• best effort (default priority)
• real-time (highest priority)

For more information about ionice, scheduling classes, and scheduling priorities, refer to man ionice.
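
A short hedged example of ionice usage (the backup command and the PID 1234 are placeholders):

# ionice -c 3 tar czf /backup/home.tar.gz /home
# ionice -c 2 -n 7 -p 1234

The first command runs a backup job in the idle class so it only gets I/O when nothing else needs it; the second moves an already-running process (PID 1234) to the best-effort class at its lowest priority (7).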

Deadline scheduler : The deadline scheduler aims to keep latency low, which is ideal for real-time workloads. On servers that receive numerous small requests, the deadline scheduler can help by reducing resource management overhead. This is achieved by ensuring that an application has a relatively low number of outstanding requests at any one time. The tunable variables for the deadline scheduler are set in files found under /sys/block/<device>/queue/iosched/. These files are:

read_expire : The amount of time (in milliseconds) before each read I/O request expires. Since read requests are generally more important than write requests, this is the primary tunable option for the deadline scheduler.

write_expire : The amount of time (in milliseconds) before each write I/O request expires.

fifo_batch : When a request expires, it is moved to a "dispatch" queue for immediate servicing. These expired requests are moved by batch. fifo_batch specifies how many requests are included in each batch.

writes_starved : Determines the priority of reads over writes. writes_starved specifies how many read requests should be moved to the dispatch queue before any write requests are moved.

front_merges : In some instances, a request that enters the deadline scheduler may be contiguous to another request in that queue. When this occurs, the new request is normally merged to the back of the queue.

front_merges controls whether such requests should be merged to the front of the queue instead. To enable this, set front_merges to 1. front_merges is disabled by default (i.e. set to 0).
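
For example (assuming /dev/sda currently uses the deadline scheduler; the values are illustrative only):

# grep . /sys/block/sda/queue/iosched/*
# echo 250 > /sys/block/sda/queue/iosched/read_expire
# echo 1 > /sys/block/sda/queue/iosched/front_merges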


Anticipatory Scheduler: The tunable variables for the anticipatory scheduler are set in files found under /sys/block/<device>/queue/iosched/. These files are:

read_expire : The amount of time (in milliseconds) before each read I/O request expires. Once a read or write request expires, it is serviced immediately, regardless of its targeted block device. This tuning option is similar to the read_expire option of the deadline scheduler. Read requests are generally more important than write requests; as such, it is advisable to set a faster expiration time for read_expire. In most cases, this is half of write_expire. For example, if write_expire is set to 248, it is advisable to set read_expire to 124.

write_expire : The amount of time (in milliseconds) before each write I/O request expires.

read_batch_expire : The amount of time (in milliseconds) that the I/O subsystem should spend servicing a batch of read requests before servicing pending write batches (if there are any). read_batch_expire is typically set as a multiple of read_expire.

write_batch_expire : The amount of time (in milliseconds) that the I/O subsystem should spend servicing a batch of write requests before servicing pending read batches.

antic_expire : The amount of time (in milliseconds) to wait for an application to issue another I/O request before moving on to a new request.

What is I/O Scheduler for a Hard Disk on linux?

Posted on 12:24 by Unknown
The 2.6 Linux kernel includes selectable I/O schedulers. They control the way the kernel commits reads and writes to disks – the intention of providing different schedulers is to allow better optimisation for different classes of workload.

Why does kernel need IO scheduler?

ANS : Without an I/O scheduler, the kernel would basically just issue each request to disk in the order that it received them. This could result in massive hard disk thrashing: if one process was reading from one part of the disk, and one writing to another, the heads would have to seek back and forth across the disk for every operation. The scheduler's main goal is to optimise disk access times.

An I/O scheduler can use the following techniques to improve performance:

a) Request merging : The scheduler merges adjacent requests together to reduce disk seeking.
b) Elevator : The scheduler orders requests based on their physical location on the block device, and it basically tries to seek in one direction as much as possible.
c) Prioritisation : The scheduler has complete control over how it prioritises requests, and can do so in a number of ways.

All I/O schedulers should also take into account resource starvation, to ensure requests eventually do get serviced!

How to view Current Disk scheduler ?

Assuming that we have a disk name /dev/sda, type :

# cat /sys/block/{DEVICE-NAME}/queue/scheduler
# cat /sys/block/sda/queue/scheduler

Sample output:

noop anticipatory deadline [cfq]

Here the scheduler in use is cfq (shown in square brackets).

How to set I/O Scheduler For A Hard Disk ?

To set a specific scheduler, simply type the command as follows:

# echo {SCHEDULER-NAME} > /sys/block/{DEVICE-NAME}/queue/scheduler
For example, set noop scheduler, enter:
# echo noop > /sys/block/hda/queue/scheduler

OR

Edit /boot/grub/grub.conf and add "elevator=noop" (or any other available scheduler) to the kernel line.
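
For instance, the kernel line might then look something like this (the kernel version and root device here are only placeholders):

kernel /vmlinuz-2.6.32-131.el6.x86_64 ro root=/dev/VolGroup00/LogVol00 elevator=noop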

There are currently 4 available IO schedulers :

* No-op Scheduler
* Anticipatory IO Scheduler (AS)
* Deadline Scheduler
* Complete Fair Queueing Scheduler (CFQ)

A) No-op Scheduler : This scheduler only implements request merging.

B) Anticipatory IO Scheduler : The anticipatory scheduler is the default scheduler in older 2.6 kernels – if you've not specified one, this is the one that will be loaded. It implements request merging, a one-way elevator, read and write request batching, and attempts some anticipatory reads by holding off a bit after a read batch if it thinks a user is going to ask for more data. It tries to optimise for physical disks by avoiding head movements if possible – one downside to this is that it can give highly erratic performance on database or storage systems.

C) Deadline Scheduler : The deadline scheduler implements request merging, a one-way elevator, and imposes a deadline on all operations to prevent resource starvation. Because writes return instantly within Linux, with the actual data being held in cache, the deadline scheduler will also prefer readers – as long as the deadline for a write request hasn't passed. The kernel docs suggest this is the preferred scheduler for database systems, especially if you have TCQ aware disks, or any system with high disk performance.

D) Complete Fair Queueing Scheduler (CFQ) : The complete fair queueing scheduler implements both request merging and the elevator, and attempts to give all users of a particular device the same number of IO requests over a particular time interval. This should make it more efficient for multiuser systems. It seems that Novell SLES sets cfq as the scheduler by default, as does the latest Ubuntu release. As of the 2.6.18 kernel, this is the default scheduler in kernel.org releases, and RHEL 6 also uses CFQ as its default scheduler.

Changing Schedulers :

The most reliable way to change schedulers is to set the kernel option “elevator” at boot time. You can set it to one of “as”, “cfq”, “deadline” or “noop” to select the appropriate scheduler, for example: elevator=cfq

It seems under more recent 2.6 kernels (2.6.11, possibly earlier), you can change the scheduler at runtime by echoing the name of the scheduler into /sys/block/$devicename/queue/scheduler, where the device name is the basename of the block device, eg “sda” for /dev/sda.

doc : /usr/src/linux/Documentation/block/switching-sched.txt,

Wednesday, 7 September 2011

How sendmail works?

Posted on 11:55 by Unknown

Outbound email :


1. The MUA passes the email to sendmail, which creates two files in the /var/spool/mqueue (mail queue) directory that hold the message while sendmail processes it.
2. To create a unique filename for a particular piece of email, sendmail generates a random string and uses that string in filenames pertaining to the email.
3. The sendmail daemon stores the body of the message in a file named df (data file) followed by the generated string.
4. It stores the headers and other information in a file named qf (queue file) followed by the generated string.
5. If a delivery error occurs, sendmail creates a temporary copy of the message that it stores in a file whose name starts with tf (temporary file) and logs errors in a file whose name starts with xf.
6. Once an email has been sent successfully, sendmail removes all files pertaining to that email from /var/spool/mqueue .

Incoming email :

1. By default, the MDA stores incoming messages in users' files in the mail spool directory, /var/spool/mail, in mbox format. Within this directory, each user has a mail file named with the user's username. Mail remains in these files until it is collected, typically by an MUA. Once an MUA collects the mail from the mail spool, the MUA stores the mail as directed by the user, usually in the user's home directory hierarchy.

mbox versus maildir :

1. The mbox format stores all messages for a user in a single file. To prevent corruption, the file must be locked while a process is adding messages to or deleting messages from the file; you cannot delete a message at the same time the MTA is adding messages. A competing format, maildir, stores each message in a separate file. This format does not use locks, allowing an MUA to read and delete messages at the same time as new mail is delivered. In addition, the maildir format is better able to handle larger mailboxes.

Mail logs :

# cat /var/log/maillog
...
Mar 3 16:25:33 MACHINENAME sendmail[7225]: i23GPXvm007224:
to=, ctladdr=
(0/0), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30514,
dsn=2.0.0, stat=Sent


Each log entry starts with a timestamp, the name of the system sending the email, the name of the mail server ( sendmail ), and a unique identification number. The address of the recipient follows the to= label and the address of the sender follows ctladdr= . Additional fields provide the name of the mailer and the time it took to send the message. If a message is sent correctly, the stat= label is followed by Sent .

Aliases and Forwarding :

Three files can forward email: .forward, aliases (discussed next), and virtusertable. Table 20-1 below compares the three files.
Table 20-1. Comparison of forwarding techniques


                       .forward          aliases                     virtusertable

Controlled by          non-root user     root                        root

Forwards email         non-root user     Any real or virtual user    Any real or virtual user on any
addressed to                             on the local system         domain recognized by sendmail

Order of precedence    Third             Second                      First

/etc/aliases

Most of the time when you send email, it goes to a specific person; the recipient, user@system , maps to a specific, real user on the specified system. Sometimes you may want email to go to a class of users and not to a specific recipient. Examples of classes of users include postmaster , webmaster , root , and tech_support . Different users may receive this email at different times or the email may be answered by a group of users. You can use the /etc/aliases file to map inbound addresses to local users, files, commands, and remote addresses.

Each line in /etc/aliases contains the name of a local pseudouser, followed by a colon, whitespace, and a comma-separated list of destinations. The default installation includes a number of aliases that redirect messages for certain pseudousers to root. These have the form

system: root


Sending messages to the root account is a good way of making them easy to review. However, because root's email is rarely checked, you may want to send copies to a real user. The following line forwards mail sent to abuse on the local system to root and alex:

abuse: root, alex


You can create simple mailing lists with this type of alias. For example, the following alias sends copies of all email sent to admin on the local system to several users, including Zach, who is on a different system:

admin: sam, helen, mark, zach@redhat.com


You can direct email to a file by specifying an absolute pathname in place of a destination address. The following alias, which is quite popular among less conscientious system administrators, redirects email sent to complaints to /dev/null, where it disappears:

complaints: /dev/null


You can also send email to standard input of a command by preceding the command with a pipe character ( | ). This technique is commonly used with mailing list software such as Mailman. For each list it maintains, Mailman has entries, such as the following entry for mylist , in the aliases file:

mylist: "|/usr/lib/mailman/mail/mailman post mylist"


newaliases

After you edit /etc/aliases , you must either run newaliases as root or restart sendmail to recreate the aliases.db file that sendmail reads.

praliases

You can use praliases to list aliases currently loaded by sendmail :

# /usr/sbin/praliases | head -5
postmaster:root
daemon:root
adm:root
lp:root
shutdown:root


~/.forward

Systemwide aliases are useful in many cases, but non-root users cannot make or change them. Sometimes you may want to forward your own mail: maybe you want mail from several systems to go to one address, or perhaps you just want to forward your mail while you are working at another office for a week. The ~/.forward file allows ordinary users to forward their email.

Lines in a .forward file are the same as the right column of the aliases file explained previously: Destinations are listed one per line and can be a local user, a remote email address, a filename, or a command preceded by a pipe character ( | ).

Mail that you forward does not go to your local mailbox. If you want to forward mail and keep a copy in your local mailbox, you must specify your local username preceded by a backslash to prevent an infinite loop. The following example sends Sam's email to himself on the local system and on the system at tcorp.com :

$ cat ~sam/.forward
sams@tcorp.com
\sam


Related Programs

sendmail

The sendmail package includes several programs. The primary program, sendmail , reads from standard input and sends an email to the recipient specified by its argument. You can use sendmail from the command line to check that the mail delivery system is working and to email the output of scripts.

mailq

The mailq utility displays the status of the outgoing mail queue and normally reports there are no messages in the queue. Messages in the queue usually indicate a problem with the local or remote sendmail configuration or a network problem.

# /usr/bin/mailq
/var/spool/mqueue is empty
Total requests: 0


mailstats

The mailstats utility reports on the number and sizes of messages sendmail has sent and received since the date it displays on the first line:

# /usr/sbin/mailstats
Statistics from Sat Dec 24 16:02:34 2005
M msgsfr bytes_from msgsto bytes_to msgsrej msgsdis Mailer
0 0 0K 17181 103904K 0 0 prog
4 368386 4216614K 136456 1568314K 20616 0 esmtp
9 226151 26101362K 479025 12776528K 4590 0 local
============================================================
T 594537 30317976K 632662 14448746K 25206 0
C 694638 499700 146185


In the preceding output, each mailer is identified by the first column, which displays the mailer number, and by the last column, which displays the name of the mailer. The second through fifth columns display the number and total sizes of messages sent and received by the mailer. The sixth and seventh columns display the number of messages rejected and discarded respectively. The row that starts with T lists the column totals, and the row that starts with C lists the number of TCP connections.

Setting Up a Backup Server

You can set up a backup mail server to hold email when the primary mail server experiences problems. For maximum coverage, the backup server should be on a different connection to the Internet from the primary server.

Setting up a backup server is easy. Just remove the leading dnl from the following line in the backup mail server's sendmail.mc file:

dnl FEATURE(`relay_based_on_MX')dnl


DNS MX records specify where email for a domain should be sent. You can have multiple MX records for a domain, each pointing to a different mail server. When a domain has multiple MX records, each record usually has a different priority; the priority is specified by a number, where lower numbers specify higher priorities.

When attempting to deliver email, an MTA first tries to deliver email to the highest-priority server. If that delivery attempt fails, it tries to deliver to a lower-priority server. If you activate the relay_based_on_MX feature and point a low-priority MX record at a secondary mail server, the mail server will accept email for the domain. The mail server will then forward email to the server identified by the highest-priority MX record for the domain when that server becomes available.
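
For example, a zone file fragment for a domain with a primary and a backup mail server might look like this (the hostnames and priorities are illustrative):

example.com.    IN  MX  10  mail.example.com.
example.com.    IN  MX  20  backup-mail.example.com.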


Other Files in /etc/mail :

The /etc/mail directory holds most of the files that control sendmail . This section discusses three of those files: mailertable , access , and virtusertable .
mailertable : Forwards Email from One Domain to Another

When you run a mail server, you may want to send mail destined for one domain to a different location. The sendmail daemon uses the /etc/mail/mailertable file for this purpose. Each line in mailertable holds the name of a domain and a destination mailer separated by whitespace; when sendmail receives email for the specified domain, it forwards it to the mailer specified on the same line. Red Hat enables this feature by default: Put an entry in the mailertable file and restart sendmail to use it.

The following line in mailertable forwards email sent to tcorp.com to the mailer at bravo.com :

$ cat /etc/mail/mailertable
tcorp.com smtp:[bravo.com]


The square brackets in the example instruct sendmail not to use MX records but rather to send email directly to the SMTP server. Without the brackets, email could enter an infinite loop.

A period in front of a domain name acts as a wildcard and causes the name to match any domain that ends in the specified name. For example, .tcorp.com matches sales.tcorp.com, mktg.tcorp.com, and so on.

The sendmail init script regenerates mailertable.db from mailertable each time you run it, as when you restart sendmail .
access : Sets Up a Relay Host

On a LAN, you may want to set up a single server to process outbound mail, keeping local mail inside the network. A system that processes outbound mail for other systems is called a relay host . The /etc/mail/access file specifies which systems the local server relays email for. As configured by Red Hat, this file lists only the local system:

$ cat /etc/mail/access
...
# by default we allow relaying from localhost...
localhost.localdomain RELAY
localhost RELAY
127.0.0.1 RELAY


You can add systems to the list in access by adding an IP address followed by whitespace and the word RELAY . The following line adds the 192.168. subnet to the list of hosts that the local system relays mail for:

192.168. RELAY


The sendmail init script regenerates access.db from access each time you run it, as when you restart sendmail .
virtusertable : Serves Email to Multiple Domains

When the DNS MX records are set up properly, a single system can serve email to multiple domains. On a system that serves mail to many domains, you need a way to sort the incoming mail so that it goes to the right places. The virtusertable file can forward inbound email addressed to different domains ( aliases cannot do this).

As sendmail is configured by Red Hat, virtusertable is enabled. You need to put forwarding instructions in the /etc/mail/virtusertable file and restart sendmail to serve the specified domains. The virtusertable file is similar to the aliases file, except the left column contains full email addresses, not just local ones. Each line in virtusertable starts with the address that the email was sent to, followed by whitespace and the address sendmail will forward the email to. As with aliases, the destination can be a local user, an email address, a file, or a pipe symbol ( | ) followed by a command.

The following line from virtusertable forwards mail addressed to zach@tcorp.com to zcs , a local user:

zach@tcorp.com zcs


You can also forward email for a user to a remote email address:

sams@bravo.com sams@tcorp.com


You can forward all email destined for a domain to another domain without specifying each user individually. To forward email for every user at bravo.com to tcorp.com , specify @bravo.com as the first address on the line. When sendmail forwards email, it replaces the %1 in the destination address with the name of the recipient. The next line forwards all email addressed to bravo.com to tcorp.com , keeping the original recipients' names :

@bravo.com %1@tcorp.com


Finally you can specify that email intended for a specific user should be rejected by using the error namespace in the destination. The next example bounces email addressed to spam@tcorp.com with the message 5.7.0:550 Invalid address :

spam@tcorp.com error:5.7.0:550 Invalid address

How to send mail to a "relay server" (another mail server) using sendmail?

Posted on 10:23 by Unknown
1. Configure sendmail as stated at http://kmaiti.blogspot.com/2011/09/how-to-install-and-configure-sendmail.html

2. Edit /etc/mail/sendmail.mc

Add this line to sendmail.mc:

define(`SMART_HOST',`[smarthost.example.net]')dnl

3. Rebuild the sendmail.cf :

#m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf

4.Restart sendmail:

/etc/rc.d/init.d/sendmail restart

5. Now send a mail and check the maillog; the log will show the relay name (see the sketch below).
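
As a quick hedged check (the smart host name shown is whatever you configured in SMART_HOST), the relay should appear in the relay= field of the log entries:

# tail -20 /var/log/maillog | grep relay=

Entries with stat=Sent and relay=[smarthost.example.net] indicate the message was handed to the relay.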

Suppose the above is configured on 192.168.1.2, the relay server is 192.168.1.3, and the SMTP server that will actually receive the mail is 192.168.1.4. Mail will then go from 192.168.1.2 to 192.168.1.3, and 192.168.1.3 will send the mail to 192.168.1.4 as per the MX record. So we need to disable DNS lookups on 192.168.1.2 (say it is a client machine). Here are the steps to disable DNS on 192.168.1.2:

sendmail without DNS :

There are a number of steps required to successfully use sendmail when there is limited or no DNS.

1. I assume that the domain is resolvable, either via /etc/hosts or DNS; alternatively we can specify an IP address.
2. Set the relay host in /etc/mail/sendmail.mc, i.e. define(`SMART_HOST',`name.of.smart.host')dnl
3. Since the system implicitly has limited resolving capabilities, accept email for unresolvable domains by adding a line of the following form to /etc/mail/sendmail.mc:
FEATURE(accept_unresolvable_domains)dnl
4. Make sure that the ServiceSwitchFile (by default /etc/mail/service.switch) has content similar to:

----
aliases files
hosts files
-----

5. Set the submission agent to ignore DNS. Add a line of the following form to /etc/mail/submit.mc:

define(`confDIRECT_SUBMISSION_MODIFIERS',`C')

6. Also add the following line to /etc/mail/submit.mc:

FEATURE(accept_unresolvable_domains)dnl

7. Execute : #m4 /etc/mail/submit.mc > /etc/mail/submit.cf
8. # service sendmail restart

That's it.
