
Deployment Guide

Chapter 20. Viewing and Managing Log Files

Log files are files that contain messages about the system, including the kernel, services, and applications running on it. There are different log files for different information. For example, there is a default system log file, a log file just for security messages, and a log file for cron tasks.
Log files can be very useful when trying to troubleshoot a problem with the system such as trying to load a kernel driver or when looking for unauthorized login attempts to the system. This chapter discusses where to find log files, how to view log files, and what to look for in log files.
Some log files are controlled by a daemon called rsyslogd. A list of log files maintained by rsyslogd can be found in the /etc/rsyslog.conf configuration file.
rsyslog is an enhanced, multi-threaded syslog daemon that replaced the sysklogd daemon. rsyslog supports the same functionality as sysklogd and extends it with enhanced filtering, encryption-protected relaying of messages, various configuration options, and support for transport via the TCP or UDP protocols. Note that rsyslog is compatible with sysklogd.

20.1. Configuring rsyslog

The main configuration file for rsyslog is /etc/rsyslog.conf. It consists of global directives, rules, and comments (any empty lines or any text following a hash sign (#)). Both global directives and rules are extensively described in the sections below.

20.1.1. Global Directives

Global directives specify configuration options that apply to the rsyslogd daemon. They usually specify a value for a specific pre-defined variable that affects the behavior of the rsyslogd daemon or a rule that follows. All of the global directives must start with a dollar sign ($). Only one directive can be specified per line. The following is an example of a global directive that specifies the maximum size of the syslog message queue:
$MainMsgQueueSize 50000
The default size defined for this directive (10,000 messages) can be overridden by specifying a different value (as shown in the example above).
You may define multiple directives in your /etc/rsyslog.conf configuration file. A directive affects the behavior of all configuration options until another occurrence of that same directive is detected.
A comprehensive list of all available configuration directives and their detailed description can be found in /usr/share/doc/rsyslog-<version-number>/rsyslog_conf_global.html.
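For illustration, several global directives can be combined near the top of /etc/rsyslog.conf. The following is only a sketch: $MainMsgQueueSize is discussed above, while $MaxMessageSize and $ActionFileDefaultTemplate are standard rsyslog directives not covered in this section, so verify them against the documentation referenced in the previous paragraph.
$ModLoad imuxsock           # module loading is itself a global directive (see Section 20.1.2, "Modules")
$MaxMessageSize 2048        # maximum size of a single syslog message, in bytes
$MainMsgQueueSize 50000     # enlarge the main message queue
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat   # default template for file outputs (see Section 20.1.3.3, "Templates")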

20.1.2. Modules

Due to its modular design, rsyslog offers a variety of modules which provide dynamic functionality. Note that modules can be written by third parties. Most modules provide additional inputs (see Input Modules below) or outputs (see Output Modules below). Other modules provide special functionality specific to each module. The modules may provide additional configuration directives that become available after a module is loaded. To load a module, use the following syntax:
$ModLoad <MODULE>
where $ModLoad is the global directive that loads the specified module and <MODULE> represents your desired module. For example, if you want to load the Text File Input Module (imfile - enables rsyslog to convert any standard text files into syslog messages), specify the following line in your /etc/rsyslog.conf configuration file:
$ModLoad imfile
rsyslog offers a number of modules which are split into these main categories:
  • Input Modules - Input modules gather messages from various sources. The name of an input module always starts with the im prefix, such as imfile, imrelp, etc.
  • Output Modules - Output modules provide a facility to store messages into various targets such as sending them across network, storing them in a database or encrypting them. The name of an output module always starts with the om prefix, such as omsnmp, omrelp, etc.
  • Filter Modules - Filter modules provide the ability to filter messages according to specified rules. The name of a filter module always starts with the fm prefix.
  • Parser Modules - Parser modules use the message parsers to parse message content of any received messages. The name of a parser module always starts with the pm prefix, such as pmrfc5424, pmrfc3164, etc.
  • Message Modification Modules - Message modification modules change the content of a syslog message. The message modification modules only differ in their implementation from the output and filter modules but share the same interface.
  • String Generator Modules - String generator modules generate strings based on the message content and strongly cooperate with the template feature provided by rsyslog. For more information on templates, refer to Section 20.1.3.3, "Templates". The name of a string generator module always starts with the sm prefix, such as smfile, smtradfile, etc.
  • Library Modules - Library modules generally provide functionality for other loadable modules. These modules are loaded automatically by rsyslog when needed and cannot be configured by the user.
A comprehensive list of all available modules and their detailed description can be found at http://www.rsyslog.com/doc/rsyslog_conf_modules.html
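As a sketch of how a loaded module provides additional directives, the following fragment monitors a plain text file with the imfile module mentioned above. The $InputFile* directives are provided by imfile (check the module documentation referenced above for the exact set supported by your version); the monitored file path and tag are illustrative.
$ModLoad imfile                                  # load the Text File Input Module
$InputFileName /var/log/myapp/application.log    # text file to convert into syslog messages
$InputFileTag myapp:                             # tag added to the generated messages
$InputFileStateFile stat-myapp                   # state file used to remember the read position
$InputRunFileMonitor                             # activate monitoring of the file defined above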

Make sure you use trustworthy modules only

Note that when rsyslog loads any modules, it provides them with access to some of its functions and data. This poses a possible security threat. To minimize security risks, use trustworthy modules only.

20.1.3. Rules

A rule is specified by a filter part, which selects a subset of syslog messages, and an action part, which specifies what to do with the selected messages. To define a rule in your /etc/rsyslog.conf configuration file, define both a filter and an action on one line and separate them with one or more spaces or tabs. For more information on filters, refer to Section 20.1.3.1, "Filter Conditions" and for information on actions, refer to Section 20.1.3.2, "Actions".

20.1.3.1. Filter Conditions

rsyslog offers various ways to filter syslog messages according to various properties. This section sums up the most commonly used filter conditions.
Facility/Priority-based filters
The most used and well-known way to filter syslog messages is to use the facility/priority-based filters which filter syslog messages based on two conditions: facility and priority. To create a selector, use the following syntax:
<FACILITY>.<PRIORITY>
where:
  • <FACILITY> specifies the subsystem that produces a specific syslog message. For example, the mail subsystem handles all mail related syslog messages. <FACILITY> can be represented by one of these keywords: auth, authpriv, cron, daemon, kern, lpr, mail, news, syslog, user, uucp, and local0 through local7.
  • <PRIORITY> specifies a priority of a syslog message. <PRIORITY> can be represented by one of these keywords (listed in an ascending order): debug, info, notice, warning, err, crit, alert, and emerg.
    By preceding any priority with an equal sign (=), you specify that only syslog messages with that priority will be selected. All other priorities will be ignored. Conversely, preceding a priority with an exclamation mark (!) selects all syslog messages but those with the defined priority. By not using either of these two extensions, you specify a selection of syslog messages with the defined or higher priority.
In addition to the keywords specified above, you may also use an asterisk (*) to define all facilities or priorities (depending on where you place the asterisk, before or after the dot). Specifying the keyword none serves for facilities with no given priorities.
To define multiple facilities and priorities, simply separate them with a comma (,). To define multiple filters on one line, separate them with a semi-colon (;).
The following are a few examples of simple facility/priority-based filters:
kern.* # Selects all kernel syslog messages with any priority
mail.crit # Selects all mail syslog messages with priority crit and higher.
cron.!info,!debug # Selects all cron syslog messages except those with the info or debug priority.
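The comma and semicolon separators described above allow several selectors to share one line. The following sketch shows two such rules together with file actions (actions are described in Section 20.1.3.2, "Actions"; the file paths are illustrative):
uucp,news.crit              /var/log/spooler   # uucp and news messages with priority crit and higher
*.info;mail.none;cron.none  /var/log/messages  # everything with priority info and higher, except mail and cron messages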
Property-based filters
Property-based filters let you filter syslog messages by any property, such as timegenerated or syslogtag. For more information on properties, refer to Section 20.1.3.3.2, "Properties". Each of the properties specified in the filters lets you compare it to a specific value using one of the compare-operations listed in Table 20.1, "Property-based compare-operations".

Table 20.1. Property-based compare-operations

Compare-operation   Description
contains            Checks whether the provided string matches any part of the text provided by the property.
isequal             Compares the provided string against all of the text provided by the property.
startswith          Checks whether the provided string matches a prefix of the text provided by the property.
regex               Compares the provided POSIX BRE (Basic Regular Expression) regular expression against the text provided by the property.
ereregex            Compares the provided POSIX ERE (Extended Regular Expression) regular expression against the text provided by the property.

To define a property-based filter, use the following syntax:
:<PROPERTY>, [!]<COMPARE_OPERATION>, "<STRING>"
where:
  • The <PROPERTY> attribute specifies the desired property (for example, timegenerated, hostname, etc.).
  • The optional exclamation point (!) negates the output of the compare-operation (if prefixing the compare-operation).
  • The <COMPARE_OPERATION> attribute specifies one of the compare-operations listed in Table 20.1, "Property-based compare-operations".
  • The <STRING> attribute specifies the value that the text provided by the property is compared to. To escape certain character (for example a quotation mark (")), use the backslash character (\).
The following are a few examples of property-based filters:
  • The following filter selects syslog messages which contain the string error in their message text:
    :msg, contains, "error"
  • The following filter selects syslog messages received from the hostname host1:
    :hostname, isequal, "host1"
  • The following filter selects syslog messages which do not contain any mention of the words fatal and error with any or no text between them (for example, fatal lib error):
    :msg, !regex, "fatal .* error"
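A property-based filter is used in a rule just like a facility/priority-based selector, that is, followed by an action. For example, the following sketch (the log file path is illustrative) saves every syslog message whose text contains the string iptables into a separate file:
:msg, contains, "iptables"    /var/log/iptables.log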
Expression-based filters
Expression-based filters select syslog messages according to defined arithmetic, Boolean or string operations. Expression-based filters use rsyslog's own scripting language. The syntax of this language is defined in /usr/share/doc/rsyslog-<version-number>/rscript_abnf.html along with examples of various expression-based filters.
To define an expression-based filter, use the following syntax:
if <EXPRESSION> then <ACTION>
where:
  • The <EXPRESSION> attribute represents an expression to be evaluated, for example: $msg startswith 'DEVNAME' or $syslogfacility-text == 'local0'.
  • The <ACTION> attribute represents an action to be performed if the expression returns the value true.

Define an expression-based filter on a single line

When defining an expression-based filter, it must be defined on a single line.

Do not use regular expressions

Regular expressions are currently not supported in expression-based filters.
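For example, a sketch of an expression-based filter built from the operators shown above (the log file path is illustrative) could look as follows:
if $syslogfacility-text == 'local0' and not ($msg contains 'debug') then /var/log/local0-important.log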
BSD-style blocks
rsyslog supports BSD-style blocks inside the /etc/rsyslog.conf configuration file. Each block consists of rules which are preceded with a program or hostname label. Use the '!<PROGRAM>' or '-<PROGRAM>' labels to include or exclude programs, respectively. Use the '+<HOSTNAME> ' or '-<HOSTNAME> ' labels to include or exclude hostnames, respectively.
Example 20.1, "BSD-style block" shows a BSD-style block that saves all messages generated by yum to a file.

Example 20.1. BSD-style block

!yum
*.*     /var/log/named.log

20.1.3.2. Actions

Actions specify what is to be done with the messages filtered out by an already-defined selector. The following are some of the actions you can define in your rule:
Saving syslog messages to log files
The majority of actions specify to which log file a syslog message is saved. This is done by specifying a file path after your already-defined selector. The following is a rule comprised of a selector that selects all cron syslog messages and an action that saves them into the /var/log/cron.log log file:
cron.* /var/log/cron.log
Use a dash mark (-) as a prefix of the file path you specified if you want to omit syncing the desired log file after every syslog message is generated.
Your specified file path can be either static or dynamic. Static files are represented by a simple file path as was shown in the example above. Dynamic files are represented by a template and a question mark (?) prefix. For more information on templates, refer to Section 20.1.3.3.1, "Generating dynamic file names".
If the file you specified is an existing tty or /dev/console device, syslog messages are sent to standard output (using special tty-handling) or your console (using special /dev/console-handling) when using the X Window System, respectively.
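For example, the following sketch combines these options: the first rule writes mail messages without syncing the file after every message, and the second one uses a dynamic file generated from a template named DynamicFile, defined as shown in Section 20.1.3.3.1, "Generating dynamic file names" (both paths are illustrative):
mail.*    -/var/log/maillog    # the dash omits syncing the file after every message
*.*       ?DynamicFile         # dynamic file name generated from the DynamicFile template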
Sending syslog messages over the network
rsyslog allows you to send and receive syslog messages over the network. This feature allows you to administer syslog messages of multiple hosts on one machine. To forward syslog messages to a remote machine, use the following syntax:
@[(<OPTION>)]<HOST>:[<PORT>]
where:
  • The at sign (@) indicates that the syslog messages are forwarded to a host using the UDP protocol. To use the TCP protocol, use two at signs with no space between them (@@).
  • The <OPTION> attribute can be replaced with an option such as z<NUMBER>. This option enables zlib compression for syslog messages; the <NUMBER> attribute specifies the level of compression. To define multiple options, simply separate each one of them with a comma (,).
  • The <HOST> attribute specifies the host which receives the selected syslog messages.
  • The <PORT> attribute specifies the host machine's port.
When specifying an IPv6 address as the host, enclose the address in square brackets ([, ]).
The following are some examples of actions that forward syslog messages over the network (note that all actions are preceded with a selector that selects all messages with any priority):
*.* @192.168.0.1 # Forwards messages to 192.168.0.1 via the UDP protocol
*.* @@example.com:18 # Forwards messages to "example.com" using port 18 and the TCP protocol
*.* @(z9)[2001::1] # Compresses messages with zlib (level 9 compression) and forwards them to 2001::1 using the UDP protocol
Output channels
Output channels are primarily used for log file rotation (for more info on log file rotation, refer to Section 20.2.1, "Configuring logrotate"), that is, to specify the maximum size a log file can grow to. To define an output channel, use the following syntax:
$outchannel <NAME>, <FILE_NAME>, <MAX_SIZE>, <ACTION>
where:
  • The <NAME> attribute specifies the name of the output channel.
  • The <FILE_NAME> attribute specifies the name of the output file.
  • The <MAX_SIZE> attribute represents the maximum size the specified file (in <FILE_NAME>) can grow to. This value is specified in bytes.
  • The <ACTION> attribute specifies the action that is taken when the maximum size, defined in <MAX_SIZE>, is hit.
Example 20.2, "Output channel log rotation" shows a simple log rotation through the use of an output channel. First, the output channel is defined via the $outchannel directive and then used in a rule which selects every syslog message with any priority and executes the previously-defined output channel on the acquired syslog messages. Once the limit (in the example 100 MB) is hit, the /home/joe/log_rotation_script is executed. This script can contain anything from moving the file into a different folder, editing specific content out of it, or simply removing it.

Example 20.2. Output channel log rotation

$outchannel log_rotation, /var/log/test_log.log, 104857600, /home/joe/log_rotation_script
*.* $log_rotation

Support for output channels is to be removed in the future

Output channels are currently supported by rsyslog; however, they are planned to be removed in the near future.
Sending syslog messages to specific users
rsyslog can send syslog messages to specific users by simply specifying a username of the user you wish to send the messages to. To specify more than one user, separate each username with a comma (,). To send messages to every user that is currently logged on, use an asterisk (*).
Executing a program
rsyslog lets you execute a program for selected syslog messages and uses the system() call to execute the program in a shell. To specify a program to be executed, prefix it with a caret character (^). Consequently, specify a template that formats the received message and passes it to the specified executable as a one-line parameter (for more information on templates, refer to Section 20.1.3.3, "Templates"). In the following example, any syslog message with any priority is selected, formatted with the template template and passed as a parameter to the test-program program, which is then executed with the provided parameter:
*.* ^test-program;template

Be careful when using the shell execute action

When accepting messages from any host, and using the shell execute action, you may be vulnerable to command injection. An attacker may try to inject and execute commands specified by the attacker in the program you specified (in your action) to be executed. To avoid any possible security threats, thoroughly consider the use of the shell execute action.
Inputting syslog messages in a database
Selected syslog messages can be directly written into a database table using the database writer action. The database writer uses the following syntax:
:<PLUGIN>:<DB_HOST>,<DB_NAME>,<DB_USER>,<DB_PASSWORD>;[<TEMPLATE>]
where:
  • The <PLUGIN> calls the specified plug-in that handles the database writing (for example, the ommysql plug-in).
  • The <DB_HOST> attribute specifies the database hostname.
  • The <DB_NAME> attribute specifies the name of the database.
  • The <DB_USER> attribute specifies the database user.
  • The <DB_PASSWORD> attribute specifies the password used with the aforementioned database user.
  • The <TEMPLATE> attribute specifies an optional use of a template that modifies the syslog message. For more information on templates, refer to Section 20.1.3.3, "Templates".
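For example, a sketch of a rule that writes every syslog message into a MySQL database named Syslog might look as follows (the host address, database name, user, and password are placeholders):
*.*    :ommysql:192.168.0.2,Syslog,rsyslog_user,rsyslog_pwd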

Using MySQL and PostgreSQL

Currently, rsyslog provides support for MySQL (for more information, refer to /usr/share/doc/rsyslog-<version-number>/rsyslog_mysql.html) and PostgreSQL databases only. In order to use the MySQL and PostgreSQL database writer functionality, install the rsyslog-mysql and rsyslog-pgsql packages, respectively. Also, make sure you load the appropriate modules in your /etc/rsyslog.conf configuration file:
$ModLoad ommysql    # Output module for MySQL support
$ModLoad ompgsql    # Output module for PostgreSQL support
For more information on rsyslog modules, refer to Section 20.1.2, "Modules".
Alternatively, you may use a generic database interface provided by the omlibdb module. However, this module is currently not compiled.
Discarding syslog messages
To discard your selected messages, use the tilde character (~). The following rule discards any cron syslog messages:
cron.* ~
For each selector, you are allowed to specify multiple actions. To specify multiple actions for one selector, write each action on a separate line and precede it with an ampersand character (&). Only the first action is allowed to have a selector specified on its line. The following is an example of a rule with multiple actions:
kern.=crit joe
& ^test-program;temp
& @192.168.0.1
In the example above, all kernel syslog messages with the critical priority (crit) are sent to user joe, processed by the template temp and passed on to the test-program executable, and forwarded to 192.168.0.1 via the UDP protocol.
Specifying multiple actions improves overall performance, since the specified selector has to be evaluated only once.
Note that any action can be followed by a template that formats the message. To specify a template, suffix an action with a semicolon (;) and specify the name of the template.
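For example, the following sketch appends templates to two actions: the first uses the verbose template from Example 20.3, "A verbose syslog message template", and the second uses the predefined RSYSLOG_ForwardFormat template listed in Section 20.1.3.3, "Templates". This assumes the verbose template has already been defined earlier in the configuration file (see the note below); the host name and port are illustrative.
cron.*    /var/log/cron.log;verbose
*.*       @@example.com:514;RSYSLOG_ForwardFormat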

Using templates

A template must be defined before it is used in an action, otherwise, it is ignored.
For more information on templates, refer to Section 20.1.3.3, "Templates".

20.1.3.3. Templates

Any output that is generated by rsyslog can be modified and formatted according to your needs through the use of templates. To create a template use the following syntax:
$template <TEMPLATE_NAME>,"text %<PROPERTY>% more text", [<OPTION>]
where:
  • $template is the template directive that indicates that the text following it defines a template.
  • <TEMPLATE_NAME> is the name of the template. Use this name to refer to the template.
  • Anything between the two quotation marks (" . . . . . . ") is the actual template text. Within this text, you are allowed to escape characters in order to use their functionality, such as \n for new line or \r for carriage return. Other characters, such as % or ", have to be escaped if you want to use those characters literally.
    The text specified within two percent signs (%) specifies a property that is consequently replaced with the property's actual value. For more information on properties, refer to Section 20.1.3.3.2, "Properties".
  • The <OPTION> attribute specifies any options that modify the template functionality. Do not mistake these for property options, which are defined inside the template text (between " . . . . . . "). The currently supported template options are sql and stdsql used for formatting the text as an SQL query.

    The sql and stdsql options

    Note that the database writer (for more information, refer to section Inputting syslog messages in a database in Section 20.1.3.2, "Actions") checks whether the sql and stdsql options are specified in the template. If they are not, the database writer does not perform any action. This is to prevent any possible security threats, such as SQL injection.
20.1.3.3.1. Generating dynamic file names
Templates can be used to generate dynamic file names. By specifying a property as a part of the file path, a new file will be created for each unique property. For example, use the timegenerated property to generate a unique file name for each syslog message:
$template DynamicFile,"/var/log/test_logs/%timegenerated%-test.log"
Keep in mind that the $template directive only specifies the template. You must use it inside a rule for it to take effect:
*.* ?DynamicFile
20.1.3.3.2. Properties
Properties defined inside a template (within two percent signs (%)) allow you to access various contents of a syslog message through the use of a property replacer. To define a property inside a template (between the two quotation marks (" . . . . . . ")), use the following syntax:
%<PROPERTY_NAME>[:<FROM_CHAR>:<TO_CHAR>:<OPTION>]%
where:
  • The <PROPERTY_NAME> attribute specifies the name of a property. A comprehensive list of all available properties and their detailed description can be found in /usr/share/doc/rsyslog-<version-number>/property_replacer.html under the section Available Properties.
  • <FROM_CHAR> and <TO_CHAR> attributes denote a range of characters that the specified property will act upon. Alternatively, regular expressions can be used to specify a range of characters. To do so, specify the letter R as the <FROM_CHAR> attribute and specify your desired regular expression as the <TO_CHAR> attribute.
  • The <OPTION> attribute specifies any property options. A comprehensive list of all available property options and their detailed description can be found in /usr/share/doc/rsyslog-<version-number>/property_replacer.html under the section Property Options.
The following are some examples of simple properties:
  • The following property simply obtains the whole message text of a syslog message:
    %msg%
  • The following property obtains the first two characters of the message text of a syslog message:
    %msg:1:2%
  • The following property obtains the whole message text of a syslog message and drops its last line feed character:
    %msg:::drop-last-lf%
  • The following property obtains the first 10 characters of the timestamp that is generated when the syslog message is received and formats it according to the RFC 3339 date standard.
    %timegenerated:1:10:date-rfc3339%
20.1.3.3.3. Template Examples
This section presents a few examples of rsyslog templates.
Example 20.3, "A verbose syslog message template" shows a template that formats a syslog message so that it outputs the message's severity, facility, the timestamp of when the message was received, the hostname, the message tag, the message text, and ends with a new line.

Example 20.3. A verbose syslog message template

$template verbose,"%syslogseverity%,%syslogfacility%,%timegenerated%,%HOSTNAME%,%syslogtag%,%msg%\n"

Example 20.4, "A wall message template" shows a template that resembles a traditional wall message (a message that is send to every user that is logged in and has their mesg(1) permission set to yes). This template outputs the message text, along with a hostname, message tag and a timestamp, on a new line (using \r and \n) and rings the bell (using \7).

Example 20.4. A wall message template

$template wallmsg,"\r\n\7Message from syslogd@%HOSTNAME% at %timegenerated% ...\r\n %syslogtag% %msg%\n\r"

Example 20.5, "A database formatted message template" shows a template that formats a syslog message so that it can be used as a database query. Notice the use of the sql option at the end of the template specified as the template option. It tells the database writer to format the message as an MySQL SQL query.

Example 20.5. A database formatted message template

$template dbFormat,"insert into SystemEvents (Message, Facility,FromHost, Priority, DeviceReportedTime, ReceivedAt, InfoUnitID, SysLogTag) values ('%msg%', %syslogfacility%, '%HOSTNAME%',%syslogpriority%, '%timereported:::date-mysql%', '%timegenerated:::date-mysql%', %iut%, '%syslogtag%')",sql

rsyslog also contains a set of predefined templates identified by the RSYSLOG_ prefix. It is advisable not to create a template using this prefix to avoid any conflicts. The following list shows these predefined templates along with their definitions.
RSYSLOG_DebugFormat
"Debug line with all properties:\nFROMHOST: '%FROMHOST%', fromhost-ip: '%fromhost-ip%', HOSTNAME: '%HOSTNAME%', PRI: %PRI%,\nsyslogtag '%syslogtag%', programname: '%programname%', APP-NAME: '%APP-NAME%', PROCID: '%PROCID%', MSGID: '%MSGID%',\nTIMESTAMP: '%TIMESTAMP%', STRUCTURED-DATA: '%STRUCTURED-DATA%',\nmsg: '%msg%'\nescaped msg: '%msg:::drop-cc%'\nrawmsg: '%rawmsg%'\n\n\"
RSYSLOG_SyslogProtocol23Format
"<%PRI%>1 %TIMESTAMP:::date-rfc3339% %HOSTNAME% %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %msg%\n\"
RSYSLOG_FileFormat
"%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n\"
RSYSLOG_TraditionalFileFormat
"%TIMESTAMP% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n\"
RSYSLOG_ForwardFormat
"<%PRI%>%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%\"
RSYSLOG_TraditionalForwardFormat
"<%PRI%>%TIMESTAMP% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%\"

20.1.4. rsyslog Command Line Configuration

Some of rsyslog's functionality can be configured through command line options, as was the case with sysklogd. Note that as of version 3 of rsyslog, this method was deprecated. To enable some of these options, you must specify the compatibility mode rsyslog should run in. However, configuring rsyslog through the command line options should be avoided.
To specify the compatibility mode rsyslog should run in, use the -c option. When no parameter is specified, rsyslog tries to be compatible with sysklogd. This is partially achieved by activating configuration directives that modify your configuration accordingly. Therefore, it is advisable to supply this option with a number that matches the major version of rsyslog that is in use and to update your /etc/rsyslog.conf configuration file accordingly. If you want to, for example, use sysklogd options (which were deprecated in version 3 of rsyslog), you can do so by executing the following command:
~]# rsyslogd -c 2
Options that are passed to the rsyslogd daemon, including the backward compatibility mode, can be specified in the /etc/sysconfig/rsyslog configuration file.
For more information on various rsyslogd options, refer to man rsyslogd.

20.2. Locating Log Files

Most log files are located in the /var/log/ directory. Some applications such as httpd and samba have a directory within /var/log/ for their log files.
You may notice multiple files in the /var/log/ directory with numbers after them (for example, cron-20100906). These numbers represent a timestamp that has been added to a rotated log file. Log files are rotated so their file sizes do not become too large. The logrotate package contains a cron task that automatically rotates log files according to the /etc/logrotate.conf configuration file and the configuration files in the /etc/logrotate.d/ directory.

20.2.1. Configuring logrotate

The following is a sample /etc/logrotate.conf configuration file:
# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4

# uncomment this if you want your log files compressed
compress
All of the lines in the sample configuration file define global options that apply to every log file. In our example, log files are rotated weekly, rotated log files are kept for four weeks, and all rotated log files are compressed by gzip into the .gz format. Any lines that begin with a hash sign (#) are comments and are not processed.
You may define configuration options for a specific log file and place them under the global options. However, it is advisable to create a separate configuration file for any specific log file in the /etc/logrotate.d/ directory and define any configuration options there.
The following is an example of a configuration file placed in the /etc/logrotate.d/ directory:
/var/log/messages {
    rotate 5
    weekly
    postrotate
        /usr/bin/killall -HUP syslogd
    endscript
}
The configuration options in this file are specific for the /var/log/messages log file only. The settings specified here override the global settings where possible. Thus the rotated /var/log/messages log file will be kept for five weeks instead of four weeks as was defined in the global options.
The following is a list of some of the directives you can specify in your logrotate configuration file:
  • weekly - Specifies the rotation of log files on a weekly basis. Similar directives include:
    • daily
    • monthly
    • yearly
  • compress - Enables compression of rotated log files. Similar directives include:
    • nocompress
    • compresscmd - Specifies the command to be used for compressing.
    • uncompresscmd
    • compressext - Specifies what extension is to be used for compressing.
    • compressoptions - Lets you specify any options that may be passed to the used compression program.
    • delaycompress - Postpones the compression of log files to the next rotation of log files.
  • rotate <INTEGER> - Specifies the number of rotations a log file undergoes before it is removed or mailed to a specific address. If the value 0 is specified, old log files are removed instead of rotated.
  • mail <ADDRESS> - This option enables mailing of log files that have been rotated as many times as is defined by the rotate directive to the specified address. Similar directives include:
    • nomail
    • mailfirst - Specifies that the just-rotated log files are to be mailed, instead of the about-to-expire log files.
    • maillast - Specifies that the about-to-expire log files are to be mailed, instead of the just-rotated log files. This is the default option when mail is enabled.
For the full list of directives and various configuration options, refer to the logrotate man page (man logrotate).
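For example, a sketch of a configuration file placed in the /etc/logrotate.d/ directory that combines several of the directives above might look as follows (the log file path and email address are illustrative):
/var/log/myapp.log {
    monthly
    rotate 6
    compress
    delaycompress
    mail admin@example.com
    maillast
}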

20.3. Viewing Log Files

Most log files are in plain text format. You can view them with any text editor such as Vi or Emacs. Some log files are readable by all users on the system; however, root privileges are required to read most log files.
To view system log files in an interactive, real-time application, use the Log File Viewer.

Installing the gnome-system-log package

In order to use the Log File Viewer, first ensure the gnome-system-log package is installed on your system by running, as root:
~]# yum install gnome-system-log
For more information on installing packages with Yum, refer to Section 6.2.4, "Installing Packages".
After you have installed the gnome-system-log package, you can open the Log File Viewer by clicking Applications → System Tools → Log File Viewer, or type the following command at a shell prompt:
~]$ gnome-system-log
The application only displays log files that exist; thus, the list might differ from the one shown in Figure 20.1, "Log File Viewer".

Figure 20.1. Log File Viewer


The Log File Viewer application lets you filter any existing log file. Click on Filters from the menu and select Manage Filters to define or edit your desired filter.

Figure 20.2. Log File Viewer - Filters


Adding or editing a filter lets you define its parameters as is shown in Figure 20.3, "Log File Viewer - defining a filter".

Figure 20.3. Log File Viewer - defining a filter


When defining a filter, you can edit the following parameters:
  • Name - Specifies the name of the filter.
  • Regular Expression - Specifies the regular expression that will be applied to the log file and will attempt to match any possible strings of text in it.
  • Effect
    • Highlight - If checked, the found results will be highlighted with the selected color. You may select whether to highlight the background or the foreground of the text.
    • Hide - If checked, the found results will be hidden from the log file you are viewing.
When you have at least one filter defined, you may select it from the Filters menu and it will automatically search for the strings you have defined in the filter and highlight/hide every successful match in the log file you are currently viewing.

Figure 20.4. Log File Viewer - enabling a filter


When you check the Show matches only option, only the matched strings will be shown in the log file you are currently viewing.

20.4. Adding a Log File

To add a log file you wish to view in the list, select File → Open. This will display the Open Log window where you can select the directory and file name of the log file you wish to view. Figure 20.5, "Log File Viewer - adding a log file" illustrates the Open Log window.

Figure 20.5. Log File Viewer - adding a log file


Click on the Open button to open the file. The file is immediately added to the viewing list where you can select it and view its contents.

Reading zipped log files

The Log File Viewer also allows you to open log files zipped in the .gz format.

20.5. Monitoring Log Files

Log File Viewer monitors all opened logs by default. If a new line is added to a monitored log file, the log name appears in bold in the log list. If the log file is selected or displayed, the new lines appear in bold at the bottom of the log file. Figure 20.6, "Log File Viewer - new log alert" illustrates a new alert in the cron log file and in the messages log file. Clicking on the cron log file displays the logs in the file with the new lines in bold.

Figure 20.6. Log File Viewer - new log alert


20.6. Additional Resources

To learn more about rsyslog, logrotate, and log files in general, refer to the following resources.

20.6.1. Installed Documentation

  • rsyslogd(8) - The manual page for the rsyslogd daemon provides more information on its usage.
  • rsyslog.conf(5) - The manual page for the /etc/rsyslog.conf configuration file provides detailed information about available configuration options.
  • logrotate(8) - The manual page for the logrotate utility provides more information on its configuration and usage.
  • /usr/share/doc/rsyslog-<version-number>/ - After installing the rsyslog package, this directory contains extensive documentation for rsyslog in HTML format.

20.6.2. Useful Websites

  • http://www.rsyslog.com/ - The rsyslog home page offers additional documentation and configuration examples.

Chapter 21. Automating System Tasks

Tasks, also known as jobs, can be configured to run automatically within a specified period of time, on a specified date, or when the system load average decreases below 0.8.
Red Hat Enterprise Linux is pre-configured to run important system tasks to keep the system updated. For example, the slocate database used by the locate command is updated daily. A system administrator can use automated tasks to perform periodic backups, monitor the system, run custom scripts, and so on.
Red Hat Enterprise Linux comes with the following automated task utilities: cron, anacron, at, and batch.
Every utility is intended for scheduling a different job type: while Cron and Anacron schedule recurring jobs, At and Batch schedule one-time jobs (refer to Section 21.1, "Cron and Anacron" and Section 21.2, "At and Batch" respectively).

21.1. Cron and Anacron

Both Cron and Anacron are daemons that can schedule execution of recurring tasks to a certain point in time defined by the exact time, day of the month, month, day of the week, and week.
Cron jobs can run as often as every minute. However, the utility assumes that the system is running continuously and if the system is not on at the time when a job is scheduled, the job is not executed.
On the other hand, Anacron remembers the scheduled jobs if the system is not running at the time when the job is scheduled. The job is then executed as soon as the system is up. However, Anacron can only run a job once a day.

21.1.1. Installing Cron and Anacron

To install Cron and Anacron, you need to install the cronie package with Cron and the cronie-anacron package with Anacron (cronie-anacron is a sub-package of cronie).
To determine if the packages are already installed on your system, issue the rpm -q cronie cronie-anacron command. The command returns full names of the cronie and cronie-anacron packages if already installed or notifies you that the packages are not available.
To install the packages, use the yum command in the following form:
 yum install <package> 
For example, to install both Cron and Anacron, type the following at a shell prompt:
~]# yum install cronie cronie-anacron
Note that you must have superuser privileges (that is, you must be logged in as root) to run this command. For more information on how to install new packages in Red Hat Enterprise Linux, refer to Section 6.2.4, "Installing Packages".

21.1.2. Running the Crond Service

The cron and anacron jobs are both picked up by the crond service. This section provides information on how to start, stop, and restart the crond service, and shows how to enable it in a particular runlevel. For more information on the concept of runlevels and how to manage system services in Red Hat Enterprise Linux in general, refer to Chapter 10, Services and Daemons.

21.1.2.1. Starting and Stopping the Cron Service

To determine if the service is running, use the command service crond status.
To run the crond service in the current session, type the following at a shell prompt as root:
 service crond start 
To configure the service to be automatically started at boot time, use the following command:
 chkconfig crond on 
This command enables the service in runlevel 2, 3, 4, and 5. Alternatively, you can use the Service Configuration utility as described in Section 10.2.1.1, "Enabling and Disabling a Service".

21.1.2.2. Stopping the Cron Service

To stop the crond service, type the following at a shell prompt as root:
 service crond stop 
To disable starting the service at boot time, use the following command:
 chkconfig crond off 
This command disables the service in all runlevels. Alternatively, you can use the Service Configuration utility as described in Section 10.2.1.1, "Enabling and Disabling a Service".

21.1.2.3. Restarting the Cron Service

To restart the crond service, type the following at a shell prompt:
 service crond restart 
This command stops the service and starts it again in quick succession.

21.1.3. Configuring Anacron Jobs

The main configuration file to schedule jobs is the /etc/anacrontab file, which can only be accessed by the root user. The file contains the following:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22

#period in days   delay in minutes   job-identifier   command
1         5    cron.daily    nice run-parts /etc/cron.daily
7         25   cron.weekly   nice run-parts /etc/cron.weekly
@monthly  45   cron.monthly  nice run-parts /etc/cron.monthly
The first three lines define the variables that configure the environment in which the anacron tasks run:
  • SHELL - shell environment used for running jobs (in the example, the Bash shell)
  • PATH - paths to executable programs
  • MAILTO - username of the user who receives the output of the anacron jobs by email
    If the MAILTO variable is not defined (MAILTO=), the email is not sent.
The next two variables modify the scheduled time for the defined jobs:
  • RANDOM_DELAY - maximum number of minutes that will be added to the delay in minutes variable which is specified for each job
    The minimum delay value is set, by default, to 6 minutes.
    If RANDOM_DELAY is, for example, set to 12, then between 6 and 12 minutes are added to the delay in minutes for each job in that particular anacrontab. RANDOM_DELAY can also be set to a value below 6, including 0. When set to 0, no random delay is added. This proves to be useful when, for example, several computers that share one network connection need to download the same data every day.
  • START_HOURS_RANGE - interval, when scheduled jobs can be run, in hours
    In case the time interval is missed, for example due to a power failure, the scheduled jobs are not executed that day.
The remaining lines in the /etc/anacrontab file represent scheduled jobs and follow this format:
period in days   delay in minutes   job-identifier   command
  • period in days - frequency of job execution in days
    The property value can be defined as an integer or a macro (@daily, @weekly, @monthly), where @daily denotes the same value as integer 1, @weekly the same as 7, and @monthly specifies that the job is run once a month regardless of the length of the month.
  • delay in minutes - number of minutes anacron waits before executing the job
    The property value is defined as an integer. If the value is set to 0, no delay applies.
  • job-identifier - unique name referring to a particular job used in the log files
  • command - command to be executed
    The command can be either a command such as ls /proc >> /tmp/proc or a command which executes a custom script.
Any lines that begin with a hash sign (#) are comments and are not processed.

21.1.3.1. Examples of Anacron Jobs

The following example shows a simple /etc/anacrontab file:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=30
# the jobs will be started during the following hours only
START_HOURS_RANGE=16-20

#period in days   delay in minutes   job-identifier   command
1         20   dailyjob     nice run-parts /etc/cron.daily
7         25   weeklyjob    /etc/weeklyjob.bash
@monthly  45   monthlyjob   ls /proc >> /tmp/proc
All jobs defined in this anacrontab file are randomly delayed by 6-30 minutes and can be executed between 16:00 and 20:00.
The first defined job is triggered daily between 16:26 and 16:50 (RANDOM_DELAY is between 6 and 30 minutes; the delay in minutes property adds 20 minutes). The command specified for this job executes all present programs in the /etc/cron.daily directory using the run-parts script (the run-parts script accepts a directory as a command-line argument and sequentially executes every program in the directory).
The second job executes the weeklyjob.bash script in the /etc directory once a week.
The third job runs a command, which writes the contents of /proc to the /tmp/proc file (ls /proc >> /tmp/proc) once a month.

21.1.4. Configuring Cron Jobs

The configuration file for cron jobs is /etc/crontab, which can only be modified by the root user. The file contains the following:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  *  username  command to be executed
The first three lines contain the same variable definitions as an anacrontab file: SHELL, PATH, and MAILTO. For more information about these variables, refer to Section 21.1.3, "Configuring Anacron Jobs".
In addition, the file can define the HOME variable. The HOME variable defines the directory which will be used as the home directory when executing commands or scripts run by the job.
The remaining lines in the /etc/crontab file represent scheduled jobs and have the following format:
minute   hour   day   month   day of week   username   command
The following define the time when the job is to be run:
  • minute - any integer from 0 to 59
  • hour - any integer from 0 to 23
  • day - any integer from 1 to 31 (must be a valid day if a month is specified)
  • month - any integer from 1 to 12 (or the short name of the month such as jan or feb)
  • day of week - any integer from 0 to 7, where 0 or 7 represents Sunday (or the short name of the week such as sun or mon)
The following define other job properties:
  • username - specifies the user under which the jobs are run
  • command - the command to be executed
    The command can be either a command such as ls /proc >> /tmp/proc or a command which executes a custom script.
For any of the above values, an asterisk (*) can be used to specify all valid values. If you, for example, define the month value as an asterisk, the job will be executed every month within the constraints of the other values.
A hyphen (-) between integers specifies a range of integers. For example, 1-4 means the integers 1, 2, 3, and 4.
A list of values separated by commas (,) specifies a list. For example, 3, 4, 6, 8 indicates exactly these four integers.
The forward slash (/) can be used to specify step values. Appending /<integer> to a range selects every <integer>th value within that range; for example, a minute value defined as 0-59/2 denotes every other minute in the minute field. Step values can also be used with an asterisk. For instance, if the month value is defined as */3, the task will run every third month.
Any lines that begin with a hash sign (#) are comments and are not processed.
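Putting the fields together, the following sketch shows two illustrative /etc/crontab entries (the script path and the command are placeholders):
# run a backup script at 02:30 every Monday
30 2 * * mon root /usr/local/bin/weekly-backup.sh

# append a timestamp to a file every 15 minutes
*/15 * * * * root date >> /tmp/heartbeat.log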
Users other than root can configure cron tasks with the crontab utility. The user-defined crontabs are stored in the /var/spool/cron/ directory and executed as if run by the users that created them.
To create a crontab as a user, log in as that user and type the command crontab -e to edit the user's crontab with the editor specified in the VISUAL or EDITOR environment variable. The file uses the same format as /etc/crontab. When the changes to the crontab are saved, the crontab is stored according to username and written to the file /var/spool/cron/username. To list the contents of your crontab file, use the crontab -l command.

Do not specify a user

Do not specify the user when defining a job with the crontab utility.
The /etc/cron.d/ directory contains files that have the same syntax as the /etc/crontab file. Only root is allowed to create and modify files in this directory.

Do not restart the daemon to apply the changes

The cron daemon checks the /etc/anacrontab file, the /etc/crontab file, the /etc/cron.d/ directory, and the /var/spool/cron/ directory every minute for changes and the detected changes are loaded into memory. It is therefore not necessary to restart the daemon after an anacrontab or a crontab file has been changed.

21.1.5. Controlling Access to Cron

To restrict the access to Cron, you can use the /etc/cron.allow and /etc/cron.deny files. These access control files use the same format with one username on each line. Mind that no whitespace characters are permitted in either file.
If the cron.allow file exists, only users listed in the file are allowed to use cron, and the cron.deny file is ignored.
If the cron.allow file does not exist, users listed in the cron.deny file are not allowed to use Cron.
The Cron daemon (crond) does not have to be restarted if the access control files are modified. The access control files are checked each time a user tries to add or delete a cron job.
The root user can always use cron, regardless of the usernames listed in the access control files.
You can also control access through Pluggable Authentication Modules (PAM). The settings are stored in the /etc/security/access.conf file. For example, after adding the following line to the file, no other user but the root user can create crontabs:
-:ALL EXCEPT root :cron
The forbidden jobs are logged in an appropriate log file or, when using "crontab -e", returned to the standard output. For more information, refer to access.conf.5 (that is, man 5 access.conf).

21.1.6. Black and White Listing of Cron Jobs

Black and white listing of jobs is used to define parts of a job that do not need to be executed. This is useful when calling the run-parts script on a Cron directory, such as /etc/cron.daily: if the user adds programs located in the directory to the job black list, the run-parts script will not execute these programs.
To define a black list, create a jobs.deny file in the directory that run-parts scripts will be executing from. For example, if you need to omit a particular program from /etc/cron.daily, create the /etc/cron.daily/jobs.deny file. In this file, specify the names of the programs to be omitted from execution (only programs located in the same directory can be listed). If a job runs a command which runs the programs from the cron.daily directory, such as run-parts /etc/cron.daily, the programs defined in the jobs.deny file will not be executed.
To define a white list, create a jobs.allow file.
The principles of jobs.deny and jobs.allow are the same as those of cron.deny and cron.allow described in section Section 21.1.5, "Controlling Access to Cron".
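For example, to prevent run-parts from executing a program named my-custom-report (a placeholder name for a program installed in /etc/cron.daily), the /etc/cron.daily/jobs.deny file would contain just the program name:
my-custom-report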

21.2. At and Batch

While Cron is used to schedule recurring tasks, the At utility is used to schedule a one-time task at a specific time and the Batch utility is used to schedule a one-time task to be executed when the system load average drops below 0.8.

21.2.1. Installing At and Batch

To determine if the at package is already installed on your system, issue the rpm -q at command. The command returns the full name of the at package if already installed or notifies you that the package is not available.
To install the packages, use the yum command in the following form:
 yum install <package> 
To install At and Batch, type the following at a shell prompt:
~]# yum install at
Note that you must have superuser privileges (that is, you must be logged in as root) to run this command. For more information on how to install new packages in Red Hat Enterprise Linux, refer to Section 6.2.4, "Installing Packages".

21.2.2. Running the At Service

The At and Batch jobs are both picked up by the atd service. This section provides information on how to start, stop, and restart the atd service, and shows how to enable it in a particular runlevel. For more information on the concept of runlevels and how to manage system services in Red Hat Enterprise Linux in general, refer to Chapter 10, Services and Daemons.

21.2.2.1. Starting and Stopping the At Service

To determine if the service is running, use the command service atd status.
To run the atd service in the current session, type the following at a shell prompt as root:
 service atd start 
To configure the service to start automatically at boot, use the following command:
 chkconfig atd on 

Note

It is recommended to start the service at boot automatically.
This command enables the service in runlevel 2, 3, 4, and 5. Alternatively, you can use the Service Configuration utility as described in Section 10.2.1.1, "Enabling and Disabling a Service".

21.2.2.2. Stopping the At Service

To stop the atd service, type the following at a shell prompt as root:
 service atd stop 
To disable starting the service at boot time, use the following command:
 chkconfig atd off 
This command disables the service in all runlevels. Alternatively, you can use the Service Configuration utility as described in Section 10.2.1.1, "Enabling and Disabling a Service".

21.2.2.3. Restarting the At Service

To restart the atd service, type the following at a shell prompt:
 service atd restart 
This command stops the service and starts it again in quick succession.

21.2.3. Configuring an At Job

To schedule a one-time job for a specific time with the At utility, do the following:
  1. On the command line, type the command at TIME, where TIME is the time when the command is to be executed.
    The TIME argument can be defined in any of the following formats:
    • HH:MM specifies the exact hour and minute; For example, 04:00 specifies 4:00 a.m.
    • midnight specifies 12:00 a.m.
    • noon specifies 12:00 p.m.
    • teatime specifies 4:00 p.m.
    • MONTH DAY YEAR format; For example, January 15 2012 specifies the 15th day of January in the year 2012. The year value is optional.
    • MMDDYY, MM/DD/YY, or MM.DD.YY formats; For example, 011512 for the 15th day of January in the year 2012.
    • now + TIME where TIME is defined as an integer and the value type: minutes, hours, days, or weeks. For example, now + 5 days specifies that the command will be executed at the same time five days from now.
      The time must be specified first, followed by the optional date. For more information about the time format, refer to the /usr/share/doc/at-<version>/timespec text file.
    If the specified time has passed, the job is executed at that time the next day.
  2. In the displayed at> prompt, define the job commands:
    • Type the command the job should execute and press Enter. Optionally, repeat the step to provide multiple commands.
    • Enter a shell script at the prompt and press Enter after each line in the script.
      The job will use the shell set in the user's SHELL environment, the user's login shell, or /bin/sh (whichever is found first).
  3. Once finished, press Ctrl+D on an empty line to exit the prompt.
If the set of commands or the script tries to display information to standard output, the output is emailed to the user.
To view the list of pending jobs, use the atq command. Refer to Section 21.2.5, "Viewing Pending Jobs" for more information.
You can also restrict the usage of the at command. For more information, refer to Section 21.2.7, "Controlling Access to At and Batch" for details.
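The following sketch shows an illustrative at session that schedules a single command for 5:00 p.m.; the command is a placeholder, and the job number and date printed in the confirmation line will differ on your system:
~]$ at 17:00
at> date >> /tmp/at-test.log
at> <EOT>
job 3 at 2012-01-15 17:00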

21.2.4. Configuring a Batch Job

The Batch application executes the defined one-time tasks when the system load average decreases below 0.8.
To define a Batch job, do the following:
  1. On the command line, type the command batch.
  2. In the displayed at> prompt, define the job commands:
    • Type the command the job should execute and press Enter. Optionally, repeat the step to provide multiple commands.
    • Enter a shell script at the prompt and press Enter after each line in the script.
      If a script is entered, the job uses the shell set in the user's SHELL environment, the user's login shell, or /bin/sh (whichever is found first).
  3. Once finished, press Ctrl+D on an empty line to exit the prompt.
If the set of commands or the script tries to display information to standard output, the output is emailed to the user.
To view the list of pending jobs, use the atq command. Refer to Section 21.2.5, "Viewing Pending Jobs" for more information.
You can also restrict the usage of the batch command. For more information, refer to Section 21.2.7, "Controlling Access to At and Batch" for details.

21.2.5. Viewing Pending Jobs

To view the pending At and Batch jobs, run the atq command. The atq command displays a list of pending jobs, with each job on a separate line. Each line follows the job number, date, hour, job class, and username format. Users can only view their own jobs. If the root user executes the atq command, all jobs for all users are displayed.

21.2.6. Additional Command Line Options

Additional command line options for at and batch include the following:

Table 21.1. at and batch Command Line Options

Option   Description
-f       Read the commands or shell script from a file instead of specifying them at the prompt.
-m       Send email to the user when the job has been completed.
-v       Display the time that the job is executed.

21.2.7. Controlling Access to At and Batch

You can restrict access to the at and batch commands using the /etc/at.allow and /etc/at.deny files. These access control files use the same format, defining one username on each line. Mind that no whitespace characters are permitted in either file.
If the file at.allow exists, only users listed in the file are allowed to use at or batch, and the at.deny file is ignored.
If at.allow does not exist, users listed in at.deny are not allowed to use at or batch.
The at daemon (atd) does not have to be restarted if the access control files are modified. The access control files are read each time a user tries to execute the at or batch commands.
The root user can always execute at and batch commands, regardless of the content of the access control files.

21.3. Additional Resources

To learn more about configuring automated tasks, refer to the following installed documentation:
  • cron man page contains an overview of cron.
  • crontab man pages in sections 1 and 5:
    • The manual page in section 1 contains an overview of the crontab file.
    • The man page in section 5 contains the format for the file and some example entries.
  • anacron manual page contains an overview of anacron.
  • anacrontab manual page contains an overview of the anacrontab file.
  • /usr/share/doc/at-<version>/timespec contains detailed information about the time values that can be used in at job definitions.
  • at manual page contains descriptions of at and batch and their command line options.