
Security Guide

Chapter 1. Security Overview

Because of the increased reliance on powerful, networked computers to help run businesses and keep track of our personal information, entire industries have formed around the practice of network and computer security. Enterprises solicit the knowledge and skills of security experts to properly audit systems and tailor solutions to fit the operating requirements of their organizations. Because most organizations are increasingly dynamic in nature, with workers accessing critical company IT resources both locally and remotely, the need for secure computing environments has become more pronounced.
Unfortunately, many organizations (as well as individual users) regard security as more of an afterthought, a process that is overlooked in favor of increased power, productivity, convenience, ease of use, and budgetary concerns. Proper security implementation is often enacted postmortem - after an unauthorized intrusion has already occurred. Taking the correct measures prior to connecting a site to an untrusted network, such as the Internet, is an effective means of thwarting many attempts at intrusion.

Note

This document makes several references to files in the /lib directory. When using 64-bit systems, some of the files mentioned may instead be located in /lib64.

1.1. Introduction to Security

1.1.1. What is Computer Security?

Computer security is a general term that covers a wide area of computing and information processing. Industries that depend on computer systems and networks to conduct daily business transactions and access critical information regard their data as an important part of their overall assets. Several terms and metrics have entered our daily business vocabulary, such as total cost of ownership (TCO), return on investment (ROI), and quality of service (QoS). Using these metrics, industries can calculate aspects such as data integrity and high-availability (HA) as part of their planning and process management costs. In some industries, such as electronic commerce, the availability and trustworthiness of data can mean the difference between success and failure.

1.1.1.1. How did Computer Security come about?

Information security has evolved over the years due to the increasing reliance on public networks not to disclose personal, financial, and other restricted information. There are numerous instances, such as the Mitnick [1] and the Vladimir Levin [2] cases, that prompted organizations across all industries to re-think the way they handle information, including its transmission and disclosure. The popularity of the Internet was one of the most important developments that prompted an intensified effort in data security.
An ever-growing number of people are using their personal computers to gain access to the resources that the Internet has to offer. From research and information retrieval to electronic mail and commerce transactions, the Internet has been regarded as one of the most important developments of the 20th century.
The Internet and its earlier protocols, however, were developed as a trust-based system. That is, the Internet Protocol (IP) was not designed to be secure in itself. There are no approved security standards built into the TCP/IP communications stack, leaving it open to potentially malicious users and processes across the network. Modern developments have made Internet communication more secure, but there are still several incidents that gain national attention and alert us to the fact that nothing is completely safe.

1.1.1.2. Security Today

In February of 2000, a Distributed Denial of Service (DDoS) attack was unleashed on several of the most heavily-trafficked sites on the Internet. The attack rendered yahoo.com, cnn.com, amazon.com, fbi.gov, and several other sites completely unreachable to normal users, as it tied up routers for several hours with large-byte ICMP packet transfers, also called a ping flood. The attack was brought on by unknown assailants using specially created, widely available programs that scanned vulnerable network servers, installed client applications called Trojans on the servers, and timed an attack with every infected server flooding the victim sites and rendering them unavailable. Many blame the attack on fundamental flaws in the way routers and the protocols used are structured to accept all incoming data, no matter where or for what purpose the packets are sent.
In 2007, a data breach exploiting the widely-known weaknesses of the Wired Equivalent Privacy (WEP) wireless encryption protocol resulted in the theft of over 45 million credit card numbers from a global financial institution.[3]
In a separate incident, the billing records of over 2.2 million patients stored on a backup tape were stolen from the front seat of a courier's car.[4]
Currently, an estimated 1.4 billion people use or have used the Internet worldwide.[5] At the same time:
  • On any given day, approximately 225 major incidents of security breach are reported to the CERT Coordination Center at Carnegie Mellon University.[6]
  • The number of CERT-reported incidents jumped from 52,658 in 2001 to 82,094 in 2002, and to 137,529 in 2003.[7]
  • According to the FBI, computer-related crimes cost US businesses $67.2 billion in 2006.[8]
From a 2009 global survey of security and information technology professionals, "Why Security Matters Now"[9], undertaken by CIO Magazine, some notable results are:
  • Just 23% of respondents have policies for using Web 2.0 technologies. These technologies, such as Twitter, Facebook, and LinkedIn, may provide a convenient way for companies and individuals to communicate and collaborate; however, they also open new vulnerabilities, primarily the leaking of confidential data.
  • Even during the financial crisis of 2009, security budgets were found to be mostly holding steady or increasing relative to previous years (nearly 2 out of 3 respondents expected spending to increase or remain the same). This is good news and reflects the importance that organizations place on information security today.
These results reinforce the reality that computer security has become a quantifiable and justifiable expense for IT budgets. Organizations that require data integrity and high availability enlist the skills of system administrators, developers, and engineers to ensure 24x7 reliability of their systems, services, and information. Falling victim to malicious users, processes, or coordinated attacks is a direct threat to the success of the organization.
Unfortunately, system and network security can be a difficult proposition, requiring an intricate knowledge of how an organization regards, uses, manipulates, and transmits its information. Understanding the way an organization (and the people who make up the organization) conducts business is paramount to implementing a proper security plan.

1.1.1.3. Standardizing Security

Enterprises in every industry rely on regulations and rules that are set by standards-making bodies such as the American Medical Association (AMA) or the Institute of Electrical and Electronics Engineers (IEEE). The same ideals hold true for information security. Many security consultants and vendors agree upon the standard security model known as CIA, or Confidentiality, Integrity, and Availability. This three-tiered model is a generally accepted component to assessing risks of sensitive information and establishing security policy. The following describes the CIA model in further detail:
  • Confidentiality - Sensitive information must be available only to a set of pre-defined individuals. Unauthorized transmission and usage of information should be restricted. For example, confidentiality of information ensures that a customer's personal or financial information is not obtained by an unauthorized individual for malicious purposes such as identity theft or credit fraud.
  • Integrity - Information should not be altered in ways that render it incomplete or incorrect. Unauthorized users should be restricted from the ability to modify or destroy sensitive information.
  • Availability - Information should be accessible to authorized users any time that it is needed. Availability is a guarantee that information can be obtained with an agreed-upon frequency and timeliness. This is often measured in terms of percentages and agreed to formally in Service Level Agreements (SLAs) used by network service providers and their enterprise clients.
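For example (a quick back-of-the-envelope calculation, not part of the formal model): an SLA guaranteeing 99.9% availability, often called "three nines", permits roughly 8.76 hours of downtime per year. The figure is easy to check from a shell with the bc calculator:
~]$ echo "(1 - 0.999) * 365 * 24" | bc -l
8.760
Each additional "nine" of promised availability divides the permitted annual downtime by ten, which is one reason strong availability guarantees are expensive to provide.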

1.1.2. SELinux

Red Hat Enterprise Linux includes an enhancement to the Linux kernel called SELinux, which implements a Mandatory Access Control (MAC) architecture that provides a fine-grained level of control over files, processes, users and applications in the system. Detailed discussion of SELinux is beyond the scope of this document; however, for more information on SELinux and its use in Red Hat Enterprise Linux, refer to the Red Hat Enterprise Linux SELinux User Guide. For more information on configuring and running services that are protected by SELinux, refer to the SELinux Managing Confined Services Guide. Other available resources for SELinux are listed in Chapter 8, References.

1.1.3. Security Controls

Computer security is often divided into three distinct master categories, commonly referred to as controls:
  • Physical
  • Technical
  • Administrative
These three broad categories define the main objectives of proper security implementation. Within these controls are sub-categories that further detail the controls and how to implement them.

1.1.3.1. Physical Controls

Physical control is the implementation of security measures in a defined structure used to deter or prevent unauthorized access to sensitive material. Examples of physical controls are:
  • Closed-circuit surveillance cameras
  • Motion or thermal alarm systems
  • Security guards
  • Picture IDs
  • Locked and dead-bolted steel doors
  • Biometrics (includes fingerprint, voice, face, iris, handwriting, and other automated methods used to recognize individuals)

1.1.3.2. Technical Controls

Technical controls use technology as a basis for controlling the access and usage of sensitive data throughout a physical structure and over a network. Technical controls are far-reaching in scope and encompass such technologies as:
  • Encryption
  • Smart cards
  • Network authentication
  • Access control lists (ACLs)
  • File integrity auditing software

1.1.3.3. Administrative Controls

Administrative controls define the human factors of security. They involve all levels of personnel within an organization and determine which users have access to what resources and information by such means as:
  • Training and awareness
  • Disaster preparedness and recovery plans
  • Personnel recruitment and separation strategies
  • Personnel registration and accounting

1.1.4. Conclusion

Now that you have learned about the origins, reasons, and aspects of security, you will find it easier to determine the appropriate course of action with regard to Red Hat Enterprise Linux. It is important to know what factors and conditions make up security in order to plan and implement a proper strategy. With this information in mind, the process can be formalized and the path becomes clearer as you delve deeper into the specifics of the security process.

1.2. Vulnerability Assessment

Given time, resources, and motivation, an attacker can break into nearly any system. All of the security procedures and technologies currently available cannot guarantee that any systems are completely safe from intrusion. Routers help secure gateways to the Internet. Firewalls help secure the edge of the network. Virtual Private Networks safely pass data in an encrypted stream. Intrusion detection systems warn you of malicious activity. However, the success of each of these technologies is dependent upon a number of variables, including:
  • The expertise of the staff responsible for configuring, monitoring, and maintaining the technologies.
  • The ability to patch and update services and kernels quickly and efficiently.
  • The ability of those responsible to keep constant vigilance over the network.
Given the dynamic state of data systems and technologies, securing corporate resources can be quite complex. Due to this complexity, it is often difficult to find expert resources for all of your systems. While it is possible to have personnel knowledgeable in many areas of information security at a high level, it is difficult to retain staff who are experts in more than a few subject areas. This is mainly because each subject area of information security requires constant attention and focus. Information security does not stand still.

1.2.1. Thinking Like the Enemy

Suppose that you administer an enterprise network. Such networks commonly comprise operating systems, applications, servers, network monitors, firewalls, intrusion detection systems, and more. Now imagine trying to keep current with each of those. Given the complexity of today's software and networking environments, exploits and bugs are a certainty. Keeping current with patches and updates for an entire network can prove to be a daunting task in a large organization with heterogeneous systems.
Combine the expertise requirements with the task of keeping current, and it is inevitable that adverse incidents occur, systems are breached, data is corrupted, and service is interrupted.
To augment security technologies and aid in protecting systems, networks, and data, you must think like a cracker and gauge the security of your systems by checking for weaknesses. Preventative vulnerability assessments against your own systems and network resources can reveal potential issues that can be addressed before a cracker exploits them.
A vulnerability assessment is an internal audit of your network and system security, the results of which indicate the confidentiality, integrity, and availability of your network (as explained in Section 1.1.1.3, "Standardizing Security"). Typically, vulnerability assessment starts with a reconnaissance phase, during which important data regarding the target systems and resources is gathered. This leads to the system readiness phase, in which the target is essentially checked for all known vulnerabilities. The readiness phase culminates in the reporting phase, where the findings are classified into categories of high, medium, and low risk, and methods for improving the security of the target (or mitigating the risk of vulnerability) are discussed.
If you were to perform a vulnerability assessment of your home, you would likely check each door to see that it is closed and locked. You would also check every window, making sure that it closes completely and latches correctly. This same concept applies to systems, networks, and electronic data. Malicious users are the thieves and vandals of your data. Focus on their tools, mentality, and motivations, and you can then react swiftly to their actions.

1.2.2. Defining Assessment and Testing

Vulnerability assessments may be broken down into one of two types: outside looking in and inside looking around.
When performing an outside-looking-in vulnerability assessment, you are attempting to compromise your systems from the outside. Being external to your company provides you with the cracker's viewpoint. You see what a cracker sees - publicly-routable IP addresses, systems on your DMZ, external interfaces of your firewall, and more. DMZ stands for "demilitarized zone", which corresponds to a computer or small subnetwork that sits between a trusted internal network, such as a corporate private LAN, and an untrusted external network, such as the public Internet. Typically, the DMZ contains devices accessible to Internet traffic, such as Web (HTTP) servers, FTP servers, SMTP (e-mail) servers and DNS servers.
When you perform an inside-looking-around vulnerability assessment, you are at an advantage since you are internal and your status is elevated to trusted. This is the viewpoint you and your co-workers have once logged on to your systems. You see print servers, file servers, databases, and other resources.
There are striking distinctions between the two types of vulnerability assessments. Being internal to your company gives you more privileges than an outsider. In most organizations, security is configured to keep intruders out. Very little is done to secure the internals of the organization (such as departmental firewalls, user-level access controls, and authentication procedures for internal resources). Typically, there are many more resources when looking around inside as most systems are internal to a company. Once you are outside the company, your status is untrusted. The systems and resources available to you externally are usually very limited.
Consider the difference between vulnerability assessments and penetration tests. Think of a vulnerability assessment as the first step to a penetration test. The information gleaned from the assessment is used for testing. Whereas the assessment is undertaken to check for holes and potential vulnerabilities, the penetration testing actually attempts to exploit the findings.
Assessing network infrastructure is a dynamic process. Security, both information and physical, is dynamic. Performing an assessment shows an overview, which can turn up false positives and false negatives.
Security administrators are only as good as the tools they use and the knowledge they retain. Take any of the assessment tools currently available, run them against your system, and it is almost a guarantee that there are some false positives. Whether by program fault or user error, the result is the same. The tool may find vulnerabilities which in reality do not exist (false positive); or, even worse, the tool may not find vulnerabilities that actually do exist (false negative).
Now that the difference between a vulnerability assessment and a penetration test is defined, take the findings of the assessment and review them carefully before conducting a penetration test as part of your new best practices approach.

Warning

Attempting to exploit vulnerabilities on production resources can have adverse effects to the productivity and efficiency of your systems and network.
The following list examines some of the benefits of performing vulnerability assessments.
  • Creates proactive focus on information security.
  • Finds potential exploits before crackers find them.
  • Results in systems being kept up to date and patched.
  • Promotes growth and aids in developing staff expertise.
  • Abates financial loss and negative publicity.

1.2.2.1. Establishing a Methodology

To aid in the selection of tools for a vulnerability assessment, it is helpful to establish a vulnerability assessment methodology. Unfortunately, there is no predefined or industry approved methodology at this time; however, common sense and best practices can act as a sufficient guide.
What is the target? Are we looking at one server, or are we looking at our entire network and everything within the network? Are we external or internal to the company? The answers to these questions are important as they help determine not only which tools to select but also the manner in which they are used.
To learn more about establishing methodologies, refer to the following websites:
  • http://www.isecom.org/ - The Open Source Security Testing Methodology Manual (OSSTMM)
  • http://www.owasp.org/ - The Open Web Application Security Project (OWASP)

1.2.3. Evaluating the Tools

An assessment can start by using some form of an information gathering tool. When assessing the entire network, map the layout first to find the hosts that are running. Once located, examine each host individually. Focusing on these hosts requires another set of tools. Knowing which tools to use may be the most crucial step in finding vulnerabilities.
Just as in any aspect of everyday life, there are many different tools that perform the same job. This concept applies to performing vulnerability assessments as well. There are tools specific to operating systems, applications, and even networks (based on the protocols used). Some tools are free; others are not. Some tools are intuitive and easy to use, while others are cryptic and poorly documented but have features that other tools do not.
Finding the right tools may be a daunting task and in the end, experience counts. If possible, set up a test lab and try out as many tools as you can, noting the strengths and weaknesses of each. Review the README file or man page for the tool. Additionally, look to the Internet for more information, such as articles, step-by-step guides, or even mailing lists specific to a tool.
The tools discussed below are just a small sampling of the available tools.

1.2.3.1. Scanning Hosts with Nmap

Nmap is a popular tool that can be used to determine the layout of a network. Nmap has been available for many years and is probably the most often used tool when gathering information. An excellent manual page is included that provides detailed descriptions of its options and usage. Administrators can use Nmap on a network to find host systems and open ports on those systems.
Nmap is a competent first step in vulnerability assessment. You can map out all the hosts within your network and even pass an option that allows Nmap to attempt to identify the operating system running on a particular host. Nmap is a good foundation for establishing a policy of using secure services and restricting unused services.
To install Nmap, run the yum install nmap command as the root user.
1.2.3.1.1. Using Nmap
Nmap can be run from a shell prompt by typing the nmap command followed by the hostname or IP address of the machine to scan:
nmap <hostname>
For example, to scan a machine with hostname foo.example.com, type the following at a shell prompt:
~]$ nmap foo.example.com
The results of a basic scan (which could take up to a few minutes, depending on where the host is located and other network conditions) look similar to the following:
Interesting ports on foo.example.com:
Not shown: 1710 filtered ports
PORT    STATE  SERVICE
22/tcp  open   ssh
53/tcp  open   domain
80/tcp  open   http
113/tcp closed auth
Nmap tests the most common network communication ports for listening or waiting services. This knowledge can be helpful to an administrator who wants to close down unnecessary or unused services.
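As a brief illustration of further options (a sketch; the Nmap manual page is the authoritative reference), the -O option asks Nmap to attempt the operating system identification mentioned earlier and requires root privileges, while the -p option restricts a scan to specific ports:
~]# nmap -O foo.example.com
~]$ nmap -p 22,80,443 foo.example.com
Only scan hosts and networks that you are authorized to assess; even a basic port scan may violate policy when aimed at someone else's systems.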
For more information about using Nmap, refer to the official homepage at the following URL:
http://nmap.org/

1.2.3.2. Nessus

Nessus is a full-service security scanner. The plug-in architecture of Nessus allows users to customize it for their systems and networks. As with any scanner, Nessus is only as good as the signature database it relies upon. Fortunately, Nessus is frequently updated and features full reporting, host scanning, and real-time vulnerability searches. Remember that there could be false positives and false negatives, even in a tool as powerful and as frequently updated as Nessus.

Note

The Nessus client and server software requires a subscription to use. It has been included in this document as a reference for users who may be interested in using this popular application.
For more information about Nessus, refer to the official website at the following URL:
http://www.nessus.org/

1.2.3.3. Nikto

Nikto is an excellent common gateway interface (CGI) script scanner. Nikto not only checks for CGI vulnerabilities but does so in an evasive manner, so as to elude intrusion detection systems. It comes with thorough documentation which should be carefully reviewed prior to running the program. If you have Web servers serving up CGI scripts, Nikto can be an excellent resource for checking the security of these servers.
More information about Nikto can be found at the following URL:
http://cirt.net/nikto2

1.2.3.4. Anticipating Your Future Needs

Depending upon your target and resources, there are many tools available. There are tools for wireless networks, Novell networks, Windows systems, Linux systems, and more. Another essential part of performing assessments may include reviewing physical security, personnel screening, or voice/PBX network assessment. Newer approaches, such as war walking and wardriving - scanning the perimeter of your enterprise's physical structures for wireless network vulnerabilities - should also be investigated and, if needed, incorporated into your assessments. Imagination and exposure are the only limits of planning and conducting vulnerability assessments.

1.3. Attackers and Vulnerabilities

To plan and implement a good security strategy, first be aware of some of the issues which determined, motivated attackers exploit to compromise systems. However, before detailing these issues, the terminology used when identifying an attacker must be defined.

1.3.1. A Quick History of Hackers

The modern meaning of the term hacker has origins dating back to the 1960s and the Massachusetts Institute of Technology (MIT) Tech Model Railroad Club, which designed train sets of large scale and intricate detail. Hacker was a name used for club members who discovered a clever trick or workaround for a problem.
The term hacker has since come to describe everything from computer buffs to gifted programmers. A common trait among most hackers is a willingness to explore in detail how computer systems and networks function with little or no outside motivation. Open source software developers often consider themselves and their colleagues to be hackers, and use the word as a term of respect.
Typically, hackers follow a form of the hacker ethic, which dictates that the quest for information and expertise is essential, and that sharing this knowledge is the hacker's duty to the community. During this quest for knowledge, some hackers enjoy the academic challenge of circumventing security controls on computer systems. For this reason, the press often uses the term hacker to describe those who illicitly access systems and networks with unscrupulous, malicious, or criminal intent. The more accurate term for this type of computer hacker is cracker - a term created by hackers in the mid-1980s to differentiate the two communities.

1.3.1.1. Shades of Gray

Within the community of individuals who find and exploit vulnerabilities in systems and networks are several distinct groups. These groups are often described by the shade of hat that they "wear" when performing their security investigations and this shade is indicative of their intent.
The white hat hacker is one who tests networks and systems to examine their performance and determine how vulnerable they are to intrusion. Usually, white hat hackers crack their own systems or the systems of a client who has specifically employed them for the purposes of security auditing. Academic researchers and professional security consultants are two examples of white hat hackers.
A black hat hacker is synonymous with a cracker. In general, crackers are less focused on programming and the academic side of breaking into systems. They often rely on available cracking programs and exploit well known vulnerabilities in systems to uncover sensitive information for personal gain or to inflict damage on the target system or network.
The gray hat hacker, on the other hand, has the skills and intent of a white hat hacker in most situations but uses his knowledge for less than noble purposes on occasion. A gray hat hacker can be thought of as a white hat hacker who wears a black hat at times to accomplish his own agenda.
Gray hat hackers typically subscribe to another form of the hacker ethic, which says it is acceptable to break into systems as long as the hacker does not commit theft or breach confidentiality. Some would argue, however, that the act of breaking into a system is in itself unethical.
Regardless of the intent of the intruder, it is important to know the weaknesses a cracker may likely attempt to exploit. The remainder of the chapter focuses on these issues.

1.3.2. Threats to Network Security

Bad practices when configuring the following aspects of a network can increase the risk of attack.

1.3.2.1. Insecure Architectures

A misconfigured network is a primary entry point for unauthorized users. Leaving a trust-based, open local network vulnerable to the highly-insecure Internet is much like leaving a door ajar in a crime-ridden neighborhood - nothing may happen for an arbitrary amount of time, but eventually someone exploits the opportunity.
1.3.2.1.1. Broadcast Networks
System administrators often fail to realize the importance of networking hardware in their security schemes. Simple hardware, such as hubs and routers, relies on the broadcast or non-switched principle; that is, whenever a node transmits data across the network to a recipient node, the hub or router broadcasts the data packets until the recipient node receives and processes the data. This method is the most vulnerable to address resolution protocol (ARP) or media access control (MAC) address spoofing by both outside intruders and unauthorized users on local hosts.
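While a switched network is the proper remedy, a simple (if not foolproof) check for ARP spoofing on such a segment is to inspect the local ARP cache for two IP addresses claiming the same hardware address. On Red Hat Enterprise Linux, either of the following commands displays the cache:
~]$ arp -n
~]$ ip neigh show
An entry whose MAC address unexpectedly matches that of another host, or that changes frequently, warrants investigation.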
1.3.2.1.2. Centralized Servers
Another potential networking pitfall is the use of centralized computing. A common cost-cutting measure for many businesses is to consolidate all services to a single powerful machine. This can be convenient as it is easier to manage and costs considerably less than multiple-server configurations. However, a centralized server introduces a single point of failure on the network. If the central server is compromised, it may render the network completely useless or worse, prone to data manipulation or theft. In these situations, a central server becomes an open door which allows access to the entire network.

1.3.3. Threats to Server Security

Server security is as important as network security because servers often hold a great deal of an organization's vital information. If a server is compromised, all of its contents may become available for the cracker to steal or manipulate at will. The following sections detail some of the main issues.

1.3.3.1. Unused Services and Open Ports

A full installation of Red Hat Enterprise Linux 6 contains 1000+ application and library packages. However, most server administrators do not opt to install every single package in the distribution, preferring instead to install a base installation of packages, including several server applications.
A common occurrence among system administrators is to install the operating system without paying attention to what programs are actually being installed. This can be problematic because unneeded services may be installed, configured with the default settings, and possibly turned on. This can cause unwanted services, such as Telnet, DHCP, or DNS, to run on a server or workstation without the administrator realizing it, which in turn can cause unwanted traffic to the server, or even a potential pathway into the system for crackers. Refer to Section 2.2, "Server Security" for information on closing ports and disabling unused services.
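As a brief sketch of the remedy (Section 2.2 covers the details; dhcpd is used here only as an example of an unneeded service), an administrator can list listening sockets and disable an unwanted service as follows:
~]# netstat -tulpn
~]# chkconfig dhcpd off
~]# service dhcpd stop
The netstat options list listening TCP and UDP sockets along with the owning processes; chkconfig prevents the service from starting at boot, and service stops the running instance.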

1.3.3.2. Unpatched Services

Most server applications that are included in a default installation are solid, thoroughly tested pieces of software. Having been in use in production environments for many years, their code has been thoroughly refined and many of the bugs have been found and fixed.
However, there is no such thing as perfect software and there is always room for further refinement. Moreover, newer software is often not as rigorously tested as one might expect, because of its recent arrival to production environments or because it may not be as popular as other server software.
Developers and system administrators often find exploitable bugs in server applications and publish the information on bug tracking and security-related websites such as the Bugtraq mailing list (http://www.securityfocus.com) or the Computer Emergency Response Team (CERT) website (http://www.cert.org). Although these mechanisms are an effective way of alerting the community to security vulnerabilities, it is up to system administrators to patch their systems promptly. This is particularly true because crackers have access to these same vulnerability tracking services and will use the information to crack unpatched systems whenever they can. Good system administration requires vigilance, constant bug tracking, and proper system maintenance to ensure a more secure computing environment.
Refer to Section 1.5, "Security Updates" for more information about keeping a system up-to-date.

1.3.3.3. Inattentive Administration

Administrators who fail to patch their systems are one of the greatest threats to server security. According to the SysAdmin, Audit, Network, Security Institute (SANS), the primary cause of computer security vulnerability is to "assign untrained people to maintain security and provide neither the training nor the time to make it possible to do the job."[10] This applies as much to inexperienced administrators as it does to overconfident or unmotivated administrators.
Some administrators fail to patch their servers and workstations, while others fail to watch log messages from the system kernel or network traffic. Another common error is when default passwords or keys to services are left unchanged. For example, some databases have default administration passwords because the database developers assume that the system administrator changes these passwords immediately after installation. If a database administrator fails to change this password, even an inexperienced cracker can use a widely-known default password to gain administrative privileges to the database. These are only a few examples of how inattentive administration can lead to compromised servers.

1.3.3.4. Inherently Insecure Services

Even the most vigilant organization can fall victim to vulnerabilities if the network services they choose are inherently insecure. For instance, there are many services developed under the assumption that they are used over trusted networks; however, this assumption fails as soon as the service becomes available over the Internet - which is itself inherently untrusted.
One category of insecure network services are those that require unencrypted usernames and passwords for authentication. Telnet and FTP are two such services. If packet sniffing software is monitoring traffic between the remote user and such a service, usernames and passwords can be easily intercepted.
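To appreciate how exposed these protocols are, note that a standard packet capture tool can print application payloads as plain ASCII. The following sketch (the interface name eth0 is an assumption) would reveal usernames and passwords in any Telnet session crossing the monitored interface:
~]# tcpdump -A -i eth0 port 23
The -A option prints each captured packet's payload in ASCII, and the port 23 filter limits the capture to Telnet traffic; the same applies to FTP on port 21.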
Such services can also more easily fall prey to what the security industry terms the man-in-the-middle attack. In this type of attack, a cracker redirects network traffic by tricking a cracked name server on the network into pointing to his machine instead of the intended server. Once someone opens a remote session to the server, the attacker's machine acts as an invisible conduit, sitting quietly between the remote service and the unsuspecting user, capturing information. In this way a cracker can gather administrative passwords and raw data without the server or the user realizing it.
Another category of insecure services includes network file systems and information services such as NFS or NIS, which are developed explicitly for LAN usage but are, unfortunately, extended to include WANs (for remote users). NFS does not, by default, have any authentication or security mechanisms configured to prevent a cracker from mounting the NFS share and accessing anything contained therein. NIS, as well, has vital information that must be known by every computer on a network, including passwords and file permissions, within a plain text ASCII or DBM (ASCII-derived) database. A cracker who gains access to this database can then access every user account on a network, including the administrator's account.
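As an illustration of how little effort such access requires (a sketch with assumed host and export names; never run this against systems you are not authorized to test), an attacker who can reach an unrestricted NFS server needs only two standard commands:
~]$ showmount -e fileserver.example.com
~]# mount -t nfs fileserver.example.com:/export /mnt
showmount lists the file systems the server exports, and mount attaches one of them locally with whatever permissions the export configuration allows.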
By default, Red Hat Enterprise Linux is released with all such services turned off. However, since administrators often find themselves forced to use these services, careful configuration is critical. Refer to Section 2.2, "Server Security" for more information about setting up services in a safe manner.

1.3.4. Threats to Workstation and Home PC Security

Workstations and home PCs may not be as prone to attack as networks or servers, but since they often contain sensitive data, such as credit card information, they are targeted by system crackers. Workstations can also be co-opted without the user's knowledge and used by attackers as "slave" machines in coordinated attacks. For these reasons, knowing the vulnerabilities of a workstation can save users the headache of reinstalling the operating system, or worse, recovering from data theft.

1.3.4.1. Bad Passwords

Bad passwords are one of the easiest ways for an attacker to gain access to a system. For more on how to avoid common pitfalls when creating a password, refer to Section 2.1.3, "Password Security".

1.3.4.2. Vulnerable Client Applications

Although an administrator may have a fully secure and patched server, that does not mean remote users are secure when accessing it. For instance, if the server offers Telnet or FTP services over a public network, an attacker can capture the plain text usernames and passwords as they pass over the network, and then use the account information to access the remote user's workstation.
Even when using secure protocols, such as SSH, a remote user may be vulnerable to certain attacks if they do not keep their client applications updated. For instance, v.1 SSH clients are vulnerable to an X-forwarding attack from malicious SSH servers. Once connected to the server, the attacker can quietly capture any keystrokes and mouse clicks made by the client over the network. This problem was fixed in the v.2 SSH protocol, but it is up to the user to keep track of what applications have such vulnerabilities and update them as necessary.
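One client-side mitigation is to forbid the SSH client from ever falling back to the vulnerable protocol version. A minimal sketch of the relevant OpenSSH directive, placed in /etc/ssh/ssh_config or a user's ~/.ssh/config:
Protocol 2
Recent OpenSSH releases prefer protocol 2 on their own, but stating the policy explicitly guards against a malicious or misconfigured server negotiating a downgrade.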
Section 2.1, "Workstation Security" discusses in more detail what steps administrators and home users should take to limit the vulnerability of computer workstations.

1.4. Common Exploits and Attacks

Table 1.1, "Common Exploits" details some of the most common exploits and entry points used by intruders to access organizational network resources. Key to these common exploits are the explanations of how they are performed and how administrators can properly safeguard their network against such attacks.

Table 1.1. Common Exploits

Exploit: Null or Default Passwords
Description: Leaving administrative passwords blank or using a default password set by the product vendor. This is most common in hardware such as routers and firewalls, but some services that run on Linux can contain default administrator passwords as well (though Red Hat Enterprise Linux does not ship with them).
Notes:
  • Commonly associated with networking hardware such as routers, firewalls, VPNs, and network attached storage (NAS) appliances.
  • Common in many legacy operating systems, especially those that bundle services (such as UNIX and Windows).
  • Administrators sometimes create privileged user accounts in a rush and leave the password null, creating a perfect entry point for malicious users who discover the account.

Exploit: Default Shared Keys
Description: Secure services sometimes package default security keys for development or evaluation testing purposes. If these keys are left unchanged and are placed in a production environment on the Internet, all users with the same default keys have access to that shared-key resource, and to any sensitive information it contains.
Notes:
  • Most common in wireless access points and preconfigured secure server appliances.

Exploit: IP Spoofing
Description: A remote machine acts as a node on your local network, finds vulnerabilities with your servers, and installs a backdoor program or trojan horse to gain control over your network resources.
Notes:
  • Spoofing is quite difficult, as it involves the attacker predicting TCP/IP sequence numbers to coordinate a connection to target systems; however, several tools are available to assist crackers in performing such an attack.
  • Depends on the target system running services (such as rsh, telnet, FTP, and others) that use source-based authentication techniques, which are not recommended compared to PKI or the other forms of encrypted authentication used in ssh or SSL/TLS.

Exploit: Eavesdropping
Description: Collecting data that passes between two active nodes on a network by eavesdropping on the connection between them.
Notes:
  • This type of attack works mostly with plain text transmission protocols such as Telnet, FTP, and HTTP transfers.
  • A remote attacker must have access to a compromised system on a LAN in order to perform such an attack; usually the cracker has used an active attack (such as IP spoofing or man-in-the-middle) to compromise a system on the LAN.
  • Preventative measures include services with cryptographic key exchange, one-time passwords, or encrypted authentication to prevent password snooping; strong encryption during transmission is also advised.

Exploit: Service Vulnerabilities
Description: An attacker finds a flaw or loophole in a service run over the Internet; through this vulnerability, the attacker compromises the entire system and any data that it may hold, and could possibly compromise other systems on the network.
Notes:
  • HTTP-based services such as CGI are vulnerable to remote command execution and even interactive shell access. Even if the HTTP service runs as a non-privileged user such as "nobody", information such as configuration files and network maps can be read, or the attacker can start a denial of service attack that drains system resources or renders the service unavailable to other users.
  • Services sometimes have vulnerabilities that go unnoticed during development and testing; these vulnerabilities (such as buffer overflows, where attackers crash a service by supplying arbitrary values that fill the memory buffer of an application, giving the attacker an interactive command prompt from which to execute arbitrary commands) can give complete administrative control to an attacker.
  • Administrators should make sure that services do not run as the root user, and should stay vigilant of patches and errata updates for applications from vendors or security organizations such as CERT and CVE.

Exploit: Application Vulnerabilities
Description: Attackers find faults in desktop and workstation applications (such as e-mail clients) and execute arbitrary code, implant trojan horses for future compromise, or crash systems. Further exploitation can occur if the compromised workstation has administrative privileges on the rest of the network.
Notes:
  • Workstations and desktops are more prone to exploitation because workers often do not have the expertise or experience to prevent or detect a compromise; it is imperative to inform individuals of the risks they take when they install unauthorized software or open unsolicited email attachments.
  • Safeguards can be implemented such that email client software does not automatically open or execute attachments. Additionally, the automatic update of workstation software via Red Hat Network or other system management services can alleviate the burdens of multi-seat security deployments.

Exploit: Denial of Service (DoS) Attacks
Description: An attacker or group of attackers coordinates an attack against an organization's network or server resources by sending unauthorized packets to the target host (a server, router, or workstation), forcing the resource to become unavailable to legitimate users.
Notes:
  • The most reported DoS case in the US occurred in 2000. Several highly-trafficked commercial and government sites were rendered unavailable by a coordinated ping flood attack using several compromised systems with high-bandwidth connections acting as zombies, or redirected broadcast nodes.
  • Source packets are usually forged (as well as rebroadcast), making investigation into the true source of the attack difficult.
  • Advances in ingress filtering (IETF RFC 2267) using iptables, and in Network Intrusion Detection Systems such as Snort, assist administrators in tracking down and preventing distributed DoS attacks.
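As a minimal sketch of the ingress-filtering idea in the last note above (the external interface name eth0 is an assumption, and a production ruleset requires considerably more care), packets arriving from the Internet that claim private or loopback source addresses can simply be dropped:
~]# iptables -A INPUT -i eth0 -s 10.0.0.0/8 -j DROP
~]# iptables -A INPUT -i eth0 -s 172.16.0.0/12 -j DROP
~]# iptables -A INPUT -i eth0 -s 192.168.0.0/16 -j DROP
~]# iptables -A INPUT -i eth0 -s 127.0.0.0/8 -j DROP
Such addresses should never appear as source addresses on a legitimate Internet-facing interface, so dropping them blunts one common class of spoofed-packet attacks.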

1.5. Security Updates

As security vulnerabilities are discovered, the affected software must be updated in order to limit any potential security risks. If the software is part of a package within a Red Hat Enterprise Linux distribution that is currently supported, Red Hat is committed to releasing updated packages that fix the vulnerability as soon as is possible. Often, announcements about a given security exploit are accompanied with a patch (or source code that fixes the problem). This patch is then applied to the Red Hat Enterprise Linux package and tested and released as an errata update. However, if an announcement does not include a patch, a developer first works with the maintainer of the software to fix the problem. Once the problem is fixed, the package is tested and released as an errata update.
If an errata update is released for software used on your system, it is highly recommended that you update the affected packages as soon as possible to minimize the amount of time the system is potentially vulnerable.

1.5.1. Updating Packages

When updating software on a system, it is important to download the update from a trusted source. An attacker can easily rebuild a package with the same version number as the one that is supposed to fix the problem, but with a different security exploit, and release it on the Internet. If this happens, using security measures such as verifying files against the original RPM does not detect the exploit. Thus, it is very important to download RPMs only from trusted sources, such as Red Hat, and to check the package signature to verify its integrity.

Note

Red Hat Enterprise Linux includes a convenient panel icon that displays visible alerts when there is an update available.

1.5.2. Verifying Signed Packages

All Red Hat Enterprise Linux packages are signed with the Red Hat GPG key. GPG stands for GNU Privacy Guard, or GnuPG, a free software package used for ensuring the authenticity of distributed files. A private key (secret key), held by Red Hat, is used to sign each package, while the corresponding public key is used to verify the signature. If the signature on a package does not verify against the Red Hat public key during RPM verification, the package may have been altered and therefore cannot be trusted.
The RPM utility within Red Hat Enterprise Linux 6 automatically tries to verify the GPG signature of an RPM package before installing it. If the Red Hat GPG key is not installed, install it from a secure, static location, such as a Red Hat installation CD-ROM or DVD.
Assuming the disc is mounted in /mnt/cdrom, use the following command as the root user to import it into the keyring (a database of trusted keys on the system):
~]# rpm --import /mnt/cdrom/RPM-GPG-KEY
The Red Hat GPG key is also located in the /etc/pki/rpm-gpg/ directory, from which it can be imported on an installed system.
To display a list of all keys installed for RPM verification, execute the following command:
~]# rpm -qa gpg-pubkey*
gpg-pubkey-db42a60e-37ea5438
To display details about a specific key, use the rpm -qi command followed by the output from the previous command, as in this example:
~]# rpm -qi gpg-pubkey-db42a60e-37ea5438
Name        : gpg-pubkey                   Relocations: (not relocatable)
Version     : 2fa658e0                     Vendor: (none)
Release     : 45700c69                     Build Date: Fri 07 Oct 2011 02:04:51 PM CEST
Install Date: Fri 07 Oct 2011 02:04:51 PM CEST   Build Host: localhost
Group       : Public Keys                  Source RPM: (none)
[output truncated]
It is extremely important to verify the signature of the RPM files before installing them to ensure that they have not been altered from the original source of the packages. To verify all the downloaded packages at once, issue the following command:
~]# rpm -K /root/updates/*.rpm
alsa-lib-1.0.22-3.el6.x86_64.rpm: rsa sha1 (md5) pgp md5 OK
alsa-utils-1.0.21-3.el6.x86_64.rpm: rsa sha1 (md5) pgp md5 OK
aspell-0.60.6-12.el6.x86_64.rpm: rsa sha1 (md5) pgp md5 OK
For each package, if the GPG key verifies successfully, the command returns gpg OK. If it doesn't, make sure you are using the correct Red Hat public key, as well as verifying the source of the content. Packages that do not pass GPG verification should not be installed, as they may have been altered by a third party.
After verifying the GPG key and downloading all the packages associated with the errata report, install the packages as root at a shell prompt.
Alternatively, you may use the Yum utility to verify signed packages. Yum provides secure package management by allowing GPG signature verification to be turned on for all package repositories (that is, package sources), or for individual repositories. When signature verification is enabled, Yum refuses to install any packages not GPG-signed with the correct key for that repository. This means that you can trust that the RPM packages you download and install on your system are from a trusted source, such as Red Hat, and were not modified during transfer.
In order to have automatic GPG signature verification enabled when installing or updating packages via Yum, ensure you have the following option defined under the [main] section of your /etc/yum.conf file:
gpgcheck=1
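For context, a minimal [main] section with verification enabled might look like the following (a sketch; leave any other options already in your file in place):
[main]
gpgcheck=1
Individual repositories can override this global setting with their own gpgcheck lines in the corresponding /etc/yum.repos.d/*.repo files.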

1.5.3. Installing Signed Packages

Installation of most packages (kernel packages excepted) can be done safely by issuing the following command as root:
rpm -Uvh <package> ...
For example, to install all packages in the /root/updates/ directory, run:
~]# rpm -Uvh /root/updates/*.rpm
Preparing...                ########################################### [100%]
   1:alsa-lib               ########################################### [ 33%]
   2:alsa-utils             ########################################### [ 67%]
   3:aspell                 ########################################### [100%]
For kernel packages, as root use the command in the following form:
rpm -ivh <kernel-package>
For example, to install kernel-2.6.32-220.el6.x86_64.rpm, type the following at a shell prompt:
~]# rpm -ivh /tmp/updates/kernel-2.6.32-220.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:kernel                 ########################################### [100%]
Once the machine has been safely rebooted using the new kernel, the old kernel may be removed using the following command:
rpm -e <old-kernel-package>
For instance, to remove kernel-2.6.32-206.el6.x86_64, type:
~]# rpm -e kernel-2.6.32-206.el6.x86_64
Alternatively, to install packages with Yum, run, as root, the following command:
~]# yum install kernel-2.6.32-220.el6.x86_64
To install local packages with Yum, run, as root, the following command:
~]# yum localinstall /root/updates/emacs-23.1-21.el6_2.3.x86_64.rpm

Note

It is not a requirement that the old kernel be removed. The default boot loader, GRUB, allows for multiple kernels to be installed, then chosen from a menu at boot time.

Important

Before installing any security errata, be sure to read any special instructions contained in the errata report and execute them accordingly. Refer to Section 1.5.4, "Applying the Changes" for general instructions about applying the changes made by an errata update.

1.5.4. Applying the Changes

After downloading and installing security errata and updates, it is important to halt usage of the older software and begin using the new software. How this is done depends on the type of software that has been updated. The following list itemizes the general categories of software and provides instructions for using the updated versions after a package upgrade.

Note

In general, rebooting the system is the surest way to ensure that the latest version of a software package is used; however, this is not always required, nor is it always an option available to the system administrator.
Applications
User-space applications are any programs that can be initiated by a system user. Typically, such applications are used only when a user, script, or automated task utility launches them and they do not persist for long periods of time.
Once such a user-space application is updated, halt any instances of the application on the system and launch the program again to use the updated version.
Kernel
The kernel is the core software component for the Red Hat Enterprise Linux operating system. It manages access to memory, the processor, and peripherals as well as schedules all tasks.
Because of its central role, the kernel cannot be restarted without also stopping the computer. Therefore, an updated version of the kernel cannot be used until the system is rebooted.
Shared Libraries
Shared libraries are units of code, such as glibc, which are used by a number of applications and services. Applications utilizing a shared library typically load the shared code when the application is initialized, so any applications using the updated library must be halted and relaunched.
To determine which running applications link against a particular library, use the lsof command:
lsof <path>
For example, to determine which running applications link against the libwrap.so library, type:
~]# lsof /lib64/libwrap.so*
COMMAND     PID USER  FD   TYPE DEVICE SIZE/OFF   NODE NAME
sshd      13600 root  mem  REG  253,0     43256 400501 /lib64/libwrap.so.0.7.6
sshd      13603 juan  mem  REG  253,0     43256 400501 /lib64/libwrap.so.0.7.6
gnome-set 14898 juan  mem  REG  253,0     43256 400501 /lib64/libwrap.so.0.7.6
metacity  14925 juan  mem  REG  253,0     43256 400501 /lib64/libwrap.so.0.7.6
[output truncated]
This command returns a list of all the running programs which use TCP wrappers for host access control. Therefore, any program listed must be halted and relaunched if the tcp_wrappers package is updated.
SysV Services
SysV services are persistent server programs launched during the boot process. Examples of SysV services include sshd, vsftpd, and xinetd.
Because these programs usually persist in memory as long as the machine is booted, each updated SysV service must be halted and relaunched after the package is upgraded. This can be done using the Services Configuration Tool or by logging into a root shell prompt and issuing the /sbin/service command:
/sbin/service <service-name> restart
Replace <service-name> with the name of the service, such as sshd.
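For example, to restart the SSH daemon after the openssh-server package has been upgraded:
~]# /sbin/service sshd restart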
xinetd Services
Services controlled by the xinetd super service only run when there is an active connection. Examples of services controlled by xinetd include Telnet, IMAP, and POP3.
Because new instances of these services are launched by xinetd each time a new request is received, connections that occur after an upgrade are handled by the updated software. However, if there are active connections at the time the xinetd controlled service is upgraded, they are serviced by the older version of the software.
To kill off older instances of a particular xinetd controlled service, upgrade the package for the service then halt all processes currently running. To determine if the process is running, use the ps or pgrep command and then use the kill or killall command to halt current instances of the service.
For example, if security errata imap packages are released, upgrade the packages, then type the following command as root at a shell prompt:
~]# pgrep -l imap
1439 imapd
1788 imapd
1793 imapd
This command returns all active IMAP sessions. Individual sessions can then be terminated by issuing the following command as root:
kill <PID>
If this fails to terminate the session, use the following command instead:
kill -9 <PID>
In the previous examples, replace <PID> with the process identification number (found in the first column of the pgrep -l output) for an IMAP session.
To kill all active IMAP sessions, issue the following command:
~]# killall imapd


[1] http://law.jrank.org/pages/3791/Kevin-Mitnick-Case-1999.html
[2] http://www.livinginternet.com/i/ia_hackers_levin.htm
[3] http://www.theregister.co.uk/2007/05/04/txj_nonfeasance/
[4] http://www.fudzilla.com/home/item/3178-university-of-utah-loses-22-million-patient-records
[5] http://www.internetworldstats.com/stats.htm
[6] http://www.cert.org
[7] http://www.cert.org/stats/cert_stats.html
[8] http://news.cnet.com/Computer-crime-costs-67-billion,-FBI-says/2100-7349_3-6028946.html
[9] http://www.cio.com/article/504837/Why_Security_Matters_Now
[10] http://www.sans.org/resources/errors.php