Computer security

Computer security is information security as applied to computers and networks.

The field covers all the processes and mechanisms by which computer-based equipment, information or data, and services are protected from unintended or unauthorized access, change or destruction. Computer security also includes protection from unplanned events and natural disasters.

Security by design

One way to think about computer security is to treat security as one of the main features of the system, designed in from the start rather than added afterwards.

Some of the techniques in this approach include:

  • The principle of least privilege, where each part of the system has only the privileges that are needed for its function. That way even if an attacker gains access to that part, they have only limited access to the whole system (a minimal sketch of this idea follows the list).
  • Automated theorem proving to prove the correctness of crucial software subsystems.
  • Code reviews and unit testing, which are approaches for making modules more secure where formal correctness proofs are not possible.
  • Defense in depth, where the design is such that more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds.
  • Default secure settings, and design to "fail secure" rather than "fail insecure" (see fail-safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
  • Audit trails tracking system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks.
  • Full disclosure to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
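
The "least privilege" and "fail secure" ideas above can be made concrete with a small sketch. The following Python fragment is purely illustrative: the Privilege flags, the GRANTS table and the require() helper are hypothetical names invented for this example, and a real system would enforce such checks in the operating system or a reference monitor rather than in application code.

    # Minimal sketch of least privilege and deny-by-default ("fail secure").
    # All names here (Privilege, GRANTS, require) are hypothetical.
    from enum import Flag, auto

    class Privilege(Flag):
        NONE = 0
        READ = auto()
        WRITE = auto()
        ADMIN = auto()

    # Each subsystem is granted only the privileges needed for its function.
    GRANTS = {
        "report-generator": Privilege.READ,
        "backup-agent": Privilege.READ,
        "admin-console": Privilege.READ | Privilege.WRITE | Privilege.ADMIN,
    }

    def require(subsystem: str, needed: Privilege) -> None:
        """Deny by default: unknown subsystems receive no privileges at all."""
        granted = GRANTS.get(subsystem, Privilege.NONE)
        if needed not in granted:
            raise PermissionError(f"{subsystem} lacks {needed}")

    require("report-generator", Privilege.READ)     # allowed
    # require("report-generator", Privilege.WRITE)  # would raise PermissionError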

Security architecture

Security Architecture can be defined as the design artifacts that describe how the security controls (security countermeasures) are positioned, and how they relate to the overall information technology architecture. These controls serve to maintain the system's quality attributes: confidentiality, integrity, availability, accountability and assurance services.[1]

Hardware mechanisms that protect computers and data

Hardware-based or hardware-assisted computer security offers an alternative to software-only computer security. Devices such as dongles, case intrusion detection, drive locks, disabled USB ports, and disabled CD-ROM drives may be considered more secure because physical access is required in order to compromise them.

Secure operating systems

One use of the term computer security refers to technology to implement a secure operating system. Much of this technology is based on science developed in the 1980s and used to produce what may be some of the most impenetrable operating systems ever. Though still valid, the technology is in limited use today, primarily because it imposes some changes to system management and also because it is not widely understood. Such ultra-strong secure operating systems are based on operating system kernel technology that can guarantee that certain security policies are absolutely enforced in an operating environment. An example of such a Computer security policy is the Bell-LaPadula model. The strategy is based on a coupling of special microprocessor hardware features, often involving the memory management unit, to a special correctly implemented operating system kernel. This forms the foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure the absolute impossibility of penetration by hostile elements. This capability is enabled because the configuration not only imposes a security policy, but in theory completely protects itself from corruption. Ordinary operating systems, on the other hand, lack the features that assure this maximal level of security. The design methodology to produce such secure systems is precise, deterministic and logical.
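
The Bell-LaPadula model mentioned above can be summarized by two rules: a subject may not read data classified above its own clearance ("no read up") and may not write data classified below it ("no write down"). The Python sketch below is only a toy rendering of those two rules under assumed level names; real multilevel-secure systems enforce them in a hardware-supported kernel reference monitor, not in application code.

    # Toy illustration of the two core Bell-LaPadula rules; names are illustrative.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def may_read(subject_level: str, object_level: str) -> bool:
        """Simple security property: no read up."""
        return LEVELS[subject_level] >= LEVELS[object_level]

    def may_write(subject_level: str, object_level: str) -> bool:
        """*-property: no write down."""
        return LEVELS[subject_level] <= LEVELS[object_level]

    assert may_read("SECRET", "CONFIDENTIAL")       # reading down is allowed
    assert not may_read("CONFIDENTIAL", "SECRET")   # no read up
    assert not may_write("SECRET", "UNCLASSIFIED")  # no write down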

Systems designed with such methodology represent the state of the art of computer security, although products using such security are not widely known. In sharp contrast to most kinds of software, they meet specifications with verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this way are used primarily to protect national security information, military secrets, and the data of international financial institutions. These are very powerful security tools and very few secure operating systems have been certified at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to "unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN). The assurance of security depends not only on the soundness of the design strategy, but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria quantifies security strength of products in terms of two components, security functionality and assurance level (such as EAL levels), and these are specified in a Protection Profile for requirements and a Security Target for product descriptions. None of these ultra-high assurance secure general purpose operating systems have been produced for decades or certified under Common Criteria.

In United States parlance, the term High Assurance usually suggests the system has the right security functions that are implemented robustly enough to protect DoD and DoE classified information. Medium assurance suggests it can protect less valuable information, such as income tax information. Secure operating systems designed to meet medium robustness levels of security functionality and assurance have seen wider use within both government and commercial markets. Medium robust systems may provide the same security functions as high assurance secure operating systems but do so at a lower assurance level (such as Common Criteria levels EAL4 or EAL5). Lower levels mean there is less certainty that the security functions are implemented flawlessly, and therefore that they are less dependable. These systems are found in use on web servers, guards, database servers, and management hosts and are used not only to protect the data stored on these systems but also to provide a high level of protection for network connections and routing services.

Secure coding

If the operating environment is not based on a secure operating system capable of maintaining a domain for its own execution, of protecting application code from malicious subversion, and of protecting the system from subverted code, then high degrees of security are understandably not possible. While such secure operating systems are possible and have been implemented, most commercial systems fall into a 'low security' category because they rely on features that secure operating systems do not support (such as portability). In low security operating environments, applications must be relied on to participate in their own protection. There are 'best effort' secure coding practices that can be followed to make an application more resistant to malicious subversion.

In commercial environments, the majority of software subversion vulnerabilities result from a few known kinds of coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflows, and code/command injection. These defects can be used to make the target system execute what is ostensibly data: the "data" in fact contain executable instructions, allowing the attacker to gain control of the processor.

Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord, "Secure Coding in C and C++"). Other languages, such as Java, are more resistant to some of these defects, but are still prone to code/command injection and other software defects which facilitate subversion.
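
As a concrete illustration of the code/command injection defect mentioned above, the short sketch below (in Python, purely for brevity) contrasts an unsafe way of building a shell command from user input with a safer alternative; the filename value and the surrounding code are hypothetical, and the same idea applies to C, C++, Java and other languages.

    # Illustration of command injection and one common mitigation (hypothetical example).
    import subprocess

    user_supplied = "report.txt; rm -rf /"   # attacker-controlled "data"

    # UNSAFE: splicing the input into a shell command would run "; rm -rf /"
    # as a second command.
    # subprocess.run("cat " + user_supplied, shell=True)

    # SAFER: pass the value as a plain argument, never as shell syntax.
    subprocess.run(["cat", user_supplied], check=False)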

Recently, another bad coding practice has come under scrutiny: dangling pointers. The first known exploit for this particular problem was presented in July 2007. Before this publication the problem was known but considered to be academic and not practically exploitable.[2]

Unfortunately, there is no theoretical model of "secure coding" practices, nor is one practically achievable, insofar as the code (ideally read-only) and the data (generally read/write) tend to contain some form of defect.

Capabilities and access control lists

Within computer systems, two security models capable of enforcing privilege separation are access control lists (ACLs) and capability-based security. The semantics of ACLs have been proven to be insecure in many situations, for example, the confused deputy problem. It has also been shown that ACLs' promise of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities must take responsibility to ensure that they do not introduce flaws.[citation needed]

Capabilities have mostly been restricted to research operating systems, while commercial OSs still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in this area is the E language.
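
A rough sense of the difference can be conveyed in ordinary object-oriented code: instead of checking a caller's identity against an access control list, a capability system hands the caller an unforgeable reference that itself embodies the authority. The Python sketch below is only an analogy under that assumption; the class and function names are invented, and real capability systems (such as those built around the E language) enforce unforgeability at the language or kernel level.

    # ACL-style check versus a capability-style reference (illustrative only).
    class Document:
        def __init__(self, text: str):
            self._text = text

        def read(self) -> str:
            return self._text

    # ACL style: a central table maps principals to rights, and every access
    # must name the principal and consult the table.
    ACL = {"doc1": {"alice"}}

    def acl_read(principal: str, doc_name: str, doc: Document) -> str:
        if principal not in ACL.get(doc_name, set()):
            raise PermissionError("access denied")
        return doc.read()

    # Capability style: possessing a read-only reference *is* the authority.
    class ReadCapability:
        def __init__(self, doc: Document):
            self._doc = doc   # the holder can read but gains no other access

        def read(self) -> str:
            return self._doc.read()

    doc = Document("hello")
    print(acl_read("alice", "doc1", doc))  # ACL: identity checked against a table
    print(ReadCapability(doc).read())      # capability: no identity check needed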

The most secure computers are those not connected to the Internet and shielded from any interference. In the real world, the most secure systems are operating systems where security is not an add-on.

Applications

Computer security is critical in almost any technology-driven industry which operates on computer systems. Addressing the countless vulnerabilities of computer-based systems is an integral part of keeping an industry operational.[3]

Cloud computing security

Security in the cloud is challenging[citation needed], because of the varied degrees of security features and management schemes within cloud entities. A common logical protocol base would need to evolve so that the entire gamut of components can operate synchronously and securely.

Aviation

The aviation industry is especially important when analyzing computer security because the involved risks include human life, expensive equipment, cargo, and transportation infrastructure. Security can be compromised by hardware and software malpractice, human error, and faulty operating environments. Threats that exploit computer vulnerabilities can stem from sabotage, espionage, industrial competition, terrorist attack, mechanical malfunction, and human error.[4]

The consequences of a successful deliberate or inadvertent misuse of a computer system in the aviation industry range from loss of confidentiality to loss of system integrity, which may lead to more serious concerns such as data theft or loss and network and air traffic control outages, which in turn can lead to airport closures, loss of aircraft, and loss of passenger life. Military systems that control munitions can pose an even greater risk.

An attack does not need to be very high tech or well funded: a power outage at a single airport can cause repercussions worldwide.[5] One of the easiest attacks to mount, and arguably the most difficult to trace, is the transmission of unauthorized communications over specific radio frequencies. These transmissions may spoof air traffic controllers or simply disrupt communications altogether. These incidents are very common, having altered flight courses of commercial aircraft and caused panic and confusion in the past.[citation needed] Controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore. Beyond the radar's sight, controllers must rely on periodic radio communications with a third party.

Lightning, power fluctuations, surges, brownouts, blown fuses, and various other power outages instantly disable all computer systems, since they depend on an electrical source. Other accidental and intentional faults have caused significant disruption of safety-critical systems throughout the last few decades, and dependence on reliable communication and electrical power only further jeopardizes computer safety.[citation needed]

Notable system accidents

In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, the hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data, and, by posing as trusted Rome center users, were furthermore able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations.[6]

Cybersecurity breach stories

One story that shows what mainstream generative technology can lead to in terms of online security breaches is that of the Internet's first worm.

In 1988, only 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations rather than PCs. On November 2, 1988, many of these computers began to act strangely: they slowed down, because they were running malicious code that demanded processor time and spread itself to other computers. The purpose of the software was to transmit a copy of itself to other machines, run in parallel with the existing software, and repeat the process all over again. It exploited a flaw in a common e-mail transmission program by rewriting it to facilitate its entry, or it guessed users' passwords, because, at that time, passwords were simple (e.g. username 'harry' with a password '...harry') or were drawn from a list of 432 common passwords that the worm tested at each computer.[7]

The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, Jr. When questioned about the motive for his actions, Morris said he "wanted to count how many machines were connected to the Internet".[7] His explanation was consistent with his code, although the code turned out to be buggy nevertheless.

Computer security policy

United States

Cybersecurity Act of 2010

On April 1, 2009, Senator Jay Rockefeller (D-WV) introduced the "Cybersecurity Act of 2009 - S. 773" (full text) in the Senate; the bill, co-written with Senators Evan Bayh (D-IN), Barbara Mikulski (D-MD), Bill Nelson (D-FL), and Olympia Snowe (R-ME), was referred to the Committee on Commerce, Science, and Transportation, which approved a revised version of the same bill (the "Cybersecurity Act of 2010") on March 24, 2010.[8] The bill seeks to increase collaboration between the public and the private sector on cybersecurity issues, especially those private entities that own infrastructures that are critical to national security interests (the bill quotes John Brennan, the Assistant to the President for Homeland Security and Counterterrorism: "our nation’s security and economic prosperity depend on the security, stability, and integrity of communications and information infrastructure that are largely privately owned and globally operated" and talks about the country's response to a "cyber-Katrina".[9]), increase public awareness on cybersecurity issues, and foster and fund cybersecurity research. Some of the most controversial parts of the bill include Paragraph 315, which grants the President the right to "order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network."[9] The Electronic Frontier Foundation, an international non-profit digital rights advocacy and legal organization based in the United States, characterized the bill as promoting a "potentially dangerous approach that favors the dramatic over the sober response".[10]

International Cybercrime Reporting and Cooperation Act

On March 25, 2010, Representative Yvette Clarke (D-NY) introduced the "International Cybercrime Reporting and Cooperation Act - H.R.4962" (full text) in the House of Representatives; the bill, co-sponsored by seven other representatives (only one of whom was a Republican), was referred to three House committees.[11] The bill seeks to make sure that the administration keeps Congress informed on information infrastructure, cybercrime, and end-user protection worldwide. It also "directs the President to give priority for assistance to improve legal, judicial, and enforcement capabilities with respect to cybercrime to countries with low information and communications technology levels of development or utilization in their critical infrastructure, telecommunications systems, and financial industries"[11] as well as to develop an action plan and an annual compliance assessment for countries of "cyber concern".[11]

Protecting Cyberspace as a National Asset Act of 2010

On June 19, 2010, United States Senator Joe Lieberman (I-CT) introduced a bill called "Protecting Cyberspace as a National Asset Act of 2010 - S.3480" (full text in pdf), which he co-wrote with Senator Susan Collins (R-ME) and Senator Thomas Carper (D-DE). If signed into law, this controversial bill, which the American media dubbed the "Kill switch bill", would grant the President emergency powers over the Internet. However, all three co-authors of the bill issued a statement claiming that instead, the bill "[narrowed] existing broad Presidential authority to take over telecommunications networks".[12]

White House proposes cybersecurity legislation

On May 12, 2011, the White House sent Congress a proposed cybersecurity law designed to force companies to do more to fend off cyberattacks, a threat that has been reinforced by recent reports about vulnerabilities in systems used in power and water utilities.[13]

Germany

Berlin starts National Cyber Defense Initiative

On June 16, 2011, the German Minister for Home Affairs officially opened the new German NCAZ (National Center for Cyber Defense, Nationales Cyber-Abwehrzentrum), which is located in Bonn. The NCAZ closely cooperates with the BSI (Federal Office for Information Security, Bundesamt für Sicherheit in der Informationstechnik), the BKA (Federal Police Organisation, Bundeskriminalamt), the BND (Federal Intelligence Service, Bundesnachrichtendienst), the MAD (Military Intelligence Service, Amt für den Militärischen Abschirmdienst) and other national organisations in Germany taking care of national security aspects. According to the Minister, the primary task of the new organisation, founded on February 23, 2011, is to detect and prevent attacks against the national infrastructure; he mentioned incidents like Stuxnet as examples.

Hacking Back

There has been significant debate regarding the legality of hacking back against digital attackers (those who attempt to or successfully breach an individual's, entity's, or nation's computer). The arguments for such counter-attacks are based on notions of equity, active defense, vigilantism, and the minutiae of the Computer Fraud and Abuse Act (CFAA). The arguments against the practice are mainly based on the legal definitions of intrusion and unauthorized access, as defined by the CFAA. The debate is ongoing.[14]

Terminology

The following terms are used in engineering secure systems:

  • Authentication techniques can be used to ensure that communication end-points are who they say they are (a short sketch follows this list).
  • Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
  • Capability and access control list techniques can be used to ensure privilege separation and mandatory access control; their use is discussed in the section on capabilities and access control lists above.
  • Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
  • Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
  • Firewalls can provide some protection from online intrusion.
  • A microkernel is a carefully crafted, deliberately small corpus of software that underlies the operating system per se and is used solely to provide very low-level, very precisely defined primitives upon which an operating system can be developed. A simple example with considerable didactic value is the early '90s GEMSOS (Gemini Computers), which provided extremely low-level primitives, such as "segment" management, atop which an operating system could be built. The theory (in the case of "segments") was that—rather than have the operating system itself worry about mandatory access separation by means of military-style labeling—it is safer if a low-level, independently scrutinized module can be charged solely with the management of individually labeled segments, be they memory "segments" or file system "segments" or executable text "segments." If software below the visibility of the operating system is (as in this case) charged with labeling, there is no theoretically viable means for a clever hacker to subvert the labeling scheme, since the operating system per se does not provide mechanisms for interfering with labeling: the operating system is, essentially, a client (an "application," arguably) atop the microkernel and, as such, subject to its restrictions.
  • Endpoint Security software helps networks to prevent data theft and virus infection through portable storage devices, such as USB drives.
  • Confidentiality is the nondisclosure of information except to another authorized person.[15]
  • Data Integrity is the accuracy and consistency of stored data, indicated by an absence of any alteration in data between two updates of a data record.[16]
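
The following sketch illustrates the authentication item at the top of the list above: a receiver uses a shared secret and an HMAC to check that a message really came from a peer holding the same key. The key, the message and the variable names are assumptions made up for this example, and real protocols combine such a check with key exchange, nonces, and replay protection.

    # Message authentication with a shared secret (HMAC); values are illustrative.
    import hashlib
    import hmac

    shared_key = b"example-shared-secret"        # in practice, exchanged securely
    message = b"transfer 10 units to account 42"

    # The sender attaches a tag computed over the message.
    tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

    # The receiver recomputes the tag and compares it in constant time.
    expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
    assert hmac.compare_digest(tag, expected)    # authenticates origin and integrity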

The following measures are commonly used to protect computers and the information they hold:

  • Access authorization restricts access to a computer to a group of users through the use of authentication systems. These systems can protect either the whole computer – such as through an interactive login screen – or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards, and, more recently, smart cards and biometric systems (a password-hashing sketch follows this list).
  • Anti-virus software consists of computer programs that attempt to identify, thwart and eliminate computer viruses and other malicious software (malware).
  • Applications with known security flaws should not be run. Either leave them turned off until they can be patched or otherwise fixed, or delete them and replace them with other applications. Publicly known flaws are the main entry point used by worms to automatically break into a system and then spread to other systems connected to it. The security website Secunia provides a search tool for unpatched known flaws in popular products.
  • Backups are a way of securing information; they are another copy of all the important computer files kept in another location. These files are kept on hard disks, CD-Rs, CD-RWs, and tapes. Suggested locations for backups are a fireproof, waterproof, and heatproof safe, or a separate, offsite location from the one in which the original files are kept. Some individuals and companies also keep their backups in safe deposit boxes inside bank vaults. There is also a fourth option, which involves using one of the file hosting services that back up files over the Internet for both businesses and individuals.
    • Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes, or tornadoes, may strike the building where the computer is located. The building can be on fire, or an explosion may occur. There needs to be a recent backup at an alternate secure location in case of such a disaster. Further, it is recommended that the alternate location be placed where the same disaster would not affect both locations. Examples of alternate disaster recovery sites being compromised by the same disaster that affected the primary site include having had a primary site in World Trade Center I and the recovery site in 7 World Trade Center, both of which were destroyed in the 9/11 attack, and having one's primary site and recovery site in the same coastal region, which leaves both vulnerable to hurricane damage (for example, a primary site in New Orleans and a recovery site in Jefferson Parish, both of which were hit by Hurricane Katrina in 2005). The backup media should be moved between the geographic sites in a secure manner, in order to prevent them from being stolen.
  • Cryptographic techniques involve transforming information, scrambling it so it becomes unreadable during transmission. The intended recipient can unscramble the message, but eavesdroppers ideally cannot.
  • Encryption is used to protect the message from the eyes of others. Cryptographically secure ciphers are designed to make any practical attempt of breaking infeasible. Symmetric-key ciphers are suitable for bulk encryption using shared keys, and public-key encryption using digital certificates can provide a practical solution for the problem of securely communicating when no key is shared in advance.
  • Firewalls are systems that help protect computers and computer networks from attack and subsequent intrusion by restricting the network traffic that can pass through them, based on a set of system administrator-defined rules.
  • Honey pots are computers that are either intentionally or unintentionally left vulnerable to attack by crackers. They can be used to catch crackers or fix vulnerabilities.
  • Intrusion-detection systems can scan a network for people who are on the network but should not be there, or who are doing things that they should not be doing, for example trying many passwords to gain access to the network.
  • Pinging. The ping application can be used by potential crackers to determine whether an IP address is reachable. If a cracker finds a computer, they can try a port scan to detect and attack services on that computer.
  • Social engineering awareness. Keeping employees aware of the dangers of social engineering, and/or having a policy in place to prevent it, can reduce successful breaches of the network and servers.
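
The password-hashing sketch promised in the first item of this list is shown below. It stores only a salted hash of a password and verifies a later login attempt against it; the iteration count, the sample passwords and the function names are illustrative assumptions, not recommendations for any particular system.

    # Storing and verifying a password as a salted hash (illustrative parameters).
    import hashlib
    import hmac
    import secrets

    def hash_password(password: str, salt: bytes) -> bytes:
        # 100,000 PBKDF2 iterations is an illustrative choice, not a recommendation.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
        return hmac.compare_digest(hash_password(password, salt), stored)

    salt = secrets.token_bytes(16)             # random per-user salt
    stored = hash_password("correct horse battery staple", salt)
    assert verify_password("correct horse battery staple", salt, stored)
    assert not verify_password("wrong guess", salt, stored)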

Legal issues and global regulation

One of the main challenges facing the antivirus industry, and one of its main complaints, is the lack of global regulation: a common base of rules by which to judge, and eventually punish, cyber crimes and cyber criminals. Even when an antivirus firm locates the cyber criminal behind the creation of a particular virus, piece of malware, or other form of cyber attack, the local authorities often cannot take action for lack of laws under which to prosecute.[17][18] This is mainly because nearly every country has its own regulations regarding cyber crime.

"[Computer viruses] switch from one country to another, from one jurisdiction to another — moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world."[17] (Mikko Hyppönen)

Prompted in part by European antivirus firms (e.g. Avira, BullGuard, F-Secure, Frisk, Panda, TG Soft, ...) seeking to solve this problem, the European Commission decided to establish the European Cybercrime Centre (EC3).[19] The EC3 effectively opened on 1 January 2013 and will be the focal point in the EU's fight against cyber crime, contributing to faster reaction to online crimes. It will support member states and the EU's institutions in building an operational and analytical capacity for investigations, as well as cooperation with international partners.[20]

Notes

  1. ^ Definitions: IT Security Architecture. SecurityArchitecture.org, Jan, 2006
  2. ^ New hacking technique exploits common programming error. SearchSecurity.com, July 2007
  3. ^ J. C. Willemssen, "FAA Computer Security". GAO/T-AIMD-00-330. Presented at Committee on Science, House of Representatives, 2000.
  4. ^ P. G. Neumann, "Computer Security in Aviation," presented at International Conference on Aviation Safety and Security in the 21st Century, White House Commission on Safety and Security, 1997.
  5. ^ J. Zellan, Aviation Security. Hauppauge, NY: Nova Science, 2003, pp. 65–70.
  6. ^ Information Security. United States Department of Defense, 1986
  7. ^ a b Jonathan Zittrain, 'The Future of The Internet', Penguin Books, 2008
  8. ^ Cybersecurity bill passes first hurdle, Computer World, March 24, 2010. Retrieved on June 26, 2010.
  9. ^ a b Cybersecurity Act of 2009, OpenCongress.org, April 1, 2009. Retrieved on June 26, 2010.
  10. ^ Federal Authority Over the Internet? The Cybersecurity Act of 2009, eff.org, April 10, 2009. Retrieved on June 26, 2010.
  11. ^ a b c H.R.4962 - International Cybercrime Reporting and Cooperation Act, OpenCongress.org. Retrieved on June 26, 2010.
  12. ^ Senators Say Cybersecurity Bill Has No 'Kill Switch', informationweek.com, June 24, 2010. Retrieved on June 25, 2010.
  13. ^ Declan McCullagh, CNET. "White House proposes cybersecurity legislation." May 12, 2011. Retrieved May 12, 2011.
  14. ^ Justin P. Webb, Cybercrime Review. "Hacking Back - are you authorized? A discussion of whether it's an invitation to federal prison or a justified reaction/strategy?." October 16, 2012. Retrieved January 5, 2013.
  15. ^ "Confidentiality". http://medical-dictionary.thefreedict ionary.com/confidentiality. Retrieved 2011-10-31.
  16. ^ "Data Integrity". http://www.businessdictionary.com/def inition/data-integrity.html. Retrieved 2011-10-31.
  17. ^ a b "Mikko Hypponen: Fighting viruses, defending the net". TED. http://www.ted.com/talks/mikko_hyppon en_fighting_viruses_defending_the_net .html.
  18. ^ "Mikko Hypponen - Behind Enemy Lines". Hack In The Box Security Conference. http://www.youtube.com/watch?v=0TMFRO 66Wv4.
  19. ^ "European Cybercrime Centre set for launch". VirusBulletin. http://www.virusbtn.com/news/2013/01_ 09.xml?rss.
  20. ^ "European Cybercrime Centre (EC3)". Europol. https://www.europol.europa.eu/ec3.
