Please note that due to its size the introduction was converted into a separate page:
Architectural Issues of Intrusion Detection Infrastructure in Large Enterprises
The abstract and the introduction from the paper follow.
The most serious problem in intrusion detection is distinguishing a very weak useful signal in massive noise, along with the related problem of data assessment: the need to correlate, evaluate, and verify the mass of junk events (sometimes called alerts ;-) that various IDSs generate. This is a difficult problem that most organizations do not solve well, so most IDSs in "rich" networks are of limited usefulness.
This problem of false positives is especially acute in network IDSs (NIDS). That's why many NIDS deployments actually have the status of "innocent fraud," to borrow the catchphrase used by the famous economist John Kenneth Galbraith in the title of his last book, The Economics of Innocent Fraud.
In the Gartner report "Hype Cycle for Information Security, 2003," published on 30 May 2003, Richard Stiennon, at that time a VP of Research and a six-year Gartner veteran, courageously stated that "the king is naked" in just one short paragraph:
"Intrusion detection systems are a market failure. Vendors are now hyping intrusion prevention systems, which also have stalled. The functionality is moving into firewalls, which will perform deep packet inspection for content and malicious traffic blocking, as well as antivirus activities."
To be useful, intrusion detection requires a multi-level approach, with the different layers able to communicate via some kind of common protocol, possibly on the basis of a typical EMS system (for example, Tivoli).
Formally, IDSs fall into two main groups: host-based and network-based:
- A network IDS (NIDS) uses network cards in promiscuous mode, sniffing all packets on one or many network segments. For this purpose either a particular switch port is mirrored or taps are used. This is the most popular and probably the least useful class of IDS, which, due to the craziness and marketing orientation of the mainstream IT media, became the politically correct thing to do in any large organization. Actually, NIDS have several orders of magnitude lower return on investment than log analyzers (see below). That means that IBM by and large lost its 1.3 billion dollars by buying ISS in 2006. Producing a log integration solution using Tivoli TEC from firewalls, routers, and defended hosts would save IBM customers a lot of money and dramatically improve the return on investment. I think the IBM brass would have been better off spending that money in the style of former TYCO CEO Dennis Kozlowski, with wild orgies somewhere around Cyprus or another Greek island :-). At least that way the money would really be wasted in style ;-).
A typical NIDS deployment consists of one or more sensors and a central server, usually with a Web-based console. The central server aggregates data feeds from multiple sensors and can additionally include a scanner (like Nmap) for mapping hosts and eliminating obvious false positives. At a more advanced level, it reports events to an event management system like Tivoli, which can additionally eliminate events that do not correlate with host logs or host integrity checkers. The latter usually watch key system files and detect all instances when they have been altered.
A network IDS with two sensors (one before and one after the firewall) can be used for debugging and maintaining firewall rules. Network IDSs can generally be subdivided into the following subcategories:
- Signature detection systems work in a similar way to virus scanners, trying to match network patterns against their database of "attack signatures." Snort is a classic example of such an IDS, and it suffers from all the typical warts of this class of product. Commercial products are usually far worse; instead of "innocent fraud" they are about as close to the real thing as you can get, with deceptive advertising probably being the most plausible charge that could stick.
In a very few situations they may provide a minimal additional level of protection (a simple network with just a few protocols in use and intelligent specialists responsible for configuration). For example, an attempt to use TFTP outside a limited list of network devices is a very suspicious activity that should raise some flags, independently of whether this port is blocked by the external and VPN firewalls or not (it generally should be blocked).
In a typical enterprise deployment with stock signatures, the level of protection from known attacks (or, more correctly, from attacks with signatures similar or identical to already known ones) is marginal, as they suffer from the problem of false positives to such an extent that all alerts are completely ignored within a couple of weeks or months after the initial deployment. People just get sick of "security spam" (aka "mail alerts"). After that the devices happily circulate air and can be replaced by a fan, with some savings in both initial cost and consumed electricity :-).
- Anomaly detection systems try to integrate "attack signatures" into larger semantic blocks using statistical or heuristic approaches (usually network-topology-based heuristics). A typical example is detecting the scanning of a large block of IP addresses belonging to an organization with nmap or similar tools. Snort is capable of detecting some attacks based on statistical anomaly detection, but postprocessing of Snort alerts is a much better approach. With the statistical approach you can get some impression of the set of IP addresses that tend to send strange packets to your network. That is useful information, and it should be regularly collected and analyzed. It can also be easily collected on sensors that monitor huge amounts of traffic (for example, external traffic flowing to the organization) and thus provide very little other useful information (due to the above-mentioned problem of "security spam").
- Host-based systems are installed on the server or desktop and can monitor not only network traffic but also key files, events, and logs. They have a lower level of false positives than NIDS. We can distinguish:
- Firewall-based HIDS combine intrusion detection and packet filtering. They can be very simple, like xinetd and TCP Wrappers, or pretty complex, like Firewall-1 IDS. In any case, firewalls have a higher return on investment than NIDS. See firewalls.
- Integrity checkers. Tripwire is a good example of a traditional integrity checker. It has its strong points and huge weaknesses, which are discussed on a separate page. Generally, Tripwire is too primitive and is not a good choice for a HIDS.
- Log analyzers. This type of HIDS has the highest ratio of useful signal to noise and deserves more attention and investment than the other types. See the special page (Log analyzers) devoted to this kind of IDS (although they are mostly used not for intrusion detection but for regular troubleshooting).
- Honeypots. This over-hyped concept is essentially an instance of an OS (often a virtual partition) with a combination of IDS software installed, or a regular network sensor with additional services on vulnerable ports configured as honeypot ports (honeyports :-). Honeypots are specifically designed to detect attacks and suspicious traffic, and they usually do not perform any other useful function (or such a function is faked). One extremely positive thing about a honeypot is that it can be very cheap (a desktop computer with Linux and some IDS software suffices) and has a lower noise-to-useful-signal ratio than a traditional network IDS. Most non-broadcast network traffic directed toward it is anomalous from the point of view of normal network functioning, since it is a passive device that just listens to the traffic: normally nobody should try to connect to it or scan it. Actually, some types of intrusion detection are simpler and more reliably performed on honeypots (detecting port scanning attempts, etc.). Here is the definition from http://www.honeypots.net/
Honeypots are closely monitored network decoys serving several purposes: they can distract adversaries from more valuable machines on a network, they can provide early warning about new attack and exploitation trends and they allow in-depth examination of adversaries during and after exploitation of a honeypot.
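The alert postprocessing advocated above, ranking the source addresses that keep sending strange packets, can be sketched in a few lines. The regex below assumes Snort's "fast" alert line style, and the field names are illustrative; adapt it to whatever your sensor actually emits.

```python
# Sketch: rank source IPs found in Snort "fast"-style alert lines.
# The log format assumed here is illustrative, not authoritative.
import re
from collections import Counter

# matches "... {PROTO} <src-ip>[:port] -> ..." in a fast-alert line
ALERT_RE = re.compile(r'\{\w+\} (\d+\.\d+\.\d+\.\d+)\S* -> ')

def top_talkers(alert_lines, n=10):
    """Count alerts per source IP and return the n busiest sources."""
    hits = Counter()
    for line in alert_lines:
        m = ALERT_RE.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits.most_common(n)
```

Run against a day of alerts, this yields exactly the "set of IP addresses that tend to send strange packets" that is worth collecting and tracking over time.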
Dec 2000 | CERIAS, Purdue University
Drawing from the experience obtained during the development and testing of a distributed intrusion detection system, we reflect on the data collection needs of intrusion detection systems, and on the limitations that are faced when using the data collection mechanisms built into most operating systems. We claim that it is best for an intrusion detection system to be able to collect its data by looking directly at the operations of the host, instead of indirectly through audit trails or network packets. Furthermore, for collecting data in an efficient, reliable and complete fashion, incorporation of monitoring mechanisms in the source code of the operating system and its applications is needed.
...The high-tech firm got a $4.4 million contract today from the Defense Advanced Research Projects Agency (DARPA) to develop novel, scalable attack detection algorithms; a flexible and expandable architecture for implementing and deploying the algorithms; and an execution environment for traffic inspection and algorithm execution.
The network monitoring system is being developed under DARPA's Scalable Network Monitoring program, which seeks to bolt down network security in the face of cyber attacks that have grown more subtle and sophisticated. New technologies and applications provide new attack routes and have made traditional signature-based and anomaly detection-based defensive measures inadequate in both speed and sensitivity, BBN added.
To be effective in today's networks, detection algorithms must operate quickly, efficiently, and effectively in large, content-rich environments. DARPA said that because traffic volume is increasing at a faster rate than the number of hosts on the network, the computing power required to provide gateway network monitoring and defense of autonomous systems will continually grow as a fraction of the power of the monitored network. If these trends continue unabated the network will soon consume the majority of its resources solely to defend itself, DARPA said.
New approaches to network-based monitoring are sought that provide maximum coverage of the network (from the gateway down) with performance independent of the network size, DARPA said.
Some of DARPA's Scalable Network Monitoring program requirements include:
- Probability of detection of malicious traffic greater than 99% per attack launched
- A false alarm rate while monitoring traffic of not more than one false alarm per day.
- Support capabilities at conventional gateway line speeds of 1Gbps in Phase I of the contract, while Phase II will demonstrate the scalability of this capability at gateway line speeds of 100Gbps.
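A back-of-the-envelope calculation puts these targets in perspective. Assuming, hypothetically, that a loaded 1 Gbps gateway carries on the order of 100,000 packets per second (the figure below is an assumption, not a DARPA number), a budget of one false alarm per day translates into a vanishingly small per-packet false-positive rate:

```python
# Back-of-the-envelope check on the "one false alarm per day" target.
# PACKETS_PER_SEC is an assumed average for a loaded 1 Gbps gateway.
PACKETS_PER_SEC = 100_000
SECONDS_PER_DAY = 86_400

packets_per_day = PACKETS_PER_SEC * SECONDS_PER_DAY
max_fp_rate = 1 / packets_per_day   # per-packet false-positive budget

print(f"{packets_per_day:,} packets inspected per day")
print(f"per-packet false-positive rate must stay below {max_fp_rate:.1e}")
```

Under these assumptions the detector may err on roughly one packet in ten billion, which illustrates why DARPA calls the combination of speed and sensitivity inadequate in current systems.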
BBN earlier this year got $13 million in additional funding from DARPA to develop a system that quickly converts documents in foreign languages into English so that military personnel can react more rapidly to threats.
...Coupled with the functionality included in the release of QRadar SLIM (Simple Log and Information Management; see release issued October 30, 2007), QRadar 6.1 provides several new features and capabilities in the following key areas:
- New network flow searching capabilities for better network behavior analysis and security forensics
- Quality of Service monitoring for important network applications like VoIP
- Augmented host discovery and asset based alerting
- Tamper proofing of all stored log, event and network flow data
Combining network and security monitoring capabilities serves a growing need in the market. As noted in a recent Gartner report Select the Right Monitoring and Fraud Detection Technology(1) "Network security and operations products are different markets; however, we see these markets converging in 2008 so that one product set will provide a common network monitoring infrastructure for the NOC and the SOC"
QRadar 6.1 - More Than Just Another SIEM and NBA Product
Today's converging enterprise requires access to critical network and security data by both network and security operations teams. QRadar offers best of breed network and security monitoring to meet compliance and threat management drivers from a single platform. Features unique to QRadar 6.1 include:
- Network Behavior Analysis with a simple, flexible flow viewer that provides complete, enterprise network visibility.
- Robust Log Management architecture combined with analysis that can monitor the network and intelligently alert on the state of new threats, users, and hosts/assets in the network
- SIEM with extensive monitoring inputs and analysis capabilities that allow customers to converge the monitoring of their network and security infrastructures
"QRadar was the first product to seamlessly combine NBA and SIEM functionality - something that many of our competitors are now attempting to achieve through technology partnerships or first-step technology integrations," said Tom Turner, VP of Marketing and Product Management for Q1 Labs. "QRadar 6.1 further solidifies our technology lead and provides another step forward in helping the converging infrastructure roll-out leading threat management and compliance management practices."
Pricing and Availability
QRadar 6.1 is available now. The upgrade to QRadar 6.1 is available for free to existing QRadar customers. Pricing starts at $39,900.
QRadar goes beyond traditional security information/event management (SIEM) products or network behavior analysis (NBA) products to create a command-and-control center that can monitor, analyze, and remediate threats. QRadar combines, analyzes and manages an unequalled set of surveillance data--network behavior, security events, vulnerability profiles and threat information--to empower enterprises to manage business operations on their networks efficiently from a single console. More information about QRadar is available at:
The IT security team at Wayne State University in Detroit wanted to get better visibility into the traffic crossing the urban institution's main and satellite locations. With some 33,000 students and 10,000 faculty, staff and employees using the network which includes 10,000 internal and 50,000 external hosts, the team turned to network behavior analysis (NBA) software from Q1 Labs.
NBA tools monitor and analyze network traffic, looking for abnormalities and patterns that could indicate a zero-day attack, or a server sending too many queries, or one that is trying to connect to the Internet in the middle of the night (Compare Network Monitoring and Management products). The products prove to be another layer of security; in addition to identifying top talkers on the network, NBA technology can help network and security teams detect undocumented vulnerabilities and symptoms of unknown threats before the environment is impacted.
"We have so many sources for network traffic and we needed better insight into the network," says Morris Reynolds, director of information security and access management at Wayne State. "We had a funding opportunity that enabled us to purchase the technology that would help us see what vulnerabilities were coming across our network and how we were at risk."
The university implemented Q1 Labs QRadar technology, which is packaged as an appliance, in July 2007, and upon installation it detected between 10 and 15 bot-controlled computers on the network. The security policy at the university cuts those computers off from "the outside world" and gives systems administrators up to four days to remediate the problems. Finding these machines helps the security team spot potential vulnerabilities and monitor traffic sources.
"Right off the bat, QRadar gave us a general idea of what was going on in our network. It broke down the traffic by applications...
More than a year ago, I wrote about the increasing risks of data loss because more and more data fits in smaller and smaller packages. Today I use a 4-GB USB memory stick for backup while I am traveling. I like the convenience, but if I lose the tiny thing I risk all my data.
Encryption is the obvious solution for this problem -- I use PGPdisk -- but Secustick sounds even better: It automatically erases itself after a set number of bad password attempts. The company makes a bunch of other impressive claims: The product was commissioned, and eventually approved, by the French intelligence service; it is used by many militaries and banks; its technology is revolutionary.
Unfortunately, the only impressive aspect of Secustick is its hubris, which was revealed when Tweakers.net completely broke its security. There's no data self-destruct feature. The password protection can easily be bypassed. The data isn't even encrypted. As a secure storage device, Secustick is pretty useless.
On the surface, this is just another snake-oil security story. But there's a deeper question: Why are there so many bad security products out there? It's not just that designing good security is hard -- although it is -- and it's not just that anyone can design a security product that he himself cannot break. Why do mediocre security products beat the good ones in the marketplace?
In 1970, American economist George Akerlof wrote a paper called "The Market for 'Lemons'" (abstract and article for pay here), which established asymmetrical information theory. He eventually won a Nobel Prize for his work, which looks at markets where the seller knows a lot more about the product than the buyer.
Akerlof illustrated his ideas with a used car market. A used car market includes both good cars and lousy ones (lemons). The seller knows which is which, but the buyer can't tell the difference -- at least until he's made his purchase. I'll spare you the math, but what ends up happening is that the buyer bases his purchase price on the value of a used car of average quality.
This means that the best cars don't get sold; their prices are too high. So the owners of the best cars don't put them on the market, and the spiral starts. The removal of the good cars reduces the average price buyers are willing to pay; then the very good cars no longer sell and disappear from the market, then the merely good cars, and so on until only the lemons are left.
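The math being spared can still be illustrated with a toy calculation (every car value below is invented): buyers offer the average quality of whatever is still for sale, sellers whose cars are worth more than the offer withdraw, and the process repeats.

```python
# Toy illustration of Akerlof's spiral; all numbers are invented.
# Buyers offer the average value of cars still on the market; any
# seller whose car is worth more than the offer withdraws; repeat.
def lemons_spiral(car_values):
    market = sorted(car_values)
    while market:
        offer = sum(market) / len(market)        # price of "average" quality
        staying = [v for v in market if v <= offer]
        if staying == market:                    # nobody else driven out
            return market, offer
        market = staying
    return market, 0.0
```

Starting from cars worth 1000 through 5000, each round drives out the best remaining cars until only the cheapest lemon survives, selling at its own low value.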
In a market where the seller has more information about the product than the buyer, bad products can drive the good ones out of the market.
The computer security market has a lot of the same characteristics of Akerlof's lemons market. Take the market for encrypted USB memory sticks. Several companies make encrypted USB drives -- Kingston Technology sent me one in the mail a few days ago -- but even I couldn't tell you if Kingston's offering is better than Secustick. Or if it's better than any other encrypted USB drives. They use the same encryption algorithms. They make the same security claims. And if I can't tell the difference, most consumers won't be able to either.
Of course, it's more expensive to make an actually secure USB drive. Good security design takes time, and necessarily means limiting functionality. Good security testing takes even more time, especially if the product is any good. This means the less-secure product will be cheaper, sooner to market and have more features. In this market, the more-secure USB drive is going to lose out.
I see this kind of thing happening over and over in computer security. In the late 1980s and early 1990s, there were more than a hundred competing firewall products. The few that "won" weren't the most secure firewalls; they were the ones that were easy to set up, easy to use and didn't annoy users too much. Because buyers couldn't base their buying decision on the relative security merits, they based them on these other criteria. The intrusion detection system, or IDS, market evolved the same way, and before that the antivirus market. The few products that succeeded weren't the most secure, because buyers couldn't tell the difference.
How do you solve this? You need what economists call a "signal," a way for buyers to tell the difference. Warranties are a common signal. Alternatively, an independent auto mechanic can tell good cars from lemons, and a buyer can hire his expertise. The Secustick story demonstrates this: if there is a consumer advocate group that has the expertise to evaluate different products, then the lemons can be exposed.
Secustick, for one, seems to have been withdrawn from sale.
But security testing is both expensive and slow, and it just isn't possible for an independent lab to test everything. Unfortunately, the exposure of Secustick is an exception. It was a simple product, and easily exposed once someone bothered to look. A complex software product -- a firewall, an IDS -- is very hard to test well. And, of course, by the time you have tested it, the vendor has a new version on the market.
In reality, we have to rely on a variety of mediocre signals to differentiate the good security products from the bad. Standardization is one signal. The widely used AES encryption standard has reduced, although not eliminated, the number of lousy encryption algorithms on the market. Reputation is a more common signal; we choose security products based on the reputation of the company selling them, the reputation of some security wizard associated with them, magazine reviews, recommendations from colleagues or general buzz in the media.
All these signals have their problems. Even product reviews, which should be as comprehensive as the Tweakers' Secustick review, rarely are. Many firewall comparison reviews focus on things the reviewers can easily measure, like packets per second, rather than how secure the products are. In IDS comparisons, you can find the same bogus "number of signatures" comparison. Buyers lap that stuff up; in the absence of deep understanding, they happily accept shallow data.
With so many mediocre security products on the market, and the difficulty of coming up with a strong quality signal, vendors don't have strong incentives to invest in developing good products. And the vendors that do tend to die a quiet and lonely death.
- - -
Bruce Schneier is the CTO of BT Counterpane and the author of Beyond Fear: Thinking Sensibly About Security in an Uncertain World.
The UNIX philosophy is built around the idea of cooperating tools. As quoted by Eric Raymond, Doug McIlroy makes this claim: "This is the UNIX philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface." 
Expanding on the idea of cooperating tools brings us to Sguil, an open source suite for performing NSM. Sguil is a cross-platform application designed "by analysts, for analysts," to integrate alert, session, and full content data streams in a single graphical interface. Access to each sort of data is immediate and interconnected, allowing fast retrieval of pertinent information.
Chapter 9 presented Bro and Prelude as two NIDSs that generate alert data. Sguil currently uses Snort as its alert engine. Because Snort is so well covered in other books, here I concentrate on the mechanics of Sguil. It is important to realize that Sguil is not another interface for Snort alerts, like ACID or other products. Sguil brings Snort's alert data, plus session and full content data, into a single suite. This chapter shows how Sguil provides analysts with incident indicators and a large amount of background data. Sguil relies on alert data from Snort for the initial investigative tip-off but expands the investigative options by providing session and full content information.
Other projects correlate and integrate data from multiple sources. The Automated Incident Reporting project (http://aircert.sourceforge.net/) has ties to the popular Snort interface ACID. The Open Source Security Information Management project (http://www.ossim.net/) offers alert correlation, risk assessment, and identification of anomalous activity. The Crusoe Correlated Intrusion Detection System (http://crusoecids.dyndns.org/) collects alerts from honeypots, network IDSs, and firewalls. The Monitoring, Intrusion Detection, [and] Administration System (http://midas-nms.sourceforge.net/) is another option. With so many other tools available, why implement Sguil?
These are projects worthy of attention, but they all converge on a common implementation and worldview. NSM practitioners believe these tools do not present the right information in the best format. First, let's discuss the programmatic means by which nearly all present IDS data. Most modern IDS products display alerts in Web-based interfaces. These include open source tools like ACID as well as commercial tools like Cisco Secure IDS and Sourcefire.
The browser is a powerful interface for many applications, but it is not the best way to present and manipulate information needed to perform dynamic security investigations. Web browsers do not easily display rapidly changing information without using screen refreshes or Java plug-ins. This limitation forces Web-based tools to converge on backward-looking information.  Rather than being an investigative tool, the IDS interface becomes an alert management tool.
Consider ACID, the most mature and popular Web-based interface for Snort data. It tends to present numeric information, such as snapshots showing alert counts over the last 24 or 72 hours. Typically the most numerous alerts are given top billing. The fact that an alert appears high in the rankings may have no relationship whatsoever to the severity of the event. An alert that appears a single time but might be more significant could be buried at the bottom of ACID's alert pile simply because it occurred only once. This backward-looking, count-based method of displaying IDS alert data is partially driven by the programmatic limitations of Web-based interfaces.
Now that we've discussed some of the problems with using Web browsers to investigate security events, let's discuss the sort of information typically offered by those tools. Upon selecting an alert of interest in ACID, usually only the payload of the packet that triggered the IDS rule is available. The unlucky analyst must judge the severity and impact of the event based solely on the meager evidence presented by the alert. The analyst may be able to query for other events involving the source or destination IP addresses, but she is restricted to alert-based information. The intruder may have taken dozens or hundreds of other actions that triggered zero IDS rules. Why is this so?
Most IDS products and interfaces aim for "the perfect detection." They put their effort toward collecting and correlating information in the hopes of presenting their best guess that an intrusion has occurred. This is a noble goal, but NSM analysts recognize that perfect detection can never be achieved. Instead, NSM analysts look for indications and warnings, which they then investigate by analyzing alert, full content, session, and statistical data. The source of the initial tip-off, that first hint that "something bad has happened," almost does not matter. Once NSM analysts have that initial clue, they swing the full weight of their analysis tools to bear. For NSM, the alert is only the beginning of the quest, not the end.
- pdf file (77.5 KB)
Anyone who has worked with an intrusion detection system knows that it can produce an enormous amount of data. For many network security analysts this vast ocean of packets flagged for further inspection quickly becomes an unruly beast to tame. How then to tame the beast?
The simplest and most efficient way to extract needed data from the ever-growing database logging these packets is to use a combination of Berkeley packet filters (BPF) and bitmask filters. Once you're familiar with their syntax and usage, filtering out specific data is easy. Instead of manually checking 200MB of packet data one packet at a time, you can pare it down to the interesting 500KB. This represents enormous savings in time and trouble.
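As a sketch of what such filtering looks like, the snippet below builds a tcpdump invocation combining an ordinary BPF expression with a bitmask test on the TCP flags byte, keeping only bare-SYN packets (a typical port-scan fingerprint). The capture filename and address block are hypothetical.

```python
# Sketch: combine a plain BPF expression with a bitmask test.
# tcp[13] is the TCP flags byte; masking with 0x3f (the six classic
# flags) and comparing to 0x02 keeps packets with only SYN set.
def syn_scan_filter(net):
    return f"dst net {net} and tcp[13] & 0x3f = 0x02"

def tcpdump_cmd(pcap_file, net):
    # -n: skip DNS lookups, -r: read a saved capture instead of the wire
    return ["tcpdump", "-n", "-r", pcap_file, syn_scan_filter(net)]

cmd = tcpdump_cmd("external.pcap", "10.0.0.0/8")
```

On a shell command line the filter expression would need quoting; passing it as a single argv element, as above, sidesteps that.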
[May 24, 2004] Snort fails to win approval - ZDNet UK News, Patrick Gray, ZDNet Australia
The creator of Snort, the open-source network-based Intrusion Detection System (IDS), says the software is up for an overhaul.
IDS has failed to impress the market, Martin Roesch told delegates at the AusCERT computer security conference in Queensland. The inability of many users to "tune" an IDS -- minimising the number of false alarms triggered by the monitoring devices -- has been a major drawback to the widespread acceptance of the technology, he said.
The next generation of Snort will include "passive discovery" features, Roesch said, which will automatically tweak the package's settings.
"IDS is not working as well as had been hoped, or as well as had been hyped," he said. "People have been saying... IDS can be used to secure your network. But that's not the role of an IDS."
Now the chief technology officer of US-based Sourcefire, which sells Snort-based intrusion detection systems, Roesch says auto-discovery features could be used to apply specific detection policies to particular devices on a network.
If the new software detects an Apache server running on Linux, it will only look for attacks relevant to that configuration, instead of monitoring the device for an attack that would affect a Cisco router or Windows server.
"If you don't have a technology that's capable of understanding what's out there on the network... then you're going to have big problems," he said.
Speaking to ZDNet Australia after his presentation, Roesch said the new features had been discussed within Sourcefire, but an actual release date to the open-source community is still unclear. "We haven't really talked about this with the open source community yet," he said. "Some big changes need to be made to the [Snort] engine to make this work."
Unlike more passive intrusion detection set-ups, re-vamped Snort will be able to enforce policies through its new capabilities. "The idea is to take a policy like 'thou shalt not run OS X on the network,' and then if someone with a Mac plugs into our network... it can tell the firewall to [block them]," he said.
This section describes some different IDSs, including logfile monitors, integrity monitors, signature scanners, and anomaly detectors.
19.1.1 Host IDSs
Host-based IDSs may be loosely categorized into log monitors, integrity checkers, and kernel modules. The following section briefly describes each, with examples.
19.1.1.1 Logfile monitors
The simplest of IDSs, logfile monitors, attempt to detect intrusions by parsing system event logs. For example, a basic logfile monitor might grep (search) an Apache access.log file for characteristic /cgi-bin/ requests. This technology is limited in that it detects only logged events, which attackers can easily alter. In addition, such a system misses low-level system events, since event logging is a relatively high-level operation. For example, such a host IDS will likely miss an attacker reading a confidential file such as /etc/passwd, unless the file is specially marked and the intrusion detection system is able to monitor read operations.
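The access.log check just described can be sketched in a few lines of Python rather than grep; the request methods covered and the sample lines in the test are invented, and a real monitor would tail the live log rather than take a list.

```python
# Bare-bones logfile monitor of the kind just described: flag Apache
# access-log lines requesting anything under /cgi-bin/.
import re

SUSPECT = re.compile(r'"(?:GET|POST|HEAD) /cgi-bin/')

def scan(lines):
    """Return only the lines whose request line touches /cgi-bin/."""
    return [line for line in lines if SUSPECT.search(line)]
```

This also makes the limitation above concrete: an attacker who edits access.log, or whose activity never produces a log line, is invisible to scan().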
Logfile monitors are a prime example of host-based IDSs, since they primarily lend themselves to monitoring only one machine. However, it is entirely possible to have a host IDS monitor multiple host logs, aggregated to a logging server. The host-based deployment offers some advantages over monitoring with built-in system tools, since host IDSs often have a secure audit transfer channel to a central server, unlike the regular syslog. Also, they allow aggregation of logs that cannot normally be combined on a single machine (such as Windows event logs).
In contrast, network-based IDSs typically scan the network at the packet level, directly off the wire, like sniffers. Network IDSs can coordinate data across multiple hosts. As we will see in this chapter, each type is advantageous in different situations.
One well-known logfile monitor is swatch (http://www.oit.ucsb.edu/~eta/swatch/), short for "Simple Watcher."
... ... ....
swatch uses regular expressions to find lines of interest. Once swatch finds a line that matches a pattern, it takes an action, such as printing it to the screen, emailing an alert, or taking a user-defined action.
The following is an excerpt from a sample swatch configuration script:

watchfor /[dD]enied|DEN.*ED/
        echo bold
        bell 3
        mail
        exec "/etc/call_pager 5551234 08"
In this example, swatch looks for a line that contains the word "denied", "Denied", or anything that starts with "DEN" and ends with "ED". When swatch finds a line that contains one of these strings, it echoes the line in bold to the terminal and makes a bell sound (^G) three times. Then, swatch emails an alert to the user running swatch (who must have permission to read the monitored logfiles, which often limits the choice to root) and executes the /etc/call_pager program with the given options.
Logfile monitors can justly be considered intrusion detection systems, albeit a special kind. Logs contain a lot of information not directly related to intrusions (just as network traffic sniffed by the network IDS does). Logs may be considered a vast pool of information-some normal (authorized user connected, daemon reconfigured, etc.), some suspicious (connection from remote IP address, strange root access, etc.), and some malicious (such as the RPC buffer overflow logged by the crashing rpc.statd). Sifting through all the information is only a little easier than sniffing traffic looking for web attacks or malformed packets.
If every application had a nice security log where all "bad" events were recorded and categorized, log analyzers would not be considered intrusion detection systems. In fact, if an event were to show up in this magical log, it would be an intrusion. In real life, however, pattern searches in logs are often just as valuable-if not more so-as looking for patterns on the wire.
In fact, analyzing system logs together with network IDS logs is a useful feature in a log analyzer. The log analyzer sees more than just the wire and creates a meta-IDS functionality. For example, management solutions such as netForensics enable cross-device log analysis, normalization and correlation (rule-based log pattern matching), and statistical (algorithmic) event analysis.
19.1.1.2 Integrity monitors
An integrity monitor watches key system structures for change. For example, a basic integrity monitor uses system files or registry keys as "bait" to track changes by an intruder. Although they are limited, integrity monitors can add an additional layer of protection to other forms of intrusion detection.
The most popular integrity monitor is Tripwire (http://www.tripwire.com). Tripwire is available for Windows and Unix, and it can monitor a number of attributes, including the following:
File additions, deletions, or modifications
File flags (e.g., hidden, read-only, archive)
Last access time
Last write time
Tripwire's capabilities vary on Unix and Windows due to differing filesystem attributes. Tripwire can be customized to your network's individual characteristics, and multiple Tripwire agents can securely centralize the data. In fact, you can use Tripwire to monitor any change to your system. Thus, it can be a powerful tool in your IDS arsenal. Many other tools (most are free and open source) are written to accomplish the same task. For example, AIDE (http://www.cs.tut.fi/~rammer/aide.html) is a well-known Tripwire clone.
The key to using integrity checkers for intrusion detection is recording a "known safe" baseline. Establishing such a baseline can only be accomplished before the system is connected to the network. Not having a "known safe" state severely limits the utility of such tools, since the attacker might have already introduced her changes to the system before the integrity-checking tool was run the first time.
While most such tools require a baseline pre-attack state, some use their own knowledge of what constitutes malicious. An example is the chkrootkit tool (available at http://www.chkrootkit.org). It looks for multiple generic intrusion clues, which are often present on the compromised system.
Integrity checkers provide maximum value if some simple guidelines are met. First and foremost, they should be deployed on a clean system, so they have no chance of recording a broken or compromised state as normal. For example, Tripwire should be installed on a system from the original vendor media with all the needed applications deployed, before it is connected to a production network.
Also, storing "known good" databases of recorded parameters on read-only media, such as CDROMs, is a very good idea. Knowing that there is one true copy for comparison helps greatly during incident resolution. Despite all of these precautions, however, hackers still might be able to disable such systems.
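The baseline-then-compare cycle that tools like Tripwire and AIDE implement can be sketched in a few lines. This is a hypothetical illustration of the general technique, not how either product works internally; the file name is a throwaway example:

```python
import hashlib
import os

def digest(path):
    """SHA-256 of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot(paths):
    """Record a baseline: path -> digest. A real deployment would write
    this database to read-only media such as a CD-ROM."""
    return {p: digest(p) for p in paths}

def compare(baseline, paths):
    """Return the files whose current digest differs from the baseline."""
    return [p for p in paths if digest(p) != baseline[p]]

# Demonstration against a throwaway file, not real system binaries.
path = "demo_monitored_file"
with open(path, "w") as f:
    f.write("original contents\n")
baseline = snapshot([path])

with open(path, "w") as f:
    f.write("tampered contents\n")   # simulate an intruder's change
changed = compare(baseline, [path])
print(changed)
os.remove(path)
```

The "clean system" guideline above applies directly here: if snapshot() runs after the intruder's change, the tampered state becomes the baseline and the change is invisible forever after.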
Moskowitz, Ira S., Myong H. Kang, LiWu Chang, and Garth E. Longdon, "Randomly Roving Agents for Intrusion Detection", Proc. 15th IFIP WG 11.3 Working Conference on Database and Application Security, Niagara-on-the-Lake, Canada, July 2001, Kluwer Press.
Agent based intrusion detection systems (IDS) have advantages such as scalability, reconfigurability, and survivability. In this paper, we introduce a mobile-agent based IDS, called ABIDE (Agent Based Intrusion Detection Environment). ABIDE is comprised of various types of agents, all of which are mobile, lightweight, and specialized. The most common form of agent is the DMA (Data Mining Agent), which randomly moves around the network and collects information. The DMA then relays the information it has gathered to a DFA (Data Fusion Agent) which assesses the likelihood of intrusion. As we show in this paper, there is a quantifiable relationship between the number of DMA and the probability of detecting an intrusion. We study this relationship and its implications.
Denial of service is becoming a growing concern. As computer systems communicate more and more with others that they know less and less, they become increasingly vulnerable to hostile intruders who may take advantage of the very protocols intended for the establishment and authentication of communication to tie up resources and disable servers. This paper shows how some principles that have already been used to make cryptographic protocols more resistant to denial of service by trading off the cost to defender against the cost to the attacker can be formalized based on a modification of the Gong-Syverson fail-stop model of cryptographic protocols, and indicates the ways in which existing cryptographic protocol analysis tools could be modified to operate within this formal framework. We also indicate how this framework could be extended to protocols that do not make use of strong authentication.
EXPERTS: IT MAKES SENSE TO OUTSOURCE IDS | News: SearchSecurity.com
Experts say that intrusion-detection systems (IDS) are candidates for outsourcing. Most companies don't have the skill sets or experience to sift through the volumes of logs or decipher the inevitable false positives. Some companies, however, remain adamant against turning over their security to an outsider.
GeodSoft How-To Homegrown Intrusion Detection -- host based
But will he remain profitable? (Score:4, Interesting)
by jschrod on Monday July 01, @03:18PM (#3802338)
(User #172610 Info | about:blank)
The point is not whether he is profitable, but whether he will remain so after venture capital and the associated demands come into his company. I hope that this guy did a very thorough cost-benefit analysis before he took the money.
Venture capitalists are not in it for the long run; they want to capitalize their investments in the mid term. Quite a few companies went bankrupt or got into difficulties after external money and the demand for a quick market grab came in and drove a solid growth strategy out. Look at SuSE for an example from the Linux world.
Disclaimer: I'm owner and CEO of a (privately held, incorporated) company. We still make profits, even in this harsh market, because we didn't join the hype train, but brought solid add-on value to our customers.
I wish Marty Roesch luck in choosing his business strategy...
Obligatory snide comment (Score:1)
by sparty ([email protected]) on Monday July 01, @03:26PM (#3802408)
(User #63226 Info | http://upside.net/~sparty/)
This "take in more money than you spend" concept is a little hard to grasp at first, but the more you think about it, the more sense it makes, at least in a fuddy-duddy, "old economy" kind of way.
As much as I sincerely want to believe that this is attempting to be witty, it's far too close to the *cough*VALinux*cough* truth *cough*Amazon*cough* coming from an OSDN employee.
Step two revealed (Score:5, Insightful)
by gmhowell ([email protected] minus city) on Monday July 01, @03:27PM (#3802428)
(User #26755 Info | http://brewnix.sourceforge.net/ | Last Journal: Sunday June 30, @04:07PM)
First go read the newsforge article.... Okay, the joke is:
Step one: develop open source software
Step two: mumble, mumble
Step three: profit!
Now, it seems that step two is revealed. It's actually a few steps. Now, for the first time ever:
Step two (a): Come up with (proprietary) tools that make the basic (GPL) Snort code easy to understand and use for non-technical managers.
Step two (b): Load Snort and the additional tools into a box, and sell the box as a complete solution, instead of just selling software.
It's been said before that there is no incentive to make OSS easy to use. Here (and elsewhere) is the proof. Make it hard to use. Release it. BUT, make the config tools easy to use, IF you pay for them.
I'm not slagging the guy, he's gotta eat. But it is another notch in the belt for those who are cynical about OSS and business.
Beyond intrusion detection
Liz Simpson [29-05-2002]
Making sense of security software event logs, whether they're from your firewall or an expensive intrusion detection system, can be like trying to drink from a fire hose. Even when you find a real problem, what do you do?
But intrusion detection is definitely not a bad idea. No matter how smart you think you are, you've probably overlooked something in your firewall configuration.
More importantly, your firewall has to let certain kinds of traffic through, such as web requests or email, and firewalls are just not designed to pick that traffic apart to tell if it's exploiting the software on the inside.
Several years ago, Bruce Schneier, a well-known cryptographer and a long-time advocate of paying attention to the human factors of security, founded Counterpane Internet Security.
Managed security monitoring
The company provides what it describes as managed security monitoring, which is not that different from hiring a firm to monitor your home burglar alarm. To this end, the company has two security operations centres, staffed 24x7 with security analysts.
The two centres vacuum up all the events generated by their clients' networking and security gear and feed them to a proprietary programme called Socrates.
Socrates sifts through the data and flags the events that merit the attention of a live human being. The events are then investigated, and most are dismissed as harmless without ever needing to bother the client.
When something serious does happen, however, Counterpane alerts the client and shows them how to respond to the attack. While not as sexy as designing new ciphers, it's vastly more marketable.
Just up the road from Counterpane in Mountain View, California, Taher El-Gamal, another famous cryptographer, has founded Securify, which has a radically different approach to the same problems.
Securify sells software, not a service. Like traditional intrusion detection software, the company's SecurVantage vacuums up packets and classifies them.
Patterns of activity
But rather than scanning for stock intrusion patterns or signatures, it looks at each packet to determine whether it fits into a pattern of activity permitted by a flexible policy. This is a model that specifies the correct behaviour of the entire network from the transport layer through to the application layer.
This sounds great, until you think about how awful it could be to specify such a model for your legacy network. This is where SecurVantage really stands out.
It combines the monitoring function with the ability to automatically create a sophisticated model of your network, which you can progressively refine according to reported anomalies.
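SecurVantage's model is proprietary, but the general idea of checking traffic against a policy of permitted behaviour, rather than a list of attack signatures, can be illustrated with a toy sketch. Everything here (the policy table and the flow tuples) is a hypothetical simplification:

```python
# Hypothetical policy: each entry permits a (protocol, destination port)
# pair and names the service it belongs to. Anything the policy does not
# cover is flagged as an anomaly, even if it matches no known attack.
POLICY = {
    ("tcp", 80): "web",
    ("tcp", 25): "mail",
    ("udp", 53): "dns",
}

def classify(flow):
    """flow is a (protocol, dst_port) tuple observed on the wire."""
    return POLICY.get(flow, "ANOMALY")

observed = [("tcp", 80), ("tcp", 6667), ("udp", 53)]
print([classify(f) for f in observed])
```

Note the inversion relative to signature scanning: the IRC-like flow on port 6667 is flagged not because it matches an attack, but because nothing in the policy permits it. This is also why hand-writing such a policy for a legacy network is so painful, and why automated model building matters.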
The product is gaining a following in organisations where security is such a high priority that outsourcing is not an option, like the military and banking.
But whether through outsourcing or improved software, it's clear that companies will need to move beyond traditional intrusion detection approaches if they really want to get a grip on network security.
Intrusion Detection System
Increasing Performance in High Speed NIDS
Intrusion Detection:New Directions
The Design and Prototype Implementation of Stateful Network Intrusion Detection System
I'm a graduate student at NEU (North East University), currently doing IDS research. I have just finished my dissertation on an optimization algorithm for IDS based on Petri nets. It seems nobody here is interested in it yet, so I hope it can be spread around so that feedback becomes available.
Kurt Seifried - Information security / IDS / Honeypotting with VMware basics
www.seifried.org/security/ids/20020107-honeypot-vmware-basics.html
Almgren & Lindqvist, RAID 2001 presentation slides (PDF):
www.raid-symposium.org/Raid2001/slides/almgren_lindqvist_raid2001.pdf
Lucidian - About Intrusion Detection
Intrusion detection is a technology which enables network and security administrators to detect patterns of misuse within the content of their network traffic. There are two ways that intrusion detection is implemented in the industry today - host-based systems and network-based systems.
Host-based intrusion detection systems use information from the operating system audit records to watch all operations occurring on the host that the intrusion detection software has been installed upon. These operations are then compared with a pre-defined security policy. This analysis of the audit trail imposes potentially significant overhead requirements on the system because of the increased amount of processing power which must be utilized by the intrusion detection system. Depending on the size of the audit trail and the processing ability of the system, the review of audit data could result in the loss of a real-time analysis capability.
Network-Based Intrusion Detection
Network-based intrusion detection passively monitors network activity for indications of attacks. Network monitoring offers several advantages over traditional host-based intrusion detection systems. Because many intrusions occur over networks at some point, and because networks are increasingly becoming the targets of attack, these techniques are an excellent method of detecting many attacks which may be missed by host-based intrusion detection mechanisms.
Lucidian Technologies, Inc. was formed in 1997 by a group of engineers that wanted to create a network intrusion detection system (IDS) that would combine some of the best features of existing IDS products, and solve the speed and modularity problems endemic to current Intrusion Detection systems.
The Speed Problem
One of the principal problems with existing IDS products is their failure to perform at "real" network speeds. Most ID products work fine at 10 Mbps, but when they are installed on the 100 Mbps backbones and department networks of leading IS infrastructures, they cannot handle the packet load.
Another speed problem is that existing IDS products will not run all of their attack recognition signatures at high speeds; these products need to have their active signatures reduced significantly in order to detect a few common attacks. In fact, some of the products will not perform any faster by reducing the number of active signatures used! Both of these shortcomings, the ability to handle fast network loads and the failure to process signatures at these speeds, make the products less effective in the real world.
The Modularity Problem
Another problem with existing network intrusion detection systems is their lack of modularity. Most of these IDS systems are designed and sold to work with only one type of processor or management system. They are not adaptable to many of the hardware and software platforms already in place within current networks.
Lucidian's product, named "NetDetect", is a software IDS technology that is designed to identify network attacks and intrusions on 100 Mbps networks. NetDetect combines a real-time attack signature recognition capability with the modularity to be adapted to any real-time processing environment. This capability allows it to be configured within a variety of networked devices including dedicated hosts, switches, routers, firewalls, and security equipment.
NetDetect is not a commercially available product, but rather a software technology that can be adapted to a vendor's set of products. This feature allows NetDetect to act as part of the network, rather than a separately installed and managed entity.
Imagine network security the way it should be. You show up for work in the morning and get a report via e-mail from your Intrusion Detection System (IDS): Everything is OK. You might feel a little dubious about this, but why should you? After all, isn't the goal of network security to prevent problems in the first place?
Today's IDSs don't come close to reaching that goal. Instead, their main allure is reporting what might be wrong in your network, and most products focus on producing lots of alerts about incidents so that you don't feel that you wasted your time and money deploying the system. And while knowing what's happening on your network is a worthwhile goal, it doesn't in itself do anything to solve the problems manifesting through the medium of the IDS console.
Perhaps I am being a bit unfair. But if you do nothing but listen to vendors rave about their products, you are bound to be misled and disappointed when the reality of what your shiny new IDS can't do sets in. IDS products are getting better, but they still have a long way to go.
A Distant Dream?
This article is not about products. Producing a useful review of IDS products is a very difficult task-one that's illustrated well by a review by Greg Shipley, director of consulting services with Neohapsis (see Resources). According to Shipley, the review should be published at roughly the same time as this issue of Network Magazine.
In this column, I'll focus on the technology behind IDSs, as well as an idealized view on some features that would dramatically increase these products' functionality.
First, some terminology. There are at least two general types of IDS: host-based and network-based. Host-based systems are installed on the server or desktop they're designed to protect, and generally monitor logfiles for certain events and key files for changes. Some host-based systems are hybrids, as they also monitor network traffic sent to the host where they're installed. Host-based IDSs send alerts to a central console, as do network-based systems.
Network-based IDSs (NIDSs) sniff network traffic using a system called a sensor. The sensor collects all packets and evaluates both the network headers and the data, looking for signs of misuse. Many sensors focus on attack signatures-that is, a pattern in the header or data that matches a known attack. Some sensors go beyond mere signature matching, attempting to match traffic with correct layer-4 and layer-7 protocols.
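The signature-matching core of such a sensor is conceptually simple; nearly all the engineering effort goes into doing it at wire speed. A toy sketch of the idea, where the signature names and byte patterns are illustrative assumptions rather than entries from any real rule set:

```python
# Toy signature database: name -> byte pattern to look for in a payload.
# Illustrative patterns only, not taken from any vendor's rules.
SIGNATURES = {
    "phf-probe": b"/cgi-bin/phf",
    "shell-metachar": b"/bin/sh",
}

def match_payload(payload):
    """Return the names of all signatures found in one packet payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in payload]

pkt = b"GET /cgi-bin/phf?Qalias=x%0a/bin/sh HTTP/1.0\r\n"
print(match_payload(pkt))
```

A real sensor must do this for thousands of signatures against every packet on the wire, which is why signature-set size and packet rate dominate the performance discussion below.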
While researching this column, I spoke with Shipley, who disclosed some results of the Neohapsis product review. Some of the details were striking. For example, many NIDS sensors crashed during the testing under real, not test, network loads. The fastest sensor caught almost every attack, but it had a user interface that only a Unix sysadmin would adore. A different product with an even faster sensor had a great user interface, but only a tiny signature database-so it missed many attacks.
Another issue for existing NIDS is that some of the best-known vendors are only now catching up to some of the tricks used to evade IDSs. In early 1998, Thomas Ptacek and Timothy Newsham published a paper about bypassing NIDSs using fragmentation and out-of-order packets. And what was true in 1998 remains true to some degree today. While many NIDS do attempt to defragment packets sniffed off a network, it's just not as simple as that. (See Network Defense, July 1999, for more information on fragmentation issues and NIDS.) And while vendors might claim it's not so easy to fragment attacks, Dug Song, while an employee of IDS vendor Anzen, wrote fragrouter, a tool that makes fragmenting any network traffic easy.
A related issue has to do with out-of-order packets. Another way of playing hob with NIDS is sending small packets that contain an attack split among them, so that the attack signature doesn't appear in any one of them, and sending the first packet last. The NIDS sensor's task is to reassemble the data stream, then analyze it for attacks in what IDS vendors now call "maintaining state." But adding this detail to an NIDS sensor already operating at its performance limits often pushes it right over the edge.
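A toy example makes the evasion concrete: split a signature across small out-of-order segments and per-packet matching misses it entirely, while a sensor that reorders and reassembles the stream still catches it. The segment layout and signature are hypothetical:

```python
# An attack string split across small TCP segments, delivered out of
# order: no single segment contains the full signature.
segments = [
    (8, b"/phf"),   # (sequence offset, payload); first bytes sent last
    (4, b"-bin"),
    (0, b"/cgi"),
]

SIGNATURE = b"/cgi-bin/phf"

def per_packet_hits(segs):
    """Naive per-packet matching: inspects each segment in isolation."""
    return [payload for _, payload in segs if SIGNATURE in payload]

def stream_hit(segs):
    """Stateful matching: reorder by sequence offset, reassemble, match."""
    stream = b"".join(payload for _, payload in sorted(segs))
    return SIGNATURE in stream

print(per_packet_hits(segments))   # empty: the evasion succeeds
print(stream_hit(segments))        # reassembly recovers the attack
```

The reassembly step is cheap here, but a real sensor must hold per-connection state for every active stream, which is exactly the memory and CPU burden described above.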
A sensor claiming to monitor 100Mbits/sec can often only do this part of the time. In another article (see Resources), Shipley claims that this is due in part to packet size. A 100Mbit/sec Ethernet link filled with maximum-size packets can carry, at most, 8,120 packets per second. But send the smallest possible packets (for example, pure TCP ACK packets), and the packets-per-second rate goes up to 144,880. Now, to make things really interesting, let's make many of those packet acknowledgments to ongoing TCP connections that the sensor is supposed to be monitoring in its state table-and you can really begin to understand why many sensors experience meltdown.
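Those packet rates follow from simple arithmetic over Ethernet framing overhead. Assuming 8 bytes of preamble plus a 12-byte inter-frame gap on top of the standard 1518-byte maximum and 64-byte minimum frames (slightly different overhead assumptions account for the figures cited above), a quick check:

```python
LINK_BPS = 100_000_000   # 100 Mbit/sec Ethernet
OVERHEAD = 20            # preamble (8 bytes) + inter-frame gap (12 bytes)

def packets_per_second(frame_bytes):
    """Frames per second on a saturated link for a given frame size."""
    return LINK_BPS // ((frame_bytes + OVERHEAD) * 8)

print(packets_per_second(1518))  # maximum-size frames: about 8,127 pps
print(packets_per_second(64))    # minimum-size frames: about 148,809 pps
```

The roughly 18-to-1 ratio between the two rates is the point: a sensor sized for full-length packets can be buried under an order of magnitude more work by a flood of minimum-size ACKs.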
It's one thing to sniff 144,880 packets per second (no mean feat), and another to check them for known attack signatures. Matching them to existing data streams before checking them, however, is quite another matter. An attacker can, through a port scanner, quickly set up thousands of connections in the state table, leading to memory overload. According to Shipley, many products he reviewed had memory leaks-places in the code where memory was used but never freed-that led to crashes over time.
IDSs' failings may be legion, but I'm not claiming these products are useless. In fact, a properly working IDS brings up the next big issue for proper deployment: Who will handle the potential security incident that the IDS reports?
I recently spoke with an employee of a large Web hosting and Internet services company about NIDS' practical uses. This employee uses the free NIDS tool snort to monitor the internal backbone at a large colocation facility. At this facility, snort collects between five thousand and six thousand alerts each weekday (fewer over weekends), and this is with a trimmed list of attack signatures. With so many potential incidents, all they do at this site is filter through the alert logs on an hourly basis to determine exactly what is significant and what, if anything, can be done about it.
Most sites won't have to deal with thousands of alerts each day. But even dealing with several alerts each day is a full-time job for someone skilled in network and operating system security. The person monitoring the IDS console must first perform triage-deciding which alert deserves immediate attention, and which can wait. Then this person must locate the potentially compromised system, determine if the attack has succeeded, and, if so, decide the next step. Should the system be further analyzed? Saved for evidence? Or just reformatted, with a new, patched operating system and applications installed? And does this attack hint at more undiscovered vulnerabilities on other systems that should also be patched immediately?
Just installing a single sensor and an NIDS console can easily consume weeks. Proper installation involves not only the physical process of installing the sensor and the console software, but also configuring the NIDS itself. Almost all IDS products support some level of customization. Sometimes the IDS will do this "automatically"-you launch a tool, and the tool scans the network, attempts to identify operating systems, then only watches for attacks specific to those IP addresses and the operating system associated with a particular address. Of course, you must remember to launch this configuration process anytime a system is added to your network.
More than likely, you'll need to manually select sets of attack signatures that you're most interested in. For example, if your organization has nothing but Microsoft systems, you don't want the IDS console annoying you with alerts about Unix/Linux exploits directed at internal systems. You might want the IDS to alert you to attacks against external Unix systems coming from your own network, though. And as you use a tool, you'll begin to learn which alerts will most likely be false positives-something the IDS picks up as an attack, but which actually represent normal network activity at your site.
The console will also be of critical importance to users. It must be capable of providing a high-level view but still permit you to drill down to collect more detailed information about a particular alert. There must be a mechanism for clearing alerts when you're done with them, or annotating the alerts, so you can remember later what you've done about a specific alert.
Hopefully, the console won't just present you with a terse log-style message, but will provide information about what the alert means, how serious it might be, how to actually verify if the alert represents a real event and, finally, how to patch the system involved.
Keeping the Dream Alive?
While some IDSs do hint at what an alert means, and a few even make suggestions about how to check a system and see if the attack has been successful, that's far from the "dream" IDS envisioned earlier in this column. But let's stay with that dream for a minute and imagine what a Manhattan Project approach ($2 billion in 1945 dollars) could do to implement the ideal IDS.
The ideal IDS would be a hybrid, with both host-based and network-based sensors. The host-based module would also permit patching and upgrading of operating system software on monitored machines, and the ability to adjust file permissions/ownerships, registry keys/configuration, and operating system tuning parameters-all without rebooting. And this host-based IDS would consume few CPU cycles, so installing it even on heavily loaded servers wouldn't be a problem.
The NIDS sensors would be installed in switches (two vendors have already achieved this) so that all traffic could easily be monitored. The sensors wouldn't just be passive network traffic monitors, but would keep track of in-use IP addresses and use passive fingerprinting to identify operating systems, and active fingerprinting if necessary. If a new system appeared, the sensor would go into active mode and perform a vulnerability scan of the new system, installing any operating system or application updates that were previously approved for that platform. The sensor would also inform the central console of a new system on the network.
The NIDS sensor would keep a history of all network traffic. If a new service suddenly appeared on a system, this could be an indication of either a successful attack or the installation of a Trojan. The NIDS would not only alert the console but also might take countermeasures against the possible Trojan-especially if the NIDS sensor detected other suspicious activity, such as a sudden flood or port scan originating from the suspect system.
While this type of system remains a dream, it's not mine alone. Department of Defense (DoD) research projects such as Emerald and Sapphire focus on assimilating reports from an array of sensors to provide the big picture, correlation between reports, and the ability to drill down to get details from individual sensors or hosts. Someday, the dream may come true.
Rik Farrow is an independent security consultant. His Web site, www.spirit.com, contains security links and information about network and computer security courses. He can be reached at [email protected]
Thomas Ptacek and Timothy Newsham's Intrusion Detection (ID) paper is located at this link.
Greg Shipley's article about Intrusion Detection Systems (IDSs) from Network Computing is located here.
Greg Shipley and Neohapsis' review of ID products will appear at Neohapsis sometime in August 2001.
Dug Song wrote fragrouter, a tool for testing network IDSs, while he worked for Anzen. Go to http://packetstorm.securify.com/UNIX/IDS/fragrouter-1.6.tar.gz or www.monkey.org to download fragrouter.
The CERIAS page defines IDS terms and provides IDS references. Go to this link.
A nice summary of different types of IDSs (including vulnerability scanners) can be found here.
For most of its short history, the network and information security industry has aimed to create a static defensive perimeter--the electronic equivalent of a fortified wall. But that wall is far from impenetrable; the size and complexity of modern networks can make it difficult for an administrator to even know where the perimeter is, much less secure it. Moreover, most successful security breaches are perpetrated by the company's own employees, partners, or clients--attackers who start out inside the defensive perimeter.
The development of intrusion detection systems (IDSs) in the late 1990s brought real-time detection and response within the grasp of most mid-to-large sized businesses. Operating on the assumption that at some level an attack looks different from legitimate activity, IDSs automatically collect and analyze different types of data from various sources throughout the network. By monitoring activity as it happens, the IDS can identify suspicious behavioral patterns and either notify network administrators, initiate an automated response to the perceived attack, or both. Administrators can then act to counter a specific attack and/or tailor defenses to defeat similar attacks in the future.
Network-based IDSs (NIDS)--such as NFR's NID, Internet Security System's RealSecure Network Sensor, Intrusion.com's SecureNet Pro, and the open-source application Snort--are the most commonly deployed type of IDS. These systems examine individual data packets as they move throughout the network, and compare them against a database of known attack patterns (or "signatures"), much like anti-viral software. Commercial NIDS packages usually rely on dedicated hardware sensor appliances installed on specific network segments to examine traffic as it passes, but most can also collect traffic data from different firewalls, routers, and hosts.
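The signature-matching loop described above can be sketched in a few lines of Python. The patterns and payloads below are illustrative placeholders for this sketch, not real Snort or RealSecure signatures:

```python
# Minimal sketch of NIDS-style signature matching: each packet payload
# is compared against a database of known attack patterns, much like
# antivirus scanning. The signatures here are illustrative only.
SIGNATURES = {
    "cmd.exe": "IIS directory traversal attempt",
    "/etc/passwd": "attempt to read the password file",
    "\x90\x90\x90\x90": "possible NOP sled (buffer overflow)",
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the alert names of all signatures found in a payload."""
    text = payload.decode("latin-1", errors="replace")
    return [alert for pattern, alert in SIGNATURES.items() if pattern in text]

alerts = match_signatures(b"GET /scripts/..%255c../winnt/system32/cmd.exe HTTP/1.0")
print(alerts)  # ['IIS directory traversal attempt']
```

A real sensor adds TCP/IP reassembly and normalization in front of this loop; without them, fragmentation tools such as fragrouter (mentioned above) can slip attacks past naive substring matching.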
NIDS are extremely fast, and can automatically block suspicious traffic or adjust network configuration in response to a perceived attack in progress. Because they operate in real time, however, NIDS can act as a traffic bottleneck and adversely affect network performance. The size of a performance impact--if any--is difficult to predict, and will vary widely from moment to moment based on available hardware and software, type and amount of network traffic, and network topology.
Host-based IDSs (HIDS) also look for attack signatures, but monitor operating system activity on specific machines, rather than network traffic. Repeated attempts to guess a log-on password might set off a HIDS alert, for example, as might attempts to access restricted local files. Some host-based tools can also monitor specific applications for strange behavior. eEye's SecureIIS Application Firewall, for example, monitors Microsoft's Internet Information Services (IIS) application. HIDS can operate in real time, use automated responses, and typically share most of NIDS' strengths and weaknesses, but they are best suited to detecting different kinds of attacks.
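The repeated-failed-log-on check mentioned above is easy to sketch. The log-line format and the threshold here are hypothetical, standing in for whatever audit trail a real HIDS would read:

```python
# Sketch of a host-based check: count repeated failed log-ons per user
# and alert once a threshold is crossed. The log format below is a
# made-up placeholder, not any specific syslog dialect.
from collections import Counter

def failed_login_alerts(log_lines, threshold=3):
    """Return users with at least `threshold` failed log-on attempts."""
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            user = line.rsplit("user=", 1)[-1].strip()
            failures[user] += 1
    return [user for user, count in failures.items() if count >= threshold]

log = [
    "FAILED LOGIN user=root",
    "FAILED LOGIN user=root",
    "LOGIN OK user=alice",
    "FAILED LOGIN user=root",
]
print(failed_login_alerts(log))  # ['root']
```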
Most commercial vendors bundle NIDS, HIDS, and other tools such as file integrity checkers and log analyzers into a single package. "These are complementary rather than competing technologies," says Marcus Ranum, CTO at NFR Security. "Each is optimized to find different kinds of problems, and together provide overlapping 'fields of fire.' They can also share data, letting them catch issues that any one alone would miss."
When they were first introduced, IDSs were heralded as the ultimate weapon against online intruders. Finally, network administrators would have the ability to see the attackers in action, and would therefore be able to stop them in their tracks. Years later, however, IDS packages are only beginning to gain mainstream acceptance, and many view them as too expensive and resource-intensive for most companies. (Purchase price is only a part of the equation. Needs assessment, planning, installation, and configuration will usually add up to far more than list price. Total install cost is going to depend on the specific arrangements negotiated with the party doing the install, and will vary widely according to the size of the project and the relationship between the client and installer.) Moreover, many specialists have begun to question whether IDSs represent the best use of limited security resources.
According to Andrew van der Stock, senior architect at security consultancy e-Secure, "IDS is worse than useless in most environments--in most cases, it only gives a false sense of security. IDS is really only suitable once you have a top-notch security environment and are looking for an additional layer of defense."
Perhaps the most serious difficulty with IDS is what is commonly known as the "tuning problem." Most successful online attacks are specifically designed to closely resemble legitimate activity, and a variety of issues can cause harmless or accidental activity to resemble an attack. Every network, moreover, has different norms for acceptable activity. As a result, IDS packages must be carefully "tuned" to minimize the number of false alarms, while still catching actual attacks. In practice, most IDS packages will produce a substantial number of "false positives" no matter how well tuned; over time, overworked administrators tend to tune out or turn off their IDS.
Moreover, the security community has only begun to develop effective responses to attacks in progress. Though most IDS packages are capable of automated responses, most experts warn against their use on a regular basis. Given the frequency of "false positives," automated responses can easily end up interfering with legitimate activity. For example, a savvy attacker can intentionally trigger automated responses simply to cause interference. On the other hand, manual responses tend to be both slow and non-specific. While administrators can take steps to counter or minimize the damage from a specific attack, they are often left with the choice of isolating their network from the Internet (losing much or most of its functionality) or simply allowing an attack to continue.
Though IDS packages are far from a cure-all, they can be a valuable addition to the security professional's toolbox. They are complex and difficult tools to use, however. Perhaps the most important step is recognizing that an IDS is not a replacement for more traditional security tools; it should be seen as "icing on the cake." Well-developed security policies and procedures, solid network architecture, properly configured firewalls, and strong authentication are all prerequisites for an effective IDS deployment.
Don't underestimate the amount of time and resources necessary to properly plan and prepare for the initial IDS installation. Properly placing IDS sensors requires a thorough understanding of which data and assets you're trying to defend, as well as the types of threats of primary concern. Tuning alerts to minimize false positives requires an intricate understanding of your standard network activity, security policies, and enforcement standards.
Substantial as they are, initial deployment costs are only a small fraction of the total investment needed to make an IDS effective. The most common mistake in deploying an IDS is thinking of it as a "set and forget" tool. Be prepared for an ongoing commitment of staffing, training, and financial resources. If you can't afford a dedicated security staff, consider outsourcing your IDS management to a specialist "managed security" firm such as Counterpane Internet Security, Guardent, or Riptech.
An IDS can be a powerful defensive weapon. If you're looking to improve your security, take a look. But be aware of what you may be getting yourself into.
San Francisco-based security consultant and columnist David Raikow holds a law degree from U.C. Berkeley's Boalt Hall School of Law. You can reach him at [email protected].
A National Infrastructure Protection Center (NIPC) report dated March 15, 2001, and the FBI's Internet crime division announced a new attack tool (called "Stick") that attempts to flood a network or computer with more "false positives" than intrusion detection software (IDS) can handle, thereby rendering it inoperative. If successful, a hacker might take advantage of the failed IDS to locate and exploit an unrelated vulnerability on the victim's system, perhaps to gain root access. IDS systems play an important role in a layered information security architecture: by scanning the network against a database of known vulnerabilities, they can potentially detect if someone has accessed the system.
Tracking the physical location of IP addresses can be tricky business, but Clements used a company named Nami Media Inc. to trace back addresses to originating countries, and in some cases, cities. In this case, Nami Media acts as a reseller of tracking information provided by Digital Envoy.
Nami Media's CEO, Gary Mittman, said technological advancements and plain old hard work have refined such IP tracking to the point where it's reliable.
New Directions in Network Intrusion Detection by Jeremy Elson
MIT Lincoln Laboratory - DARPA Intrusion Detection Evaluation Publications Listed are Lincoln Laboratory publications and publications by various members of the Intrusion Detection research community that relate to the DARPA Intrusion Detection Evaluations.
A Hacker's Approach to ID by Mudge [weak]
A Glimpse Into the Future of ID by Tim Bass and Dave Gruber
On Reliability by John Sellens
Defending Yourself: The Role of Intrusion Detection Systems
by John McHugh, Alan Christie, and Julia Allen
What is the role of intrusion detection systems in an organization's overall defensive posture? This article provides guidelines for IDS deployment, operation, and maintenance.
Snort - A Look Inside an Intrusion Detection System by Kristy Westphal. This article will explore setting up Snort, how to use the various plugins, how to interpret the output of packet captures from Snort, and how it can complement other IDS's.
Chances are, your company's intrusion detection software stopped suspicious-looking traffic today. Chances are, it was a false alarm, too.
Network attacks, including distributed denial-of-service and buffer overflow incursions, have put intrusion detection software on the front line in the battle against hackers. But the wider the deployment of intrusion detection, the more administrators are realizing the technology's limits and frustrations.
The reason: Too often, the software puts out false-positive alerts, which warn administrators about traffic that turns out to be innocuous but still send IT managers scurrying to plug security holes.
"It got to an absurd point, where every other day we were literally just blowing away our log file," said Robert Boyle, CEO of Tellurian Networks Inc., a managed-service provider in Newton, N.J.
Technically, false-positive intrusions are a hard problem for software companies to solve. The technology is a slave to a statistical phenomenon called the base rate fallacy. Attacks are rare relative to the amount of traffic coming into a network. The rarer the event, the more accurate the test must be to be useful. Right now, intrusion detection is not accurate enough and returns more false positives than true positives.
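The base-rate fallacy described above is easy to quantify with Bayes' theorem. The rates below are illustrative assumptions, not measurements of any particular product:

```python
# The base-rate fallacy in numbers: even an accurate detector is
# swamped by false positives when attacks are rare relative to traffic.
def precision(base_rate, tpr, fpr):
    """P(attack | alert): fraction of alerts that are true positives."""
    true_alerts = base_rate * tpr          # attacks correctly flagged
    false_alerts = (1 - base_rate) * fpr   # benign traffic misflagged
    return true_alerts / (true_alerts + false_alerts)

# Assume 1 event in 10,000 is hostile; the detector catches 99% of
# attacks and misfires on just 1% of benign traffic.
p = precision(base_rate=1e-4, tpr=0.99, fpr=0.01)
print(f"{p:.3f}")  # 0.010 -- roughly 99 of every 100 alerts are false
```

Under these assumed numbers, a 99%-accurate sensor still produces about a hundred false alarms per real attack, which is exactly why administrators end up "blowing away the log file."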
SECURITY: Network Intrusion Detection Using Snort by mhall
Your network is being scanned for vulnerabilities. This may happen only once a month or twice a day; regardless, there are people out there probing your network and systems for weaknesses. I can say this with confidence because I have yet to work on a network that has not been probed. My personal network of six systems at home is on a dedicated ISDN line. This network has no valuable data, nor does it represent any organization, yet I get probed two to four times a week. If you have a system or network connected to the Internet, you become a target. This article will discuss how you can protect yourself by detecting these intrusion attempts. I will then cover what you can do when you discover these attempts.
Setting up Intrusion Detection
The methods we will discuss are simple to use and implement. Larger or more security-conscious organizations may want to consider third-party Intrusion Detection Systems, such as Network Flight Recorder (http://www.nfr.net/nfr). These more advanced IDSs use traffic analysis and advanced algorithms to determine whether a probe has been conducted. Our approach will be somewhat simpler.
There are a variety of different probes hackers will attempt. The first type we will prepare for is one of the most common: the port scan. In a port scan, an individual attempts to connect to a variety of different ports. Scans can be aimed at a specific target or run against entire IP ranges, often chosen at random. This is one of the most popular information-gathering methods used by hackers today, as it identifies which ports and services are open.
To detect these scans, we will build a system that emails us alerts whenever someone connects to a predetermined port. First, we identify three to five of the most commonly scanned ports. Then we select two to three systems to listen on these ports. When an intruder scans our network, he will most likely hit our systems listening on these ports. When these ports are scanned, the systems log the attempt, execute various predetermined actions, then email an alert to a point of contact.
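The trap described above can be sketched with nothing but the standard library. A production version would email the alert to a point of contact and listen on commonly scanned ports (such as 23 or 111); this sketch just records the attempt in a list and uses an ephemeral local port so it can run anywhere:

```python
# Sketch of a port-scan trap: listen on a port nothing legitimate
# should touch, and record every connection attempt. A real deployment
# would email the alert rather than append it to a list.
import socket
import threading

def trap_listener(sock, alerts, stop_after=1):
    """Accept connections on a pre-bound, listening socket and log them."""
    for _ in range(stop_after):
        conn, addr = sock.accept()
        alerts.append(f"connection attempt from {addr[0]}")
        conn.close()
    sock.close()

# Bind and listen before starting the thread, so a probe arriving
# early is queued by the OS instead of being refused.
trap = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
trap.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
trap.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
trap.listen(5)
port = trap.getsockname()[1]

alerts = []
t = threading.Thread(target=trap_listener, args=(trap, alerts))
t.start()

# Simulate a scanner probing the trap port.
probe = socket.create_connection(("127.0.0.1", port))
probe.close()
t.join()
print(alerts)  # ['connection attempt from 127.0.0.1']
```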
Jun 19, 2000 | LinuxSecurity.com
"This document takes you through the basics of intrusion detection, the steps necessary to configure a host to run the snort network intrusion detection system, testing its operation, and alerting you to possible intrusion events."
"Intrusion Detection Systems (IDS) -- Product Survey" by Kathleen A. Jackson
UNIX Review - Kernel Audit Security Data
Analysis and Response for Intrusion Detection in Large Networks
COAST Audit Trails Format Group
List of accepted papers
BW: Network ICE Offers First Intrusion Detection System for Linux(May 08, 2000)
LinuxSecurity.com: Build a Secure System with LIDS(Apr 25, 2000)
Lids.org: LIDS Hacking HOWTO(Apr 09, 2000)
Network Computing: Best Practices in Network Security(Mar 18, 2000)
LinuxSecurity.com: Intrusion Detection Primer(Mar 13, 2000)
TechWeb: Linux Suppliers Focus On Improved Security [via Tripwire](Mar 03, 2000)
Security Portal: Some thoughts on (network) intrusion detection systems(Jan 16, 2000)
Security Portal: Network Intrusion Detection Systems and Virus Scanners - are they the answer?(Jan 09, 2000)
Security Portal: Do you have an Intrusion Detection Response Plan?(Aug 24, 1999)
Security Portal: Detecting Intruders in Linux(Aug 16, 1999)
Slashdot: Intrusion Detection [Book Review](Jan 27, 2000)
Slashdot: Review: Network Intrusion Detection: An Analysis Handbook(Sep 28, 1999)
http://www.anticode.com - huge selection, nicely laid out; the supermarket of exploit sites
http://www.rootshell.com - out of date but nicely searchable index
http://www.technotronic.com - exploit code and other tools such as sniffers and intrusion detection
http://www.nmap.org - Nmap scanner
http://www.nessus.org - Nessus intrusion scanner
http://www.marko.net/cheops/ - Cheops
http://www.isag.com - Internet Security Advisors Group (Ira Winkler is President of said company)
http://securityportal.com/research/research.underground.html - Underground Resources
USENIX ;login - intrusion-detection systems
CyberCop, originally produced by Network General, is an IDS recently released by Network Associates <www.nai.com>, an organization that is constantly at work acquiring technologies in the field of security. After their acquisition of Network General's product, NAI made some changes to CyberCop. (It must be remembered that the auditing/scanning software now called CyberCop Scanner -- also known as Ballista -- is different from the totally software-based product we are discussing here.)
Installing CyberCop does not require the network to be reconfigured or plug-ins to be added. Like other IDSs, CyberCop builds a layer of additional software which works by monitoring the ports and services enabled by the firewall.
The first version of CyberCop, announced in 1997, consists of two elements, the management server and the sensors. The latter are positioned at strategic points on the network and communicate any suspicious events to the management server. These events are classed according to a set of 170 different attacks.
If an attempt is made to access the network, the product, currently called CyberCop Server, informs the security administrator in real time, providing a detailed report of the event. The designers feel that within a few minutes CyberCop can give the input the security manager needs to take the necessary steps to resolve the problem. Management of the configuration of CyberCop, as well as the receiving and transmitting of the intrusion detection reports, can take place from a remote location using an encrypted link, which is activated only after recognition of the parties.
Of course, all the traffic monitored is stored in log files which can be consulted at any time by the security manager, both in order to trace the attacks and in order to take subsequent legal action. Configuration and positioning of the sensors are simplified by a preconfigured installation set, which makes operation easier and enables leaks to be limited.
Bro is a realtime IDS devised and developed by Vern Paxson and other experts at the Network Research Group of the Lawrence Berkeley National Laboratory. The source code of Bro is freely available, and the principle on which it is based is decidedly in an academic mold. With its spartan interface, indicating that greater attention has been paid to substance than to appearance, Bro bases its operational capacity on its scanning speed, realtime notification of violations, and a clear separation between the engine, the policy implemented, and the extensibility options.
Bro is partitioned into two components: the "event engine," which translates the traffic intercepted at kernel level into high-level events, and the "policy script interpreter," which defines the policy implemented, always by means of specific instructions written in a proprietary language. In this way, administrators can use the granularity of this IDS to adapt the system to their own requirements. The services monitored on a priority basis by Bro are Finger, FTP, and Telnet. In addition, the Portmapper function of this solution makes it possible to check the activity of the single ports as well.
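Bro's two-layer split can be illustrated with a small sketch. Bro's real policy layer is written in its own proprietary scripting language; Python stands in for both layers here purely for illustration, and the event fields and port-to-service table are simplified assumptions:

```python
# Sketch of Bro's architecture: an "event engine" translates raw
# traffic into high-level events, and a separate policy layer decides
# what counts as a violation. Keeping the two apart lets a site change
# policy without touching the engine.
def event_engine(packets):
    """Turn raw (port, payload) tuples into high-level service events."""
    services = {21: "ftp", 23: "telnet", 79: "finger"}
    for port, payload in packets:
        yield {"service": services.get(port, "other"), "payload": payload}

def policy(event):
    """Site policy, kept separate from the engine: flag telnet sessions."""
    if event["service"] == "telnet":
        return f"telnet session observed: {event['payload']!r}"
    return None

traffic = [(80, "GET /"), (23, "login: root")]
alerts = [a for a in map(policy, event_engine(traffic)) if a]
print(alerts)  # ["telnet session observed: 'login: root'"]
```

Swapping in a different `policy` function changes what is reported without any change to `event_engine`, which is the granularity and extensibility the Bro designers were after.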
So far we could say that there is nothing new. All network analyzers (or net sniffers, if you prefer), and therefore all IDSs (which can be considered extensions of them) are normally equipped with these features. The designers of Bro, however, state that during the analysis period they studied in depth the typology of both standard attacks and those that can be brought to bear on the screen in the narrowest sense, and that they were able to identify and describe attacks not referred to in the literature. Again, during the prebuilding phase the designers acquired substantial experience with systems based on offline analysis of tcpdump attempts. All this has given rise to a melting pot of reference information for subsequent implementation of the modules of this IDS.
One of the main objectives of Bro is to ensure traffic speed. In order to do this, Bro monitors DMZ links. These are usually FDDI, so that the monitor must be able to inspect the traffic, which is very bulky in itself, at speeds in excess of 100 Mbps.
Bro's separation of the engine from the rest of the modules, including the script policy interpreter, is essential to streamline the monitoring operations as much as possible (which means no degradation of network performance) and to distinguish the data on the basis of the services to which they belong. All this has been implemented in order to give Bro maximum flexibility. (Flexibility is more or less the reason why the manufacturers of virus-detection programs are revolutionizing the way they are developed. Attacks are becoming increasingly numerous and diversified, and they depend more than ever before on flaws in individual operating systems and their layouts, so they are increasingly well-targeted and unforeseeable. In this context, modularity and extensibility are strategic and can only be achieved if the architecture used is as open as possible.)
More information about Bro may be found at <http://nrg.ee.lbl.gov/nrg/papers.html>.
ISS RealSecure
ISS RealSecure for Windows NT by Internet Security Systems <www.iss.net> is one of the best known and best-selling IDSs on the market. (Indeed, ISS and CheckPoint have joined in partnership to bundle Firewall-1 and RealSecure together.) The basic operating principle is common to the other IDSs: The traffic passing through is monitored, and the activities are compared with the pattern with which it is outfitted. In the event that they match up, an alert is activated and possible automatic countermeasures are implemented.
Suspicious activities, documented with information concerning the chronology of the attack, its source, and destination -- plus other data to be selected -- can be managed extremely dynamically. Monitoring of the traffic consists above all of packet filtering. It is possible to configure RealSecure to check traffic in all its forms: TCP, UDP, ICMP, source and destination ports, etc. It is also possible to check the traffic on the basis of the services used, because the pattern of the attacks follows this schematic distribution.
The designers of ISS used this philosophy: Starting from the assumption that most attacks come from inside, the administrator needs a product able to check all the traffic (not only the traffic permitted by the perimeter security system). In addition, a check of the activity permitted by the firewall is also indispensable, since even an authorized user can "penetrate" a system.
The security policy set up by RealSecure therefore has the objective of checking and identifying beforehand:
- who can access the system and who cannot
- which protocols and/or services are permitted
- which new hosts are added to the network and what rights they have to "dialogue" with the rest of the infrastructure.
Starting from these assumptions, a series of features in RealSecure is aimed at making the work of the administrator as easy as possible and the system as flexible as possible.
NID 2.x is an intrusion-detection suite available freely on the Web <http://ciac.llnl.gov/ctsc/nid/> for various operating systems, including Linux, but its use is limited, for the moment, to government organizations.
NID works in a manner similar to Bro. It can monitor speeds and layouts, including FDDI and, of course, all IP traffic. NID has these features:
- The software is installed on a dedicated machine.
- A security domain is formed from the management console. In turn, this includes a series of hosts at the discretion of the operator.
- NID starts to audit the network traffic using three fundamental methods.
- Attack-signature recognition
- Vulnerability risk model, i.e., general safety parameters to be observed
- Anomaly detection, i.e., recognition of abnormal behavior inside the network and immediate notification of the system administrator
In NID, too, the analysis model and its operational expression are of the mainly "passive" type: traffic is audited and compared against the attack patterns at its disposal. If a match is found, an alarm is sent to the security administrator.
The software permits sessions of specific UNIX tasks, such as cron, to be run.
Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.
This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...
Last modified: March, 12, 2019