Monday, September 05, 2005

Gartner says IDS is dead

IDS is dead, according to Gartner. This subject came up a few weeks ago in a conversation with the CEO of a network management company that works mainly with US Federal clients. He told me, "Federal Agencies have been dropping millions on IDS for years, and it's not doing them any good. They aren't getting any value out of it. My staff thought I was crazy the first time I said this." It's common for security officers, consultants, and staff to think that a lack of management support and a lack of organizational investment are the reasons for IDS failure. The other side of the coin is that IDS technology is simply too expensive to operate and doesn't provide enough ROI. If your car required a full-time on-site mechanic to rebuild different parts of the engine and transmission, you couldn't afford to drive, either.

One of our clients has an industry-leading IDS system. They routinely receive alerts about worm outbreaks on their network from that IDS two days after the outbreak started -- when the new fire-breathing signatures finally arrive.

The IDS paradigm, like the AntiVirus paradigm, probably has a "sweet spot": things it can do well. But, like AntiVirus, IDS also has limitations that can't be overcome without stepping outside the paradigm. Stretching IDS outside the sweet spot (without stepping outside the IDS paradigm) inflates the cost of operations and the complexity of implementation. Unfortunately, every major IDS on the market today is reaching beyond the IDS sweet spot. The vendors want to help solve problems like worm and botnet invasions, because those are the most common, most damaging, and most expensive intrusions that potential IDS customers face. IDS systems are not well suited to the AntiWorm task. Even in the sweet spot of the paradigm, IDS suffers from a few basic problems:
  1. many false positives
  2. difficult to implement
  3. costly to operate
The response of the IDS industry to these problems is to "tune down" (or tune off) major chunks of the promised and desired functionality of the IDS system. This reduces the rather stunning false positive rate of the typical IDS system on a typical network (which, by the way, the IDS industry euphemistically calls "events" rather than "false positives") to a "manageable level". In other words, stop detecting the needle of real intrusions so that the system can be operated by the limited and overtaxed security staff available, rather than by the hypothetical dedicated full-time team required to sort through the haystack looking for it.

That's the root problem with IDS. It's just not possible to correlate data from so many disparate sources, looking for so many different potential "security events", without generating an unmanageable event load.

Yes, Gartner sometimes has an axe to grind, but in this case I don't see it. They seem to be making an honest assessment that agrees with the honest assessment of the CEO I mentioned -- a professional who makes part of his living installing and operating IDS systems for clients because they want IDS systems. IDS products are dreadfully out of alignment with the security demands and operational efficiency requirements of a modern network.


Wednesday, August 24, 2005

W32.Zotob.K and TFTP port 69/udp

Two years and dozens of worm variants after the W32.Blaster worm infected millions of machines using an easy-to-block TFTP callback mechanism, the latest variant of the Zotob family is using the same technique. The W32.Zotob.K worm may spread on some networks more successfully than previous variants, all of which attempt to exploit the MS05-039 buffer overflow defect in Windows systems. Using this technique, a worm author trades complexity in one area of the worm design (the overall transport logic) for simplicity in another (the code which exploits the buffer overflow).

Previous variants have connected to the victim computer on port 139 or port 445, where they hope to find an unpatched software agent listening. Then a packet is sent containing some things that the victim expects to receive, and some things it does not -- all arranged very precisely. This package includes the message which trips the buffer overflow, and the code the attacker seeks to run on the remote system immediately thereafter -- which includes a copy of the worm. It turns out that most variants of the worms that exploit MS05-039 directly have been limited in their ability to spread, even on networks of entirely vulnerable, unpatched systems. Their slow spread appears to be due to a quirk -- the attempt to execute the complicated instructions and upload the entire worm to the victim will sometimes fail, causing the target system to reboot without having first been infected.

The TFTP callback allows a simpler package to be delivered through the buffer overflow, and probably makes the exploit more reliable as a result. Instead of a big payload with lots of instructions, a small payload can be delivered. Basically, the worm says, "Hey, call me back." The attacking, worm-infested computer first sets up a listener on port 69, which is able to respond to TFTP requests. It's a small bit of code, and it has become standard fare in the "off the shelf" worm-building toolkits.
The instructions sent through the buffer overflow ask the victim computer to fetch a file from the attacker, using a TFTP client utility built into Windows, and then execute the resulting file. Organizations which still expose large numbers of Windows systems (unpatched for MS05-039, and with NULL sessions enabled) should consider blocking TFTP port 69/udp on internal routers before these new variants hit their networks.

If this TFTP callback on port 69/udp is so easy to block, why do so many organizations still have it open on their networks? It turns out that many network devices, including routers and switches, occasionally use TFTP to communicate with network management consoles. This is another good reason why the port should be blocked -- just remember to leave it open to and from a small number of network management consoles or subnets, not throughout the entire network. You can easily block these TFTP callback worms without interfering with your ability to manage routers and switches.

Will this become an arms race, with new variants opening the TFTP callback trojan on a different port each time? Perhaps. Some worms exploiting the MS05-039 vulnerability apparently open their own FTP server on a high-numbered port, using that for a callback transport rather than TFTP. However, the TFTP callback remains a popular exploit, and it's easy enough to block. The TFTP program on Windows seems to be hard-wired to call port 69, which explains the continued popularity of this particular port. Permanently blocking this port deprives the worm of a propagation technique with a long and successful history. Worm authors might possibly switch to a different protocol.
The other obvious choices, FTP and HTTP, would seem to place a greater burden on the instructions that need to be sent through the buffer overflow exploit, sending the worm author back to square one -- a worm that doesn't propagate very well because the buffer overflow exploit is too fragile. In any case, you won't likely be chasing TFTP all over the port map. It has stayed right there on port 69 for years, and partitioning your internal network on this port remains an effective strategy for mitigating the spread of many worm variants.
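As a sketch of the internal-router filtering described above (the management subnet, ACL number, and interface names are hypothetical -- adjust them to your own topology), a Cisco IOS-style access list might look like this:

```
! Permit TFTP only to and from the network-management subnet
! (hypothetical 10.1.250.0/24); deny it everywhere else.
access-list 120 permit udp any 10.1.250.0 0.0.0.255 eq 69
access-list 120 permit udp 10.1.250.0 0.0.0.255 eq 69 any
access-list 120 deny   udp any any eq 69
access-list 120 permit ip any any
!
interface FastEthernet0/1
 ip access-group 120 in
```

Note that TFTP only uses port 69 for the initial request (the data transfer then moves to ephemeral ports), but blocking the request is enough to break the callback.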


Reimage? Sez Who? The Fedz, that's who. And Microsoft.

A number of systems administrators have asked me for some nice authoritative PDFs and official-looking references to support them in discussions with management regarding recovery strategies (see IRC Botnets: The Needle and The Damage Done). Senior managers are not particularly impressed by blogs, you know. Here are a few that I had handy. Note that this guidance is not universal. AntiVirus vendors, in particular, commonly claim that cleanup tools are sufficient recovery in most cases, although lately even a few of them have become more cautious in their claims for their cleanup tools. If you find any other nicely authoritative references on this topic, let me know and I'll add them to this page.

The first reference actually surprised me. I was working on a large team in a gargantuan organization, and a client asked a Microsoft Security Consultant if Microsoft recommended re-imaging after worm attacks, and if they in fact practiced this type of recovery for internal problems. The gentleman responded with a clear and concise explanation that it was one of the 10 Immutable Laws of Security. I was so stunned I forgot to ask about the other 9. It turns out that it's actually numbers 1 and 2 on their list. (They also have a reasonable overview of Responding to IT Security Incidents. The NIST documentation is more detailed, but this overview is reasonably concise and might be helpful for managers.)

10 Immutable Laws of Security
Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore

In the end, an operating system is just a series of ones and zeroes that, when interpreted by the processor, cause the computer to do certain things. Change the ones and zeroes, and it will do something different. Where are the ones and zeroes stored? Why, on the computer, right along with everything else! They're just files, and if other people who use the computer are permitted to change those files, it's "game over". To understand why, consider that operating system files are among the most trusted ones on the computer, and they generally run with system-level privileges. That is, they can do absolutely anything. Among other things, they're trusted to manage user accounts, handle password changes, and enforce the rules governing who can do what on the computer. If a bad guy can change them, the now-untrustworthy files will do his bidding, and there's no limit to what he can do. He can steal passwords, make himself an administrator on the computer, or add entirely new functions to the operating system.
Steps for Recovering from a UNIX or NT System Compromise
Keep in mind that if a machine is compromised, anything on that system could have been modified, including the kernel, binaries, datafiles, running processes, and memory. In general, the only way to trust that a machine is free from backdoors and intruder modifications is to reinstall the operating system from the distribution media and install all of the security patches before connecting back to the network. Merely determining and fixing the vulnerability that was used to initially compromise this machine may not be enough. We encourage you to restore your system using known clean binaries. In order to put the machine into a known state, you should re-install the operating system using the original distribution media.
The CERT® Guide to System and Network Security Practices
An intruder may have altered user data and application program areas. Examples where this may occur include
  • installing back doors to provide future access. For example, an intruder installs a program in a local user directory that is called each time the user logs in, providing an unprotected login shell that can be accessed by anyone via the Internet.
  • compromising user data to sabotage the user's work. For example, an intruder makes small changes to spreadsheets that go unnoticed. Depending on how the spreadsheets are used, this can cause minor to major damage.
Use the latest trusted backup to restore user data. For files that have not been compromised, you can consider using the backup that was made closest in time to when an intrusion was detected to avoid user rework. This should be done with caution and is based on having a high level of confidence that restored user files were not compromised. Regardless, you need to encourage users to check for any unexpected changes to their files and warn them about the risk of compromise.
NIST Special Publication 800-61
Computer Security Incident Handling Guide
5.4.3 Eradication and Recovery (Malicious Code Incident Handling)

Antivirus software effectively identifies and removes malicious code infections; however, some infected files cannot be disinfected. (Files can be deleted and replaced with clean backup copies; in the case of an application, the affected application can be reinstalled.) If the malicious code provided attackers with root-level access, it may not be possible to determine what other actions the attackers may have performed.(91) In such cases, the system should either be restored from a previous, uninfected backup or be rebuilt from scratch. The system should then be secured so that it will not be susceptible to another infection from the same malicious code.

6.4.3 Eradication and Recovery (Unauthorized Access Incident Handling)

Successful attackers frequently install rootkits, which modify or replace dozens or hundreds of files, including system binaries. Rootkits hide much of what they do, making it tricky to identify what was changed.(94) Therefore, if an attacker appears to have gained root access to a system, handlers cannot trust the OS. Typically, the best solution is to restore the system from a known good backup or reinstall the operating system and applications from scratch, and then secure the system properly. Changing all passwords on the system, and possibly on all systems that have trust relationships with the victim system, is also highly recommended.
Checking Microsoft Windows® Systems for Signs of Compromise
If a rootkit is installed on your system, it will be extremely hard to detect. At present, there are only two tools that we are aware of that can aid the discovery of a rootkit, and the associated procedures are extremely difficult to follow. It is for precisely this reason we would recommend simply reinstalling the operating system; it will take far less effort and time. Indeed, it could be argued that these procedures should only be used for either academic curiosity and forensics of an attack, or if the system is of extreme importance. Regardless of your findings, it is still highly likely that a compromised machine will always remain compromised, and thus cannot be trusted.
The following document from the US CERT is less rigorous than the others cited here, as well as a bit ambivalent. Note that it's also internally inconsistent: it advises trying antivirus cleanup, but then states that reinstallation is "the only way to ensure" a secure recovery of a system. The US CERT should revise this document to be clearer, and to bring it into alignment with the overwhelming weight of sound advice and industry best practices, in consideration of the dramatic increase in sophistication of automated botnet attacks in the last few years.

Recovering from a Trojan Horse or Virus
If the previous step failed to clean your computer, the only available option is to reinstall the operating system. Although this corrective action will also result in the loss of all your programs and files, it is the only way to ensure your computer is free from backdoors and intruder modifications.

Sunday, August 21, 2005

IRC botnets: The Needle and The Damage Done

Since August 15th, many organizations have been struggling to recover from the onslaught of the various worms exploiting the Plug and Play (PnP) buffer overflow vulnerability (MS05-039). Those unfortunate enough to see large numbers of systems hit by one or more worm variants face the usual challenge of recovering the systems. Microsoft and the AntiVirus vendors are eager to help you recover your systems, with several offering their own custom cleanup tools to eradicate the worms. Victims of this crop of worms would do well to heed the long-standing recommendation of information security experts: recover your contaminated systems by re-imaging them from pristine media, particularly if they were able to contact the outside world, even for a few minutes, using an IRC control channel. It's often difficult for non-technical management to weigh the risks involved with any given outbreak. This week I've heard this sentiment expressed almost exactly the same way by managers in several different organizations:
"It's just a virus, right? I have those on my home PC all the time and nothing bad has ever happened."
Well, I'm sorry to be the bearer of the bad news, but that's not the way it is, certainly not any longer. The largest credit card theft to date, in which up to 40 million credit card numbers were recently stolen, was reported to be due to a "computer virus". That was undoubtedly a pretty bad event for quite a few people. It can take many months, even years, for an innocent individual to recover from problems deriving from the theft of their identity. Far more people than you might think are affected by identity theft, as described in the Federal Trade Commission – Identity Theft Survey Report from two years ago. Experts acknowledge that the problem is getting worse, as large-scale automated attacks by worms and botnets are employed to harvest identity data.

Worms and bots execute arbitrary code on the zombied systems hosting them. They typically run with Administrator rights and can do anything the computer can do -- and they start doing it within seconds after the system is exploited. These things are not just hypothetical. Here are a few of the things that zombied PC systems have been observed to perform at the request of remote attackers controlling them from outside the corporate firewall. By the way, these are not alarmist proclamations; rather, they are the mundane workaday activities of the typical botnet, observed and documented by many independent security consultants.
  • contact an IRC channel at a remote location, and receive arbitrary instructions
  • update the bot software, install new bot modules
  • scan penetrated networks for other vulnerabilities
  • probe the vulnerable systems and spread the bots
  • perform denial of service attacks on other networks
  • harvest (find and upload to remote servers) private, sensitive, secret or classified documents from hard drives
  • harvest passwords, user names, and other login information (from the Windows Registry, the Internet Explorer cache, cookies, and text or document files on the system)
  • harvest email addresses, contact information
  • sniff network traffic to capture passwords and other information
  • install rootkits, trojans, keystroke loggers and other malicious software
  • use the system to send spam
Botnet controllers could also employ the zombied PC for click fraud or other fraud (e.g. against internet advertising). The modern worm and bot attack has all the characteristics of yesteryear's intrusion -- a manual exploitation of a system by a hostile attacker. The universal consensus of the information security community regarding the crack of a system by an intruder is that the system must be re-imaged to regain assurance of its security. This recommendation hasn't changed in years, despite advances in rootkit detection techniques; the authors of such detection systems consider them useful for forensic analysis, not system recovery.

When a modern botnet invades your network, a remote person (or team of people) unknown to you has gained Administrator access to your systems. They have taken actions that you cannot trace, because those actions were not logged and because they may have modified system files or installed a rootkit. Somehow, because these attacks evolved slowly over a period of years from mundane virus and ostensibly benign worm attacks, managers sometimes don't take them seriously. The primary difference between a classic intrusion and a botnet invasion is that with a bot, the cracker quickly (within minutes) gains control of dozens, hundreds, or even thousands of compromised systems. The tasks allotted to the botnets can be automated as well. The nature of the threat is considerably greater than the virus or worm of days gone by. It's more appropriate to think of a bot as a manual intruder, multiplied by the number of contaminated systems, and to treat it with the same degree of seriousness.

One last motivation for treating botnet invasions more like traditional "intrusions" is provided by the increasing attention of legislative, regulatory, and oversight agencies. Private and governmental organizations alike may be under increasing legal and regulatory obligation to provide stronger assurances that recovery strategies are adequate.
Legislation at the Federal and State level may require private industry to disclose serious computer breaches which expose their customers, business partners, and employees to risk. Sometime in the next year or so, you're going to read about a big problem -- a giant identity theft, a massive leak of confidential or sensitive documents, an organization with hundreds of machines owned by a botmaster for months before it was discovered. Don't let it be your organization that you're reading about. If you didn't focus on prevention after the last botnet invasion, and you got hit again, don't try to cut corners now. Restore compromised systems from pristine media, then get to work on a layered defense posture.


Thursday, August 18, 2005

Threat Levels: Low, Medium or High? Red, Orange or Yellow?

Microsoft and the AntiVirus Vendors (perhaps a decent name for a band) tend to think of "threat" in terms of the number of machines infected, how many are vulnerable, and certain other primitive measures of damage done by a worm, such as "does it delete data files". By those measures, the current crop of worms appears benign. In fact, these worms are far more harmful than some of the most famous worms from a couple of years ago. Rather than hitting many millions of machines, these worms hit only a few hundred thousand, or perhaps a few million (infestations inside large corporate and government networks are hard to count from the outside, hiding many infected systems).

When the worms are released, they do the most damage in the first few hours. They immediately search the hard drives for interesting files and upload them to remote servers. This damage is done, to the tune of thousands of files and hundreds of MB of data, before you learn which port to block at your firewall. They steal user identity information, documents, and files that store encrypted passwords so the passwords can be cracked at the convenience of the attacker. They often leave very little in the way of evidence about what they have done.

If you get lucky and capture an IRC session used to control these things, you'll understand the true nature of the threat. Many infected systems this week were being actively controlled from outside the corporate firewall by hostile forces. I've recently seen a captured IRC session which includes automated traffic from the zombied bots, as well as conversation traffic between members of a team of human attackers who immediately noticed (and thought it was funny) when the client blocked the IRC port published by the antivirus vendors. We have very little forensic evidence on this, but what we do have indicates that the bots automatically switched to another port/server combination, and nary a beat was skipped.
Managers at all levels of corporations and government need to understand that these worms are a very serious threat today. Even though the number of systems infected might be smaller than in previous outbreaks, these worms and bots are dramatically more sophisticated. The security industry needs to come up with better measures of the threat level -- measures which include the risk of data theft, identity theft, and execution of arbitrary commands and code on internal systems.

Tuesday, August 16, 2005

IRC Botnets: Sisyphus Part II - containment

Many organizations are struggling with containment of worm and botnet invasions this week, as a result of the MS05-039 vulnerability and the myriad worm variants exploiting it (Zotob, Spybot, Esbot, Rxbot, Bobax, et al.). Most of these organizations have patch management, firewalls, IDS, and AntiVirus systems in place as part of a layered defense. They may suffer dozens or hundreds of compromised systems regardless of these efforts at prevention. The current crop of worms is nearly all bots -- remote-controlled software agents that call out of your network to a remote server, looking for instructions from an attacker. Containing the outbreaks can dramatically reduce the cost of the later cleanup. If you have a large network, with a large population of vulnerable machines, and you don't have a containment strategy in place, consider the following tactics.

Network Partitioning

Consider partitioning your internal network on the ports used by the worm to spread. This worm seems to favor port 445, but some variants also employ port 139. Block these inbound at VPN and dial-up access points. Consider creating zones within your enterprise that are partitioned on these ports, at least until you get all your systems patched. If an outbreak occurs, the damage can be contained within a zone.
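A sketch of such a zone boundary in Cisco IOS-style syntax (the ACL number and interface are hypothetical; adapt them to your own gear and topology):

```
! Drop the ports this crop of worms uses to spread, between zones
! and inbound at VPN/dial-up aggregation points.
access-list 130 deny   tcp any any eq 445
access-list 130 deny   tcp any any eq 139
access-list 130 permit ip any any
!
interface Serial0/0
 ip access-group 130 in
```

Windows file sharing across these zone boundaries will also be blocked, which is usually an acceptable trade-off during an outbreak; verify against your own traffic before deploying.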

Egress Filtering

Don't wait for the AntiVirus vendors to capture and analyze all the variants to determine what ports to block. Start by blocking all of the standard IRC ports if you don't have a critical business need for IRC (most organizations don't). The standard IRC ports are used surprisingly often for botnet control, because control channels can sometimes be set up on existing IRC servers with relative ease. Although there is a small chance that someone in your organization might be using IRC for legitimate purposes, consider directing them to use a more modern Instant Message protocol, like AIM, Yahoo IM, MSN IM, Jabber/XMPP, etc. Standard IRC ports should be blocked indefinitely. There are other ports associated with IRC, registered with the IANA, but they don't seem to be in use for botnet control at this time. We might need to expand this list later. (Note also that several of these ports are commonly used by IRC servers but not registered with the IANA; they currently show as "unassigned".) The standard ports to block are 6660/tcp through 6669/tcp, plus 7000/tcp.

Also, in your perimeter routers, block and log the IRC ports used by the known variants, as documented by the various AntiVirus vendors. During an outbreak, don't wait for a variant to hit your network: block and log the IRC ports as soon as the variant is documented. Have someone on your team assigned to review the emergent documentation from three or four major AntiVirus vendor web sites, and update your perimeter egress filtering rules at least twice a day during an outbreak.
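As a rough sketch of the policy above (the function name and the extra_blocked parameter are illustrative, not from any vendor tool), the egress decision reduces to a membership test on the destination port:

```python
# Standard IRC ports commonly used for botnet command-and-control:
# 6660-6669/tcp plus 7000/tcp. Several are conventional rather than
# IANA-registered, as noted above.
IRC_PORTS = set(range(6660, 6670)) | {7000}

def should_block_egress(dst_port, extra_blocked=()):
    """Return True if outbound traffic to dst_port should be dropped.

    extra_blocked holds the variant-specific control ports published
    by AntiVirus vendors; refresh it during an outbreak.
    """
    return dst_port in IRC_PORTS or dst_port in set(extra_blocked)
```

Keeping the variant-specific ports in a separate, frequently updated list means the standard IRC block stays permanent while the vendor-documented ports can churn twice a day during an outbreak.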

Disable NULL sessions

If you have a software distribution system in place, and if you haven't done so already, consider disabling NULL sessions on the Windows systems which haven't been patched yet. This can be accomplished with a tiny package and distributed much more quickly than large system patches. It appears that this will prevent infection of an unpatched Windows 2000 system, allowing you more time to patch. Microsoft declined to confirm or deny that the PnP vulnerability could be exploited with NULL sessions disabled, but apparently the current crop of worms and bots all rely on NULL sessions to perform the exploit.
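One way to package this change is a small registry file (a sketch, and an assumption on my part: RestrictAnonymous is the standard Windows 2000 control for anonymous/NULL sessions, but a value of 2 can interfere with domain browsing and trust operations, so test against your system baseline before wide deployment):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"RestrictAnonymous"=dword:00000002
```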

Smart Bombs

Practice adroit and vigilant application of cleanup tools as part of your containment strategy, but not your recovery (apologies to George F. Kennan). Cleanup tools can be deployed to contaminated systems to kill and delete the probing worm process which is spreading through buffer overflow exploits. If you can test and deploy them quickly enough, such tools can be part of a layered defense -- even if they are the "last line" of that defense. Focus testing on system compatibility -- will the tool accidentally wreck something on the system, making recovery harder? Probably not, but it's a good idea to check it out in your test lab. Don't squander precious time during the early phases of an outbreak by trying to validate that a cleanup tool kills every variant on your network. Deploy it only to systems that are contaminated and probing other systems. This is an attempt to slow the spread of the worm -- remember, you're engaged in containment, not recovery. If you have good reason to believe it will kill some of the variants on your network, send it out to contaminated systems after basic compatibility testing has been performed against your system image baseline. As follow-up, you can focus your limited forensics resources on the systems that continue to spew worm traffic despite the cleanup tool, and return with an improved version to taunt the silly worm a second time.

Intrusion Suppression

Our FireBreak AntiWorm can help you identify those infected systems within moments of the start of an outbreak, and with an extremely low false positive rate (no false positives at all on a typical network). FireBreak AntiWorm can also significantly impede the progress of an outbreak, allowing you more time to respond. The system is appliance-based and dramatically simpler than traditional IDS systems. Our solution can be deployed very quickly, so if you're having trouble putting the lid on the worm this week, don't wait -- call us today.

Monday, August 15, 2005

MS05-039 Zotob - and more to come

Zotob reared its slimy head this morning. It exploits a defect in Windows systems (the PnP vulnerability, MS05-039) for which a patch has been available less than a week. Zotob is undoubtedly the first of what will be many Week Zero Worms exploiting this defect -- not quite a Zero Day worm, but close enough to wreak havoc. Every time this happens, the internet discussion forums are flooded with snide comments from smug systems administrators, along these paraphrased lines:
"I patched all 652 of my systems this week before the worm hit. Any organization being hit by this worm is incompetent."
Well, probably not. These well-run one-man shops do impress with their ability to deploy patches quickly, and they offer some hope for the rest of the universe. However, the prima donna types who make it happen generally don't understand the magnitude of the problem in a large corporation with, say, 50,000 TCP/IP devices, mostly running Windows. It's not just a matter of patching 77 times as many systems in that same week. The unstable tower of complex software architectures built up on top of the typical network of Windows systems in a large enterprise makes it quite a bit more difficult to plan and execute a system upgrade, a configuration change, or even an operating system patch.

Explaining this to management in large organizations isn't very hard. Getting them to agree to do something to fix the underlying problems, however, is almost impossible. The people in charge of keeping the engines running are not the same people in charge of all the complicated attachments that get connected to them. All of these arbitrary "business drivers" may be carefully considered by IT people, who conclude that they need to meet the needs of the "customer" (e.g. another business unit, which is often a profit center carrying clout with Senior Management) and concede to the complicated attachments.

These other business units are often engaged, sometimes knowingly, in a game of externalized cost. They may buy a software system that must be deployed to every desktop, rather than one that users can access from a web server. Worse yet, they may build one, without divining the best practices which help prevent high-maintenance software architectures. An increasing burden builds up on the IT staff over time. Most of this stuff is extraordinarily difficult to measure. But these costs don't go away. They come back to bite. Other times, support issues arise within the IT organization itself, and a clever solution is devised. Often entirely too clever.
Unfortunately, this "can do" attitude is sometimes an IT shop's undoing. Clever solutions interwoven through the layers of the distributed systems, combined with the various creaky but mandated components, make for an overall architecture that is brittle. Then a worm hits. In a panic, patches are applied, things break, and the mess is cleaned up later. A post-mortem is performed. In the post-crisis exhaustion, the IT organization struggles to put the pieces back together and move forward on the latest set of tasks from the latest set of business drivers. In the standard ongoing chaos, the post-mortem's recommendations are ignored. A few weeks later: another defect, another worm, another crisis that might have been averted in a better world. It's a nasty, vicious cycle, and it's directly related to the sheer size of an organization and its network. So please, all you smug, fully patched systems administrators: don't be so hard on your colleagues who didn't get 50,000 PCs patched in the same week that you patched 652. This worm gave you several days of lead time, and patching still took you more than a day. The next worm could hit before the patch is available, and it could be you turning to the forums for advice on how to impede the spread of the worm on your network, contain the damage, and recover your systems. When your number comes up, these folks will have hard-won experience that you might be able to draw upon.


Friday, June 10, 2005

Witty Worm Insider? Perhaps not.

In several discussions around the net it has been suggested that the author of the Witty Worm must be an insider. I'm not so sure. Although I agree that it's interesting that the worm was pre-populated with a seed target list, and also interesting that some of those hosts were on a military base, I'm not convinced of the conclusions that others have drawn from these facts, namely that the attacker had to be an insider -- either from the product vendor, or from the company who reported the defect. Likewise, the implication that the attack was directed at the US Military doesn't make sense. A few minutes of scanning could have produced a list of 100 vulnerable hosts. The scanning algorithm might have been something like this:
  • google to find likely customers of the company whose product will be exploited,
  • find address blocks likely to be associated with those clients using various DNS tools,
  • scan randomly until you find a vulnerable host,
  • then walk up and down from that IP address to find others which are likely to be nearby.
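The list above amounts to a random-scan-then-walk search. As a rough sketch of the idea (not the worm's actual code), here it is in Python, with the address space shrunk to small integers and a hypothetical `is_vulnerable()` predicate standing in for a single probe packet:

```python
import random

def find_seed_targets(is_vulnerable, address_space, want=100):
    """Collect up to `want` vulnerable addresses: probe randomly until one
    hit is found, then walk up and down from it, since addresses near a
    vulnerable host often belong to the same organization."""
    found = set()
    tried = set()
    while len(found) < want and len(tried) < address_space:
        addr = random.randrange(address_space)
        if addr in tried:
            continue
        tried.add(addr)
        if not is_vulnerable(addr):
            continue
        # Hit: walk outward in both directions from the random hit.
        for step in (1, -1):
            probe = addr
            while 0 <= probe < address_space and is_vulnerable(probe):
                found.add(probe)
                tried.add(probe)
                probe += step
        if len(found) >= want:
            break
    return sorted(found)[:want]

# Toy demonstration: a contiguous block of "vulnerable" addresses,
# mimicking one organization's netblock inside a 16-bit toy space.
vulnerable = set(range(5000, 5150))
seeds = find_seed_targets(vulnerable.__contains__, 65536, want=100)
print(len(seeds))  # 100
```

From any single random hit inside an organization's netblock, the walk recovers the whole contiguous block, which is why a seed list of 100 hosts could plausibly be gathered in minutes.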
The worm could have been sitting around waiting for the seed list and the egg. Vulnerability announced, write the egg, test the worm, scan for some infectable hosts, and fire away. No insider knowledge required. It's possible that the attack was directed at the military base, but it seems just as likely that it wasn't. The attack was global; it could certainly have been restricted to the IP address ranges assigned to the US Military, or even to major US corporations, but it wasn't. Finally, analysts seem to universally assume that writing the egg to exploit a defect like this would be hard and take a long time. Perhaps the Witty cracker started months in advance, developing their worm against a previously well-documented Windows/x86 UDP exploit -- say, SQL Slammer or something like it. When a new vulnerability showed up that was similar enough to allow a single-packet UDP exploit, perhaps it only took a few hours to write and test the code. Has anybody narrowed down how many hours elapsed between the public announcement of the vulnerability and the start of worm propagation? It was clearly less than 48 hours. I hope Nicholas Weaver and his colleagues are funded for further research on the Witty Worm. I'd like to see them analyze the worm to determine whether it could have been developed in a few hours, given the starting position of a previous "prototype" worm. Other worms are clearly developed this way from existing toolkits that are publicly available. Perhaps this worm was developed from a private toolkit.


-- NOTE: A few days ago I saw a reference to something that Bruce Schneier had posted to his blog. Last night I surfed it up and was inspired to post to his comments. This entry is an edited version of my observations, posted here because clients browsing the Intrinsic Security blog may be interested. This was quite a worm, still provoking so much thought and discussion all these months later. /gary

Saturday, May 28, 2005

Device Drivers: a hidden worm threat?

One of the more interesting recent security articles, from SecurityFocus, discusses the potential for device drivers to be exploited, thanks to many lurking buffer overflow defects. The article discusses Windows and Linux as examples, although presumably any platform that depends upon many third-party device drivers is subject to the same issues. Drivers that listen on a network, such as network card drivers, would of course be open to remote exploits. People tend to think of device drivers as part of "the system", but the article points out that many if not most of the drivers people use are created by third parties, not by the vendor of the operating system, and typically not by the core kernel developers. The article notes that the authors of device drivers have wildly varying skill levels, and that many of the drivers in the sample inspected appear never to have been reviewed for security implications. That's too kind, actually. My own experience has been that device drivers often look like an afterthought of a hardware company.
The article doesn't discuss mystery drivers -- I don't know whether there is an industry-standard term for these things. I won't point fingers, but I've been surprised a few times by a software package that installs device drivers when the need for a driver in the application architecture wasn't really clear. Hardware drivers for peripherals, and certain root-level services like VPN software, make sense given the general system architecture of most contemporary operating systems.
The bottom line, of course, is that today's drivers hide plenty of lurking buffer overflows. Those that can be remotely exploited provide worm fodder, while the rest provide opportunities for local privilege escalation. Exploit-chaining techniques could see worms come in through non-privileged exploits, then up the voltage through a device driver defect. At that point, of course, they are free to do all the keystroke logging, email spamming, trojan downloading, and rootkit installing that any other administrator-level worm can do. Then again, one really doesn't see all that many non-privileged remote exploits on Windows. Since the system architecture demands Administrator privileges for so many things, it virtually guarantees that a remote exploit is also fully authorized from the get-go.
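Python can't demonstrate actual memory corruption, but the defect pattern behind most of these driver bugs -- trusting a length field that arrived off the wire -- can be sketched. Everything here is hypothetical (the wire format, the names); the point is the one check a careless driver omits:

```python
import struct

# Hypothetical wire format: 16-bit message type, 16-bit claimed payload length.
HEADER = struct.Struct("!HH")

def parse_packet(data: bytes):
    """Parse a toy network message. A careless driver would trust
    `claimed_len` and copy that many bytes into a fixed-size buffer --
    the classic remotely reachable buffer overflow. Here we validate it."""
    if len(data) < HEADER.size:
        raise ValueError("truncated header")
    msg_type, claimed_len = HEADER.unpack_from(data)
    payload = data[HEADER.size:]
    if claimed_len != len(payload):   # the check a buggy driver omits
        raise ValueError("length field does not match payload")
    return msg_type, payload

ok = HEADER.pack(1, 4) + b"ping"
print(parse_packet(ok))  # (1, b'ping')
```

In C, omitting that length check before a `memcpy` into a fixed-size kernel buffer is exactly the kind of remotely exploitable overflow the article warns about.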


Wednesday, May 25, 2005

The Next Big Worm

A systems administrator at a university pondered today, "We haven't seen a really big outbreak for a few months. Where are the big worms these days, like Sasser and Blaster? Aren't there any big security holes left to exploit?" Oh, yes. Microsoft releases patches about once a month, and at any given time there are usually a few serious defects that are known, not widely patched, and remotely exploitable. So what's the deal? Worm authorship now seems to be about building and maintaining botnets for revenue-generating spam networks, and about mining data such as email addresses, account names, and passwords. Giant worm outbreaks that infect millions of machines work against the aims of this organized criminal activity: widespread outbreaks get the instant attention of company management, systems administrators, and AntiVirus vendors worldwide. Many small outbreaks exploiting older, known defects don't attract so much attention, and they serve to slowly build enormous botnets over time.

Friday, January 28, 2005

Clear Thinking & Information Security

I noticed one day, in a discussion with a client, that something they said resonated with things other clients have said over the years. Literally dozens of conversations over the years have gone something like this:
"So, you plan to re-install the worm-infected systems from clean media, right?"
"No. We get hit by viruses all the time and nothing really bad has ever happened. We'll just delete the worm files, whack the key from the registry, and go back about our business."
"Do you know what the worm did after it contacted the overseas IRC remote-control channel?"
"No."
"How do you know nothing bad happened?"
If you're a client and you recognize this conversation, don't feel bad; I'm not quoting you. I've had this same basic conversation with many other clients, so you're in good company (I'm not quoting them, either!). Just about every other professional consultant in the information security world that I've ever spoken with has similar war stories. People who make their living managing Information Technology shops need basic logic and reasoning skills, and for the most part, they have them. Oddly, with respect to one particular class of problems -- those for which the solution is perceived to be expensive -- circular reasoning seems to be very popular. When managers don't like the answer that they know, and that the entire cadre of professional security consultants the world over agrees, is the right answer to a particularly painful problem, suddenly you can get whiplash trying to keep up with the coming and going in circles.
The simple fact is, if somebody "0wnz your box, d00d!", no matter the particulars of how they came to own it, you will have a very difficult time assuring the security of that system unless you re-install from pristine media. Yes, a few techniques and a few tools exist that might help you recover certain types of systems under certain circumstances. Would you like to experiment with those techniques on your production systems today? What if the box in question is the PC on your desk? Do you trust the cleanup tool enough to know it didn't leave behind a keystroke logger that the bot downloaded over IRC? Do you mind if someone outside the organization gains access to your bank account login while your staff learn the forensic techniques they need to find it? I thought not!
I've had an email signature block around for years -- a quote from physicist Richard Feynman. It's a bit long, and I don't use it often.
Sometimes, when the national debate on some topic or another has degenerated into nonsense, I quietly attach it without comment at the bottom of my emails for a few days. I was deeply moved by this passage from Feynman's observations, attached as an appendix to the final report of the Rogers Commission, which investigated the 1986 Space Shuttle Challenger accident. It reminds me of the sometimes accidental, sometimes unconscious, but nonetheless ever-present hubris of a bureaucracy. I work against this hubris at every turn, steadfast in my belief that organizations are made up of people, and most people want to do the right thing. Presented with the facts in a relaxed setting -- outside the office, away from the deadline pressures and the promotion risks and the office politics -- I'd guess almost all of the managers at NASA would have agreed with Feynman, and with a number of NASA engineers, that jets of burning gas shooting out of leaky, brittle O-rings, pointed at a giant tank of hydrogen, were a bomb waiting to go off. And finally, after a surprising number of launches blessed with extraordinary luck, it sadly did.

"Personal observations on the reliability of the Shuttle" "We have also found that certification criteria used in Flight Readiness Reviews often develop a gradually decreasing strictness. The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again and again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence." By: Richard P. Feynman (1986) "Personal observations on the reliability of the Shuttle" Included as an appendix to: Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident (known informally but widely as "The Roger's Commission" report)
Now, in most organizations nobody will die from a worm attack. Hospitals, air traffic control, dispatch centers, train control networks, nuclear power plant control centers, various other utilities, and the enormous DoD networks are notable and important possible exceptions, of course -- and all of those industries are documented to have suffered worm attacks within the last few years. Certainly I don't mean to over-dramatize the case; it's just that Feynman elegantly cut through mountains of red tape to reveal the rotten core of the decision-making process that led to the first space shuttle disaster. Arguably he explains the second disaster, from which NASA is still reeling, as well. Information Technology decisions require clear thinking, and circular arguments have no place in them.
Epiblog: I had just reached the end of this essay when I went to look up one last reference, and out of the blue, like a bolt of lightning, I was struck by a most remarkable coincidence. You see, I was looking for a good reference on the different types of fallacies in reasoning, when I browsed one I've had on my shelf for years: Clear Thinking: A Practical Introduction by Hyman Ruchlis. I bought it on a sale table several years ago, and occasionally look up a section on one reasoning fallacy or another. In the section on "circular reasoning", Mr. Ruchlis includes this very Feynman quote! Additional information on the Challenger accident can be found at the Federation of American Scientists web site.