Monday, November 06, 2006
There has been a great deal of discussion about voting systems in the security community following the well-documented problems with electronic voting systems in recent American elections, notably those of 2000 and 2004. One new system, Punchscan, promises dramatic improvements in voting security and looks like a big step in the right direction. For background information, see this primer by Bruce Schneier on The Problem with Electronic Voting Machines. To strike an even bigger blow for democracy, Punchscan should be extended so that it can support Instant Runoff Voting (aka Ranked Choice Voting).
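For readers unfamiliar with how Instant Runoff Voting actually tallies ballots, here is a minimal sketch of the counting rule (my own illustration, not anything drawn from Punchscan itself): the candidate with the fewest first-choice votes is repeatedly eliminated, with those ballots transferring to their next surviving choice, until someone holds a majority.

```python
from collections import Counter

def irv_winner(ballots):
    """Tally ranked-choice ballots by instant runoff: eliminate the
    candidate with the fewest first-choice votes, transfer those
    ballots to their next surviving choice, and repeat until one
    candidate holds a majority of the remaining ballots."""
    ballots = [list(b) for b in ballots]
    while True:
        counts = Counter(b[0] for b in ballots if b)
        total = sum(counts.values())
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > total:
            return leader
        loser = min(counts, key=lambda c: counts[c])
        # Strike the eliminated candidate from every ballot; each
        # ballot's next surviving choice moves up to first place.
        ballots = [[c for c in b if c != loser] for b in ballots]
```

Note that a plurality leader can lose under IRV once transfers are counted, which is exactly the property that makes supporting it in a verifiable system like Punchscan worthwhile.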
Thursday, June 29, 2006
Recently there has been a spate of incidents in which U.S. federal government agencies reported data theft or loss, particularly of data which could enable identity theft. The losses include the contact information and Social Security numbers of, literally, millions of federal employees and contractors. Most of these recent incidents were the result of stolen laptops, USB key fobs, or other computer hardware, although at least two involved unspecified intrusions (electronic theft of the data following a break-in to an online system). In the past several months, as the reports of stolen servers, hard drives, laptops, and USB key fobs have mounted, I've seen only two disclosed instances of an intrusion (in one case apparently targeted) which resulted in the theft of identity data: one concerning 1,502 people at the Department of Energy (Energy ups security efforts after loss of employee data) and one concerning 26,000 people at the Department of Agriculture (U.S. Department of Agriculture hacked). Despite the sparse reports of such intrusions, we know that government PC systems are not uniquely protected from these threats. Although it hasn't been reported, there is ample reason to believe that significant data loss has also occurred over the past several years through worm, botnet, spyware, trojan, and rootkit infestations. Such malware routinely scans the infected PC and mounted network drives or shares and uploads files and data into the arms of organized crime. This type of loss is harder for organizations to detect and remains underreported as a result. However, it has undoubtedly resulted in many more exposures of similar magnitude than have thefts of laptops. Many tens of thousands of computers in government agencies are infected with worms, bots, adware, spyware, viruses, trojans, and rootkits every year. The infection rates of many government agencies are not radically different from private industry.
Why do we see so few reports about data loss from these types of large-scale intrusions? The difference is that when a laptop is stolen, a piece of government-owned equipment goes missing. This creates a few circumstances that malware infections don't share. Missing hardware:
- can't be ignored due to strict property accounting requirements,
- can't be denied due to the loss of a physical device,
- and is more easily understood by all levels of oversight and management.
Wednesday, June 28, 2006
Within a few years it's possible that encryption will be the norm in government data storage, and probably in large organizations, too. The historical inevitability of this process was given a boost recently. The OMB has provided guidance requiring Federal agencies to take the security of desktop and laptop systems more seriously (see: OMB Sets Guidelines for Federal Employee Laptop Security) in the wake of the recent disclosure of several massive losses of data which could lead to identity theft. Here are a few stories describing recent incidents which have prompted the concern and gained the attention of the OMB:
- Navy Finds Data on Thousands of Sailors on Web Site
- Afghan market sells US military flash drives
- FTC Loses Personal Data on Identity-Theft Suspects
- US veterans' data exposed after burglary
- Veterans Affairs warns of massive privacy breach
- Officials: Veterans Affairs Department Ignored Repeated Warnings on Data Security
- Latest Information on Veterans Affairs Data Security
Additional background reading on the recent OMB security guidance: OMB targets desktop hole in cybersecurity
Before we leap headlong into encrypting everything in the government, however, we should really ponder the technology and its other implications. Earlier this week, President Bush chastised the North Koreans, who have been preparing to test an ICBM (Intercontinental Ballistic Missile), saying that it is worrisome that a "non-transparent regime" is developing such a capability. Transparency in government is a valued characteristic of modern democratic governments. Consider, however, that even in a modern democracy there exists a tension between disclosure and transparency on the one hand, and the desire of government organizations to restrict information flow for a variety of purposes on the other. Also this week, the disclosure of further domestic spying activity highlights that very issue.
More directly, even one of the agencies hit by recent data theft ran aground on the sand bar of public relations spin control run amok: Source: Theft of vets' data kept secret for 19 days. At least some organizations will opt to encrypt most data in most databases, most documents, and most filesystems, because it will be easier and cheaper to comply with directives like this by defaulting to encrypted storage for everything than it will be to analyze this mountain of content to determine whether it should be encrypted or not. (Most of the stolen data that upsets people is personnel data, which is "sensitive but unclassified," for example.) Although this may help prevent massive losses of data as seen recently, it might also reduce transparency in government. It may well be legitimately more difficult and expensive to satisfy a FOIA (Freedom of Information Act) request for organizations which rely on office documents and distributed (ad-hoc) content creation and storage. Most policy-setting organizations do exactly that. The recent OMB guidance is a welcome step in helping to limit the damage. (It should also be noted that encrypted storage doesn't completely solve this problem, as people tend to leave passwords lying about in plain text files to help them access their protected data, and passwords can be cracked with common tools, given sufficient CPU power and time to perform the crack.) Congress should consider the implications of encryption as a response to data theft problems upon the desirable characteristic of transparency in governance, and should attempt to mitigate the potential damage to transparency before it occurs. They might require that all encrypted archives be searchable, for example, similar to the way email applications search encrypted mail files. Some thought on this issue would undoubtedly produce a few basic guidelines which would help preserve transparency in governance.
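The parenthetical about crackable passwords deserves emphasis: encrypted storage is only as strong as the passphrase guarding it. Here is a minimal sketch of the dictionary attack those "common tools" automate (the function name and wordlist are my own illustration):

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Try each candidate passphrase against an unsalted SHA-256
    hash. A weak passphrase falls to this in seconds, no matter
    how strong the encryption it unlocks."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None
```

Real cracking tools iterate through millions of candidates per second, which is why a passphrase sitting in a plain text file next to the archive, or drawn from a dictionary, undoes the whole scheme.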
Technorati Tags: Afghanistan, arms control, Army, data loss, data security, encryption, identity theft, North Korea, OMB, rootkit, security, spyware, transparency, Trojan, USB, USDA, veterans affairs, virus, worm
Friday, June 16, 2006
A new zero-day exploit of Microsoft Excel has me pondering a standard bit of security advice, "be careful what you click." This meme survives to be repeated at nearly every outbreak, yet it simply isn't very effective. You've probably seen a story or blog post about this already, but in case you haven't here's the alert from the Microsoft technet blog which got me thinking:
Reports of new vulnerability in Microsoft Excel

Many online articles and blog postings repeated this advice unquestioningly. Some folks even praised it, including the respected security professional Brian Krebs. In his post about the issue at the Security Fix blog, he says it's "always good advice" that one be very careful opening unsolicited attachments. Recently similar advice was given to users of various Instant Messaging systems, as a "worm" affected users of Yahoo's system. In fact, the "worm" required the user to click it, meaning that its spread couldn't possibly achieve the "every vulnerable machine got hit" levels of a real, automatically propagating network worm. However, these Instant Message viruses and email viruses can affect large numbers of systems in a short amount of time. A year or so ago I saw an outbreak of an email virus hit 1.5% of the systems at a large customer. It hit so many people (over 500) so fast (within an hour or two) that we at first thought it was exploiting an automatic execution hole in the email client. In fact, it had just been a little more clever than average at social engineering -- tricking people into clicking it. I briefly interviewed a few of the victims, some of whom were trained IT professionals who spent a lot of time during the course of the year explaining to users that they shouldn't click unexpected attachments. Well, the virus in question was somewhat clever. It nearly always appeared to be from someone you know. It sent an attachment which appeared to be a spreadsheet (it was instead an executable virus). It used cleverly mundane subject lines. Nearly all of the victims had received a virus pretending to be a spreadsheet which appeared to be from someone they regularly receive spreadsheets from via email. How careful must people be? Scanning a file first wouldn't have protected the victim against zero-day threats like the current Excel threat. We give the same advice to people about web surfing.
Be careful where you surf, be careful what you click. It doesn't work there, either. Corporate and home PCs alike see anywhere from 1% to 20% ambient levels of adware and spyware infestation. But the web is a treasure trove of useful and wonderful things you might never discover if you don't, sometimes, click with essentially reckless abandon. The sentiment is pure, but most users are not able to easily tell what to click from what to avoid. Most people can filter out only the most rudimentary email viruses or phishing attempts at a glance. I've given this advice myself many times, trying to carefully explain how to tell good emails from bad, and good free downloads from bad. I think in general the advice hasn't been helpful to most people most of the time. High levels of ongoing infestation from adware and spyware, widespread damage from Instant Message "worms," and rampant identity theft all tell us that the advice isn't working.
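One reason "look before you click" fails is that the cues are genuinely subtle. Even the classic double-extension disguise, like the fake spreadsheet described above, takes deliberate checking to spot, and that checking is better done by a filter than a human. A sketch (the extension lists are my own illustrative choices, not an exhaustive set):

```python
import re

# A document-like first extension followed by an executable one,
# e.g. "budget.xls.exe" -- the disguise used by viruses that
# pretend to be spreadsheets.
DISGUISED = re.compile(
    r"\.(?:xls|doc|ppt|pdf|jpg|txt)\.(?:exe|scr|pif|com|bat|vbs)$",
    re.IGNORECASE,
)

def looks_disguised(filename):
    """Return True when a filename pairs a document extension
    with a trailing executable extension."""
    return DISGUISED.search(filename) is not None
```

A mail gateway applying even this crude rule catches attachments that trained IT professionals clicked; machines are simply more reliable at this kind of vigilance than people.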
" In order for this attack to be carried out, a user must first open a malicious Excel document that is sent as an email attachment or otherwise provided to them by an attacker. (note that opening it out of email will prompt you to be careful about opening the attachment) So remember to be very careful opening unsolicited attachments from both known and unknown sources."
Friday, June 09, 2006
Security auditors can be a clever lot, sometimes a bit too clever. You really need to have someone on staff looking over their shoulder throughout the entire audit, from planning through probing and reporting. If you don't have someone on staff qualified to watch them, you need an independent consultant. A very sharp generalist would do, but someone experienced in security would be better. Basically you need a check-and-balance system in place, to keep stories like the following from happening to your organization. First, the context. The auditors created a custom Trojan, planted it amidst various other files on USB drives, and seeded the drives in parking lots and areas of the client's premises where they would likely be discovered. Which, of course, they were. Here's what they say about the experience: Social Engineering, the USB Way
I had one of my guys write a Trojan that, when run, would collect passwords, logins and machine-specific information from the user's computer, and then email the findings back to us. ... I immediately called my guy that wrote the Trojan and asked if anything was received at his end. Slowly but surely info was being mailed back to him. ... After about three days, we figured we had collected enough data. When I started to review our findings, I was amazed at the results. Of the 20 USB drives we planted, 15 were found by employees, and all had been plugged into company computers. The data we obtained helped us to compromise additional systems, and the best part of the whole scheme was its convenience. We never broke a sweat. Everything that needed to happen did, and in a way it was completely transparent to the users, the network, and credit union management.

Yes, you read that right. Their custom Trojan emailed the client's account names, passwords, and other (presumably important) data out to the auditors' off-site email accounts. Now, unless these guys put rather a lot more effort into their custom Trojan than they described, email is a plain-text protocol. So any fifteen-year-old kid with a summer job sitting on a router or an SMTP gateway at an ISP between the client and the auditors' mailbox can read that email. Of course, it's possible the Trojan was equipped with an X.509 certificate and encryption system, but it seems to me that if the auditors had thought of this, they would have mentioned it. It would have been a source of pride. For either forgetting to encrypt the data, or failing to mention it in their storytelling, they will undoubtedly be punished by the flood of email they are bound to get from every GSEC- and CISSP-certified security analyst on the planet.
I don't want to be too critical, because they seem to have the best intentions, and their effort served to illustrate a point that clients often don't take seriously -- USB drives really can be dangerous, even if you don't inhale one. However, in their excitement to put the clever idea to the test, these auditors seem to have overlooked one important layer of the security cake and the important dictum, useful to all consultants: "first, do no harm." Of course, this isn't the most egregious error ever committed by an auditor. Far from it, in fact. I've personally seen auditors' laptops spewing worm traffic on a client's network. It's likely that the auditors' systems were infected by a worm on the client's network, rather than the other way around, but running three systems known to be vulnerable to the same defect that they were spanking the client for was, pardon the pun, an oversight. In the last year or so, several incidents of auditors losing valuable client data, including identity information, have been reported -- notably more than one incident involving Ernst & Young. So, have someone on your staff work closely with the auditors as a sponsor of the audit, or have an independent consultant watching over their shoulder for you. People sometimes get carried away in their exuberance to do great work, and other times they are following bureaucratic procedures that just don't make sense. In either case, your sponsor should have veto power over any actions during the audit, to protect your data from accidental exposure. In case you're wondering, you don't need an "auditor for the auditor for the auditor" up an infinite chain. What we're really talking about here is a sponsor with veto power who isn't part of the audit team. This kind of outside watchdog can break the pattern of groupthink that causes people to run off with a half-baked idea and accidentally expose the very data they are ostensibly trying to help you protect.
Monday, April 17, 2006
The recent article Does open source encourage rootkits? [NetworkWorld] discusses a McAfee report, "Rootkits", in which McAfee lays the blame for rootkits at the door of the open source community by name, security researchers by implication, and unwittingly at the very doorstep of information sharing -- books, libraries, and printed material. The report was issued after a large jump in the number of rootkits McAfee detected (nine times as many this quarter as in the year-ago quarter -- a dramatic increase). They specifically blame rootkit.com. The unstated basis for their argument is a classic tension between open sharing of information about security vulnerabilities on the one hand and secret cabals of security research on the other. McAfee is clearly coming down on the side of the "keep it secret to be safe" camp. Most independent security researchers reject this argument, because industry has a very long track record of totally ignoring security issues until they are made public. Most researchers also practice a policy of advance notification -- give the vendor reasonable notice before publishing the findings to the world, and attempt to work with them so that a fix is available when the notice is published. However, the threat of publication is sometimes the only thing that motivates software companies to fix security problems. Blaming open source, web sites, and, by implication, information sharing is misguided. The folks who are writing the real malware could (and do) use secret members-only web sites to share ideas and code and whatnot in their pursuit of malfeasance. It's better for the community of researchers to have open sites sharing these ideas. The fact is that you don't even need a web site. There are books that do a pretty good job of explaining how rootkits work and how to build them. Are libraries now to blame? Is the publishing division of McAfee's competitor, Symantec Press, to blame (The Art of Computer Virus Research and Defense)? No.
Information sharing is not to blame. Symantec is not to blame (at least not in this respect). Books are not to blame. The internet isn't to blame, web sites are not to blame, security researchers are not to blame. I wonder if instead we can attribute the continuing and expensive thorn of malware to humanity's ongoing struggle to ride a rapid wave of expanding technology while simultaneously attempting to preserve civil liberties and limit the destruction and damage that can be caused by Evil Doers(TM)? Frankly, we're not very good at it, and we will soon face analogous problems in the much more serious realm of biological engineering. Recall that open source specifications for the 1918 influenza have already been published. We need to get better at this stuff pretty quickly, because the clock is ticking. The information genie can't be put back in the bottle; we had better figure out how to tame it. * NOTE: Evil Doers is a Trademark of The Bush Administration.
The New York Times today features an interesting article, "A Sinister Web Entraps Victims of Cyberstalking" [annoying but free registration probably required]. The article does a nice job of describing the problem, but it doesn't say much about how to protect yourself. Unfortunately, it's pretty difficult.
Wednesday, March 15, 2006
You should never throw out any piece of paper with any contact information on it. Any such papers should be shredded, rather than tossed out. In particular, never throw out credit card statements, always shred them, preferably in a cross-cut shredder. If you are not taking the risk of identity theft seriously, this article on "The Torn Up Credit Card Application" should strike an appropriate amount of fear, just enough to convince you to buy a small home-office shredder.
Technorati Tags: identity theft
The breeding ground for the computer virus will be expanding continually and rapidly over the next decade as appliances, automobiles, and all manner of other things become equipped with wireless networking and miniature computers. Cell phone and similar networks may enable worms to leap between devices over long distances and other networks over short distances. Researchers have recently demonstrated that RFID tags may be vulnerable next. Articles on the topic:
- RFID worm created in the lab [NewScientist.com]
- Viruses leap to smart radio tags [BBC.co.uk]
- RFID tags could carry computer viruses [SecurityFocus.com]
The details for the curious: RFID Viruses and Worms
The AntiVirus paradigm that we [the IT community and industry] have foisted upon PC users is already breaking down under the strain of too many virus variants and too many non-technical PC users. The paradigm probably won't work at all for cell phones, and it is completely broken for the typical RFID device, which lacks an end-user administration interface of any kind. The AntiVirus paradigm was invented for Enterprise users who were expected to be paid to devote time to protecting a valuable asset, and for technical hobbyist users who loved tweaking their PCs. It's not designed for users who want to use their PC as a simple household tool, like a television or a refrigerator. The stuff people want to do with RFID technologies is truly amazing. It starts with automating inventory in retail stores, but goes all the way down to things like "washable RFID tags equipped with sensors on all my clothes will allow me to check to see if my favorite suit is at the cleaners, at home in the laundry bag, or at home ready to wear" and "RFID tags will enable my home pantry to let me check from work to see if I have all the ingredients needed to bake a birthday cake, or if I need to stop at the store on my way home".
If this stuff is going to work, we will need to be careful that we don't turn the average home into the administrative nightmare that is the average enterprise network. RFID will flop if consumers have to hire an IT staff to maintain IDS and AntiVirus systems for their pantry, wardrobe, stereo, library, and toolshed.
Monday, March 13, 2006
False positives are the bane of AntiVirus and IDS/IPS systems. Hundreds or even thousands of new threats are released each week. Each must be discovered, submitted to vendors, and analyzed; definitions, signature files, or heuristic algorithms must be tweaked, tested, released to customers, and finally deployed to customer systems. All of this must be done in as short a time as possible, since the threats often spread in minutes and hours. AntiVirus signatures are often available within two days of a threat's first appearance on the network. Polymorphic techniques, even simple ones like automatically generating dozens of variants at the threat's compile time, are becoming more common, making it more difficult for AntiVirus vendors to keep up with the expanding threat pool every year. Today we learned that an error in a signature file caused the McAfee AntiVirus system to delete good files from production systems. This unfortunate accident affected at least a hundred of their customers and probably thousands of PC systems. The final tally of affected systems probably won't be announced. (A similar problem recently caused Microsoft AntiSpyware to zap Symantec AntiVirus from systems.) This incident is receiving more press attention than such incidents usually do. The real wonder is that things like this don't happen more often. McAfee update exterminates Excel
Such problems with security software are called false positives, and they happen occasionally. McAfee typically has to do an emergency release of a virus definition file once every three months because of a false positive issue, Telafici said. "This is our once for the quarter I think," he said.

Similar rates of false positives are probably seen from other vendors, but this might be the first time that an AntiVirus vendor has publicly disclosed information about its false positive rate. Not every customer is affected by every false positive. Many affect 3rd-party applications which were previously unknown to the AntiVirus vendor. In cases like these, a DLL from a valid production software system accidentally matches a signature file developed by the AntiVirus vendor, who doesn't have the system to test against. Tracking down these problems sometimes includes a finger-pointing exercise between the AntiVirus vendor and the 3rd-party application vendor -- the AntiVirus companies sometimes uncover viruses in shipping code, too, and it may be difficult to tell where the problem lies at first. McAfee update exterminates Excel
However, this time around it was a particularly big goof, because the company faulted Excel, Telafici admitted. "Usually, it is either custom applications or applications that did not exist at the time we wrote the signature file," he said.

That bit is particularly interesting. The implication is that after the initial creation and testing, a given signature may not be tested as thoroughly or as often down the line. Several months later, an update to your application software might cause a signature file to break, causing catastrophic damage. In retrospect it makes some sense, as full-on testing of this stuff takes time and resources, and the pressure to test and ship the newest definition or signature files is quite high. However, this revelation probably indicates that the ongoing risks from signature or heuristic approaches may be somewhat higher than previously thought. With the number of threats multiplying every year, and with the number of signature files which require testing increasing concomitantly, older signatures which have been "thoroughly tested and validated in the customer environment" may no longer be assumed to be benign beyond doubt. The current McAfee false positive incident is discussed here: McAfee Anti-Virus Causes Widespread File Damage [Slashdot] Excel = Virus ... At Least to McAfee [RealTechNews]
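The mechanics of a false positive are easy to demonstrate. A signature is, at its core, a byte pattern, and a pattern short enough to scan quickly can collide with perfectly benign files. A toy scanner makes the point (the signature name and file contents below are invented for illustration):

```python
def scan(data, signatures):
    """Naive signature scan: report every signature whose byte
    pattern appears anywhere in the data. Real engines add
    offsets, wildcards, and emulation, but substring matching
    is still the core idea."""
    return [name for name, pattern in signatures.items() if pattern in data]

# A pattern lifted from a (hypothetical) malicious macro sample --
# short enough to scan fast, short enough to collide with benign code.
sigs = {"BadMacro.gen": b"CreateObject"}

malicious = b'Sub AutoOpen(): CreateObject("WScript.Shell").Run "x": End Sub'
benign    = b'Sub Helper(): Set xl = CreateObject("Excel.Application"): End Sub'
```

Both files trip the signature. An untested 3rd-party application, or an application update shipped after the signature was written and tested, plays the role of `benign` here.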
Saturday, March 11, 2006
I noticed this tidbit from a Gartner researcher quoted in a story about the recently disclosed PIN theft.
PIN Scandal "Worst Hack Ever;" Citibank Only The Start

"That's the irony, the PIN was supposed to make debit cards secure," Litan said. "Up until this breach, everyone thought ATMs and PINs could never be compromised." - Avivah Litan, Gartner

I wish the reporter or Gartner researchers had checked with me or someone else who has direct experience auditing software systems. I've been warning my clients for years about the security exposure from data retention in e-commerce and credit card transaction systems, and I know a number of other security professionals who've been doing the same. In fact, given the number of thefts of credit card data from 3rd-party web sites in recent years, it's unlikely that this is the first PIN theft to have occurred, counter to the implication in this story. It might be the first that has occurred since legislation obligated disclosure of such thefts, but even that seems unlikely. There are literally thousands if not tens of thousands of different bits of software involved in credit card transaction processing: custom made, derived from free code available on the internet, purchased from third parties, or custom made by third parties. Most of those systems originate in the web development world, where robust software development and testing practices are not fully realized and security inspection or auditing is an afterthought, if it's a thought at all. PINs and the special security codes printed on credit cards are intended by the vendors to be "transient" data, used but not stored at the point of presence -- e.g. the cash register or web site where the transaction is initiated. However, it's impossible to audit all of the custom-made systems in the world. In a recent article here discussing the Verified by Visa program, I speculated that proxy agents could be placed in front of an e-commerce engine on a compromised web server to defeat the Verified by Visa security measures.
This technique could be used to harvest PINs and security codes even more transparently. Without conducting a survey, I can tell you from my experience that most organizations with e-commerce shopping carts on their web sites are not prepared to detect such an intrusion. Shopping cart systems are only the tip of the iceberg. I've seen dramatic, gaping security problems in systems that existed for years and were easy to discover by accident through ordinary use of the system. One such system provided full identity information for all accounts within the system, including bank account information, phone numbers, addresses, dates of birth, and other information -- matched to Social Security Number. The system's entire database could be enumerated simply by poking randomly generated Social Security Numbers into a field, one at a time; by poking them all in, one could fetch the entire database. This could easily be accomplished by a "script kiddie" in a very short time. The system was not instrumented with any logging which would reveal that this type of enumeration had been performed. The system's database included many members of Congress. (Surprisingly, all of the information in this paragraph doesn't narrow down the field of applications enough to give away what the application was, nor the agency which ran it.) Oftentimes when such issues are encountered it is a struggle to get the owners of the system to understand the exposure and act upon it. I spent two days trying to convince the Federal agency that owned this system to act. I was only able to get the hole closed by identifying the private contractor who implemented the system and calling their CEO, who immediately understood the importance of the issue. If you find holes like these, which are relatively easy to discover and exist in systems for extended periods of time, you must assume that they have been discovered before.
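The missing control in that system is cheap to build. Even crude per-client rate accounting would have both logged the enumeration and throttled it. A sketch (the class name and thresholds are my own, not drawn from any particular product):

```python
import time
from collections import defaultdict, deque

class LookupMonitor:
    """Track record lookups per client and flag any client that
    exceeds `limit` lookups inside a sliding `window` of seconds.
    Sequential SSN probing trips this almost immediately, and the
    retained timestamps double as an audit trail."""

    def __init__(self, limit=20, window=60.0):
        self.limit = limit
        self.window = window
        self.history = defaultdict(deque)

    def allow(self, client, now=None):
        """Record one lookup; return False when the client should
        be throttled and an alert raised."""
        now = time.monotonic() if now is None else now
        q = self.history[client]
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) <= self.limit
```

No legitimate user pages through thousands of strangers' records in a minute, so the false-positive cost of a rule like this is close to zero, while a script-kiddie enumeration run lights it up on the first pass.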
In some cases you may be legally obligated to notify the persons whose data has been exposed. The complexity of e-commerce and other online software systems which handle sensitive data is high, and the cost of securing and auditing them is very high. An audit performed by a commodity consulting shop may cost tens of thousands of dollars and take a couple of weeks. Even then, the auditors will often be ill equipped to discover many of the weaknesses that exist in these systems. If you hire a specialty security firm which brings highly skilled and experienced security engineers and programmers to the table, the cost will likely be even higher. Contrast that with the money that firms typically spend on these systems. Oftentimes they don't spend much at all. They get on the internet, find a "free" shopping cart, don't audit the code (so they really have no idea how it works internally, or even whether it has already been instrumented with a data harvesting routine), and slap it up on a web server. Even large corporations are guilty of this, as the division with the need may not be given the budget to "do it right". Conventional wisdom says that the West won the Cold War by outspending the Soviet empire, leading to the eventual bankruptcy and collapse of the Soviet system. Similar economic principles govern the security of online software systems storing sensitive data like credit card, debit card, and identity information: the barrier to entry for the attacker is low, and the cost to defend is high. The botnet arms race continues, and this time the stakes are your identity information and your bank account balance.
This whitepaper spoof was written a couple of years ago. I tripped over it by accident and was rewarded with health-boosting laughter. Microsoft Windows: A lower Total Cost of 0wnership
Monday, March 06, 2006
Botnets are the big guns of the identity theft world, ripping millions of identities from hard drives around the world -- not just home users, but web servers and database servers giving up thousands, tens of thousands, or millions of pieces of data at once. However, low-tech methods of data harvesting are still used, and they too appear to be evolving, as increasingly organized, larger-scale efforts are uncovered, paralleling what we see in the internet security world. The canonical example of organized crime driving spyware, worms, and botnets has been shady advertising schemes. However, it's clear that identity theft is also a driver. But what drives the identity theft? Well, money obviously, but apparently drugs are behind some of it, too. The North County Times (San Diego) has an interesting story with quite a few details about one gang of meth users turning to identity theft to pay for their habit. Apparently 14,000 credit card numbers were gathered by the gang of 20 people using a fairly low-tech method: they drove around suburbs looking for mailboxes with raised red flags and extracted bills and other mail. That may seem like a lot of identity for 20 people to harvest by driving around and stealing mail, but they could probably harvest that much in a month or two at most, working in pairs and working only a few hours a day. The wonder is that they managed to do this for more than a couple of days without getting caught. Neighborhood watch must not be watching the neighbors' mailboxes. The basic organization behind turning stolen data into money has been the same for decades, but the scale is larger than it's ever been.
"There is the collector who steals your identity from mailboxes or trash bins," said Alameda police Sgt. Anthony Munoz, who teaches a class about the connection for the California Narcotics Officers Association. "Then there is the converter, who turns your identity into something, and lastly there is the passer, the person who uses the fraudulent identity."From the perspective of an individual, the short term and low cost solution to this problem is prevention -- start by getting a lockable mailbox. Make sure you shred any paper or other media (floppy, zip disk, cdrom, etc.) that has any name and address information. This includes things like bills that you don't think of as sensitive. However, on the scale of the society, this is problematic, partly because people don't always realize when they are throwing away sensitive data -- because they think of each item separately. "Here's a bill, it just has my name and address," for example. Well, it has other things. It's got your account number with the electric company. With enough different little bits of information stole from mailboxes and dug out of the trash, the Mail Box Meth Gang was able to steal identities and use them to fund expensive drug habits. By picking up several different bits of information out of the trash, or inbound mail, it's possible to assemble a more complete picture of the data needed to steal an identity. We discussed this general technique recently in another context --it's known as "the aggregation problem". In order to deter this kind of theft, a substantial majority of people would need to exercise careful practices with their sensitive data -- thereby raising the cost of gathering the raw data. In actual practice, most people don't realize it's that important, and won't go to the time and expense required. Credit card vendors have responded to the growing identity theft problem by trying to make it more difficult to use a credit card number without the card. 
That's what those little three-digit and four-digit numbers that appear on the back of the card are about. Those numbers don't appear on the credit card statement, and are required for some online purchases, thus making it more difficult to use a stolen credit card number. Unfortunately for the victims of identity theft, the classic trade-off between security and convenience hasn't been conquered. Further attempts to improve security of the credit card transaction system are clunky at best, typically problematic, and possibly open up new avenues for large scale identity harvesting at worst.
Friday, March 03, 2006
This phishing scam, targeted at customers of Chase bank, is simple and direct. Fear it. Well, at least be aware of the general tendency of phishing scams to exploit basic human trust relationships with increasing sophistication. They get better and better every day, and they are building up quite a library of clever tricks.
- It looks like it came from your bank.
- The text is simple, direct, clear, and free from glaring grammatical errors.
- It appears to be a simple request. The apparent source of the email is obscured.
- It appears to be from: Chase Online Services Team
- It exploits the HTML processing ability of most modern email clients to obscure the actual target of the "click here" link (which I've removed, but which was obviously something other than chase.com.)
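The HTML trick in that last point is easy to demonstrate mechanically. The sketch below uses Python's standard html.parser to pull out each link's href and visible text and flag pairs that disagree. The hostname "evil.example" is a fabricated stand-in for the actual phishing target, which was removed from the original message.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Fabricated example: the visible link text claims chase.com, while the
# href attribute points at "evil.example", a stand-in for the real
# phishing host (removed from the original email).
PHISH_HTML = '<a href="http://evil.example/chase/login">https://www.chase.com</a>'

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current = [dict(attrs).get("href", ""), ""]
            self.links.append(self._current)

    def handle_data(self, data):
        if self._current is not None:
            self._current[1] += data

    def handle_endtag(self, tag):
        if tag == "a":
            self._current = None

def deceptive_links(html):
    """Return (real host, displayed host) pairs where the two disagree."""
    auditor = LinkAuditor()
    auditor.feed(html)
    flagged = []
    for href, text in auditor.links:
        shown_host = urlparse(text.strip()).hostname
        real_host = urlparse(href).hostname
        if shown_host and real_host and shown_host != real_host:
            flagged.append((real_host, shown_host))
    return flagged

for real, shown in deceptive_links(PHISH_HTML):
    print(f"link displays {shown} but actually goes to {real}")
```

A mail client could apply exactly this comparison before rendering; most of the era's clients simply display the text and trust the href.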
Dear Chase Member: We have processed your request to change your e-mail address, based upon the information you supplied. Beginning immediately, we will send all future e-mail messages, excluding Alerts, to you at email@example.com. Any e-mail addresses that receive Alerts about your accounts will need to be updated separately. If you did not request this e-mail address change or have any questions, please cancel this action and reactivate your account by clicking here. Please do not respond to this confirmation e-mail. Sincerely, Online Services Team

Phishing scammers don't use their own systems to harvest data for identity theft and credit card fraud. They use systems that belong to other people, which they have taken over without the knowledge of the owners. Often they take over large numbers of systems with worms or botnets. Intrinsic Security is working with Internet Service Providers to help stop botnets. Help us spread the word by linking to our site from your blog. Link to Intrinsic Security - join the antibotnet campaign.
Friday, February 24, 2006
Microsoft's regularly scheduled (once a month) security updates have received a great deal of criticism in the security community. The practice delays (in theory up to a month) the rollout of vital Windows patches and leaves customers exposed to worms, viruses, adware, spyware and outright hacking for more calendar days than the previous ad-hoc rollout of patches (i.e., as soon as they were ready). In today's world, where exploit code and worms show up within hours or days, these delays can be devastating. The monthly patch strategy has probably helped Microsoft with one key metric -- reducing the number of headlines per month about the latest vulnerability. In the months before Microsoft changed from ad-hoc security patch releases to a monthly schedule, negative security headlines were appearing almost daily. These headlines had begun percolating into the public unconscious, contributing generally to a vague but increasingly common perception that Windows is "insecure". Even though most people don't know what that means, if you stop random folk on the street and ask about Windows, a significant percentage will tell you Windows is insecure. (RocketBoom did this recently when they asked, Internet Explorer or Firefox?) That torrent of negative headlines was perceived in Redmond as creating potential switchers (to Macintosh or to Linux) not among the unwashed masses, but where it counts -- the corporations on whom Microsoft has had a mind lock for more than a full decade now. The rapid growth of this tumor on the Achilles heel of Windows may have contributed to the change in release policy, but that doesn't mean the change itself is entirely bad. By introducing some regularity into the patching lifecycle of Windows, Microsoft may have given IT shops everywhere the lever they needed to convince management to dedicate more resources to patching Windows, and to recognize the true (substantial) expense involved.
Regular monthly updates have also forced the IT community -- vendors and customers alike -- to get better at patching Windows systems. Prior to this regular and predictable delivery, most companies were still in serious denial about the need to rapidly deploy patches. They were typically going through painful gyrations to determine whether every single patch applied to them, whether they could skip deploying some, and so on, in a futile effort to contain workload. They tended to lump the patches into deliveries a few times each year. Now they've been forced by the regular delivery of dozens of patches at once, each month, to come to grips with a more or less non-stop patch deployment process. It can still take many days or weeks to deploy patches in a typical medium-sized enterprise (say, one with more than 10,000 nodes), but that's down significantly from many months. Other vendors have been delivering patches in this regularly scheduled way, too, notably Oracle, which has also been criticized by customers for untimely patch delivery (and poor documentation of patches). Despite this little ray of sunshine, it's been looking like the monthly patch cycle won't remain viable. Vendors will soon see their customers demanding weekly patch cycles, at least. What will drive this? The Patch Gap is too large in the era of the botnet and the zero day worm, driven by organized crime and state sponsored espionage. The problem with regular patch cycles is that the vendors and customers are both hoping that certain vulnerabilities have not yet been discovered by the cracker underground. Given the large number of vulnerabilities which are discovered each month, and the long period of time in which those vulnerabilities existed in widely deployed software (often years), it's almost certain that this hope is in vain. Crackers certainly know about some of these defects, and know how to exploit them, sometimes years before the script kiddies find them.
Evidence is mounting that some cracker groups are well funded, probably state or corporation sponsored. Most recently, a few stories have appeared which suggest that several well organized attacks have been traced back to China, where state sponsorship is suspected, and industrial and governmental espionage is the motivation. Organized crime and state sponsored internet espionage rings can and do use the same techniques to explore production software for defects in a laboratory environment. The bad guys have the same debuggers and virtual machines and compilers and sniffers and Nessus plugins and documentation that are available to security researchers. The main difference is that the good guys often do this kind of research on a shoestring budget in their spare time, whereas the bad guys are increasingly making a full time job of it. The continual flood of high profile, high damage, automated exploitation of widely known and even long-patched defects which the script kiddies generate strains the security response infrastructure (trained admin and security staff, developers, testers, etc.). The enormous workload from the thousands of new viruses, worms, trojans, adware, spyware and keystroke loggers, combined with the endless stream of botnet attacks, makes it more difficult for the industry to assess the real exposure to low-profile cracking created by these industry practices of delayed (regularly scheduled) patch delivery. Microsoft, Oracle, and other vendors will be under increasing pressure to shorten their patch cycles as the organized nature of botnet attacks becomes more apparent to their customers.
Intrinsic Security provides uniquely effective AntiWorm technology which detects zero-day worms and brings botnets to a crawl.
Thursday, February 23, 2006
What can the botmaster 0x80's impending misfortune teach us about information security? Quite a bit. What the botmaster and the reporter didn't count on is a security risk known as "the aggregation problem" or "point and click aggregation". That's not surprising, as even practicing security professionals are often unaware of this problem, or are vaguely aware of the concept but not the name. Information security dictionaries online generally lack the terms, and don't mention them in their discussions of "disclosure" either. The aggregation problem arises when a series of small facts, any one of which presents a minimal security risk if disclosed alone, combine to present a greater security risk when disclosed together. When aggregated, information from publicly available sources may accidentally disclose information that was intended to remain confidential. As it happens, an IETF glossary contains a definition of the basic term.
RFC 2828: Internet Security Glossary

aggregation (I) A circumstance in which a collection of information items is required to be classified at a higher security level than any of the individual items that comprise it.

The concept was first defined in the area of classification of national security documents, an area that provides fascinating and relevant illustrative examples. (A friend has told me that there was a story about the guy that invented the concept on NPR or Air America recently. If any of you dear readers have a link to that story, please let me know in the comments.) For several decades following the end of World War II, it was believed that the knowledge required to build an atomic bomb should be protected. (This concept might seem dated now, but it was almost certainly a valuable approach for the first few decades.) More than once during the past half century, curious students have apparently found their research classified, when they demonstrated that the basic plan for building and assembling an atomic bomb could be derived by non-experts from publicly available information. One such story, The Nth Country Project, is detailed at the Guardian. This was an official project wherein the U.S. Army learned that, indeed, a couple of competent physicists with no knowledge of atomic bombs could figure out how to build one. This was decades before the internet, and it took two guys 30 months. The bar now is considerably lower. I have a recollection that a student created a plan for making a bomb within the last several years, using information gathered from the internet. We can't put the Djinni back into the bottle. Our hacker's [0x80's] problem with aggregation concerns disclosure of confidential information -- his identity -- that both he and the reporter desired to keep secret. Unfortunately, a series of small disclosures accumulated into an aggregation problem. Specifically, a modern, Slashdot and Google-fueled point-and-click aggregation problem.
With direct implications for his daily freedom, 0x80's troubles began when he decided to allow himself to be interviewed by a reporter from The Washington Post. Brian Krebs constructed an excellent story, Invasion of the Computer Snatchers, profiling what appears to be a typical young ne'er-do-well -- albeit one making from $6,000.00 to $10,000.00 each month by unleashing worms which spread throughout the internet, cracking into your computer to install adware and spyware. A shady network of advertising schemes (see: The Hidden Money Trail [PC World]) funnels the money to botmasters like 0x80 when people click through the pop-up ads which appear on their computers. (Yes, some people really do buy vitamins, Viagra and whatnot off the internet from pop-up ads delivered to their PCs by botnets. Go figure.) Within hours a story appeared on Slashdot, a discussion forum affectionately known as "News for Nerds". The editors linked to the Washington Post story, and opened a discussion titled Interview with a Botmaster. Within minutes, discussion participants noticed that apparently minor tidbits of information could be aggregated to paint a strikingly clear portrait of the hacker. In the discussion, these facts were assembled:
- male youth
- 21 years old
- lives in small town in the midwest
- slightly long hair that covers his eyebrows
- lives with parents
- parents' house is a brick rambler
- has a small dog with matted fur
- speaks with accent which is mixture of southern drawl with midwestern nasality
- tall, thin build
- dropped out of high school
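To see why those tidbits are dangerous in combination, consider a toy sketch of the aggregation problem. Each fact alone matches many people; intersecting the facts shrinks the candidate pool dramatically. The population records below are entirely invented for illustration.

```python
# Invented population records; each dict is one hypothetical person.
population = [
    {"name": "A", "age": 21, "region": "midwest", "town": "small", "with_parents": True},
    {"name": "B", "age": 21, "region": "midwest", "town": "large", "with_parents": True},
    {"name": "C", "age": 34, "region": "midwest", "town": "small", "with_parents": False},
    {"name": "D", "age": 21, "region": "south",   "town": "small", "with_parents": True},
]

# Each fact is harmless alone; the intersection is what identifies.
facts = [
    ("age 21",            lambda p: p["age"] == 21),
    ("midwest",           lambda p: p["region"] == "midwest"),
    ("small town",        lambda p: p["town"] == "small"),
    ("lives with parents", lambda p: p["with_parents"]),
]

candidates = population
for label, matches in facts:
    candidates = [p for p in candidates if matches(p)]
    print(f"after '{label}': {len(candidates)} candidate(s) remain")
```

Scale the toy population up to a real one and swap in the facts from the list above (age, accent, house style, dog, hair), and the Slashdot crowd's point-and-click narrowing becomes unsurprising.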
Tuesday, February 14, 2006
Announced in the batch of new Valentine's Day vulnerabilities from Microsoft today, Microsoft Security Bulletin MS06-007 is an exposure to a remote Denial of Service attack. The bulletin states:
A denial of service vulnerability exists that could allow an attacker to send a specially crafted IGMP packet to an affected system. An attacker could cause the affected system to stop responding.

This is rated "important" rather than "critical" by Microsoft. (See the Microsoft Security Response Center Security Bulletin Severity Rating System for a description of their rating system and the criteria for each category.) As a consequence of a couple of "critical" defects in this monthly batch, this particular defect doesn't seem to be getting the attention it probably deserves. These types of DoS vulnerabilities are sometimes used by botnets and worms, which are frequently under the control of an attacker once they have penetrated a network and spread inside it. If used by a botnet, this DoS could result in the shutdown of a large number of systems, some critical, in a very short amount of time. Brian Krebs of the Washington Post discusses two of the other vulnerabilities announced today which are rated "critical" by Microsoft in his blog entry today, Microsoft Issues 7 Patches, at Security Fix.
Recent phishing scams have been noted to employ an SSL certificate as part of the scam web site. In combination with one of the many browser defects (some patchable but unpatched, others unpatchable), these scam sites now give the end user the full appearance of engaging in a secure transaction with their bank. As reported by Brian Krebs today (see: The New Face of Phishing), and as predicted here a couple of weeks ago (see: Verified by Visa (Veriphied Phishing?)), the latest such phishing scams have begun to exploit the Verified by Visa program by using the name recognition of the campaign as part of their social engineering. Mr. Krebs mentions a few key facts about this latest scam in his article.
- the scam targets a small bank
- the scam exploits the brand awareness campaign surrounding the "Verified by Visa" program
- the scam employs the use of an SSL certificate which appears to have been obtained specifically to set up the scam web site
niche markets as targets of opportunity

The pattern of targeting smaller niche markets has been used to good effect in the last couple of years by worms and botnets, and it's not surprising to see phishing scams follow suit. Phishers undoubtedly assume that people who bank with larger banks are growing weary of the endless flood of phishing spam they receive. Their potential victims are perhaps becoming wary. By targeting smaller banks, they probably hope to find a fresh pool of victims who are not as sophisticated because they haven't yet been educated by the school of hard knocks. Expect more phishing scams targeting the customers of small banks in the future. Small and even relatively large regional banks often rely upon 3rd party vendors to provide their online banking services. Phishing scammers appear to have a better understanding of web technology and internet security than these companies do, and the anonymous nature of the internet, particularly email, will serve as an avenue leading to more, and increasingly sophisticated and effective, phishing scams.
exploiting the Verified by Visa brand

Visa has been running commercials. People have heard the phrase "Verified by Visa" many times by now. When an email shows up, they probably half expect it. When that email looks just like the web site of the bank, and when the holes in their web browser make it appear as though they clicked on a link and it took them to their bank's web site, they are all primed and ready to type in their vital statistics: Social Security Number, PIN, account name and password, credit card number, and even the magic security number on the back of the card.
use of SSL certificates on the phishing scam web site

There have been previous incidents where compromised web servers were exploited to set up phishing sites with valid SSL. The novelty of this latest phishing scam is that it appears to use an SSL certificate that was obtained specifically for phishing, in combination with the small bank target and the social engineering of the Verified by Visa program. The role of the SSL certificate in phishing scams warrants further consideration, because there is very little to stop a phisher from obtaining such a certificate. They can already set up fake web servers and email accounts that can't be traced back to a person. The money they steal using stolen identities goes somewhere, too, and that doesn't seem to be easy to trace, either, as so few are caught and credit card fraud alone remains a multiple billion dollar per year industry. Many people, to the extent that they are even aware of the issue, believe that an SSL certificate provides the user with an assurance that they are talking to their bank on the other end of the internet. It most certainly does no such thing, at least not in any meaningful way that you should bet your Social Security Number on. SSL certificates used by most banks and other online shopping sites are issued by a set of companies known collectively as the Certificate Authorities. These companies have managed to place their own "root key" into the major web browsers, so that keys issued by them (for a fee) are "recognized" and "trusted" by the browser. They make only a small pretense, for marketing purposes, of performing due diligence on certificate purchasers. Most of them, even the big ones, really don't do anything meaningful at all in the way of due diligence. The public perception that they do is largely due to the marketing efforts of these companies as they compete with each other by building "trusted brands".
Scam is a bit of a harsh word, but there are many independent security professionals who believe that the whole certificate marketplace is a sham, if not a scam. As a retail vendor, you must buy an SSL certificate from a Certificate Authority on the pretense that your customers will "be secure" when they are shopping at your site. In reality they only "feel secure". Most of the internet shopping public doesn't really understand SSL. It provides only an encrypted tunnel that prevents 3rd parties from listening in. It doesn't really tell you much that is useful about the party you are connected with. Should you trust them? Even if it *is* your bank, should you type your Social Security Number into their computer? How good is your bank at protecting your data? What about the retail shopping sites you visit on the internet? The collective record is not very good: tens of millions of credit card numbers were stolen last year, with matching names and addresses. Very few people outside security professionals and systems administrators understand that anyone can generate a "self signed" certificate for free, for example. The difference between a certificate you generate yourself and one generated by a root Certificate Authority is that the CAs have a root cert in the major web browsers, which prevents a user warning from popping up when you connect to a site over SSL. This basically forces people to pay a small fee to get a certificate from a CA, rather than generating one themselves, to prevent user confusion and annoyance -- not to provide "security". It's probably not entirely the fault of the Certificate Authorities that people expect more from an SSL certificate than even the wildest of their marketing claims promise. There doesn't seem to be any legal requirement for them to validate the identity of the recipient of an SSL certificate.
Performing even modest due diligence on a person is fairly expensive, although the cost is now down to the neighborhood of about $9 to $12 per person for a limited records search (volume discount prices, depending on options, with additional fees for additional counties and types of records checked). That wouldn't include the costs incurred by the client company in buying the search, evaluating it, and making a decision to issue or refuse to issue a certificate based on the results. Of course, those checks are performed against a person's name and Social Security Number. Suppose you're a Certificate Authority and you just spent, say, $15.00 validating that public records do indeed show a John Smith living at a certain address. How do you know that the person buying your certificate, who claims to be John Smith, really is that person? That's an additional set of expenses, and it's probably non-trivial, given that we are dealing with potential customers who have a certain expertise in passing off stolen identities. In a competitive market, optional costs are quickly cut from the production line. Sometimes there is even a race to the bottom, where services and quality are cut repeatedly as companies struggle to become the low cost producer in a market. If the Certificate Authorities only make a pretense of validating the identity of a customer seeking an SSL certificate, it's probably because the only part of the entire transaction in which they have a vested financial interest is assuring the charge to the credit card that was used to pay for the certificate. In this case, that card was almost certainly stolen, and the scammer certainly had all the identity information required to make what appeared to be a valid charge on the card. Also, the records-search business is a relatively small industry, in terms of the number of providers who can handle bulk requests efficiently and nation-wide, and the Certificate Authorities do not appear to be customers of it.
A few perhaps might be; I haven't done an exhaustive search, but I am under the impression that none of the Certificate Authorities attempt to validate the identity of an SSL customer through public records, except by the most primitive means. They sometimes will call a provided phone number, but normally rely on non-human methods like getting a response click on a link sent to the email address provided by the customer. These mechanisms provide some security for the customer purchasing the SSL certificate and some for the vendor selling the certificate, during the process of purchase. They provide little or no security for the customers or victims who connect to a server using the certificate. Even if each and every certificate customer were validated in some slightly more rigorous way, those validated customers could turn out to be shell companies that exist only for the purpose of setting up the scam. Furthermore, the paradigm is fundamentally flawed. Placing the root certificate in the browser for the "convenience" of the end user means that the end user is not confronted with the expectation that they need to be involved in validating each connection they make. Sure, you were connected with a certificate that was valid when it was issued. Has it since been stolen? Are you even looking at a web site that uses the certificate that was issued to your bank? Sure, it looks like your bank, but did you check the certificate to see what it said? It might be a valid user -- a different, and evil, scamming valid user. It's worth noting that there have been other phishing sites which had valid SSL certificates before. They were set up on compromised web servers using certificates owned by others. However, it seems the phishers mostly don't concern themselves too much with SSL, because their victims don't always remember to check for the little tiny lock symbol or look for the "s" in "https://".
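"Did you check the certificate to see what it said?" is a check that software can at least assist with. The sketch below works on the dictionary shape that Python's ssl.SSLSocket.getpeercert() returns for a verified peer; the certificate contents and hostnames here are fabricated for illustration, not taken from any real scam site.

```python
# Fabricated data in the shape returned by ssl.SSLSocket.getpeercert():
# "subject" is a tuple of RDNs, each a tuple of (key, value) pairs.
peercert = {
    "subject": ((("countryName", "US"),),
                (("organizationName", "Example Scam Hosting LLC"),),
                (("commonName", "secure-verify.example.net"),)),
    "issuer": ((("organizationName", "Some Certificate Authority"),),),
}

def subject_common_name(cert):
    """Extract the commonName from a getpeercert()-style dict, or None."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

expected_host = "www.mybank.example"   # the bank the user thinks they reached
cn = subject_common_name(peercert)
if cn != expected_host:
    print(f"warning: certificate was issued to {cn!r}, not {expected_host!r}")
```

The point of the sketch is that the certificate names who it was issued to; a valid certificate for the wrong party should be exactly as alarming as no certificate at all, but browsers of this era surface no such warning.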
I'd guess that the attempt to set up phishing sites with valid SSL certificates is all about an increase in marginal profit for the phishers -- the more legitimate their phishing site appears, the more data they harvest. Finally, it's possible to buy the ability to generate a rooted certificate -- one that is detected as valid and "trusted" by most web browsers. Who knows how many of those certificate granting authorities have been sold over the years? How many of them were "lost" in corporate mergers or re-organizations and have since shown up on the black market? In any case, it is an open secret that the Certificate Authorities actually do very little to validate the legitimacy of their certificate customers, and the public has a mistaken and dangerous impression that they do.
Friday, February 10, 2006
Should cookies that track your web surfing be considered spyware? What Would Dilbert Say? (WWDS). To the many millions of people trying desperately to keep their home Windows PC from collapsing under the load of adware, spyware, bots, worms and viruses, and looking on the internet for help, it might seem like there is a raging (or at least simmering) debate about cookies -- are they spyware or not? This debate is fueled mainly by the tension between adware vendors (typically shady, or at least shadowy, new media advertising outfits that match ads to web surfing habits) and anti-spyware vendors. The former need cookies to provide value added advertising, while the latter want to make the malware situation seem as bad as possible by periodically releasing reports about how much worse it's getting. Even if cookies are discounted entirely, the malware situation is indeed getting worse every year, and is very bad here in 2006. There really shouldn't be much debate about this, and there doesn't really seem to be much debate among serious and independent security professionals. Tracking cookies may not be executables, but it's reasonable to consider many of them to be spyware. A cookie can be considered spyware any time it's part of a larger adware system which may identify a particular user and their web surfing history, or any time it reports information back to a web server that the user didn't specifically authorize it to disclose. This would certainly include disclosure to 3rd party web sites, which is seldom done with the web surfer's knowledge or permission. (I'm probably casting a bit of a wider net here than some folk would.) This argument is also a bit of a slippery slope. It's only a quick slide down that slope to see Dilbert's perspective. Dilbert would say that all cookies should be considered "spyware" until proven innocent.
Given The Dilbert Principle, "idiots" (that's everyone at one time or another, including you, me, and Scott Adams, author of The Dilbert Principle) will ensure that:
- information which shouldn't be stored in cookies will continue to be stored in cookies, and
- browser defects from time to time will continue to allow cookies to be read by 3rd parties.
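The working definition above can even be mechanized. This sketch, using only Python's standard library, flags any Set-Cookie header whose Domain attribute doesn't match the site the user is actually visiting -- the classic third-party tracking pattern. All header values and hostnames are fabricated examples.

```python
from http.cookies import SimpleCookie
from urllib.parse import urlparse

# Fabricated scenario: the user visits a news site, and the page sets
# one first-party cookie and one cookie scoped to an ad network's domain.
page_url = "http://www.example-news.com/article"
set_cookie_headers = [
    "session=abc123; Domain=www.example-news.com; Path=/",
    "trackid=u98765; Domain=.ads-example-network.com; Path=/",
]

page_host = urlparse(page_url).hostname

def is_third_party(header, host):
    """True if any cookie in the header is scoped outside the visited host."""
    cookie = SimpleCookie()
    cookie.load(header)
    for morsel in cookie.values():
        domain = morsel["domain"].lstrip(".")
        if domain and not host.endswith(domain):
            return True
    return False

for header in set_cookie_headers:
    if is_third_party(header, page_host):
        print("third-party tracking cookie:", header.split(";")[0])
```

Real anti-spyware tools of the period use essentially this domain comparison (plus blocklists of known ad networks) when they count "tracking cookies" in their scan results.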
Thursday, February 02, 2006
If you have used a Visa card to make a purchase online lately, you may have encountered a relatively new program, Verified by Visa. I've encountered it twice. The system is an interesting attempt by Visa to reduce online fraud and identity theft. It's a noble effort, but the user experience is unsettling, and the security implications are not exactly crystal clear. Here's what happened to me, both times the Verified by Visa system was activated. I was redirected away from the domain at which I was shopping, to a URL which was:
- not the domain where I was shopping,
- not the domain of the bank that issued my card
- not visa.com
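That uneasy feeling is easy to formalize: does the hostname of the redirect target belong to any party you knowingly deal with? The sketch below shows the check; every domain in it is a hypothetical stand-in, since the actual URL from my transaction wasn't preserved.

```python
from urllib.parse import urlparse

# Hypothetical allow-list: the shop, the card-issuing bank, and Visa itself.
trusted_domains = {"shop.example.com", "mybank.example.com", "visa.com"}

def redirect_is_familiar(url, trusted):
    """True if the URL's hostname is a trusted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in trusted)

# Stand-in for the unfamiliar third-party URL the Verified by Visa
# program redirected me to.
redirect_url = "https://cardauth.example-processor.net/verify?id=123"

if not redirect_is_familiar(redirect_url, trusted_domains):
    print("redirected to an unfamiliar domain:", urlparse(redirect_url).hostname)
```

Of course, the problem with Verified by Visa is precisely that the legitimate redirect target is a domain the shopper has never heard of, so even a careful user performing this check by hand can't distinguish the real program from a phishing site.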