RSA Hacked by automated attack
Saturday, March 19, 2011
Wednesday, March 03, 2010
In a simple demonstration, a hapless team discovers the truth. "Your server is vulnerable. It's already been cracked. Oh, and by the way, it's already distributing malware for a botnet."
The attitude of management in many organizations is one of the biggest barriers to improved security on the internet. People simply don't want to believe that their systems are vulnerable. Denial is pervasive, and it affects organizations from the biggest Fortune 500 companies and Federal government agencies down to modestly sized companies, local governments, and non-profit corporations.
The attitude of the unnamed client described at the "Following the White Rabbit" blog (link above) is all too common. I suspect that an underlying cause is that people want to believe several things that worked pretty well from an evolutionary perspective, but don't work very well on the internet. When everybody around you is a band of cave dwellers, consumed entirely with finding food, the marginal difference between the capabilities of "our team" and "the other guys" is pretty modest, or at least easy to assess (e.g. "there's five of us, and ten of them... run away!"). In fact, even considering the industrialized history of the world, we don't have much experience with the kind of scalability that a virtual, software-driven environment provides to an attacker.
Consequently, when faced with a vague potential threat from "the internet", people tend to default to reptilian-brain denial.
"These vulnerabilities and exploits look complicated. It's not very likely that anybody could actually exploit them."
They might look complicated to you, a manager, or even a programmer (depending on your particular skill set, which typically won't include "cracking").
They look like a modest engineering or programming exercise to the people who routinely crack computers for a living. There are toolkits and sample code, and it isn't very difficult to build a test bench and try permutations over and over until the crack works.
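For what it's worth, that "test bench" loop is not exotic. Here is a deliberately toy sketch of the idea; the `target` function is an invented stand-in with a planted flaw, not any real exploit:

```python
# Toy sketch of an exploit-development "test bench": try permutations of
# an input against the system under test until one of them faults.
# `target` is an invented stand-in with a planted length bug.

def target(data: bytes) -> None:
    """Pretend parser: inputs longer than 64 bytes 'crash' it."""
    if len(data) > 64:
        raise RuntimeError("simulated crash")

def fuzz(candidates):
    """Feed candidates to the target; return the first one that faults."""
    for payload in candidates:
        try:
            target(payload)
        except Exception:
            return payload
    return None

# Permutations of a trivial pattern, growing until something breaks.
crash = fuzz(b"A" * n for n in range(1, 129))
print(len(crash))  # -> 65, the shortest input that triggers the fault
```

Swap the pretend parser for a real service and the trivial pattern for a real mutation engine, and you have the workday of the people described above.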
"Our team isn't that stupid. We wouldn't build and deploy a system that can be easily hacked."
Your site doesn't need to be easily crackable, merely crackable. Some exploits require knowledge of assembly language, SQL, C, C++, or some other specific combination of arcane skills, applied to Internet Explorer, Microsoft Windows, Apache, SQL Server, MySQL, and so forth.
Once somebody with the proper combination of skills has developed the exploit, and shared it with the world, your site could be cracked by somebody with little more programming proficiency than a typical user of IRC (Internet Relay Chat), perhaps someone who needed only drag-and-drop proficiency with a mouse.
"Nobody is that interested in hacking us."
You might be boring, yes. You might have no secrets. But you do have something interesting to them: a computer with a full time internet connection running a web server, and people who visit your web site (sometimes they just want to use your site to spread their botnet to your customers). Furthermore, the bad guys don't know that you don't have any secrets, until they've finished perusing your hard drives and databases.
Finally, there's the typical denial offered by individual people, when pondering the vulnerability of their own workstation:
"I don't surf to bad web sites, so I won't get a virus (trojan, rootkit, botnet, worm or other malware) on my computer!"
You don't need to point a web browser at a "bad" web site to be victimized by a drive-by download. The malware that gets onto your computer may also start poking around your company's internal network, and find ways to exploit or infect systems that never "surf to a bad web site", or to any web site at all.
The moral of this story is that we cannot afford to live in a state of denial about the importance of application, network and computer systems security. Enterprises, large and small, need to take the security of their web sites, applications, and internal systems more seriously. The bad guys are kicking your butts. They're stealing your data, and you don't even know it. They're using your systems to spread botnets to your customers.
Tuesday, March 02, 2010
Phishing has matured. The bad guys are now so adept at mimicking the actual emails sent by PayPal that PayPal support apparently cannot tell actual PayPal email apart from the phishing emails.
PayPal mistakes own email for phishing attack [The Register]
PayPal admits to Phishing Users [eset.com]
I've wondered for years why phishing emails were often so terribly lame. The ideal strategy would seem to be to read some actual emails from the intended target, and mimic those as closely as possible. The traditional excuse offered by the security community is that the emails often appear to be generated by people who speak English as a second language, but that doesn't seem like much of a limiting factor, given how easily the translations could be corrected, even anonymously, using fairly simple internet tricks.
The real answer seemed to be that the text content of the email didn't much matter, as people don't read them very carefully. It appears to be from their bank. It's got a link. It says to fix your login. Click!
The competitive pressure, both from education efforts which make the population of victims more sensitive to potential identity theft, and from other Phishers seeking to exploit the same population of potential victims, seems to be forcing the emails to evolve to more closely resemble the target company's web site and actual emails. Witness the inevitable result: technical support can't tell the Phishing email from the actual company-generated email contact with their customer base.
Non-authenticated email is a zombie: un-dead, walking.
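What authentication buys a receiver is a machine-checkable verdict, instead of eyeballing the message text. A minimal sketch of the idea: the Authentication-Results header name is standard (RFC 8601), but the raw message below is invented for illustration.

```python
# Minimal sketch of what email authentication buys a receiver: a
# machine-checkable verdict, instead of eyeballing the message text.
# The Authentication-Results header is standard (RFC 8601); this raw
# message is invented for illustration.
from email import message_from_string

RAW = """\
Authentication-Results: mx.example.com; spf=pass; dkim=fail
From: service@paypal.example
Subject: Please verify your account

Click here to fix your login.
"""

def auth_summary(raw: str) -> dict:
    """Extract method=verdict pairs from Authentication-Results."""
    msg = message_from_string(raw)
    results = msg.get("Authentication-Results", "")
    summary = {}
    for part in results.split(";")[1:]:      # skip the authserv-id
        method, _, verdict = part.strip().partition("=")
        if method:
            summary[method] = verdict.split()[0] if verdict else ""
    return summary

print(auth_summary(RAW))  # -> {'spf': 'pass', 'dkim': 'fail'}
```

A `dkim=fail`, or no such header at all, is exactly the zombie case: the receiver literally cannot tell PayPal from the phisher.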
Monday, August 17, 2009
It was only a matter of time before it became possible to create fake DNA evidence. That time is now.
DNA Evidence Can be Fabricated [New York Times]
Think it's bad when somebody steals your identity, drains your bank account, and spends thousands of dollars on credit cards they opened in your name? This run-of-the-mill identity theft can cost you thousands of dollars and many years to clean up. It pales in comparison to what will happen if biometric data becomes commonly used as proof of identity. Biometric (or "bio-print", like a fingerprint) identity mechanisms such as retina scans and fingerprint scans are already in use, some in common use. DNA scans are likely to become practical several years from now, as the technology to read DNA is evolving rapidly. An entire genome can now be sequenced in a matter of days by a team of three people using equipment costing a few hundred thousand dollars. When it becomes possible to read DNA in more or less real time, people will undoubtedly clamor to use it as an identity mechanism, for bank access, for voting, and who knows what else.
Even before that's possible (or perhaps long before, if you doubt that day is near), databases will be filled with your DNA sequences, because they will be valuable to you and your doctor. Unless we get unexpectedly better at protecting data, those databases will be protected by the same organizations, people, and technologies which today fail to protect your simple text-based identity -- your name, date of birth, social security number, address, and phone number.
With current technology, you can engineer a crime scene. You can make it look like a specific, innocent person committed a homicide, for example. The technology required to do so remains expensive, but it's well within the reach of governments, and the capabilities of research labs.
If you're writing the next Hollywood script for Jason Bourne or James Bond, keep your eye on this stuff. It's moving faster than Hollywood.
Tuesday, June 02, 2009
If you ever doubted that the lock on your door is there only to keep out the kids, doubt no more. This fascinating article profiles one of the world's top lock pickers.
A good friend of mine has been picking locks as a hobby most of his life. This is a skill that can be learned by any bright, patient person.
It's a safe bet there are more people around who know how to pick locks than there are people getting paid to rethink the lock and key.
Monday, May 18, 2009
The first is an absolutely classic Freudian slip:
U.S. offensive cyberwar capabilities have been focused on getting into Chinese government and military computers outfitted with less secure operating systems like those made by Microsoft Corp. (This observation isn't attributed in the article.)
That ought to have you rolling on the floor, laughing, until you realize that these are the very same "less secure operating systems like those made by Microsoft Corp." that bureaucrats at every level of Federal, State, and local government in the U.S. have been "standardizing" on. Then your sphincters pucker.
The point of the article is that the Chinese have developed and deployed their own operating system and "hardened" CPU architecture to run it on, and have been deploying it on Chinese government and military systems, rendering substantial portions of the U.S. strategy for cyber counter-attack irrelevant. Various security "experts" testified before Congress to raise some alarms.
Perhaps it's just poor reporting, but these crack security experts seem to be under the impression that this Kylin thing is mysterious, and don't seem to have noticed that Kylin appears to be a hardened version of FreeBSD (an open source operating system), and that you can apparently download versions of it with a quick google search (see: Some random blogger with links to Kylin iso images.)
Which makes the next bit from this article even more amusing. This statement is attributed to Kevin G. Coleman, but this is the Washington Times, so who knows if poor Mr. Coleman actually said anything this silly:
U.S. operating system software, including Microsoft, used open-source and offshore code that makes it less secure and vulnerable to software "trap doors" that could allow access in wartime, he explained.
Of course, no real security expert would ever mean to imply that Microsoft's security issues were primarily, or even in any meaningful way at all, based on open-source software. Microsoft has used tiny amounts of BSD code in their network stack, but Microsoft's security problems are of their own, proprietary making, and everyone who can spell CISSP or SANS knows that.
The take home lessons:
- do a google search before you try to panic the Congress, and
- if FreeBSD derivatives can be secured well enough that people panic when China deploys them, maybe U.S. government agencies ought to re-think their love affair with the less secure Microsoft systems, with which they have been utterly failing to protect U.S. Government assets, secrets, and infrastructure (according to other testimony reported in this and other articles), and perhaps
- rather than inciting panic, somebody ought to be downloading those ISO images, installing Kylin, and running some automated tools against its network services, looking for buffer overflow exploits.
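That last suggestion can be sketched concretely. Below is a toy version in Python: everything here (host, port, the "service" and its planted bug) is invented for illustration, and a real assessment of Kylin would point an actual fuzzer at its real daemons.

```python
# Toy version of the suggestion above: poke a network service with
# growing inputs and notice when it stops answering. The host, port,
# "service", and its planted bug are all invented for illustration.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999

def flaky_service():
    """Pretend daemon: answers 'OK', but drops oversized requests cold."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        data = conn.recv(4096)
        if len(data) <= 128:
            conn.sendall(b"OK")
        conn.close()  # over 128 bytes: close with no reply ("crash")

threading.Thread(target=flaky_service, daemon=True).start()
time.sleep(0.2)  # let the listener come up

def probe(payload: bytes) -> bool:
    """True if the service answered normally."""
    try:
        with socket.create_connection((HOST, PORT), timeout=2) as c:
            c.sendall(payload)
            return c.recv(16) == b"OK"
    except OSError:
        return False

# Grow the payload until the service misbehaves.
suspicious = next(n for n in range(1, 1025) if not probe(b"A" * n))
print(suspicious)  # -> 129: the smallest input the service mishandles
```

A crash or silent drop at a particular input size is exactly the kind of lead a buffer-overflow hunter follows up on -- which is a more useful way to spend an afternoon than testifying about mysteries one could have downloaded.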