We Need Rules of Engagement for Security Testing
We now regularly see botnets run by what we assume are the good guys, and an undignified scramble to be first to grab headlines for the latest vulnerability, whether or not a fix is available.
So how far should the security community go in determining and reporting flaws, vulnerabilities, and weaknesses?
Botnets make headline news. Ask a cross section of individuals across all generations, and most can name, or at least have heard of, the best-known bots -- Zeus, Conficker, and so on -- without knowing, or even needing to know, what they do behind the scenes. What most will know is that botnets can steal your money, wreck your credit rating, and make off with your identity.
So botnets are bad? Well, not necessarily so. Recent examples show both sides of the exploitation coin, for “good” as well as for “bad,” with intent at the center of the debate.
To a security analyst, a botnet can also be an essential tool for penetration testing and security functional testing: by simulating an intruder, you can extract information from targeted computers. It stands to reason that what the “bad guys” use for malicious intent the “good guys” can use for well-meaning purposes. Bots are, after all, simply software robots, but if used maliciously to control a network for nefarious purposes, all hell can break loose.
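To make the "good bot" idea concrete, here is a minimal sketch of the kind of reconnaissance step a sanctioned test bot might perform -- checking which ports on an in-scope host accept connections. The function name, host, and port list are illustrative assumptions; in a real engagement, targets must come from the agreed rules of engagement, never from an unassociated third party.

```python
import socket

def check_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection.

    Illustrative sketch only: `host` and `ports` are assumed to be
    explicitly authorized by the engagement's rules of engagement.
    """
    open_ports = []
    for port in ports:
        try:
            # create_connection raises OSError on refusal or timeout
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable -- simply skip it
    return open_ports
```

A controlled test would log every call to a function like this, matching the record-keeping discipline discussed below for government-run tests.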
An example happened just last week with the conviction of two hackers found guilty of advertising their wares, codenamed Nettick, on a hacker forum and of subsequently launching a distributed denial-of-service (DDoS) attack on hosting providers The Planet and T35 Hosting to prove its worthiness. For this pair, crime didn’t pay: both face heavy fines and up to five years each in jail. But as we are all too sadly aware, convictions such as these are infrequent, with organized gangs of cybercriminals controlling a variety of botnets and benefiting from their easy pickings.
Coincidentally, “good” or “bad” bot news, depending upon your viewpoint, has made the headlines this week too, in the revelation that security researchers at Goatse made the email addresses of AT&T Inc. (NYSE: T) iPad owners public.
Discussion around responsible disclosure is not new, and although the fundamental difference between “good bots” and “bad bots” is the intended outcome, there is often no way to tell them apart in practice, as one researcher from McAfee Inc. (NYSE: MFE) found out to his great surprise earlier this month. A “regular” piece of malware, in what looked like a typical attack, was traced back to a legitimate vendor, who confirmed the engagement had all been part of a planned penetration test.
“Undercover” bots are regularly used by government agencies. NASA, for example, heavily controls these tests with specific “rules of engagement,” due to the sensitive nature of the testing. As the agency’s documentation states:
“[T]esting is performed in a manner that minimizes impact on operations while maximizing the usefulness of the test results... when security penetration tests are performed against NASA sites (e.g., Centers, facilities, information systems, etc.)....”
Every step of the bot’s path is recorded, from its careful planning through the strict operational guidelines that must be followed during execution.
So we are left with a couple of questions:
(a) Controlled targets within an organization are one thing for vulnerability and pen [penetration] testing, but as the McAfee blogger cited above quite correctly asks, is it legal to launch bots against an unassociated third party without its permission?
(b) Also, is it not time for a recognized international format, such as NASA’s, for “Rules of Engagement” pertaining to Internet security testing and reporting?