What is becoming very interesting here is the issue of compliance testing, and conformance, and standards.
There are three basic conformance/compliance initiatives being proposed here: one by an industry/end-user consortium backed by ISA; one from the Federal Government, run through Idaho National Laboratory (now why did the government locate its cybersecurity lab so far away from the real puzzle palace, Ft. Meade? The place is crawling with No Such Agency alumni...); and a third being pushed by several private security consulting firms.
The big issue is, of course, that there are no standards to determine compliance with, or check conformance to. Yet. But there is almost a gross of standards in development-- and every one of them is certain to conflict, somehow and somewhere, with all the others.
I've been helping the ISA-led industry/end-user partnership get moving for over a year now, and I believe it has the best legs...but if it doesn't get off top-dead-center really soon, it may be left lying in the dust with tire tracks on its back.
But there is something bothering the heck out of me: nobody has a clear picture of what metrics those standards bodies-- and, in turn, the compliance/conformance bodies-- should be adopting.
There is a huge difference between measuring the number of times a system got penetrated and measuring the ability of the system to defend itself.
The only believable set of metrics I've seen proposed is the one that Eric Byres and Miles McQueen (from INL) have been talking about for many months now. They call it MTTC: Mean Time-to-Compromise of the system.
They've broken the MTTC metric process into seven steps (a rough sketch of the arithmetic appears after the list):
1. Define Zones
2. Define a predator model
3. Define the attack path model
4. Estimate state times (how long the attacker takes to move from state to state in the path model)
5. Build a state-time vs. skill-level matrix (an expert cracker will move through a given system far faster than a script kiddie, for example)
6. Determine path probabilities (some paths are so convoluted, the attacker just won't go there)
7. Calculate MTTC levels
And there is, of course, an 8th step: Lather, Rinse, and Repeat.
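To make the arithmetic concrete, here is a minimal Python sketch of how steps 4 through 7 could fit together. Everything in it-- the state names, the hour estimates, the path probabilities-- is invented for illustration; this is my reading of the recipe above, not Byres' and McQueen's actual model or data.

```python
# Hypothetical sketch of the MTTC arithmetic; all states, times, and
# probabilities below are invented for illustration only.

# Steps 4 & 5: estimated time (hours) an attacker spends in each state,
# broken out by skill level.
state_times = {
    "expert": {"recon": 4, "breach_perimeter": 8,
               "pivot_to_control_lan": 12, "compromise_plc": 6},
    "novice": {"recon": 24, "breach_perimeter": 80,
               "pivot_to_control_lan": 160, "compromise_plc": 60},
}

# Steps 3 & 6: candidate attack paths through the zones, and the estimated
# probability that an attacker actually follows each one (they sum to 1).
attack_paths = [
    (["recon", "breach_perimeter", "compromise_plc"], 0.2),
    (["recon", "breach_perimeter", "pivot_to_control_lan", "compromise_plc"], 0.8),
]

def mttc(skill: str) -> float:
    """Step 7: probability-weighted mean time to compromise for one skill level."""
    times = state_times[skill]
    return sum(prob * sum(times[state] for state in path)
               for path, prob in attack_paths)

for skill in state_times:
    print(f"MTTC for a {skill} attacker: {mttc(skill):.0f} hours")
```

Step 8 is then just re-running the calculation after every change to the defenses and watching whether the number moves in the right direction.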
If this sounds familiar, it should.
It is basically the same sound engineering approach the Process Control Safety community has taken in defining SIFs and SILs, and constructing SISes.
Byres cheerfully admits this. "The ideal metric," he says, "should be easy to comprehend by both experts and management." A SIS engineering analog for security, a SecurityIL, would clearly fall into that category.
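If you want to see what a "SecurityIL" could look like mechanically, here is a tiny sketch that buckets an MTTC estimate into levels the way a SIL bands risk reduction. The thresholds and labels are entirely made up to show the shape of such a scheme; no standard defines them.

```python
# Hypothetical SIL-style banding for security: bucket an MTTC estimate
# (in hours) into a "Security Integrity Level". The thresholds and labels
# are invented for illustration and are not defined by any standard.

SECURITY_IL_BANDS = [
    (24.0,  "SecIL 1"),  # modeled attacker gets in within a day
    (168.0, "SecIL 2"),  # within a week
    (720.0, "SecIL 3"),  # within a month
]

def security_il(mttc_hours: float) -> str:
    """Map an MTTC estimate to an illustrative Security Integrity Level."""
    for upper_bound, label in SECURITY_IL_BANDS:
        if mttc_hours <= upper_bound:
            return label
    return "SecIL 4"  # holds out longer than a month against the modeled attacker

print(security_il(28))   # expert-attacker MTTC from the sketch above -> SecIL 2
print(security_il(292))  # novice-attacker MTTC -> SecIL 3
```

A single banded number like that is exactly the kind of metric Byres is arguing both experts and management can act on.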
Byres appears to be gaining momentum in the user community for his approach. The IT-oriented governmental community is resistant (mainly because it doesn't know what it's talking about), but the swing vote is going to come from the vendor community-- who, at least privately, are agreeing with Byres en masse.