The Three Stages of Security Hygiene: Measurement

Posted on 04.26.17 by David Damato

Security hygiene means something different to everyone. Because of this, addressing security hygiene can be a daunting task. In the last of our three-part blog series, Tanium CSO David Damato provides guidelines on what’s worth measuring, and why.

After implementing a leading information security program, most organizations declare "mission accomplished" and move on. As a result, security processes and technical controls degrade or become irrelevant over time, and the organization fails to keep pace with changing risks. The companies that excel at security continuously monitor their programs for weaknesses and constantly improve upon existing controls.

In part one of this series, I explored the evaluation stage of your security hygiene process. In part two, I discussed how aligning with a leading information security program can help balance security investments, satisfy various legal and regulatory requirements, and act as a flexible framework to manage future growth. The next step is to continuously measure your program to ensure it remains effective and relevant.

Security Hygiene: Measurements, Metrics, and Key Performance Indicators (KPIs)

There is much industry debate about the differences among measurements, metrics, and KPIs. In general, these concepts are related, yet distinct. A measurement is simply a data point, such as the number of vulnerabilities or incidents. A metric, which is a type of measurement, goes one step further by indicating progress or performance. For example, a metric could track the total number of completed security policies against a defined goal (e.g., 50 specific policies). A KPI is simply an important metric that aligns with critical business objectives. An example might be the median time it takes your organization to implement critical patches, which translates into a measurable risk.

Regardless of what you call it, good security measurement should always:

  • Provide relevant and meaningful insight into what you’re attempting to measure;
  • Be actionable — if you can’t influence a measure with clear and specific actions, then it may not be worth measuring;
  • Be calculable over time, allowing leadership to track progress; and
  • Be comparable against industry peers, where a competitive advantage is valuable.

Many organizations use measurements that do not meet the above requirements. NIST 800-55 provides the following example measure to determine the effectiveness of a flaw remediation process: "the percentage of enterprise operating system vulnerabilities for which patches have been applied or that have been otherwise mitigated." The challenge with such a measurement is that it is influenced as much by the number of patches a given vendor releases as by an organization's ability to patch vulnerabilities quickly. For this reason, the measurement is difficult to trend over time and is not easily compared against industry peers, which may run different applications with different vulnerability counts.

Cyber Hygiene: Measuring the Effectiveness of Patching

A good approach to measuring the effectiveness of patching (i.e., flaw remediation) is the median time to remediate software vulnerabilities. In other words, "how long does it take our organization to remediate a critical vulnerability?" This approach measures what we actually care about: the reduction of risk. The longer a critical vulnerability remains unpatched, the greater the risk to your organization. A trend is easily identified, showing improvement or deterioration over time, and the measurement can be compared with industry peers or benchmarks.
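As a rough illustration of how this KPI can be derived (the records and field names below are hypothetical examples, not Tanium output), the median time to remediate can be computed from per-vulnerability timestamps:

```python
from datetime import date
from statistics import median

# Hypothetical records: when each critical vulnerability was disclosed
# and when the organization finished remediating it.
vulns = [
    {"id": "CVE-A", "disclosed": date(2017, 1, 3), "remediated": date(2017, 1, 10)},
    {"id": "CVE-B", "disclosed": date(2017, 2, 1), "remediated": date(2017, 2, 22)},
    {"id": "CVE-C", "disclosed": date(2017, 3, 7), "remediated": date(2017, 3, 12)},
]

# Days elapsed between disclosure and remediation for each vulnerability.
days_to_remediate = [(v["remediated"] - v["disclosed"]).days for v in vulns]

# The median is preferred over the mean here: a few long-lived
# stragglers will not distort the typical remediation time.
mttr_days = median(days_to_remediate)
print(mttr_days)  # 7
```

Because the median is robust to outliers, one vulnerability that lingers for months will not swamp the trend line the way it would with a mean.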

Other examples of mature measurements which provide the broadest and best visibility into the effectiveness of your security program include:

  • Mean time from detection to remediation of security events. This is a great way to characterize the performance activities related to detection and incident response.
  • Ratio of false positives to the total number of detection events. This provides insight into how staff are spending their time.
  • Percentage of systems matching defined baseline configuration standards. This provides visibility into the effectiveness of your configuration management program and risk of errant configurations.
  • Percentage of managed assets. This provides insight into the effectiveness of your asset management program and risk posed by unknown devices.

Don’t worry if you’re currently not using these measurements. The measures above require mature processes and technical controls. You can’t expect to track mean time to remediate an incident if you don’t have a documented incident response capability. In such cases, consider starting with simpler measurements and aim to improve over time. Some measurements are better than none.

How our Customers use Tanium to Measure and Improve Performance

Many of our customers use Tanium to measure the performance of their security program, using the instantaneous and scalable visibility provided by our communications platform. Simple measurements include the number of missing critical patches, errant host-based firewall configurations, or laptops without Full Disk Encryption (FDE). More advanced customers use Tanium to help collect data required to measure median time to remediate vulnerabilities or mean time from detection to remediation.

Security hygiene means something different to everyone, which can make addressing it a daunting task. We hope this three-part blog series provides some focus on how to structure an action plan that methodically addresses all aspects of security hygiene. Security hygiene is also a journey: a measurement captures your effectiveness only at a single point in time, and, based on your efforts, that effectiveness will ebb and flow. Tanium can help you start measuring your hygiene.

For more on security hygiene, read: