SOC Modernization: Measures and Metrics for Success

Ask many SOC managers or business leaders how they measure the success of their security operations capabilities, and they will tell you it's MTTD and MTTR.

For anyone who hasn't come across these terms, they stand for Mean Time To Detect and Mean Time To Respond. They are effective metrics for part of the outcome of a SOC.
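As a simple illustration, here is a minimal Python sketch of how MTTD and MTTR could be computed from incident timestamps. The incident records and field names are assumptions for the example, not any particular product's schema.

    from datetime import datetime
    from statistics import mean

    # Hypothetical incident records: when the malicious activity started, when it
    # was detected, and when it was resolved.
    incidents = [
        {"started": "2024-03-01T02:15", "detected": "2024-03-01T03:05", "resolved": "2024-03-01T09:40"},
        {"started": "2024-03-04T11:00", "detected": "2024-03-04T11:20", "resolved": "2024-03-04T14:00"},
    ]

    def hours_between(start: str, end: str) -> float:
        return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

    # MTTD: average time from the start of activity to detection.
    mttd = mean(hours_between(i["started"], i["detected"]) for i in incidents)
    # MTTR: here measured from detection to resolution; some teams measure it
    # from the start of the incident instead.
    mttr = mean(hours_between(i["detected"], i["resolved"]) for i in incidents)

    print(f"MTTD: {mttd:.1f}h  MTTR: {mttr:.1f}h")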

Others measure capabilities such as the total number of events processed, and of course, at the most rudimentary level, we all track when the last bad thing happened.

The question I would like to pose is: what are the right metrics for the next-generation SOC? Today I see so much focus on MTTD and MTTR, so I am going to break some other metrics into a few key groups:

Process Metrics

Whilst MTTD and MTTR look at the time to find and resolve, most SOCs are broken into different teams, split either by capabilities (tiers) or by specialisms (threat/risk types).

Whichever model you follow, funnel metrics should be part of your daily controls. What are funnel metrics? They are the time and volume measurements typically used to monitor the effectiveness of each stage of alerting.

These can be aspects such as time to identify, time to triage, and time to analyse, or they may cover how much is dealt with by automation, how much requires human inspection, and how much requires more specialist skills. The point is to be clear on how you measure the journey as well as the outcome.
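To make that concrete, here is a minimal sketch assuming hypothetical alert records that capture the minutes spent at each funnel stage and how each alert was ultimately handled; the stage names and fields are illustrative only.

    from collections import defaultdict
    from statistics import median

    # Hypothetical alert records: minutes spent at each funnel stage the alert
    # reached, plus how it was ultimately handled.
    alerts = [
        {"stages": {"identify": 2, "triage": 10, "analyse": 45}, "handled_by": "analyst"},
        {"stages": {"identify": 1, "triage": 4}, "handled_by": "automation"},
        {"stages": {"identify": 3, "triage": 12, "analyse": 90}, "handled_by": "specialist"},
    ]

    # Volume and median dwell time per stage: the journey, not just the outcome.
    durations = defaultdict(list)
    for alert in alerts:
        for stage, minutes in alert["stages"].items():
            durations[stage].append(minutes)

    for stage, values in durations.items():
        print(f"{stage}: volume={len(values)} median_minutes={median(values)}")

    # Share of alerts closed by automation versus needing human or specialist work.
    total = len(alerts)
    for handler in ("automation", "analyst", "specialist"):
        share = sum(a["handled_by"] == handler for a in alerts) / total
        print(f"{handler}: {share:.0%}")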

Capability Controls Metrics

One of the biggest challenges I see for any SOC analyst is having the confidence to apply change. You may see this as trivial, but the human brain is wired to be at least twice as concerned about getting it wrong (effectively, losing) as about getting it right.

As such, every SOC team should be looking at how they ensure the fidelity of the information they make decisions based upon. This should be assessed at every level possible, for example:

  • Threat intel: what efficacy has each source shown over the previous months? (A scoring sketch follows this list.)
  • Capabilities used: what percentage of the time do their detections lead to the right or wrong decision?
  • Pre-classification of data: this should occur at every layer possible, so that when an analyst verifies the data they know what level of confidence to associate with each source.
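To make the first point concrete, here is a minimal sketch assuming hypothetical analyst feedback that marks each indicator from a threat intel source as a true or false positive after investigation; it scores each source's efficacy over whatever period the feedback covers.

    from collections import defaultdict

    # Hypothetical analyst verdicts on indicators, per threat intel source,
    # collected over the previous months.
    feedback = [
        {"source": "feed_a", "verdict": "true_positive"},
        {"source": "feed_a", "verdict": "false_positive"},
        {"source": "feed_b", "verdict": "true_positive"},
        {"source": "feed_b", "verdict": "true_positive"},
    ]

    counts = defaultdict(lambda: {"true_positive": 0, "false_positive": 0})
    for record in feedback:
        counts[record["source"]][record["verdict"]] += 1

    # Efficacy: the share of a source's indicators confirmed as true positives.
    for source, c in counts.items():
        total = c["true_positive"] + c["false_positive"]
        efficacy = c["true_positive"] / total if total else 0.0
        print(f"{source}: efficacy={efficacy:.0%} (n={total})")

The same scoring feeds the third point: the per-source confidence can be attached to the data an analyst sees before they verify it.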

Outcome Metrics

One of the key aspects of the Cybereason Defenders Council mission is further defining the concept of Defend Forward for the private sector. Defend Forward is an approach that focuses in part on how we can take the fight to the adversary, versus the more reactive security posture the industry has taken in the past.

This is not about hacking back or offensive activities that most would rightly consider illegal. This is about how we change the economics of cybersecurity, a grail that many have focused on over the years.

The first real example of success I saw was with the Cyber Threat Alliance (CTA), when the membership came together to collaborate on analysis of CryptoWall v3.

The point is that if you understand the attack in enough depth (in Cybereason parlance this is the MalOp: the full context of a malicious operation, end to end), then you can look at which factors remain constant and which change over time. For example, in that scenario we saw more than 4,000 different binaries used across fewer than 100 campaigns.

Now the simple reality is that, in times of crisis, we will always take the expedient option to put in a blocking control. But do we then come back and assess its longevity?

To have confidence we can block the attack, we will have invested time and resources to build out the MalOp, and as such we should track our blocking controls to see which have the greatest longevity against the adversary.

Effectively, this takes the battle to the adversary: their cost to succeed rises significantly, because they have to substantially change their attack, while the defender's costs remain static.

This is an easy step we can all take to Defend Forward. For many, this may mean a two-step process: a quick blocking control followed by a more strategic control. In other instances, it may all happen as one action.
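To sketch what tracking control longevity might look like (the control names and dates below are purely illustrative), a simple report could compare how long each blocking control has held against the adversary.

    from datetime import date

    # Hypothetical blocking controls derived from the MalOp analysis: when each
    # was deployed and when an adversary change rendered it ineffective
    # (None = still effective).
    controls = [
        {"name": "block_known_hashes", "deployed": date(2024, 1, 5), "bypassed": date(2024, 1, 12)},
        {"name": "block_c2_domain_pattern", "deployed": date(2024, 1, 5), "bypassed": None},
        {"name": "block_delivery_technique", "deployed": date(2024, 1, 20), "bypassed": None},
    ]

    today = date(2024, 6, 1)  # assumed reporting date

    # Longevity: how long each control held, or has held, against the adversary.
    for c in controls:
        end = c["bypassed"] or today
        days = (end - c["deployed"]).days
        status = "bypassed" if c["bypassed"] else "still effective"
        print(f"{c['name']}: {days} days ({status})")

Controls tied to factors the adversary can change cheaply (such as individual file hashes) would be expected to show shorter longevity than controls tied to the constants of the operation.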

Takeaways

There is so much to cover on the subject of SOC modernisation, and naturally many look to the great new tools and capabilities that are coming to market. However, I would also challenge everyone to look at our metrics for success:

  • Challenge ourselves on the metrics we use to measure success. One aspect that is very clear to me: if we are to automate more, we must be able to assure the SOC team that automated decisions produce high-fidelity detections; otherwise the human firewall will always be the stalling point for what are increasingly high-volume, high-speed processes.
  • Ensure our metrics measure the value of the actions taken. We invest so much energy in qualifying the threat that we need to take the Defend Forward stance and look at how much we can disrupt the adversary and prevent them from being successful, not just today but also in the future.

The examples of metrics I have given are far from exhaustive, and I am a believer in different metrics for different organisations based on their risk appetite, industry and technology reliance. But hopefully the examples given at least challenge your thought processes as to what other metrics would help you on the journey to a NextGen SOC.



Cybereason is dedicated to teaming with Defenders to end attacks on the endpoint, across enterprise, to everywhere the battle is taking place. Learn more about AI-driven Cybereason XDR here or schedule a demo today to learn how your organization can benefit from an operation-centric approach to security.

About the Author

Greg Day

Greg Day is a Vice President and Global Field CISO for Cybereason in EMEA. Prior to joining Cybereason, Greg held CSO and CTO positions with Palo Alto Networks, FireEye and Symantec. A respected thought leader and long-time advocate for stronger, more proactive cybersecurity, Greg has helped many law enforcement agencies improve detection of cybercriminal behavior. In addition, he previously taught malware forensics to agencies around the world and has worked in advisory capacities for the Council of Europe on cybercrime and the UK National Crime Agency. He currently serves on the Europol cyber security industry advisory board.