
Why security analytics needs to outgrow its ‘magic phase’




Gunter Ollmann says security analytics is still in its “magic phase.” The tools, techniques, and underlying technologies are advancing quickly (as are the threats), but there’s still a lot left to be discovered. In particular, there’s the issue of actually understanding and knowing how best to act on everything this rapidly evolving use of AI and data in security analytics makes possible.

“The future area that we need to address, the opportunity and what excites me is ‘How do I explain all the security knowledge and millions of events as stories that can be understood by non-security experts and as actions business owners can take in the context of their responsibilities?’” Ollmann, who recently joined cloud-native security analytics company Devo as its chief security officer, told VentureBeat. Later in the conversation, he went even further, saying, “There’s no way the security industry, particularly, can go forward without that actually happening.”

After three decades at the industry’s forefront, Ollmann — who has led several cybersecurity companies, published hundreds of technical papers, and most recently, guarded Microsoft’s Azure cloud — has some ideas about the future. With VentureBeat, he discussed the ins and outs of how the field is evolving and what comes next.

This interview has been edited for brevity and clarity.

VentureBeat: You were working with security analytics and AI in security long before anyone was really even thinking about AI in enterprises, let alone before it was driving nearly everything. How do you feel about how far this has all come in recent years?

Gunter Ollmann: To sum up where security analytics is today as an industry, I’d say we’re probably still in the magic phase. We’re still experimenting with which approaches work better under which scenarios, especially from a machine learning perspective. A lot of it still lives in people’s heads rather than in methodologies, which is not a problem. It’s just where we are. Security has evolved so fast. And now finally being able to apply those security theories to data lakes and analyze all that data simultaneously — that’s still new, and we’re still experimenting with understanding what value we actually strike from this. It’s borderline between magic and art. If you compare it to other parts of the security industry, like endpoint protection, which is a 40-year-old industry and technology, that’s a very mature process with countless standards and requirements. It’s a well-understood field. We’re not close to that.

VentureBeat: Is there anything that surprises you about the state of data, AI, and security today? How does today’s landscape stack up to what you were envisioning years ago?

Ollmann: I think what we’ve thought and hoped to achieve is in line; it may just be different timing. We probably achieved many of the new understandings we hoped for much faster than expected, but we’ve been much slower in converting that newfound knowledge, understanding its impact, and translating it into automated and trusted actions. Back in 2005, the industry started to move from network-based intrusion detection systems to intrusion prevention systems, the theory being that if I can detect things, then I can do smart things to stop them. Fifteen years later, the vast majority of deployed intrusion prevention systems do not operate under a protection schema. They operate in the detection phase, and so we’re still, as an industry, quite afraid of turning on the automated response and protection capabilities.

VentureBeat: Is there an aspect of the field that’s getting more attention, interest, or use than you expected? And on the other hand, is there anything — a tool, a technique, a philosophy, etc. — you thought would be more prevalent but turned out not to be?

Ollmann: I think an interesting lens to have is that security teams, especially SOC (security operations center) teams, got more than they asked for. They all wanted universal visibility across the entire organization. They wanted all that data so they could apply analytics, disengage threats, and understand and prevent attacks. They have that in spades today. But what happened is that they moved from having 200 things appearing that they had to be scared about on a daily basis — and having the capacity to deal with five — to now having 5,000 new things coming in, but still only having the capacity to deal with five. So that’s caused alert fatigue. They’re just overloaded with too much information, without enough capability to actually take action against these threats. And so a lot of the space where innovation still needs to come in is “How do I prioritize all of these new insights so I can actually take action?” The solution path has been to create scores, but the problem with that approach is that every single product has its own score and its own values for the individual elements it scores. And so on that prioritization part — I think, as an industry, we’re still floundering.
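Ollmann doesn’t prescribe an implementation here, but the scoring problem he describes can be made concrete with a minimal sketch: map each product’s proprietary score onto a common scale and weight it by asset criticality before triage. All product names, score ranges, and fields below are hypothetical assumptions for illustration, not any vendor’s actual schema.

```python
# Hypothetical sketch: normalizing per-product alert scores onto one scale
# so alerts from different tools can be ranked together. The products,
# score ranges, and fields are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Alert:
    product: str              # tool that raised the alert
    raw_score: float          # vendor-specific score, arbitrary scale
    asset_criticality: float  # 0..1, how important the affected asset is

# Assumed per-product score ranges; in practice these come from vendor docs.
SCORE_RANGES = {
    "edr": (0, 10),
    "siem": (0, 100),
    "cloud_posture": (0, 5),
}

def normalized_priority(alert: Alert) -> float:
    """Map a vendor score to 0..1 and weight it by asset criticality."""
    low, high = SCORE_RANGES[alert.product]
    base = (alert.raw_score - low) / (high - low)
    return base * alert.asset_criticality

alerts = [
    Alert("edr", 8.5, 0.9),
    Alert("siem", 40, 0.3),
    Alert("cloud_posture", 4, 0.7),
]

# Triage queue: highest combined priority first.
for a in sorted(alerts, key=normalized_priority, reverse=True):
    print(a.product, round(normalized_priority(a), 2))
```

In practice this is exactly where the floundering he mentions happens: the range table and the weights are judgment calls that differ from one tool, and one business, to the next.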

VentureBeat: I was going to ask what you think are the biggest challenges in the space and where it goes from here, but it sounds like that’s the answer.  

Ollmann: Yeah, I think alert fatigue is the bugbear for the entire industry today, and that’s growing into what’s been termed “posture fatigue.” So I’m trying to rationalize all those alerts, and then the other side is that all the new security technologies I’m deploying need to be configured. They all need to have policies. As an industry and organization, I may be regulated, so I need to follow these particular standards. There’s a lot of new tooling, especially being led from the clouds. So we’re seeing posture fatigue also now encumbering security and business teams as they try to figure out not only how to protect themselves and have visibility, but also what they have to do to maintain posture and compliance.

VentureBeat: Are alert and “posture fatigue” getting as much attention as you think is needed?

Ollmann: I think we’re pacing the same as we have on threat detection and alerts. So that increase in posture, posture needs, posture visibility, and posture analytics is generating more data that still requires deep security expertise to be able to understand and determine actions. So unfortunately, I think we’re repeating that same path without moving into that next phase.

VentureBeat: And despite the outstanding challenges and issues, are there any use cases that come to mind that have really effectively demonstrated or showed real promise of this combination of analytics, data, and AI for security? Has anything really impressed you in recent years?

Ollmann: I think the key has been the acceleration of maturity in supervised learning. The classic approach was that every new threat required a new signature, and every time that threat changed, a new signature was required. And every new signature equals a new alert, so that cascades and causes problems. But I think what’s been exciting and made a big impact is that we’ve become much, much better at creating well-trained threat detection classifiers, and at the pace we can create them. They not only detect specific threats with higher fidelity but, because of the nature of the machine learning, are much more robust and stable at detecting variants of those threats. That’s translated into not only much faster and much more accurate detection, but also a reduction in the number of distinct alerts and false positives.
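As a rough illustration of the shift Ollmann describes, the toy sketch below contrasts an exact-match “signature” with a small supervised classifier. The features and samples are invented for illustration and bear no relation to a real detection model.

```python
# Toy contrast between signature matching and a supervised classifier.
# Features and labels are made up for illustration only.
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors: [entropy, num_suspicious_api_calls, packed_flag]
X_train = [
    [7.8, 12, 1],  # known malicious sample
    [7.5, 10, 1],  # known malicious sample
    [3.1, 1, 0],   # benign
    [2.9, 0, 0],   # benign
]
y_train = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

# Exact-match "signature": only catches samples identical to known ones.
signatures = {tuple(x) for x, y in zip(X_train, y_train) if y == 1}

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

variant = [7.6, 11, 1]  # a variant: similar behavior, not an exact match
print("signature hit:", tuple(variant) in signatures)    # False - misses it
print("classifier verdict:", clf.predict([variant])[0])  # 1 - flags the variant
```

The point is only that the classifier generalizes from behavioral features, so a variant that shares the behavior but not the exact fingerprint still gets flagged, without minting a new signature and a new alert type.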

VentureBeat: Now that the use of machine learning for security is so widespread, how should enterprises be thinking about their technologies and strategies? What’s important to keep in mind?

Ollmann: One thing is that software, data analysis, and data science are now actually core products of every major business today. It’s not someone else’s problem, and we can’t just sort of buy it off the shelf. And as every company now also becomes a software company, there are new classes of threats they didn’t previously have to worry about. Security teams traditionally focused on the infrastructure and identifying attackers from the outside. Now they also need to be able to understand and secure the software development lifecycle; the software generation pipeline; and the data, data lake, and data acquisition processes. This means security responsibilities have changed, but the tools and technologies required have also changed and become increasingly inward-focused. The attack surface has also changed and is growing. And the threats are becoming quite different, with espionage and nation state-level attacks. Traditional security teams and C-suites have a massive blind spot in-house.

VentureBeat: It’s a little ironic that by collecting all this data and putting forth all these efforts to secure everything, you’re actually increasing the attack surface. Have you ever thought about it like that?

Ollmann: By doing that, you have increased that attack surface. And frankly, there isn’t today the breadth of security tooling and security visibility needed to really counter these threats. So one new area of research for me is adversarial machine learning and adversarial attacks. There’s a great spectrum of threats and attack vectors that appear in adversarial machine learning. At one end it’s, “Can I corrupt and force your smart system to do things you never thought it’d ever do?” There have been some very embarrassing examples, like with my previous employer Microsoft and chatbots becoming racist. But there’s also new tooling to be able to say, “Well, you invested all that knowledge into a new model. Can I actually construct my queries against it so that I can now enumerate and replicate what that model was, and basically steal the data behind it and use it to better my own business?” So there’s a growing range of new attack vectors, but also brand new opportunities for smart security companies to innovate new detection and protection technologies for this rapidly advancing and expanding field of new threats.
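A hedged sketch of the model-extraction idea he mentions: an attacker who can only query a deployed model records its answers and trains a surrogate that mimics it. Everything below is synthetic; no real service, dataset, or API is implied.

```python
# Sketch of model extraction via queries. Entirely synthetic data and models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# "Victim" model the attacker can only query, not inspect.
X_private = rng.normal(size=(500, 4))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)

# Attacker: probe with self-generated inputs and harvest the predictions.
X_probe = rng.normal(size=(2000, 4))
y_stolen = victim.predict(X_probe)

# Train a surrogate purely from the query/response pairs.
surrogate = DecisionTreeClassifier(max_depth=5).fit(X_probe, y_stolen)

# Agreement on fresh inputs shows how much behavior was replicated.
X_test = rng.normal(size=(500, 4))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of fresh inputs")
```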

VentureBeat: Security always feels like a cycle. We develop and advance, and the bad actors and cybercriminals are always just a step behind. And by advancing, in some ways we make new problems we have to deal with. Do you feel like the use of machine learning and these more intelligent solutions perpetuates that in any way? Will we use that to solve it? Is this going to be a cycle forever?

Ollmann: Am I allowed to say yes, yes, yes? What I’d say is we’re getting better at identifying new things, which has increased the volume. It’s just that you didn’t have the ability to see these things before, and they are things you do have to take action against. And as technology changes and as tools and motivations change, that will continue. We’ll always be able to find more things that we didn’t see before; we have more data, more visibility, and more scope. But that can’t be allowed to translate into more things that I, as a human being, am expected to deal with every day. And so that smart logic of not just shrinking down and prioritizing, but making real sense of whether these things together are impactful to the business and bringing those things together in one place, is the next opportunity and the need. There’s no way the security industry, particularly, can go forward without that actually happening.
