A few major themes emerged from RSAC 2024 sessions this year, including how to leverage automation to solve SOC/SecOps problems, as well as the growing set of challenges that CISOs face. At the heart of these themes is the persistent people problem: not enough budget for staff, not enough highly skilled people to hire, and the fact that people cannot operate fast enough to defend against and respond to today's volume of threats.
IBM’s keynote, Securing New Limits: Protecting the Pathway for AI Innovation, touched on many of those challenges and the reality of security teams maxing out: 60% of security professionals have experienced burnout on the job, while 83% say that burnout is leading to breaches and security challenges. That’s due to the constant stress of the job, the lack of control, the long hours and limited resources – there is an infinite number of things to protect against, and a finite amount of resources to do it with.
IBM also claims that attackers are using AI for more pervasive and evasive attacks, and that we need to evolve our defenses as a result; we can’t retrofit last era’s tech to solve this era’s problems. [Reports such as the DBIR and SentinelOne’s Global Threat Overview show little data on attackers using AI, and more of the same old tactics that have worked historically, including credential theft and phishing emails.]
According to IBM, AI has evolved over the last year to extend the limits of our software and people: it has helped analysts understand complex security events as they take place, and it can generate security reports, content, queries and policies. Putting more AI before human decision-making is a game-changer in this industry.
In Bye-Bye DIY: Frictionless Security Operations with Google, Google examined the issues with “DIY SecOps” – the manual security tasks conducted by security operations teams, including deciding what kind of data to keep and for how long. Detection engineering is the most important of these tasks, but it’s also the most labor-intensive and DIY. It’s also very hard to do well, especially as the threat landscape keeps evolving.
Their philosophy is that not all detection rules will come out of the box or be created by somebody else, but you should ultimately strive to keep the detections you manage yourself to a relatively small number. Offloading the majority of curated rules to someone else, like a Google expert, means they take on the load of creating, developing and maintaining those rules for you as threats evolve, and of making sure they’re up to date.
But 90% of the detections that customers create are the same for everybody, meaning they’re not things you need to build yourself. Leave that to the pros and let them help create the detections – an approach Blumira embodies through the extensive work of our incident detection engineering team, which researches, writes, tests, tunes and maintains the rules that power the detection and response capabilities of our platform.
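To make the idea of a curated detection rule concrete, here is a minimal sketch of the kind of logic a managed detection engineering team might write, test and tune on customers' behalf. This is illustrative only – the event schema (`event_type`, `user`, `success`) and the threshold are hypothetical, not any vendor's actual rule format.

```python
from collections import Counter

# Hypothetical threshold; real rules are tuned per environment to limit noise.
FAILED_LOGIN_THRESHOLD = 5

def detect_repeated_auth_failures(events):
    """Flag users with repeated failed logins in a batch of auth events."""
    failures = Counter(
        e["user"] for e in events
        if e.get("event_type") == "auth" and not e.get("success", True)
    )
    return [user for user, count in failures.items()
            if count >= FAILED_LOGIN_THRESHOLD]

# Synthetic example: six failed logins for one account trips the rule.
events = [{"event_type": "auth", "user": "alice", "success": False}] * 6
print(detect_repeated_auth_failures(events))  # → ['alice']
```

Even a rule this simple needs ongoing maintenance – thresholds drift, log schemas change, and attackers adapt – which is exactly the upkeep being offloaded to a managed team.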
Making security platforms as efficient and automated as possible can help address the problems that most SOCs currently face – and help customers that can’t afford to staff or run a SOC. Those problems include a lack of automation and orchestration, too many unintegrated tools, and high staffing requirements. There is also a lack of context around what defenders are actually seeing, which is important for investigation. Piecing together critical information is done in a DIY manner, but, according to Google, AI can really help in this scenario.
That same topic was explored by LogRhythm engineers in their talk, AI Foundations: Mitigate Risks and Boost SOC Efficiency. Some of the ways AI can help in cybersecurity include:
The promise of AI-driven automation includes helping SOC operators and their organizations achieve faster response, reduce manual workload and streamline operations. AI is there to provide relief from routine tasks – it can help you write the automation and deal with volumes of data that are difficult for humans to handle.
For threat detection and response, AI automation can help with triaging alerts, taking response actions, summarizing patterns, providing behavioral analysis, interpreting complex language inputs and analyzing threat data.
For incident analysis and investigation, AI automation can effectively filter out false positives and generate playbooks and investigations. For routine tasks, AI can automate them while providing actionable insights and reducing overall cognitive load.
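The triage and false-positive filtering described above can be pictured with a minimal rule-based scorer – the kind of manual logic that AI-driven automation aims to augment or replace. This is a sketch under assumed conventions: the alert fields (`source`, `severity`, `asset_critical`), weights and noisy-source list are all hypothetical.

```python
# Hypothetical severity weights and a list of sources we treat as likely noise.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 5}
KNOWN_NOISY_SOURCES = {"test-scanner"}

def triage_score(alert):
    """Return a priority score; alerts from known-noisy sources score 0."""
    if alert.get("source") in KNOWN_NOISY_SOURCES:
        return 0  # filtered out as a probable false positive
    score = SEVERITY_WEIGHT.get(alert.get("severity", "low"), 1)
    if alert.get("asset_critical"):
        score *= 2  # escalate alerts touching critical assets
    return score

alerts = [
    {"source": "edr", "severity": "high", "asset_critical": True},
    {"source": "test-scanner", "severity": "high"},
]
ranked = sorted(alerts, key=triage_score, reverse=True)
# ranked[0] is the critical-asset EDR alert; the scanner alert drops to zero.
```

Hand-maintaining scores and noise lists like these is exactly the cognitive load the LogRhythm talk argues AI can take on, by learning the patterns instead of encoding them by hand.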
Some of the risks and challenges of AI implementation in cybersecurity include the issue of garbage in, garbage out – if the data that you feed it is biased, unrepresentative or bad, you're going to get output that is likely untrustworthy or incorrect. The implications are profound if you rely on it for threat detection and response or making key decisions.
Integrating third-party AI solutions with security products offers benefits like added capability and cost-effectiveness, but it comes with potential downsides: such solutions can be hard to integrate and can introduce vulnerabilities into your systems. The opacity of algorithms is another issue – when they function as black boxes whose output cannot be easily understood or explained by humans, it becomes hard to trust AI in operations.
Where is AI useful? Analyzing large amounts of data and making the information useful and actionable.
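That data-reduction role can be sketched in a few lines: condensing a large event stream into a summary a human can act on. The event fields here are hypothetical, and this stands in for what AI tooling would do at far greater scale and nuance.

```python
from collections import defaultdict

def summarize(events):
    """Count events per (source, event_type) pair for quick human review."""
    summary = defaultdict(int)
    for e in events:
        summary[(e.get("source"), e.get("event_type"))] += 1
    return dict(summary)

events = [
    {"source": "fw", "event_type": "deny"},
    {"source": "fw", "event_type": "deny"},
    {"source": "edr", "event_type": "alert"},
]
print(summarize(events))  # → {('fw', 'deny'): 2, ('edr', 'alert'): 1}
```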
Meanwhile, CISOs are increasingly asked to take on more responsibilities and address more organizational risks, all with slashed budgets and shrinking security teams, as IANS Research and Artico Search found in their survey of CISOs.
In their talk, State of the CISO 2024: Doing More With Less, they covered how CISOs have moved from tech leadership to business executive risk management over the past decade as the function has been elevated and the job has gotten much more difficult. Now security is mentioned in quarterly earnings calls, addressed during due diligence and so on, and there are more direct lines of communication to the board, with a focus on softer skills and the ability to influence and drive change cross-functionally.
Scope creep continues to increase gradually year over year. CISOs are now responsible for security architecture and engineering, tech risk and compliance, third-party risk management, AppSec, IAM, product security, business continuity, privacy, physical security and fraud.
Meanwhile, security budgets are tightening – from the end of 2022 until now, organizations have been cutting tech and security spending.
Staff growth has also declined, matching budget trends. It's still very difficult to hire competent people, according to IANS.
According to IANS, CISOs have money – or can find money – for tools that support a business goal, but they cannot get enough money for staff, especially in healthcare. Salary benchmarks for analysts are also unrealistic and often don’t match the security skill sets required, and as a result, companies are unable to fill headcount.
Meanwhile, hiring for SecOps, AppSec, etc. to staff a security team can come at a premium.
Clearly, one of the main challenges to solve is how to keep up with security demands and threats with tightening budgets and shrinking teams – which is why the industry is turning to AI and automation to increase security efficiency and effectiveness without relying solely on human power.
Blumira’s mission is to help SMBs with small teams and limited resources prevent a data breach. Try out SIEM + XDR for 30 days or sign up for a Free SIEM (forever).