
How Generative AI is Changing Cybersecurity



You've heard of ChatGPT, the generative AI tool that caused a lot of excitement when it launched in late 2022 by interpreting normal human communication and presenting complex information in understandable language.

 

Two years later, we're still in the grip of generative AI/large language model (LLM) fever, which is making big waves in both cybersecurity and cybercrime circles.

 

Criminals use these tools to craft spam messages, engineer attack code, and build victim profiles, contributing to a reported 75% jump in AI-powered cyberattacks. But LLMs are changing how cybersecurity operates even more profoundly than they are changing crime.

 

Services such as Microsoft's Copilot for Security are helping cybersecurity teams respond faster and make better sense of threats. They also bring stakeholders and company employees into security awareness efforts, making companies cyber-safe through more than just their security teams.

 

Information is the prime security problem

The biggest security challenge modern businesses face is information: there is too much of it and there are not enough skilled people to go through it all. Digital systems are complex, full of layers, overlaps, and integrations, and this digitisation is a main reason why cybercrime has exploded in recent years.

 

Cybersecurity professionals are busy people. They ingest and disseminate information to create visibility, contextual insight, and risk treatment. They triage dozens, sometimes hundreds, of alerts a day and spend hours in meetings discussing those alerts and other security issues. Security chiefs spend considerable time reading reports and summaries.

 

If you've ever struggled with an overflowing email inbox, you can appreciate the problem with such a culture. There are physical limits to how many things people can check, which comes with a price: loss of focus, exhaustion, and even burnout.

 

LLMs can change this situation when used correctly.

 

How Generative AI is helping cybersecurity

Cybersecurity uses many types of artificial intelligence, most commonly machine learning models that monitor network traffic, user behaviours, and other signals to flag strange activities. AI tools also scan digital environments for issues like outdated patches, open ports, or poorly secured user accounts.
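As a toy illustration of the flagging idea above (the function name, data, and threshold are invented for this sketch, not any particular product's logic), a monitoring model can mark events that sit far outside a user's normal baseline:

```python
# Minimal sketch: flag days whose activity deviates sharply from the
# historical average. Real systems use far richer models; this only
# illustrates the "flag strange activity" concept.
from statistics import mean, stdev

def flag_anomalies(daily_logins, threshold=2.0):
    """Return indices of days more than `threshold` std devs from the mean."""
    mu, sigma = mean(daily_logins), stdev(daily_logins)
    if sigma == 0:
        return []
    return [i for i, n in enumerate(daily_logins)
            if abs(n - mu) / sigma > threshold]

# A burst of activity on one day stands out against a steady baseline.
history = [10, 12, 11, 9, 10, 11, 120, 10]
print(flag_anomalies(history))  # the spike at index 6 is flagged
```

The point is that such output (an index, a score) is useful to a machine but not yet an answer a human can act on, which is where the next step comes in.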

 

But these systems operate in the background, and their output is rarely simple to consume. Cybersecurity staff need to build reliable answers from complex variables. For example, they receive an alert about several failed attempts to access the company's VPN software. Are these attempts part of an attack pattern, or just some users who forgot their passwords?

 

There is a metric for how long it takes to get that answer: Mean Time To Respond (MTTR), the gap between receiving an alert and resuming normal operations. Many organisations measure MTTR in hours. By using LLM technology correctly, Performanta brings it down to single minutes, even seconds.
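The metric itself is simple arithmetic: average the gap between each alert being raised and normal operations resuming. A quick sketch (the timestamps and structure are invented for illustration, not Performanta's actual data model):

```python
# Hedged sketch of computing MTTR from (raised, resolved) timestamp pairs.
from datetime import datetime

def mean_time_to_respond(incidents):
    """Average (resolved - raised) across incidents, in seconds."""
    gaps = [(resolved - raised).total_seconds() for raised, resolved in incidents]
    return sum(gaps) / len(gaps)

incidents = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 4)),     # 4 minutes
    (datetime(2024, 5, 1, 14, 30), datetime(2024, 5, 1, 14, 32)),  # 2 minutes
]
print(mean_time_to_respond(incidents) / 60)  # mean response in minutes: 3.0
```

Dropping that average from hours to minutes is what the rest of this article is about.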

 

An LLM is a crucial part of how we achieve such a short MTTR: Microsoft's Copilot for Security, which Microsoft trains specifically on security queries and information. As a Microsoft-first partner, Performanta enjoyed early access to the tool, and we integrated it into our Safe XDR service. (Performanta is a trailblazer in this regard: we integrated Copilot for Security into our systems even before Microsoft had released a relevant API!)

 

Safe XDR incorporates several powerful security tools, AI automation, managed services, and human skills to deepen clarity and accelerate responses. Our customers and security teams can make plain-language queries and follow-up questions for Safe XDR's Copilot, which delivers clear answers.

 

They use generative AI in several potent ways:
  • Sift through alerts to set priorities, group related events, and provide easy ways to track a chain of events.

     

  • Deliver security event answers to technical and non-technical staff in terms they understand, and let them ask follow-up questions to explore the information.

     

  • Provide detailed and flexible playbooks that help staff quickly respond to a security situation, detailing steps relevant to their security environment.

     

  • Create reports and summaries suited to different stakeholders, from security engineers to non-technical parties such as managers, senior executives, and board members.

     

  • Generate security awareness content and tests to help employees improve their security habits.

 
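The first bullet above, sifting alerts into prioritised groups, can be pictured with a small sketch. The alert fields, severity scale, and grouping key here are assumptions for illustration, not Safe XDR's actual schema:

```python
# Illustrative triage: group alerts by affected host, then order groups
# so the one containing the worst event surfaces first for analysts.
from collections import defaultdict

SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def triage(alerts):
    """Group alerts by host; sort groups by their worst severity, descending."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[alert["host"]].append(alert)
    return sorted(groups.items(),
                  key=lambda kv: max(SEVERITY[a["severity"]] for a in kv[1]),
                  reverse=True)

alerts = [
    {"host": "vpn-01", "severity": "medium", "event": "failed login"},
    {"host": "db-02", "severity": "critical", "event": "privilege escalation"},
    {"host": "vpn-01", "severity": "high", "event": "login from new country"},
]
for host, group in triage(alerts):
    print(host, [a["event"] for a in group])
```

Where the LLM adds value is on top of this kind of structure: turning the grouped chain of events into plain-language answers anyone can question further.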

Cybersecurity has gained a massive edge from generative AI, provided it is properly integrated and used by experts. You might be tempted to build a generative AI solution for your security operations yourself, but your time and resources are better spent on a security provider that already does so.

 

Performanta is at the forefront of this revolution. If you want to know more, request a demo from us and experience the power of generative AI in cybersecurity.

 
