AI is dominating the cybersecurity market - both in terms of more advanced, sophisticated threats, and in how vendors are leveraging this rapidly evolving technology to fight the next generation of attacks.
At e92plus, we're bringing together some of the key vendors who are investing in AI technology, and who are at the forefront of the fight where AI is used to produce more advanced phishing attacks and malware, exponentially increasing the scale of the assault on our networks, applications and users.
The big AI debate - the biggest talking point in cybersecurity.
We’re getting to the heart of the topic with a panel debate that includes three thought leaders on AI in cybersecurity. It’s a theme that creates huge noise and headlines, but what’s the reality? How big is the threat? How can it help us against AI-driven threats?
Joining us for this episode are three leaders in the field: Deryck Mitchelson, Field CISO at Check Point, Carl Froggett, CISO at Deep Instinct, James Hickey, Director of Sales Engineering at Cofense, and our moderator, Neil Langridge from e92plus.
Our guests share their extensive experience in the banking and healthcare sectors, discuss the importance of guardrails in AI deployment, and provide insights into what the future might hold for AI in cybersecurity.
What might the risks look like?
- Improper permissions — Copilot relies on the Microsoft 365 permissions already assigned, so if they’re not set up correctly, sharing of sensitive information can quickly spiral out of control, resulting in data breaches and hefty compliance fines.
- Inaccurate data classification — Copilot inherits the sensitivity labels assigned to protect data. Again, data is at risk from inaccurate labels – and data classification is often inconsistent and incomplete. For example, manual labelling is highly prone to human error, and Microsoft’s labelling technology is limited to specific file types.
- Copilot-generated content — The greatest challenge is new documents, which don’t carry labels or categorisation. This means that new documents containing sensitive data could be shared with unauthorized users. Ensuring these documents are properly classified is a challenge, given the sheer volume of content that Copilot can produce.
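As a minimal illustration of that labelling gap, a pre-share check could flag any newly generated document that is unlabelled or that contains patterns which should never ship without a label. The `Document` record, label names and patterns below are hypothetical placeholders, not part of any Microsoft API.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical document record; a real deployment would pull this
# metadata from the Microsoft 365 tenant rather than build it by hand.
@dataclass
class Document:
    name: str
    sensitivity_label: Optional[str]  # e.g. "Confidential", or None if unlabelled
    text: str

# Crude illustrative patterns for data that should never ship unlabelled.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),             # possible payment card number
    re.compile(r"salary", re.IGNORECASE),  # HR data
]

def needs_review(doc: Document) -> bool:
    """Flag generated output that is unlabelled or looks sensitive."""
    if doc.sensitivity_label is None:
        return True  # no label at all: always review before sharing
    return any(p.search(doc.text) for p in SENSITIVE_PATTERNS)

docs = [
    Document("summary.docx", "Public", "Quarterly roadmap overview."),
    Document("draft.docx", None, "Generated meeting notes."),
    Document("hr.docx", "Public", "Salary review for Q3."),
]
flagged = [d.name for d in docs if needs_review(d)]
print(flagged)  # draft.docx (no label) and hr.docx (mislabelled content)
```

Even a simple gate like this shows why volume matters: every generated document needs the same check before it can be safely shared.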
The reverse side is, of course, the ability for AI to complement and strengthen our defences. The sheer scale of phishing emails (estimated at 3.4 billion a day) means that leveraging AI brings not just a quicker response when identifying patterns and suspicious activity, but also the ability to turn that scale of threat into a weapon, by using it as an intelligence source for the AI to learn from.
For cybersecurity teams, the solution is combining AI with human intelligence. The sheer speed at which new threats arrive makes it impossible to rely on people, or even on patterns, behaviour or algorithms – 88% of new threats each year have never been seen before, and 50% of all attacks bypass legacy email gateways.
The threat of AI, however, is not just confined to the potential for damaging, more advanced malware (although that’s a big concern) – it also covers privacy concerns, the use of data even within legitimate processes and tools, and the interaction of users with AI, such as phishing emails.
The pace of change is arguably the biggest challenge, with bad actors creating new tools that leverage AI advances and take the technology in new directions – as well as bringing unparalleled scale to their operations. This increasingly looks like an AI arms race, with cybersecurity teams placing greater focus on prevention to avoid constantly being on the back foot.
Highlights from the recent survey of global CISOs include:
- 69% of organizations have adopted generative AI tools as part of their daily practice.
- 70% of security professionals say generative AI is positively impacting employee productivity and collaboration.
- 63% state the technology has improved employee morale.
At present, many GenAI tools can call on a wealth of information and data, but their activity is based on that data – reproducing and analysing at scale, tied to that historical context. This approach is more linear than true AI, which makes it easier to build appropriate defences, especially when threat intelligence can be collated from multiple sources (you can read more examples here).
One big opportunity around AI is the ability to develop bespoke applications, in particular Copilots. When designed for cybersecurity solutions, they help automate processes and perform common admin tasks, allowing staff to focus on higher-value activities. Examples include applying rules or signatures across multiple devices and locations, deploying updates or policy changes, and using playbooks for automated responses in the event of an incident or alert.
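The kind of admin automation described above can be sketched as a simple playbook runner that pushes one change across a fleet of devices. The device names, the `deploy_rule` step and the blocking rule are all hypothetical stand-ins for a vendor's real management API, not any specific product.

```python
from typing import Callable, List

# Hypothetical fleet of managed devices; a real Copilot would query
# the vendor's management platform for this list instead.
DEVICES = ["fw-london", "fw-manchester", "mail-gw-01"]

def deploy_rule(device: str, rule: str) -> str:
    # Placeholder for a vendor API call pushing one rule to one device.
    return f"{device}: applied {rule}"

def run_playbook(alert: str, steps: List[Callable[[], List[str]]]) -> List[str]:
    """Execute each playbook step in order, collecting results for audit."""
    results = [f"alert received: {alert}"]
    for step in steps:
        results.extend(step())
    return results

# Playbook for a phishing alert: push a blocking rule to every device.
phishing_playbook = [
    lambda: [deploy_rule(d, "block-sender badactor.example") for d in DEVICES],
]

log = run_playbook("phishing reported by user", phishing_playbook)
for line in log:
    print(line)
```

The point of the audit log is the human-in-the-loop theme above: the automation handles the repetitive fan-out, while staff review what was changed and where.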