In late July 2023, Kenya’s government faced a major cyber attack on the eCitizen portal, a key online platform used by the public to access over 5,000 government services. The hackers claimed to have stolen passport data, raising concerns not only over familiar ransomware attacks but also over the imminent danger presented by artificial intelligence (AI). Here, Simon Farnfield, event director for Advanced Engineering, explores the landscape of generative AI and cyber resilience, and how we can pave the way towards a future where innovation thrives despite the encumbrances of cyber threats.
In today’s digital landscape, engineering companies stand at the forefront of innovation, harnessing the potential of generative AI to revolutionise their design processes. Unlike traditional AI systems that follow rules or pre-defined patterns, generative AI systems use machine learning to create new content that shares characteristics with the data they were trained on.
The advent of such technology has brought forth a new era of creativity and efficiency, enabling engineers to design intricate structures, optimise systems and innovate like never before. However, as we embrace generative AI, the concern that looms largest is the growing risk of cyber attacks.
In fact, the UK’s cyber security agency, the National Cyber Security Centre, has warned that the number of ‘hackers for hire’ is set to grow over the next five years, leading to more cyber attacks and increasingly unpredictable threats, as these hired hackers are tasked with pursuing a wider range of targets using off-the-shelf products.
Generative AI attacks
According to research from Darktrace, social engineering attacks driven by generative AI have risen by 135 per cent. Cybercriminals are exploiting these AI-powered tools to crack passwords, expose sensitive data and deceive users across various platforms. As a result, the emergence of these sophisticated scams has sparked a surge of apprehension among employees, with as many as 82 per cent expressing heightened concern about falling victim to these manipulative tactics.
AI poses a formidable threat by significantly lowering, or even removing, the barriers to fraud and social engineering. Individuals who are not native speakers or who lack proficient language skills can exploit generative AI to hold error-free text conversations. As a result, phishing schemes become increasingly challenging to identify and protect against, heightening the risk of successful attacks that undermine traditional defences.
The well-known ChatGPT platform is a prime example of generative AI. Built on the Transformer architecture, which is designed to process sequential data, GPT is a language model pre-trained on vast amounts of text from the internet, allowing it to learn the statistical patterns, syntax and context of human language.
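To make the idea concrete, here is a minimal sketch of text generation with a GPT-style model, using the freely available GPT-2 checkpoint via the open-source Hugging Face transformers library. GPT-2 is an earlier, public relative of the model behind ChatGPT, used purely for illustration; the prompt and sampling settings are arbitrary.

```python
# Minimal sketch: text generation with a pre-trained GPT-style model.
# Uses the public GPT-2 weights as an illustrative stand-in for ChatGPT,
# whose own weights are not publicly available.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Generative AI is transforming engineering design by"
inputs = tokenizer(prompt, return_tensors="pt")

# The model predicts one token at a time, sampling from the statistical
# patterns of language it learned during pre-training on web text.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```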
Employees targeted
Unfortunately, cybercriminals have now begun to leverage AI technologies, like ChatGPT, to develop spear phishing attacks. AI-driven campaigns targeting senior professionals, such as chief information security officers who oversee an organisation’s information, cyber and technology security, are becoming increasingly sophisticated and challenging to detect.
Here, attackers employ generative AI models to craft highly personalised and convincing phishing emails. These emails are tailored to each target, using their name, interests, recent online activity and even writing style to make them appear genuine and trustworthy. What’s more, cybercriminals use AI-generated emails to launch spear phishing campaigns against individuals within an organisation or specific groups. These emails may contain malicious attachments or links leading to fake login pages that mimic legitimate services, as illustrated in the sketch below.
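One illustrative defence against this fake-login pattern is to flag links whose domains closely resemble, but do not exactly match, a trusted service. The sketch below uses only Python’s standard library; the trusted-domain list and similarity threshold are hypothetical, chosen for illustration rather than production use.

```python
# Minimal sketch: flag "lookalike" domains in email links, a common
# typosquatting pattern behind fake login pages. The KNOWN_GOOD list
# and the 0.8 threshold are hypothetical values for illustration.
import difflib
from urllib.parse import urlparse

KNOWN_GOOD = {"ecitizen.go.ke", "microsoft.com", "google.com"}

def is_lookalike(url: str, threshold: float = 0.8) -> bool:
    domain = urlparse(url).netloc.lower()
    if domain in KNOWN_GOOD:
        return False  # exact match with a trusted service: legitimate
    # High similarity to a trusted domain without an exact match
    # suggests typosquatting (e.g. "rnicrosoft.com").
    return any(
        difflib.SequenceMatcher(None, domain, good).ratio() >= threshold
        for good in KNOWN_GOOD
    )

print(is_lookalike("https://ecitlzen.go.ke/login"))  # True: lookalike
print(is_lookalike("https://google.com/accounts"))   # False: exact match
```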
It is no surprise to learn that most successful infiltrations stem from the targeting of employees, who inadvertently fall for these sophisticated phishing emails and become victims of a cyber attack.
Avoiding AI attacks
Avoiding generative AI attacks requires a comprehensive cyber security strategy that includes proactive measures to detect and defend against evolving threats. Given the scenario described above, it is wise to regularly train employees on the latest cyber threats, including generative AI attacks: teach them to recognise suspicious emails, websites and content, and encourage them to report any potential security risks promptly.
While we can make some allowance for human error or misjudgement, it is also necessary to invest in advanced cyber security solutions that incorporate AI and machine learning to detect and block AI-generated threats. These systems analyse patterns, behaviours and anomalies to identify potentially malicious content, as the sketch below illustrates.
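As a rough illustration of how such anomaly detection might work, the sketch below trains scikit-learn’s IsolationForest on simple numeric features of historical email traffic and flags outliers. The features, values and contamination rate are hypothetical, chosen only to demonstrate the technique.

```python
# Minimal sketch: anomaly-based detection of suspicious inbound email,
# assuming simple numeric features extracted from each message.
# All feature names and values here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: [sender_domain_age_days, num_links, attachment_count]
baseline_mail = np.array([
    [3650, 1, 0],
    [2900, 0, 1],
    [4100, 2, 0],
    [3500, 1, 1],
])  # historical, presumed-benign traffic

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(baseline_mail)

incoming = np.array([[12, 6, 2]])  # newly registered domain, many links
verdict = detector.predict(incoming)  # -1 = anomalous, 1 = normal
print("suspicious" if verdict[0] == -1 else "looks normal")
```

In practice such a detector would be one layer among many, trained on far richer behavioural features, but the principle is the same: learn what normal traffic looks like and flag departures from it.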
Ultimately, the world must look ahead and confront the emerging threat posed by AI-based cybercriminals. The attack on Kenya’s eCitizen portal serves as a reminder of the growing sophistication of cyber threats, and AI-driven techniques could play a destructive role in future security breaches.
With every obstacle comes an opportunity. As generative AI continues to advance, cyber resilience becomes paramount: the relentless pursuit of innovation should not be hindered by the ever-present encumbrance of cyber threats. Instead, this challenge must kickstart a proactive approach to developing robust defences against the malicious use of AI technology.
Cyber security specialists wanting to attend Advanced Engineering, held at the NEC in Birmingham on November 1-2, can book a stand or register for a visitor pass today.