New Technologies

ChatGPT: key considerations for cybersecurity decision makers

It’s only a matter of time before criminals make use of AI. But how? The release of ChatGPT has drawn public attention to the capabilities of artificial intelligence. How will AI-generated phishing and deep fakes change the cybersecurity landscape? Technology continues to advance, and so do the methods of cybercriminals.

Phishing emails are a constant threat that all organizations must contend with. However, with the advent of generative AI, we may be entering a new, more dangerous era of phishing: the era of AI-Phishing. In the past, cybercriminals mostly relied on mass-phishing emails containing malicious links or malware. Manually crafted spear-phishing emails are another tool in their arsenal, as they are more difficult to detect and can be used to carry out highly targeted attacks on organizations. Writing such emails, however, has so far been time-consuming and resource-intensive, which made them less appealing to cybercriminals. With the new large language models, cybercriminals can now generate highly targeted spear-phishing emails in a fraction of the time, making these attacks more effective and dangerous than ever before.

The use of AI in crafting targeted phishing emails poses a significant threat to the cybersecurity landscape, as many recipients may assume that these emails are written by humans, increasing the level of trust, and making them more likely to be successful. With the abundance of personal data available on the internet, AI algorithms can analyze this information to create tailored messages for the recipient, which may include personal details such as their name or job title. This makes these emails more convincing and increases the likelihood of them being clicked on. However, cyber attackers don't necessarily require a large amount of data to create successful phishing emails. Even a simple piece of information, such as a recent tweet or LinkedIn post, can be enough to craft a message that appears convincing to the recipient and exploits a recent event or situation.

Generate targeted phishing emails in seconds

To test the feasibility of AI-Phishing, security researchers created a proof of concept within only a few days of work. The following images illustrate what personalized AI-Phishing emails based on LinkedIn information can look like.

[Images: three examples of AI-generated phishing emails]

Phishing emails are a lucrative business, and success is determined by the cost per email, the chances of success, and the potential payoff. While AI-powered phishing emails have a slightly higher cost per email, they significantly boost the chances of success, because to the naked eye they read like human-written spear-phishing emails. Moreover, AI-based phishing can leverage feedback from previous phishing attempts to enhance its effectiveness: whether or not a victim clicked a link provides valuable data that cybercriminals can use to fine-tune their AI models and write even more convincing and successful phishing emails. With enough data, AI-generated phishing emails may outperform their human-written counterparts.
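To make this economic argument concrete, here is a minimal back-of-the-envelope calculation. All figures (cost per email, success rates, payoff) are purely illustrative assumptions, not values from the research above; the point is only how a higher per-email cost can still pay off when the success rate rises.

```python
# Back-of-the-envelope phishing economics with purely illustrative numbers.
# All figures below are assumptions for illustration, not measured values.

def expected_profit(emails_sent, cost_per_email, success_rate, payoff_per_victim):
    """Expected profit of a campaign = expected payoff - cost of sending the emails."""
    expected_victims = emails_sent * success_rate
    return expected_victims * payoff_per_victim - emails_sent * cost_per_email

# Classical mass phishing: very cheap per email, very low success rate.
mass = expected_profit(emails_sent=100_000, cost_per_email=0.001,
                       success_rate=0.0001, payoff_per_victim=500)

# AI-generated spear phishing: higher cost per email, much higher success rate.
ai_spear = expected_profit(emails_sent=1_000, cost_per_email=0.05,
                           success_rate=0.02, payoff_per_victim=500)

print(f"Mass phishing:     {mass:,.0f}")      # 10 expected victims * 500 - 100  = 4,900
print(f"AI spear phishing: {ai_spear:,.0f}")  # 20 expected victims * 500 - 50   = 9,950
```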

[Graphic: number of people receiving a phishing email vs. the probability that a victim falls for it]

Make a call with the voice of your CEO

In addition to the advancements in natural language processing, the technology behind deep fakes is constantly evolving, with newer and more sophisticated algorithms emerging.

Already today, we see CEO fraud attacks in the wild where criminals use the voice of a company executive to trick an employee into transferring money. Such real-time voice cloning techniques might even bypass voice-based authentication, for instance at a bank's support helpline. Proof-of-concept implementations have shown that AI algorithms can generate convincing voice imitations from only a few seconds of a victim's recorded audio. As most people have some publicly available recordings of their voice, it is easy for fraudsters to obtain such samples or even to record a victim's voice during a short phone conversation.

Using deep fakes to open a bank account?

The use of online video identification is another area of concern, particularly in the financial industry, where video is commonly used for KYC (Know Your Customer) processes. Many institutions allow customers to open a bank account or apply for a credit card entirely online; some only require a picture of a passport, while others use a video call to verify the customer's identity. It is likely that deep fake technology will bypass both approaches. The picture-based method can already be defeated today using deep fakes or traditional image manipulation. The real-time video method may hold out for a few more years, but the increasing accessibility and affordability of deep fake technology, coupled with advancements in computing power, make it only a matter of time before it becomes vulnerable as well.

From a bank's perspective, it is extremely difficult or even impossible to distinguish whether a video stream is coming directly from a genuine camera or has been manipulated by software on the fly, as the identity verification process done on the server side. Even though some banks only allow identification using their native app and not via a web browser, these hardening measures are not completely foolproof, and determined attackers are likely to find ways to bypass them. While such measures may make it more cumbersome for an attacker to fake a video stream, it is not enough to prevent deep fake attacks entirely.

Many neo-banks rely solely on video identification for their customer onboarding process, which poses a significant risk to their entire customer journey and business model. With the increasing accessibility of deep fake technology, it is only a matter of time until the first bank account is opened under a false identity using this technology, potentially resulting in significant financial loss for the bank. In response, regulatory bodies will likely take swift action, either by imposing stricter identification requirements or by prohibiting video-based identification methods entirely.

How can CISOs react to these AI threats?

What should organizations do right now to cope with these emerging AI-driven threats? This question is on the minds of many CISOs and security professionals.

  • AI-Phishing is a threat to every company. The most important countermeasure is raising overall awareness: employees must be educated and need to understand that personalized emails can be generated automatically within seconds. Research shows, however, that simulated phishing campaigns might not be as effective as widely believed. Dated approaches such as keyword-based filtering still have their place because of the high volume of classical mass-phishing, but they can no longer be seen as a comprehensive defence mechanism. What else can be done? Warnings such as "Be careful, this is the first time you receive an email from this address" are increasingly important. Beyond that, fire can be fought with fire: the general patterns of phishing emails don't change, so AI-enabled phishing detection becomes an important pillar (a minimal sketch of this idea follows after this list).
  • Voice authentication over the phone is now insecure and should no longer be used: practical attacks are possible, and a voice fingerprint alone can no longer be relied on for authentication. Instead, helplines should fall back on more secure mechanisms such as identity questions, calling back a trusted number, or one-time codes sent via SMS or push messages. The same applies to employees who receive incoming calls over a phone line.
  • Online video identification poses a significant challenge due to its integral role in business processes. We therefore recommend that any organization using this method develop an alternative now, and have a backup plan in place should regulatory bodies forbid video identification for KYC. Classical in-person identification might be the only secure method in the future. Using a secure electronic identity such as "Singpass" in Singapore is another elegant solution, because banks don't need to identify the customer themselves but can rely on the electronic ID.
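To illustrate the "fight fire with fire" point from the first recommendation, the following is a minimal sketch of how a static keyword rule differs from a learned, AI-based classifier. The training examples are made up and far too small to be meaningful; a production mail filter would be trained on large labelled corpora and combined with other signals such as sender reputation, links, and attachments.

```python
# Minimal sketch: keyword rule vs. a learned text classifier for phishing triage.
# The training data below is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1) Dated approach: a static keyword rule still catches classic mass phishing ...
SUSPICIOUS_KEYWORDS = {"verify your account", "urgent", "password expired"}

def keyword_flag(email_text: str) -> bool:
    text = email_text.lower()
    return any(keyword in text for keyword in SUSPICIOUS_KEYWORDS)

# 2) ... but a personalised, AI-written spear-phishing mail avoids those phrases,
#    so a classifier learned over the full text becomes the complementary pillar.
train_texts = [
    "Dear customer, verify your account urgently or it will be closed",
    "Congratulations, you won a prize, click here to claim it",
    "Hi Anna, great talk at the conference, slides attached as discussed",
    "Hi team, the sprint review is moved to Thursday 10:00",
]
train_labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_texts, train_labels)

new_mail = "Hi Anna, following up on your LinkedIn post, please review this invoice"
print("Keyword rule flags it:", keyword_flag(new_mail))                  # False
print("Classifier phishing score:", classifier.predict_proba([new_mail])[0][1])
```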

By now, you may be thinking that organisations like OpenAI should be putting safeguards in place to prevent this kind of misuse of AI technology. And yes, they have implemented such safeguards. But those safeguards don't help at all: they can often be bypassed, and criminals can also train their own unrestricted neural networks. Expensive training might not even be needed, as the weights of Meta's new model LLaMA have just been leaked, making it easy for criminals to run their own unrestricted models. Defending against these criminals only becomes impossible if we lose track of what they are capable of.

""
Contact person for Switzerland

Dr. Raphael Reischuk

Group Head Cybersecurity & Partner

Raphael Reischuk is the author of numerous scientific publications in various areas of IT security and cryptography, many of which have received awards. BILANZ and Handelszeitung listed him among the Top 100 Digital Shapers in Switzerland in 2021.

Reischuk is a member of multiple international programme committees for IT security and Vice-President of the Cybersecurity Committee at digitalswitzerland. He is also the co-founder and a board member of the National Test Institute for Cybersecurity (NTC).

In 2017, he joined Zühlke, where he channels the expertise he has gained in various industries into his role as Group Head Cybersecurity & Partner. As an experienced IT security expert, he is driven by curiosity, innovation, technology, a sense of commitment and a strong business ethos.
