By Ben King, Chief Security Officer, EMEA
Deepfake technology is nothing new. It has been widespread on social media for the past few years, and most people will have seen examples of it. Many still see the technology as a distant threat, at least for now. However, businesses are starting to wake up to the real risk it can pose to organisations. A survey by Tessian revealed that 74% of IT leaders think deepfakes are a threat to their organisation’s security, yet businesses are not currently equipped to combat the growing risks.
Businesses need to pay attention to how deepfake attacks can manifest and what they might look like in practice, particularly as security teams will struggle to identify them technically as the techniques continue to improve. While most types of deepfake do not yet fool many people, deepfake audio is already incredibly convincing. Awareness of this is lagging, and very soon it will be almost impossible to distinguish what is real from what is not. From a business perspective, it is often assumed that only the C-suite is a likely target for attackers harnessing deepfake technology. Notable examples support this notion: in 2019, cybercriminals used deepfake voice technology to imitate a chief executive in order to carry out financial fraud. A UK CEO believed he was speaking on the phone with his boss, recognising his accent and the melody of his voice. In this case, the attackers managed to con the business out of £200,000.
But while high-profile, high-worth targets will generally remain the optimal goal, it’s important that businesses stay vigilant and educate their employees to understand that the threat is very real for them too. As phishing attacks mature by the day and the underlying technology grows more sophisticated, deepfakes are becoming a tangible business threat.
Remote working has expanded the attack surface, and spear phishing campaigns are on the rise. Hackers are successfully impersonating senior executives, third-party suppliers, the IT helpdesk and other trusted authorities in emails, building rapport over time and deceiving their victims. It was far harder for attackers to con employees when everyone was in the office, especially in an open-plan setting. With so many office workers at home, however, malicious actors can bank on employees not being physically alongside their manager, making a cyberattack far more likely to succeed.
To protect employees, IT leaders must ensure that security training includes awareness of deepfakes and emphasises that employees should authenticate identities before trusting a suspicious message, whether it arrives via text, email, voicemail or video. Employees are the front line of every organisation’s defences, so they must be armed with the knowledge of what to do when faced with any kind of cyberattack: what an attack might look like, and what they can do to help protect their organisation.
IT leaders also need to be more involved in conversations about introducing industry-wide guidelines and protocols to help counter attacks. Much like IoT security, a movement towards greater compliance and industry-wide protection will likely come from businesses themselves.
For CSOs and CISOs in particular, a strong security and compliance culture, backed by well-understood processes, should be implemented in order to combat deepfakes effectively. This can be helped by adopting the zero-trust principle of ‘never trust, always verify’. Simply following a dual-authorisation process to transfer money, or verifying instructions with a trusted source, such as calling someone’s direct line, can expose deepfake fraud. These processes are not new; many organisations will already have them in place, and some regulators already demand them.
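The dual-authorisation idea above can be sketched in a few lines. This is a minimal, hypothetical model (the approver list and function names are illustrative, not from any real system): a transfer is released only when two distinct authorised approvers have signed off and the instruction has been verified out of band, for example via a call-back on a trusted direct line.

```python
# Hypothetical dual-authorisation policy check. The approver list and the
# out-of-band flag are illustrative assumptions, not a real system's API.

AUTHORISED_APPROVERS = {"alice", "bob", "carol"}  # hypothetical approver roster

def transfer_allowed(approvers, verified_out_of_band):
    """Return True only if the dual-authorisation policy is satisfied:
    at least two *distinct* authorised approvers, plus independent
    out-of-band verification of the instruction."""
    distinct = set(approvers) & AUTHORISED_APPROVERS
    return len(distinct) >= 2 and verified_out_of_band

# A convincing deepfake voice call from "the CEO" alone is not enough:
print(transfer_allowed(["alice"], verified_out_of_band=False))        # False
print(transfer_allowed(["alice", "bob"], verified_out_of_band=True))  # True
```

Note that using a set means one person approving twice still counts only once, which is the point of the control.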
Another solution could be to use biometrics as proof of possession of a device and combine this with additional factors: known behaviour, contextual information, things only the authorised user would know, a PIN code, or multi-factor authentication (MFA) on a phone. By creating additional layers of security, organisations ensure that faking one aspect of an identity alone will not be enough.
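As a rough sketch of that layering, assuming each factor reduces to an independent pass/fail check (the factor names here are hypothetical), a verification policy can demand factors from different categories, so a single faked attribute such as a cloned voice fails on its own:

```python
# Hypothetical layered-verification policy. Factor names ("biometric",
# "mfa_code", "pin", etc.) are illustrative assumptions for this sketch.

def identity_verified(factors):
    """factors: dict mapping factor name -> bool (whether the check passed).

    Require a biometric AND a possession factor (MFA code or registered
    device) AND a knowledge/behaviour factor (PIN or known behaviour)."""
    possession = factors.get("mfa_code", False) or factors.get("device", False)
    knowledge = factors.get("pin", False) or factors.get("known_behaviour", False)
    return factors.get("biometric", False) and possession and knowledge

# A deepfaked biometric alone does not satisfy the policy:
print(identity_verified({"biometric": True}))                              # False
print(identity_verified({"biometric": True, "mfa_code": True, "pin": True}))  # True
```

The design choice is that the factors come from different categories (something you are, have, and know), so compromising one category is insufficient.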
With deepfake videos having increased by more than 330% between July 2019 and June 2020, it’s not a matter of if an attack will happen, but when. Organisations must understand the risks and guide employees through rigorous security training programmes that take into account newer threats, including how to respond to deepfake attack scenarios.