#NSBCS.032 - “Digital Doppelgängers: Deepfake Deception Strategies”

Source: NSB Cyber

Digital Doppelgängers: Deepfake Deception Strategies

Deepfakes, highly realistic digital forgeries created using advanced AI and machine learning, pose a significant threat in the world of cybersecurity. These synthetic media forms - manipulated videos, audio, and images - can be weaponised for various malicious activities, including disinformation campaigns, financial fraud, and social engineering attacks.

One of the highlights of last week's Australian Information Security Association (AISA) #SydSec conference was a fascinating talk given by Sieg Lafon on the rise of today's deepfake challenges. His core message was the growing gap between illusion and reality in today's society, and how deepfake technology is being leveraged to deceive, astonish, and test even the most robust cyber defences. The ease with which deepfakes can be produced and disseminated has lowered the barrier to entry for cybercriminals. By leveraging advanced cloud computing and deepfake-as-a-service offerings on the Dark Web, attackers can create and distribute deepfakes in real time, making them increasingly difficult to detect and counter. Europol has warned that deepfake technology can be used in videoconferencing, live-streaming, and even television broadcasts, creating new avenues for deepfake-as-a-service operations.

A notable incident of this kind took place in February of this year, when a Hong Kong-based company was defrauded of $25 million USD ($200 million HKD) in a sophisticated deepfake scam: cybercriminals used AI-generated audio and video in a live Microsoft Teams call to impersonate the firm's CFO, convincing an employee to transfer the funds. Read the article here. This incident highlights the growing sophistication of deepfake technology in facilitating large-scale financial fraud, and serves as a reminder of the threat it poses even in today's relatively cyber-aware world. To mitigate these risks, cybersecurity experts recommend implementing real-time verification, passive detection techniques, and comprehensive response plans, explored in more detail in this article.

To counter the emergence of deepfakes, one key approach is the development and deployment of advanced detection technologies. Just as Alan Turing broke the Enigma machine's ciphers with a machine counterpart, deepfake detection technologies pit machine against machine in the same vein.

Deepfake detection tools leverage advanced analytics and machine learning to analyse media for signs of manipulation, such as inconsistencies in lighting, shadows, or lip movements that are often missed by the human eye. Techniques like using AI algorithms to detect phoneme-viseme mismatches, or examining biological signals in videos, can help identify deepfakes. Furthermore, implementing blockchain technology for media provenance can ensure the authenticity and traceability of digital content, making it harder for deepfakes to deceive.
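The provenance idea above can be illustrated with a minimal sketch: at publication time, a cryptographic digest of the media is registered in a tamper-evident ledger, and any later copy can be checked against it. The names (`ProvenanceLedger`, `fingerprint`) and the in-memory dictionary standing in for a blockchain are hypothetical simplifications, not a real product's API.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Return a SHA-256 digest identifying this exact media content."""
    return hashlib.sha256(media_bytes).hexdigest()

class ProvenanceLedger:
    """Toy stand-in for a blockchain: maps registered digests to sources."""

    def __init__(self) -> None:
        self._records: dict[str, str] = {}

    def register(self, source: str, media_bytes: bytes) -> str:
        """Record the content digest at publication time; return it."""
        digest = fingerprint(media_bytes)
        self._records[digest] = source
        return digest

    def verify(self, media_bytes: bytes):
        """Return the registered source if the content is unaltered, else None."""
        return self._records.get(fingerprint(media_bytes))

ledger = ProvenanceLedger()
original = b"frame data from the authentic video"
ledger.register("NSB Cyber press office", original)

print(ledger.verify(original))                       # registered source
print(ledger.verify(b"frame data, subtly altered"))  # None: content changed
```

Because even a one-byte change produces a completely different digest, a deepfake derived from registered footage fails verification, while the genuine file remains traceable to its source.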

The potential for deepfakes to falsify electronic evidence further complicates legal proceedings and poses a significant challenge to the legal system. These hyper-realistic forgeries can be used to create false video or audio recordings that appear authentic, potentially misleading judges, juries, and investigators, while traditional methods of verifying evidence become inadequate. The authenticity of digital evidence can be questioned, leading to longer trials and increased costs. Moreover, the ability of deepfakes to undermine trust in legitimate evidence erodes the foundation of judicial processes, making it imperative for legal frameworks to adapt and incorporate advanced verification technologies. Far simpler methods may also be used to combat deepfakes: simply asking questions that would be difficult for a threat actor to answer in a real-time conversation, such as personal information known only within the team, may be enough to validate or invalidate the interaction.

As deepfake technology evolves, it threatens to undermine public trust in digital content, creating an environment of "reality apathy" where distinguishing between genuine and fake information becomes increasingly challenging. This erosion of trust can destabilise societies, fuel political and social unrest, and compromise the reliability of digital communications and transactions. Therefore, the cybersecurity industry must prioritise the development of advanced detection tools and collaborative defence strategies to address the multifaceted threat posed by deepfakes.

For info on NSB Cyber’s Cyber Resilience or Defence capabilities, or to book a meeting with our team, click here.


What we read this week

  • TeamViewer Attack Linked to Russian Hackers - TeamViewer confirmed that Russian hacking group Cozy Bear (APT29, Midnight Blizzard) breached their corporate IT environment using an employee's credentials. The company assured that the attack was confined to their corporate IT network, with no access to their product environment or customer data. TeamViewer highlighted the segregation of their corporate IT network from other systems to prevent unauthorised access and lateral movement. The breach has prompted cybersecurity firms to advise heightened monitoring of TeamViewer installations, and TeamViewer continues to investigate the incident.

  • Chinese Hackers Exploiting Cisco Switches Zero-Day to Deliver Malware - Chinese hackers from the group Velvet Ant are exploiting a zero-day vulnerability (CVE-2024-20399) in Cisco NX-OS software, which affects multiple Cisco switch series. This vulnerability allows authenticated, local attackers to execute arbitrary commands as root on the affected devices. By leveraging this flaw, the attackers can remotely connect to compromised devices, upload files, and execute code without triggering syslog messages, thereby concealing their actions. The exploitation requires administrator credentials, and Cisco is aware of attempted exploits as of April 2024.

  • Perth Man Behind Fraudulent Wi-Fi Networks to Steal Data - Michael Clapsis, a 42-year-old West Australian, has been charged with creating fake free public Wi-Fi networks in locations such as airports and domestic flights to steal personal data. Clapsis allegedly used a portable Wi-Fi device to set up "evil twin" networks that mimicked legitimate networks, capturing login credentials from users who connected. The Australian Federal Police (AFP) seized a laptop, mobile phone, and a portable wireless access device during searches, finding dozens of personal credentials.

  • New Zealand’s Elite Fitness confirms DragonForce ransomware attack - New Zealand's Elite Fitness has confirmed a ransomware attack by the DragonForce gang, impacting employee and customer data. The attackers leaked 5.31 gigabytes of data on their dark web site, including sensitive personal information and business documents. Elite Fitness detected the attack on June 26, 2024, and has since contacted affected individuals and notified relevant authorities. The company's website is offline as they continue to address the breach.

  • Home Affairs Funding Health Sector Threat Intelligence Platform - Australia's Department of Home Affairs is providing up to AUD 6.423 million over three years to establish a Health Sector Information Sharing and Analysis Centre (ISAC). The ISAC will facilitate industry-to-industry threat intelligence sharing to enhance the health sector's ability to respond to cyber threats. The initiative aims to improve cyber resilience, establish an intelligence-sharing forum, and ensure compatibility with the national cyber threat intelligence platform.

