At Black Hat USA 2018, IBM researchers presented DeepLocker, a proof of concept designed to raise awareness of AI-powered threats, demonstrate how attackers can build stealthy malware that circumvents commonly deployed defences, and provide insight into how to reduce risk and deploy adequate countermeasures.

DeepLocker changes the game of malware evasion by taking a fundamentally different approach from existing evasive and targeted malware.

  • The malicious payload is hidden in a benign carrier application, such as video-conferencing software, to avoid detection by most antivirus and malware scanners.
  • AI makes the “trigger conditions” to unlock the attack almost impossible to reverse engineer.
  • The malicious payload will only be unlocked if the intended target is reached. It achieves this by using an AI model that can use several attributes to identify its target, including visual, audio, geolocation and system-level features.
  • Because it is virtually impossible to exhaustively enumerate all possible trigger conditions for the AI model, this approach makes it extremely challenging for malware analysts to reverse engineer the neural network and recover the mission-critical secrets, including the attack payload and the specifics of the target.
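The core trick can be sketched in a few lines of Python. This is my own illustrative assumption of the mechanism, not IBM's implementation: the payload is encrypted with a key derived from the model's output on the target's attributes, so the key never appears in the binary and only materialises when the intended target is observed. The `toy_model` function, the feature values, and the XOR "cipher" are all hypothetical stand-ins (a real attack would use a deep neural network and a proper symmetric cipher).

```python
# Hypothetical sketch of concealed-trigger payload unlocking, DeepLocker-style.
# Everything named here is an illustrative assumption, not the actual design.
import hashlib

def toy_model(features):
    # Stand-in for a neural network: a deterministic mapping from observed
    # attributes (visual, audio, geolocation, system-level) to an output vector.
    return bytes(f % 256 for f in features)

def derive_key(model_output: bytes) -> bytes:
    # The decryption key is a hash of the model's output. An analyst cannot
    # enumerate all inputs, so the key (and the target) stay hidden.
    return hashlib.sha256(model_output).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher, for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: lock the payload with a key derived from the target's features.
target_features = [104, 3, 977, 42]   # hypothetical, e.g. face-embedding buckets
payload = b"malicious payload"
locked = xor_crypt(payload, derive_key(toy_model(target_features)))

# Victim side: the payload unlocks only when observed features match the target.
observed = [104, 3, 977, 42]
unlocked = xor_crypt(locked, derive_key(toy_model(observed)))
assert unlocked == payload
```

Note that on a non-target machine the observed features yield a different key, and decryption simply produces garbage, with nothing in the binary to brute-force.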

Long story short: while malware like DeepLocker has not been seen in the wild to date, the AI tools are publicly available, as are the malware techniques being employed, so it is only a matter of time before adversarial actors combine them.

According to its creators:

  • A few areas that we should focus on immediately include the use of AI in detectors; going beyond rule-based security, reasoning and automation to enhance the effectiveness of security teams; and cyber deception to misdirect and deactivate AI-powered attacks.
  • Additionally, it would be beneficial to focus on monitoring and analysing how apps behave across user devices, and flagging events when a new app is taking unexpected actions. This detection tactic could help identify these types of attacks in the future.
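That detection tactic can be sketched minimally. This is my own assumed shape of such a monitor, not a real product: learn a baseline set of actions per app, then flag any action outside it.

```python
# Minimal sketch (an assumption of the tactic, not a production detector):
# compare each observed app action against a previously learned baseline.
class BehaviorMonitor:
    def __init__(self, baseline):
        # baseline: dict mapping app name -> set of expected actions,
        # learned during a trusted observation period.
        self.baseline = {app: set(actions) for app, actions in baseline.items()}

    def check(self, app: str, action: str) -> bool:
        """Return True if the action is unexpected for this app (flag it)."""
        return action not in self.baseline.get(app, set())

mon = BehaviorMonitor({"videoconf": {"open_camera", "send_video", "record_audio"}})
mon.check("videoconf", "send_video")   # expected  -> False
mon.check("videoconf", "spawn_shell")  # unexpected -> True
```

A benign carrier app that suddenly decrypts and launches a payload would show up here as a cluster of never-before-seen actions.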

Yay! Arms race! Monitoring! Job security! Excuse my sarcasm.

  • Last modified: 2019/05/20 07:37