Is there anything AI can't do? It can write, draw, edit, and code, all at an inhuman speed. It turns out AI can also be used to protect organizations from cybersecurity threats and automate many of the tasks traditionally handled by a security analyst. Unfortunately, adversaries can leverage the same technology to write malicious code and carry out cyberattacks.

The emergence of advanced AI tools and frameworks has introduced attackers to new ways of simulating and executing their attacks. Any SOC analyst should be aware of the deadly capabilities of adversarial AI, which can find routes for defense evasion, lateral movement, enhanced persistence, and so much more.

Adversarial evolution

SOCs across the world receive an overwhelming number of alerts on a daily basis, and AI and ML can be leveraged to manage and triage these alerts. AI models use training data to improve the decision-making capabilities of security analytics solutions and other operational tools. However, attackers can manipulate a target model by injecting or modifying training data, corrupting the algorithm's logic. This process, known as data poisoning, can impair decision-making and bring catastrophic consequences to the organization.
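To make the idea concrete, here is a minimal sketch of data poisoning against a toy nearest-centroid classifier. The data, labels, and classifier are all invented for illustration; real poisoning attacks target far more complex models, but the mechanism is the same: mislabeled injected samples drag the learned decision boundary until malicious activity is scored as benign.

```python
# Toy illustration of label-flipping data poisoning against a
# nearest-centroid classifier. All data here is synthetic.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(samples):
    """samples: list of (feature, label) with labels 'benign'/'malicious'."""
    benign = [x for x, y in samples if y == "benign"]
    malicious = [x for x, y in samples if y == "malicious"]
    return {"benign": centroid(benign), "malicious": centroid(malicious)}

def classify(model, x):
    """Assign x to the class whose centroid is nearest."""
    return min(model, key=lambda label: abs(x - model[label]))

# Clean training data: benign activity scores cluster low, malicious high.
clean = [(1.0, "benign"), (2.0, "benign"), (3.0, "benign"),
         (8.0, "malicious"), (9.0, "malicious"), (10.0, "malicious")]

model = train(clean)
print(classify(model, 8.5))   # correctly flagged as malicious

# Poisoned data: the attacker injects high-scoring samples mislabeled
# as benign, dragging the benign centroid toward malicious territory.
poisoned = clean + [(12.0, "benign")] * 6

model_p = train(poisoned)
print(classify(model_p, 8.5))  # the same activity now slips through as benign
```

The attack needs no access to the model itself, only to the data it will be trained on, which is why provenance and validation of training data matter.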

Narrow AI is created to focus on a single problem; its applications include chatbots, medical diagnosis, facial recognition, and so on. Artificial general intelligence (AGI) is a hypothetical technology with human-level cognitive capabilities, allowing it to adapt to varied environments and requirements. Attackers currently have access only to narrow AI, where they leverage basic ML and NLP capabilities, which means AGI-driven attacks are, for now, out of reach. If attackers do gain that capability, it could result in more intelligent malicious code, faster vulnerability discovery, and advanced social engineering.

AI-powered cyberthreats

Let's take a deep dive into a few ways attackers can leverage AI to bypass an enterprise's security posture and execute damaging attacks.

  • Social engineering and spearphishing via deepfakes: Synthetic corporate personas and sophisticated impersonations of existing employees are a frightening trend in the cybercrime world. Attackers can now utilize AI to develop nearly indistinguishable deepfakes that work like a charm for spearphishing and other initial access techniques. In one recent scam, deepfake technology was used to impersonate a CEO's voice and carry out a successful attack.
  • Data poisoning: Generative adversarial networks (GANs), reinforcement learning, and many other ML techniques can be leveraged by threat actors to create malicious training inputs. Every ML model requires a training data set or knowledge repository to generate predictions or decisions, and manipulating this data can have irreversible effects on the model's integrity. Once the modified training data has been created, the attacker can inject the poison into the target model. This can be done in different ways, such as corrupting a public knowledge repository used for transfer learning, or planting watermark triggers that manipulate the model's detection capabilities.
  • Dynamic malware: AI is capable of generating polymorphic and metamorphic malware that changes its form as it propagates through the network, allowing it to bypass traditional intrusion detection systems and antivirus software. Dynamic malware is created by training a model on a large database of existing malware, using architectures such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs); CNNs use filters to detect spatial patterns, while RNNs capture sequential ones. Using these architectures, attackers can identify patterns within existing malware and modify them to create completely new malware with desired capabilities such as defense evasion. Further use of techniques like encryption can make the malware even harder to trace.

A disguised data packet entering the network, undetected by the firewall, and transforming into malware.
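The evasion trick behind polymorphic malware can be sketched in a few harmless lines: re-encode the same payload with a fresh key each generation, and the hash that a signature-based scanner would match changes every time while the decoded content stays identical. The payload string and single-byte XOR scheme below are simplifications invented for illustration; real polymorphic engines mutate actual code.

```python
# Why signature matching fails against polymorphic code: every
# "generation" has a different hash, but decodes to the same payload.
# Harmless simulation only -- the payload is an inert string.

import hashlib
import random

PAYLOAD = b"SIMULATED-PAYLOAD"  # inert stand-in for a malicious body

def xor(data: bytes, key: int) -> bytes:
    """XOR every byte with a single-byte key (XOR is its own inverse)."""
    return bytes(b ^ key for b in data)

# Each generation re-encodes the payload with a distinct random key.
keys = random.sample(range(1, 256), 5)
signatures = set()
for key in keys:
    encoded = xor(PAYLOAD, key)
    signatures.add(hashlib.sha256(encoded).hexdigest())
    assert xor(encoded, key) == PAYLOAD  # decoded behavior is unchanged

# Five generations, five different hashes, one underlying payload:
# a static blocklist of signatures cannot keep up.
print(len(signatures))  # 5
```

This is why the defensive systems discussed below lean on behavioral and anomaly-based detection rather than static signatures alone.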

There are many more AI-enabled threats that exist, including target vulnerability discovery, backdoor deployment, and advanced reconnaissance.

How to step up your shield game

The rise of AI has given attackers an advanced platform from which to execute lethal strikes against the enterprise network, and security professionals must stay aware of these new threat trends. Challenges are meant to be overcome, so here's how to fight back:

  • Fighting fire with fire: Defensive AI offers capabilities such as user and entity behavior analytics (UEBA) and alert profiling that can help your organization reduce the mean time to detect (MTTD) potential threats. Establishing behavioral baselines and quick response systems is the way to go.
  • Stronger antivirus and intrusion detection systems: Threat detection systems that can analyze large amounts of data and detect anomalous patterns are critical. They should be able to catch morphed malicious packets that slip past traditional endpoint security systems.
  • Hardening the training repository against malicious data: Validating and sanitizing training data improves the model's ability to detect and respond when adversaries attempt to introduce poisoned inputs or malicious code.
  • Employee awareness: Creating awareness among the people in your organization is vital, as they are the potential victims of advanced social engineering techniques, malware downloads, and spearphishing attacks. Employees should be instructed to contact the enterprise security team if they receive any suspicious emails.
  • Community collaboration: Organizations worldwide must constantly update public attack databases and repositories, and generate awareness about new waves of attacks, to respond to and remediate rising cyberthreats.
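The behavioral-baseline idea from the first recommendation can be sketched with a simple statistical model: learn what "normal" looks like for a user, then flag large deviations. The daily login counts and z-score threshold below are invented for illustration; production UEBA systems model many signals and use far richer statistics, but the principle is the same.

```python
# Minimal sketch of a UEBA-style behavioral baseline: model a user's
# normal daily login count as mean and standard deviation, then flag
# observations beyond a z-score threshold. Data is synthetic.

import statistics

def build_baseline(history):
    """history: list of daily event counts observed for one user."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations out."""
    mean, stdev = baseline
    z = abs(observed - mean) / stdev
    return z > threshold

# 30 days of a user's typical login counts (roughly 10 per day).
history = [9, 11, 10, 12, 8, 10, 11, 9, 10, 12,
           10, 9, 11, 10, 8, 12, 10, 11, 9, 10,
           11, 10, 9, 12, 10, 8, 11, 10, 9, 10]

baseline = build_baseline(history)
print(is_anomalous(baseline, 11))   # within the normal range -> False
print(is_anomalous(baseline, 60))   # sudden burst of logins -> True
```

Because this approach keys on deviation from learned behavior rather than known signatures, it can surface novel, AI-generated attack activity that a static rule set would miss.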

Conclusion

AI is a formidable tool, capable of both boosting and crippling an organization. Being aware of how adversaries can utilize the powerful capabilities of AI as a weapon is sobering and necessary, especially for organizations that use AI to deliver critical products and services to the wider public, such as vehicle manufacturers, government entities, and medical institutions.

Creating a multi-layered security approach, practicing real-time monitoring, and adopting Zero Trust are a few other ways organizations can fortify themselves against AI threats.

To learn more about how to leverage AI to defend against cyberattacks, sign up for a demo of ManageEngine Log360, a unified SIEM solution with integrated UEBA and SOAR capabilities.

© 2021 Zoho Corporation Pvt. Ltd. All rights reserved.