AI is no longer just the latest Internet buzzword. The technology is already in use in industries like manufacturing and financial services, and many firms have adopted and integrated AI into their daily operations. Why? Because it promises savings in cost and time, improved productivity, and greater accuracy.
In the field of cybersecurity, AI and machine learning (ML) automate labor-intensive and repetitive tasks, thus aiding in faster threat detection, prevention, and remediation. Like any other technology, though, AI can also be used by cybercriminals for their own gain.
Let’s examine the good and the bad in the following sections.
How Whitehats Use AI
AI, particularly ML, helps threat investigation and incident response teams detect large volumes of anomalies (e.g., malicious files, traffic, emails, and links) within a network at a rapid rate. It can pinpoint similarities across indicators of compromise (IoCs), such as malicious URLs, emails, files, or abnormalities in network traffic, allowing IT security staff to spot connected sites, email and IP addresses, and files for potential blocking.
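The IoC correlation described above can be sketched in a few lines. This is a minimal, hypothetical example, not a real threat-intelligence pipeline: the IoC records and field names are invented for illustration, and the "similarity" used here is simply a shared domain.

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical IoC records; real feeds would carry far more context.
iocs = [
    {"type": "url", "value": "http://malicious.example/payload.exe"},
    {"type": "url", "value": "http://malicious.example/login"},
    {"type": "email", "value": "billing@malicious.example"},
]

def related_iocs(iocs):
    """Cluster IoCs by the domain they contain, surfacing connected
    sites and email addresses for potential blocking."""
    clusters = defaultdict(list)
    for ioc in iocs:
        if ioc["type"] == "url":
            domain = urlparse(ioc["value"]).hostname
        else:  # email address: take the part after the @
            domain = ioc["value"].split("@")[-1]
        clusters[domain].append(ioc["value"])
    return dict(clusters)
```

With the sample data above, all three IoCs land in one cluster keyed by `malicious.example`, so an analyst reviewing any one of them would immediately see the other two.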
AI also helps cybersecurity professionals proactively block likely threat sources by quickly analyzing complex behavioral patterns. An example would be when an IP address that is not part of a company’s IP range tries to access an internal file. An AI system programmed to disallow such an action should flag this and alert IT security personnel.
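The rule just described reduces to a simple check. Here is a minimal sketch, assuming a company network of 10.0.0.0/8 and a print-based alert; both are placeholders for whatever range and alerting channel a real deployment would use.

```python
import ipaddress

# Assumed company IP range; replace with the organization's actual networks.
COMPANY_NETWORK = ipaddress.ip_network("10.0.0.0/8")

def flag_external_access(source_ip: str, resource: str) -> bool:
    """Flag access attempts on internal resources from IPs outside
    the company range, as described in the text."""
    if ipaddress.ip_address(source_ip) not in COMPANY_NETWORK:
        print(f"ALERT: external IP {source_ip} tried to access {resource}")
        return True
    return False
```

For example, `flag_external_access("198.51.100.7", "payroll.xlsx")` raises an alert, while the same request from `10.2.3.4` passes silently, which is exactly the gap the next paragraph discusses: the check cannot tell a remote employee from an attacker.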
The detection may, however, not be entirely accurate, because AI systems can’t discriminate between anomaly types. The user of the external IP address may be a legitimate employee working outside the office. While the system can identify uncommon behaviors, it can’t tell whether a given behavior is malicious or benign. As such, AI systems need human intervention to thoroughly verify whether abnormal behavior is in fact malicious.
Also, because an AI system is only as good as the predefined data it is fed, it may not be as reliable as human analysts when it comes to detecting unknown threats. It does, however, do away with repetitive tasks, freeing up researchers’ plates to deal with more critical security incidents and events.
How Blackhats Use AI
Automation is, as we know, at the heart of AI. As such, some researchers believe that it’s bound to lead to an increase in cyberattacks should threat actors use AI to automate tasks. Consider spear-phishing, in which attackers impersonate their targets’ colleagues or peers and send highly targeted emails designed to steal personally identifiable information (PII).
Much of the success of a spear-phishing attack lies in how legitimate the email sounds. That requires thorough research on intended targets on the attackers’ part, which is time-consuming. Threat actors can, however, use AI to automate intelligence gathering. They can use chatbots, for instance, to obtain as much information about their targets as possible.
AI systems can also automate the selection of targets based on a particular algorithm (e.g., how likely they are to download an attachment or click a link in a malicious email). Cyberattackers know that preying on humans, still said to be the weakest link in cybersecurity, works. Harnessing AI’s predictive ability can thus enhance the efficacy and scalability of attacks, putting organizations in even bigger trouble than they are in now.
Finally, like any new technology (e.g., smart cars and devices), AI systems are not invulnerable. Cybercriminals can exploit bugs or vulnerabilities in them to gain entry into target networks. As more companies adopt the technology, more threat actors are bound to probe it for weaknesses in pursuit of as many victims as possible.
AI in Cybersecurity: The Verdict
We know that cyberattacks aren’t going away anytime soon. Experts predict, in fact, that cybercrime will cost companies US$6 trillion by 2021. Given the severe ramifications that threats pose, organizations need to continually think of ways to stay ahead of the bad guys. AI can help, but like any other technology, it can also be a curse. It all depends on who uses it and what their intentions are.