Artificial intelligence (AI) is a powerful technology, and that same power makes it ripe for exploitation by cybercriminals. The only way security leaders can stay ahead of bad actors is to gain a true understanding of how AI can be weaponized. Only then can they develop effective strategies for confronting AI-driven threats head-on.
As AI grows in adoption and sophistication, cybercriminals are looking for ways to seize upon its potential. The Electronic Frontier Foundation was already warning about potential malicious uses of AI back in 2018, including threats to digital, physical, and political security. And now, AI precursors combined with swarm technology can be used to infiltrate a network and steal data.
Hacking into a network used to take months. But with AI and machine learning (ML) technologies on their side, cybercriminals can see this time span reduced to a matter of days. As more AI-enhanced attacks are orchestrated, the techniques used in these events become increasingly available and inexpensive for more and more cybercriminals.
Automated and scripted techniques can also exponentially increase the speed and scale of a cyberattack. The ability to automate the entire process of mapping networks, discovering targets, finding vulnerabilities, and launching a custom attack significantly increases the volume of attacks even a single bad actor can pull off.
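To illustrate how little effort the "discovering targets" step takes once it is scripted, here is a minimal, benign sketch in Python: a parallelized TCP port check against localhost only, using an arbitrary hypothetical port range. Real attack toolchains chain many such automated stages together, which is what drives the speed and scale described above.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Illustrative only: probe a handful of ports on localhost to show how
# trivially the "find open services" step can be scripted and parallelized.
HOST = "127.0.0.1"
PORTS = range(8000, 8010)  # arbitrary example range, not a real target list

def is_open(port: int) -> bool:
    """Return True if a TCP connection to HOST:port succeeds within 0.5s."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((HOST, port)) == 0

# Ten worker threads probe all ports concurrently instead of one at a time.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = dict(zip(PORTS, pool.map(is_open, PORTS)))

open_ports = sorted(p for p, ok in results.items() if ok)
print("open:", open_ports)
```

Scaling this from ten ports to thousands of hosts is a one-line change, which is precisely why a single bad actor armed with automation can generate the attack volume of a much larger group.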
Often, network security architectures are not designed to stand up to these types of attacks. For example, it is not uncommon for an organization to use 30 or more security-related point products within its environment. With such a setup, getting a big-picture view of the organization's security architecture requires manually consolidating data across the different applications.
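The consolidation problem is easy to see in miniature. The sketch below is a hypothetical example (all three product formats and field names are invented for illustration) of normalizing alerts from a firewall, an EDR tool, and a SIEM — each with its own timestamp convention and severity scale — into a single chronological view. Multiply this by 30+ products and the manual effort becomes clear.

```python
from datetime import datetime, timezone

# Hypothetical raw alerts from three point products, each with its own schema.
firewall_alert = {"ts": "2020-09-01T12:00:00+00:00", "sev": 4, "msg": "port scan detected"}
edr_alert = {"time": 1598961600, "severity": "high", "description": "suspicious process"}
siem_alert = {"created": "2020-09-01 12:01:10", "level": 3, "summary": "failed logins"}

def normalize_firewall(a):
    return {"timestamp": datetime.fromisoformat(a["ts"]),
            "severity": a["sev"],  # already on a 1-5 numeric scale
            "message": a["msg"], "source": "firewall"}

def normalize_edr(a):
    sev_map = {"low": 1, "medium": 3, "high": 4, "critical": 5}
    return {"timestamp": datetime.fromtimestamp(a["time"], tz=timezone.utc),
            "severity": sev_map[a["severity"]],  # map words onto the 1-5 scale
            "message": a["description"], "source": "edr"}

def normalize_siem(a):
    ts = datetime.strptime(a["created"], "%Y-%m-%d %H:%M:%S")
    return {"timestamp": ts.replace(tzinfo=timezone.utc),
            "severity": a["level"],
            "message": a["summary"], "source": "siem"}

# Merge the normalized events into one chronologically ordered view.
unified = sorted(
    [normalize_firewall(firewall_alert), normalize_edr(edr_alert), normalize_siem(siem_alert)],
    key=lambda e: e["timestamp"],
)

for event in unified:
    print(f'{event["timestamp"].isoformat()} [{event["source"]}] '
          f'sev={event["severity"]} {event["message"]}')
```

This is exactly the kind of glue work that an integrated security fabric performs automatically, and that humans cannot perform fast enough during a network-wide attack.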
This also leaves such organizations unable to quickly launch an effective coordinated response to a network-wide attack. And as cybercriminals continue to minimize their exploit times, IT security teams are left struggling to detect attacks at the same speed. In fact, the 2020 Ponemon Cost of a Data Breach Report notes that the average breach detection gap (BDG), which is the time between the initial breach of a network and its discovery, is 280 days. The report also found that the average cost of a data breach in the United States is $8.64M, 124% higher than the global average ($3.86M). Considering this, it is more crucial than ever that organizations adopt new strategies to make sure their networks can function as cohesively as possible.
A skills gap exists in the cybersecurity sector, with security leaders often struggling to bring qualified staff on board. Experts in AI-driven security, in particular, are even harder to come by.
This is especially dangerous given that as AI continues to evolve, so too will its malicious uses. Organizations now face attacks that leverage self-learning technologies capable of quickly finding vulnerabilities, selecting or adapting malware, and actively fighting off the security measures put in place to stop them. And by using AI alongside emerging attack methods (e.g., swarmbots), bad actors gain the ability to break an attack down into its functional elements. These elements can then be assigned to individual members of a swarm, which communicate interactively with one another to accelerate the attack.
When working to defend against these AI-enhanced attack strategies, security teams must embrace a “fighting fire with fire” approach. By understanding how cybercriminals find their success and taking a few pages from their playbooks, security leaders can redesign their strategies in order to level the playing field.
While AI technology can do amazing things, it carries both positive and negative implications. Cybersecurity professionals must confidently employ that same advanced technology in countermeasures to protect networks from the bad actors exploiting it. A security strategy that uses AI-enhanced technologies is vital for defending against cybercriminals, especially as networks and the attacks against them grow more complex and sophisticated.
Find out how Fortinet integrates AI and machine learning capabilities across our Security Fabric to detect, identify, and respond to threats at machine speed.
This is a summary of an article written for Security Magazine by Derek Manky, Chief of Security Insights and Global Threat Alliances at FortiGuard Labs. The entire article can be accessed here.