For many criminal organizations, attack techniques are evaluated not only in terms of their effectiveness, but also in terms of the overhead required to develop, modify, and implement them. To maximize revenue, they are responding to digital transformation by adopting mainstream business strategies, such as agile development, to more efficiently produce and refine their attack software while reducing risk and exposure in order to increase profitability.
Knowing this, one defensive response is to make changes to people, processes, and technologies that impact the attacker's economic model. For example, using machine learning and automation to harden the attack surface, whether by keeping systems updated and patched or by identifying threats more quickly, forces criminals to shift attack methods and accelerate their own development efforts.
In an effort to adapt to the increased use of machine learning and automation on the part of their targets, we predict that the cybercriminal community is likely to adopt the following strategies, which the cybersecurity industry as a whole will need to closely follow.
Fuzzing has traditionally been a sophisticated technique used in lab environments by professional threat researchers to discover vulnerabilities in hardware and software interfaces and applications. They do this by injecting invalid, unexpected, or semi-random data into an interface or program and then monitoring for events such as crashes, undocumented jumps to debug routines, failing code assertions, and potential memory leaks.
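As a minimal illustration of the technique, here is a toy mutation fuzzer in Python. The record format, the parser, and its planted indexing bug are all invented for this sketch; real fuzzers target far more complex interfaces:

```python
import random

def parse_record(data: bytes) -> int:
    """Toy parser under test: a 2-byte big-endian length prefix, then a payload.
    It contains a planted bug: it trusts the declared length when indexing."""
    if len(data) < 2:
        raise ValueError("too short")
    length = int.from_bytes(data[:2], "big")
    payload = data[2:]
    last = payload[length - 1]          # bug: unchecked index into the payload
    if length != len(payload):
        raise ValueError("length mismatch")
    return last

def mutate(seed: bytes, n_flips: int = 3) -> bytes:
    """Produce a semi-random test case by flipping a few bytes of a valid seed."""
    out = bytearray(seed)
    for _ in range(n_flips):
        out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

def fuzz(seed: bytes, rounds: int = 1000) -> list:
    """Inject mutated inputs and monitor for unexpected failures (crashes)."""
    crashes = []
    for _ in range(rounds):
        case = mutate(seed)
        try:
            parse_record(case)
        except ValueError:
            pass                        # expected, handled rejection
        except Exception as exc:        # anything else is a potential vulnerability
            crashes.append((case, exc))
    return crashes
```

Mutating the length prefix upward quickly produces inputs whose declared length exceeds the actual payload, tripping the unchecked index; that kind of unexpected failure is exactly what a fuzzer monitors for.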
One reason why fuzzing is used so infrequently, or in such limited ways, by criminals is that it is very difficult to do. The reality is that only a tiny group of people have the expertise needed to develop and run effective fuzzing tools, which is why their use tends to be limited to simple things like DDoS attacks, and why the discovery and use of Zero-Day exploits by cybercriminals is rare.
The reality, however, is that there is likely an incalculable number of vulnerabilities that could be discovered and exploited in commercially available software and operating systems right now using fuzzing technologies, but there simply aren't enough purpose-built fuzzing tools or skilled developers available to discover them.
Artificial Intelligence will change that. AI is already beginning to be used to solve the problem of discovering and exploiting software bugs.
Prediction: AI Fuzzing
Applying AI and machine learning models to fuzzing will enable it to become more efficient and effective. Black hat criminals will be able to develop and train fuzzing programs to automate and accelerate the discovery of Zero-Day attacks. Ultimately, such tools could be pointed at a target and automatically mine it for Zero-Day exploits. I call this approach AIF, or Artificial Intelligence Fuzzing.
AIF would include two machine learning phases, Discovery and Exploitation. During the Discovery phase, the AIF tool would learn about the functionalities and requirements of a new software target, including the patterns it uses for structured data. Then, in the Exploitation phase, it would begin to inject intentionally designed structured data into that software, monitor the outcome, use machine learning to refine the attack, and eventually force the target to break, thereby discovering a vulnerability and an exploit at the same time.
This supervised machine learning approach, guided by a trained attacker, could then be repeated continuously, allowing a criminal to run combinations of attacks that continually discover and exploit Zero-Day vulnerabilities. And in an environment where a potentially endless supply of Zero-Day attacks is available, even advanced tools such as sandboxing would be quickly overwhelmed.
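To make the Discovery/Exploitation loop concrete, the sketch below simulates feedback-guided mutation against a purely hypothetical target: the `MAGIC` header, the crash condition, and the depth signal are invented stand-ins for what a real system would have to learn, so this is an analogy for the loop, not an implementation of it:

```python
import random

MAGIC = b"FUZZ!"  # hypothetical structured header; a stand-in for learned format rules

def target(data: bytes) -> None:
    """Hypothetical target program that breaks on one specific structured input."""
    if data[:len(MAGIC)] == MAGIC:
        raise RuntimeError("crash: unhandled parser state")

def feedback(data) -> int:
    """Discovery-phase signal: how deeply an input penetrates the parser."""
    depth = 0
    for got, want in zip(data, MAGIC):
        if got != want:
            break
        depth += 1
    return depth

def aif_loop(rounds: int = 60000):
    """Exploitation phase: mutate at the frontier, keep whatever goes deeper."""
    best = bytearray(len(MAGIC))
    for _ in range(rounds):
        case = bytearray(best)
        case[feedback(best)] = random.randrange(256)  # refine where progress stalled
        try:
            target(bytes(case))
        except RuntimeError:
            return bytes(case)   # vulnerability and working trigger found together
        if feedback(case) > feedback(best):
            best = case
    return None
```

The key property the sketch captures is that the loop needs no human guidance between iterations: each round's outcome directly steers the next round's input.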
AIF’s Impact on the Cybercrime Economy
The acceleration in the number and variety of available vulnerabilities and exploits, including the ability to quickly produce Zero-Day exploits and even offer Zero-Day Mining-as-a-Service, may radically impact the types and costs of services available on the dark web. Zero-Day Mining-as-a-Service would completely change how organizations approach security, because there is no way to anticipate where these Zero-Days are located, nor how to properly defend against them, especially with the sorts of isolated, legacy security tools most organizations have deployed in their networks today.
Dramatic advances in swarm-based intelligence and technologies continue to drive us closer to seeing swarms used as both attack and cyberdefense tools. For example, a new methodology was recently announced by scientists in Hong Kong that uses natural swarm behaviors to control clusters of nano-robots. These micro-swarms can be directed to perform precise structural changes with a high degree of reconfigurability, such as extending, shrinking, splitting, and merging.
This same sort of technology can potentially be used to create large swarms of intelligent bots that can operate collaboratively and autonomously. They will not only raise the bar in terms of the technologies needed to defend organizations, but like Zero-Day Mining, they will also have an impact on the underlying criminal business model. Ultimately, as exploit technologies and attack methodologies evolve, their most significant impact will be on the economic models employed by the cybercriminal community.
Right now, the criminal ecosystem is very people-driven. Professional hackers-for-hire build custom exploits for a fee, and even new advances such as Ransomware-as-a-Service require black hat engineers to stand up different resources, such as building and testing exploits and managing back-end C2 servers. But when you start talking about delivering autonomous, self-learning Swarms-as-a-Service, the amount of direct interaction between a hacker-consumer and a black hat entrepreneur drops dramatically.
A la Carte Menus
The ability to subdivide a swarm into different tasks to achieve a desired outcome means that resources in a swarm network could be allocated or reallocated to address specific challenges encountered in an attack chain. Criminal consumers could preselect different types of swarms to use in a custom attack, such as:

- Pre-programmed swarms that use machine learning to break into a device or network
- Swarms that perform AI Fuzzing to detect Zero-Day exploit points
- Swarms designed to move laterally across a network to expand the attack surface
- Swarms that can evade detection and/or collect and exfiltrate specific data targets
- Swarms designed to cross the cyber/physical device divide to take control of a target's physical as well as networked resources
Machine learning is one of the most promising tools in the defensive security toolkit. Devices and systems can be trained to perform specific tasks autonomously, such as taking effective countermeasures against a detected attack. Machine learning can also be used to effectively baseline behavior and then apply behavioral analytics to identify sophisticated threats that span environments or leverage evasion strategies. Tedious manual tasks, such as tracking devices based on their exposure to current threat trends and automatically applying patches or updates can also be easily handed over to a properly trained system.
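As a simple illustration of behavioral baselining, the snippet below learns a statistical baseline from historical measurements and flags outliers. Real systems use far richer models than a mean and standard deviation, and the three-sigma threshold here is just a conventional starting point:

```python
import statistics

def baseline(samples):
    """Learn 'normal' behavior from historical observations (e.g. bytes per minute)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag any observation that falls far outside the learned baseline."""
    return abs(value - mean) > threshold * stdev
```

For example, trained on typical per-minute traffic volumes around 100, the detector passes an observation of 104 but flags one of 500.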
Prediction: Poisoning Machine Learning Systems
This process, however, can be a double-edged sword. Rather than trying to outthink or outperform a system enhanced with machine learning, it may be easier to simply target the machine learning process itself. The methodology and tools used to train a device or system to perform a specific task are also its Achilles' heel. For example, an attacker who is able to compromise a machine learning system and inject instructions could train devices or systems not to apply patches or updates to a particular machine so that it remains vulnerable to attack, to ignore specific types of applications or behaviors, or to not log specific traffic in order to evade detection.
Machine learning models already regularly use data from potentially untrustworthy sources, such as crowd-sourced and social media data, as well as user-generated information such as satisfaction ratings, purchasing histories, or web traffic. Because of this, cybercriminals could potentially use malicious samples to poison training sets to ignore threats, or even introduce backdoors or Trojans, with relative ease. To prevent this, extra care must be taken to ensure that all machine learning resources and protocols are carefully monitored and protected.
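A toy demonstration of training-set poisoning, using an invented nearest-centroid "model" (production systems are far more complex, but the failure mode is the same): flooding the training data with malicious samples mislabeled as benign drags the benign class toward the attacker's traffic, so later malicious activity is classified as normal.

```python
def train_centroids(samples):
    """Nearest-centroid 'model': the average feature vector for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + f for s, f in zip(sums[label], features)]
    return {lbl: [s / counts[lbl] for s in sums[lbl]] for lbl in sums}

def classify(model, features):
    """Assign the label whose centroid is closest to the observation."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda lbl: dist(model[lbl]))
```

Trained on clean data, traffic resembling known threats is flagged; after a handful of mislabeled samples are injected, the same traffic slips through as benign.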
To address the challenges we see on the horizon, the cybersecurity community is going to have to change its traditional approaches to security. The most effective strategy is likely to be one that takes aim at the attackers' economic model. Forcing cybercriminals to re-engineer their attacks, for example, would be expensive and time-consuming for them, and may drive them to seek easier prey.
Deception strategies have been available for some time. But only recently—given the increase in sophistication of attacks that have managed to easily breach traditional perimeter security defenses—has their implementation become more essential.
The basic idea is to present an attacker with too many choices, most of which are dead ends, forcing them to slow down and potentially give away their position. If you can generate enticing traffic from a large number of databases, only one of which is real, attackers have to slow down to evaluate each data source and potentially even chase down each option. But what if those dead-end options not only contained interesting data, but also existed in an environment where unexpected traffic immediately stands out? That would not only increase a defender's ability to detect an invader, even one using evasion technology, but could also trigger an automated response to evict them from the network. This strategy increases both the risk of detection and the cost of running an attack.
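A minimal sketch of the decoy idea, with hypothetical asset names: legitimate users never have a reason to touch a decoy, so any access to one is a high-confidence alert.

```python
def deploy_decoys(real_assets, n_decoys):
    """Surround the real data sources with look-alike decoys."""
    decoys = {f"db-{i:03d}" for i in range(n_decoys)}
    return set(real_assets) | decoys, decoys

def detect_intruders(access_log, decoys):
    """Any host access that lands on a decoy reveals a probing attacker."""
    return [host for host in access_log if host in decoys]
```

The attacker sees dozens of apparently identical databases; the defender knows exactly which ones nobody should ever query.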
This approach will impact the cybercriminal business model where targets are chosen based on risk/reward and ROI strategies. Adding layers of complexity that require deep, hands-on analysis means that the cost of launching an attack suddenly escalates. And because most cybercriminals tend to follow the path of least resistance—either to maximize ROI or because many of them are actually quite lazy—they are most likely going to find a more accessible network to exploit.
While advances in security technologies enable some defenders to detect increasingly sophisticated attacks, the vast majority of deployed security solutions still rely on signature matching or other simple detection methods. So, one of the easiest ways for a cybercriminal to maximize their investment in an existing attack solution is to simply make minor changes to their malware. Even something as basic as changing an IP address can enable malware to evade detection by traditional security tools.
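A sketch of why minor changes defeat signature matching, using a hash-based signature over an invented payload string (the C2 address is from a documentation range): changing a single byte, such as one digit of the address, produces a completely different hash, so the known signature no longer matches.

```python
import hashlib

def signature(payload: bytes) -> str:
    """A traditional static signature: the hash of a captured malware sample."""
    return hashlib.sha256(payload).hexdigest()

# Signature database built from one captured sample (contents are illustrative)
known_bad = {signature(b"malware-v1; c2=203.0.113.7")}

def detected(payload: bytes) -> bool:
    """Exact-match detection: only a byte-identical sample is flagged."""
    return signature(payload) in known_bad
```

This is why behavioral detection and shared threat intelligence matter: they describe what the malware does rather than what its bytes happen to be.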
One of the most common ways to keep up with such changes is through the active sharing of threat intelligence. New open collaboration efforts currently underway between threat research organizations, security manufacturers, and law enforcement and other government agencies will increase the efficacy, timeliness, and sophistication of threat intelligence. Increasingly collaborative efforts, such as the Cyber Threat Alliance, not only share data between researchers, but also publish that research in the form of playbooks that expose the tactics used by attackers.
This will require cybercriminals to make more complicated and expensive changes to their attack tools, code, and solutions. And as these open collaboration forums expand, organizations will soon also be able to apply behavioral analytics to live data feeds to predict the future behavior of malware, making the digital marketplace safer for everyone.
Getting in front of the cyberthreat paradigm requires organizations to rethink their security strategies in terms of how to impact the underlying economic strategies of criminal organizations. Rather than engaging in a perpetual arms race, organizations will be able to leverage the power of automation to anticipate threats and target the economic motivations of cybercriminals in order to force them back to the drawing board.
Disrupting the criminal economic model, however, can only be achieved by tightly integrating security systems into a cohesive security fabric framework that can freely share information, perform logistical and behavioral analysis to identify attack patterns, and then incorporate that intelligence into an automated system that can not only respond to attacks in a coordinated fashion, but actually begin to anticipate criminal intent and attack vectors.
Read our blog about the latest Fortinet Threat Landscape report and the indices for botnets, malware, and exploits for Q3, 2018.