There are more than 700 million known malware files in existence today. In the last quarter of 2017 alone, the number of malware families increased by 25%, while unique variants grew 19%. Meanwhile, the latest trends include malware-as-a-service, which enables novices to launch an attack in minutes, while cybercriminals are beginning to use artificial intelligence (AI) and swarm technologies to evolve their attacks and add new vectors. These new smart attacks enable cybercriminals to sustain threats by continually identifying new vulnerabilities and matching them to evolving exploits.
Traditional signature-based antivirus defenses simply cannot keep pace with the volume, variety, and velocity of these kinds of assaults. Instead, organizations need sophisticated advanced threat protection (ATP) capabilities, including next-generation sandboxing, to detect unknown threats and initiate automated countermeasures before the network is impacted.
Unfortunately, the evolving threat landscape isn’t the only challenge companies face in maintaining an effective security architecture. Today’s organizations are scrambling to compete in a new digital marketplace, which in turn is driving the unprecedented digital transformation (DX) of their networks. They’re adopting new technologies—cloud services, Internet of Things (IoT) devices, flexible networks, workforce mobility, and consumer-oriented applications and services—that expand and enhance operational capabilities. However, these advancements have also created new potential attack vectors.
Unfortunately, while networks are rapidly becoming distributed, elastic, and borderless, security has largely remained a legacy patchwork of disparate solutions, each working in isolation to detect and/or defend within its own niche. And the results of this status quo security approach aren’t pretty. It currently takes an average of 191 days for an organization to detect a data breach and then another 66 days to contain the problem. Even more concerning, 81% of reported intrusions are detected not by internal security processes but by news reports, law enforcement notifications, or external fraud monitoring. At the same time, the time from initial intrusion to the first compromise of data is often measured in minutes or hours. Needless to say, an incredible amount of data loss or other damage can happen within a months-long window of exposure.
The best approach to shrinking the windows from intrusion to detection and from detection to containment begins with a security architecture that integrates all the various solutions and tools deployed across the organization. A fabric-based architectural approach to security connects the working parts of a defensive posture into a single, integrated system that allows them to share threat information and stop attacks in real time.
Sandboxing fills a crucial function within this integrated architectural framework. However, most sandbox solutions have not evolved to meet the demands of today’s more dynamic and distributed network. With respect to current needs, first-generation or “traditional” sandboxes have limited performance and outdated capabilities. Many lack important ATP features or may not be able to integrate into a broader security architecture.
Unfortunately, many of these solutions are still on the market. To help organizations review and select sandboxing solutions designed to meet their current and future needs, here are seven evaluation areas that can help separate a traditional sandboxing device from a next-generation solution when it comes time to add or upgrade:
1. Security Effectiveness. Many popular sandboxing solutions are falling behind in security effectiveness. A product’s ability to detect infections anywhere across the distributed network, block them, and report on them in a timely manner is critical to maintaining the security and functionality of that network.
2. Performance. Organizations are often forced to choose between security effectiveness and real-world performance (measured in files processed per hour). But the reality is that both are necessary for today’s evolving infrastructure. A sandbox should be assessed both on its ability to effectively protect the network and on its ability to perform its tasks at the speeds today’s digital networks require.
3. Integration. Many sandboxing technologies reside in their own silos as isolated “point” devices—which means that they can’t share threat intelligence with other security elements across your organization or leverage real-time threat feeds provided by other technologies. Instead, legacy sandbox solutions depend on manual, high-touch management by a member of the IT team in order to receive and share data with the rest of the network, which is simply inadequate for addressing threats operating at digital speeds.
Sophisticated threats often target a broad attack surface when trying to breach an organization’s network. To help foil these kinds of coordinated assaults, a sandbox that connects with the broader security architecture is absolutely necessary. Specifically, integration enables your solution to receive data from any collection point or sensor, and then share zero-day intelligence out to all inline security controls so they can automatically apply appropriate protections. This level of dynamic integration helps eliminate manual processes, shrinks response windows, and reduces management burden—especially for organizations facing a shortage of skilled security staff.
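The detect-once, protect-everywhere loop described above can be sketched in a few lines of Python. Everything here is hypothetical: the class names, the verdict logic, and the idea of keying protections on a SHA-256 hash are invented for illustration and do not reflect any specific product’s API.

```python
import hashlib

# Hypothetical sketch of automated intelligence sharing. A real sandbox
# detonates the sample in an instrumented VM; here a marker byte string
# stands in for a malicious behavior, purely for demonstration.

class Sandbox:
    """Analyzes a file and returns a verdict plus its hash (the indicator)."""
    def analyze(self, file_bytes: bytes) -> dict:
        sha256 = hashlib.sha256(file_bytes).hexdigest()
        verdict = "malicious" if b"EVIL" in file_bytes else "clean"
        return {"sha256": sha256, "verdict": verdict}

class InlineControl:
    """Stand-in for any inline enforcement point: a firewall,
    secure email gateway, or web proxy."""
    def __init__(self, name: str):
        self.name = name
        self.blocklist: set[str] = set()

    def apply_protection(self, indicator: str) -> None:
        # In practice this would push a rule or signature update.
        self.blocklist.add(indicator)

def share_intelligence(sandbox: Sandbox,
                       controls: list[InlineControl],
                       sample: bytes) -> dict:
    """The integration step: one sandbox detection automatically
    updates every inline control, with no manual handoff."""
    report = sandbox.analyze(sample)
    if report["verdict"] == "malicious":
        for control in controls:
            control.apply_protection(report["sha256"])
    return report

controls = [InlineControl("ngfw"),
            InlineControl("email-gateway"),
            InlineControl("web-proxy")]
report = share_intelligence(Sandbox(), controls, b"...EVIL payload...")
print(report["verdict"],
      all(report["sha256"] in c.blocklist for c in controls))
```

The point of the sketch is the fan-out: a single zero-day verdict propagates to every enforcement point automatically, which is precisely the manual process that siloed sandboxes leave to the IT team.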
4. Rapid Deployment. Seamless “plug-and-play” integration also enables transparent visibility, simplified security management, and fast, easy sandbox deployment. Avoid devices that must be connected through network TAPs, which lengthen deployment cycles.
5. Administration Overhead. Security teams typically face tight budgetary constraints, with 45% of organizations claiming to have a problematic shortage of cybersecurity skills. Many outdated sandboxing products require manual administration, which adds to the strain on human resources. Look for a sandbox that can share zero-day intelligence out to all inline security controls in order to automatically apply appropriate protections across the network.
6. Scalability. Many sandboxes also struggle to scale to accommodate things like network expansion into different cloud environments, such as Software-as-a-Service (SaaS), that are a fundamental part of DX efforts. Addressing this limitation often requires the purchase of additional devices, which adds unnecessary cost and complexity to your security architecture. Other common scalability concerns include insufficient performance capacity, prohibitive licensing, and physical deployment limits.
7. Cost. Sandbox implementation can be complex, with numerous factors affecting the overall cost of deployment, maintenance, and upkeep. Many sandbox solutions also require the separate purchase of multiple devices and/or subscriptions, which leads to a high total cost of ownership (TCO). Look for a sandbox with a low cost-per-protected Mbps (as measured by third-party testing organizations like NSS Labs) in order to avoid supplemental costs.
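The cost-per-protected-Mbps metric mentioned above is simple arithmetic, but it can invert a comparison based on sticker price alone. The device names, prices, and throughput figures below are invented purely to illustrate the calculation.

```python
# Hypothetical TCO comparison; all figures are made up for illustration.

def cost_per_protected_mbps(total_cost_usd: float, protected_mbps: float) -> float:
    """Total cost of ownership divided by the throughput the device
    sustains with all protection features enabled."""
    return total_cost_usd / protected_mbps

# Device A: lower sticker price, but throughput collapses with inspection on.
device_a = cost_per_protected_mbps(total_cost_usd=60_000, protected_mbps=500)

# Device B: higher sticker price, but far more protected throughput.
device_b = cost_per_protected_mbps(total_cost_usd=90_000, protected_mbps=2_000)

print(f"Device A: ${device_a:.2f}/Mbps, Device B: ${device_b:.2f}/Mbps")
# Device A works out to $120.00 per protected Mbps; Device B to $45.00.
```

On these assumed figures, the “cheaper” device costs nearly three times as much per unit of protected throughput, which is why third-party tests normalize cost this way rather than comparing list prices.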
The decision to add or upgrade a sandbox should be made carefully—based on real-world needs as well as objective, third-party evaluations of leading solutions. The fallout of choosing the wrong sandbox can have a lasting impact on both a network’s operational abilities and the organization’s IT resources (both financial and human).