One of the distinct advantages of working in the IT industry for over 35 years is all of the direct and indirect experience that brings, as well as the hindsight that comes with that.
One of the more personally interesting experiences for me has been watching the growth and ultimate success of the Open Source Software (OSS) movement, from a fringe effort (what business would ever run on OSS?) to a significant component behind the overall success of the Internet. I was reminded of the significance of the OSS movement, and of just how long it has actually been around, when the technology press recognized the 25th anniversary of the Linux kernel. That milestone, along with the decision in January 1998 by Netscape Communications Corp. to release the complete source code for the Communicator web browser, ranks among the top reasons the Internet took off. Well, the first specification for HTTP helped a little as well, I suppose.
There are, of course, many other examples of OSS powering the Internet, from the numerous Apache Foundation projects to relational and other database management systems such as Postgres, MySQL, MongoDB, and Cassandra. The list of markets and technologies for which OSS resources exist is essentially endless.
This all leads me to the title of this article. Perhaps it’s time to look at Open Security as the next necessary iteration of deploying security technology. Over the last thirty years we have gone through a slow (and often painful) evolution of security deployment models, including:
- Why do I need security? (the DARPA days, pre-Mitnick)
- Various iterations of basic firewalls, from packet filters to proxies to stateful inspection
- Best-of-breed stacked implementations (dedicated IPS, dedicated web filtering, dedicated caching/optimization)
- Security function consolidation (UTM / NGFW)
- Open Security Architecture
These few examples of change all came with varying degrees of pain, gain, and consequences. They were also traditionally very proprietary solutions, with limited ability to interact. The most common method used to try to collect and correlate information across these isolated devices has been the implementation of a SIEM or similar system. This worked reasonably well until recently, but in a growing number of environments the scale of the information generated by the security infrastructure is putting ever-increasing pressure on the SIEM, with an end result that is really not much better than an IDS, since manual intervention is still normally required to address a detected threat.
In addition, ever-changing, complicated attack vectors, and an increasingly diverse range of end devices have also driven some of these evolutionary changes.
I’m not advocating something as radical as a security vendor releasing all of its software as source code, as Netscape Communications did. After all, the R&D side of being a security vendor is incredibly expensive and resource-intensive, and without constant, ongoing research even the best-implemented security products become fairly useless rather quickly (unless the product is an OFF button). Instead, I’m advocating for security companies to design products with open, flexible interfaces, so that as an industry we have a better ability to adapt to the continuously changing threat landscape.
For example, wouldn’t it be cool if your perimeter security solution, which can detect suspicious malware activity - such as a connection attempt to command and control servers - could instruct your L2/L3 internal switch infrastructure to migrate a particular interface from, say, the regular user L2 network to a different forwarding domain that contains only equally compromised systems? And do this without requiring the client system’s IP address to change, or its existing established network sessions to be interrupted? And then, wouldn’t it be even cooler if the security solution protecting your data center or cloud-located services could also know to apply more scrutiny to activity from this same, quite likely well-compromised client? All with no SOC interaction? Actually, it sounds a bit like the late ’90s migration from IDS (detection only, with manual intervention) to IPS (actual prevention) systems, doesn’t it?
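To make the scenario above concrete, here is a minimal, purely illustrative sketch of that event-driven quarantine workflow. Every class and method name is an assumption invented for this example - no vendor exposes this exact API - but it shows the essential idea: a perimeter detection event drives a switch-side move of the offending host into an isolated forwarding domain, keyed by MAC address so the host's IP and established sessions are untouched, while a data-center watchlist picks up the same indicator.

```python
from dataclasses import dataclass

# Hypothetical sketch only: an open interface through which a perimeter
# sensor's C2-detection event quarantines a host at the switch layer and
# flags it for extra data-center scrutiny, with no SOC interaction.
# All names here are illustrative assumptions, not a real product API.

@dataclass
class DetectionEvent:
    host_mac: str
    reason: str

class SwitchFabric:
    """Toy model of an L2/L3 fabric. Hosts are keyed by MAC, so moving one
    to another forwarding domain changes only forwarding - the host keeps
    its IP address and its established sessions."""
    def __init__(self):
        self.domain = {}  # MAC address -> forwarding-domain name

    def attach(self, mac, domain="users"):
        self.domain[mac] = domain

    def quarantine(self, mac):
        # Migrate the interface to a domain holding only compromised hosts.
        self.domain[mac] = "quarantine"

def handle_event(event: DetectionEvent, fabric: SwitchFabric, dc_watchlist: set):
    # Perimeter detection -> automatic switch-layer quarantine...
    fabric.quarantine(event.host_mac)
    # ...and the data-center/cloud controls learn to apply more scrutiny.
    dc_watchlist.add(event.host_mac)

fabric = SwitchFabric()
fabric.attach("aa:bb:cc:dd:ee:ff")
watchlist = set()
handle_event(DetectionEvent("aa:bb:cc:dd:ee:ff", "C2 beacon"), fabric, watchlist)
print(fabric.domain["aa:bb:cc:dd:ee:ff"])  # quarantine
```

The point of the sketch is the shape of the interface, not the mechanics: detection and enforcement live in different products, and only an open, documented event contract lets them cooperate automatically.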
Do we need to go as far as trying to define these communications and interoperability interfaces via a standards body such as IETF? Frankly, I don’t really think that the current rate of change in the security industry meshes very well with the pace of a traditional standards body. Ultimately, of course, once these sorts of interfaces stabilize, and common denominators are determined, then a standards body-based approach might work.
Some might say this has been tried before via various approaches, but with the exception of limited-use technologies like WCCP and ICAP, every approach I have seen has involved some amount of proprietary technology. OPSEC, anyone?
What’s clear is that the isolated, proprietary security devices most organizations are using are simply not solving today’s cybersecurity challenges. Companies need something different. It seems to me that what they need are open security solutions that can be integrated together to share threat intelligence in order to provide actual protection, and would allow them to seamlessly interoperate across the distributed network infrastructure, from IoT to the cloud.
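One way to picture the kind of open, interoperable sharing described above is a simple publish/subscribe bus that any security product could plug into. The event shape and the bus API below are assumptions made up for illustration (real-world analogues would be standards like STIX/TAXII), but the sketch shows how one product's detection becomes every subscriber's intelligence:

```python
from collections import defaultdict

# Illustrative sketch, not a real protocol: a minimal threat-intelligence
# bus that open security products could publish to and subscribe from.
# Topic names and the indicator format are assumptions for this example.

class ThreatIntelBus:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, indicator):
        # Fan the indicator out to every interested security product.
        for callback in self.subscribers[topic]:
            callback(indicator)

bus = ThreatIntelBus()
blocked = []
# A firewall, a web filter, and a cloud gateway could each subscribe;
# here a single list-appending consumer stands in for all of them.
bus.subscribe("malware-c2", blocked.append)
bus.publish("malware-c2", {"ip": "203.0.113.7", "source": "perimeter-ngfw"})
print(blocked)
```

Because the contract is open, adding a new enforcement point is just another `subscribe` call - no pairwise vendor integration required.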
With such an open, end-to-end fabric of security solutions woven together to scale and adapt as business demands, organizations could finally address the full spectrum of challenges they currently face across the attack lifecycle.
It’s safe to say that open is officially a critical cybersecurity requirement for today’s digital business, and should not only be a requirement for every security solution you consider, but part of your foundational security policy and strategy as well.
Ken McAlpine is VP, Network Security Solutions at Fortinet.
Note: Originally published by SecurityWeek