A good number of years ago, I was engaged as a security consultant for a local Cincinnati Fortune 100 company. Many of the tasks I performed were fairly routine, and there was a large staff of system administration specialists to interact with, which made my job much easier.
Among other things, I was involved in some compliance processes for the many servers at that firm. Their server population was quite diverse, spanning several generations of Windows and Windows Server along with a myriad of Unix flavors.
As part of the compliance monitoring, an automated system scanning scheme was implemented. This system worked in concert with the tightly focused scanning that was already occurring on those systems managed by a third-party security monitoring firm. Things were split up for economic reasons.
As part of the scanning process, reports were automatically generated for each system scanned, and the individual reports were emailed to the specific system’s SA along with copies to the security team.
Over the weeks following the implementation, it became hard to ignore the fact that some extensive patching would be needed to bring all of the systems and servers up to the most recent patch levels. There were also many questions about whether a particular patch was necessary, whether it actually fixed the problem, and whether it might 'break' things. I'm sure anyone who has ever focused on this will agree these sorts of concerns are important when it comes to running a reliable data center.
The primary concern that the data center managers had at that particular time was the emerging "Day 0" (zero-day) malware threat. For those of you who aren't aware of what this is, here is a brief explanation:
There is a very large community of malware producers (read: "the bad guys") out in the world and on the Internet. Between the time a new vulnerability is identified and the time affected systems can be patched to counter the threat, there is a window of opportunity (in fact, the window is "open" for as long as the world is oblivious to the vulnerability!). Systems are most vulnerable during this period because corrective patches have not yet been developed, and anti-virus signatures haven't been updated to detect the threat either.
For system administrators, system patching is probably the surest method available to maintain some defense against such rapidly emergent threats.
Beyond the question of whether a patch has even been created for a given vulnerability, industry news of such vulnerabilities circulates slowly, and mostly in specialized communications channels. Vendors create patches for discovered vulnerabilities and release them as quickly as possible; however, word of these fixes is often also slow to emerge.
Returning to the situation I was describing, our team spent some time brainstorming about what we could do. We knew that there were subscription sources that produced vulnerability updates, and in addition, many of the major server suppliers sent out alert emails whenever patches were released to the field. Slowly we began to piece together a patching information management system to improve our odds of keeping systems patched.
Fast-forward to today. If you are a Windows user you know about "Patch Tuesday," the second Tuesday of each month, when Microsoft releases its patches. Each month patches are "pushed" out and the update tools on endpoint systems dutifully apply them.
Other systems, such as Linux hosts and Macs, perform on-demand updates, where patches are periodically fetched and applied. There are also a good number of applications that watch over the 'patch' domain and provide the tools to apply those patches.
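To make the on-demand model concrete: on Debian- and Ubuntu-based hosts, this kind of automatic patching is typically driven by the unattended-upgrades package, configured through a small APT configuration file. The sketch below shows one common setup; the exact file name and available options vary by distribution and release, so treat it as an illustration rather than a recipe:

```
// Sketch of /etc/apt/apt.conf.d/20auto-upgrades (Debian/Ubuntu convention)
// Refresh the package lists once per day, and run the
// unattended-upgrade tool once per day to apply pending updates.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

The companion file (conventionally 50unattended-upgrades) can restrict which package origins are eligible, for example security updates only, which narrows the blast radius if a patch misbehaves.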
Problem solved, right?
Not really. There are a host of concerns involved with automatic patching systems. Some patches are applied and work just fine. Some patches cause things to break, some make things worse, and some simply don't work as advertised. The point is that "fire and forget" patching has its shortcomings, and in some environments those shortcomings are threats in their own right.
So the question to the audience is: are there satisfactory patch management solutions these days? I am aware that there are a number of companies that perform these services, so I'm curious how effective they actually are.
One last point. In their 2014 Data Breach Investigations Report (DBIR), Verizon's analysts pointed out that many of the exploits used by attackers were 'old', and that had the affected systems been properly patched, the penetrations would never have happened. I believe the Patching Dragon has yet to be slain.
Here are a couple of links if you want to read a bit more on this: