How critical is patching these days?

A good number of years ago, I was engaged as a security consultant for a local Cincinnati Fortune 100 company.  Many of the tasks I performed were fairly routine, and there was a large staff of system administration specialists to interact with, which made my job much easier.

Among other things, I was involved in some compliance processes for the many servers at that firm.  Their server population was quite diverse, spanning several generations of Windows and Windows Server along with a myriad of Unix flavors.

As part of the compliance monitoring, an automated system scanning scheme was implemented.  This system worked in concert with the tightly focused scanning that was already occurring on those systems managed by a third-party security monitoring firm.  Things were split up for economic reasons.

As part of the scanning process, reports were automatically generated for each system scanned, and the individual reports were emailed to the specific system’s SA along with copies to the security team.

Over the weeks following the implementation, it became hard to ignore the fact that some extensive patching would be needed to bring all of the systems and servers up to the most recent patch levels.  There were also many questions about whether a particular patch was necessary, whether it actually fixed the problem, and whether it might ‘break’ things.  I’m sure anyone who has ever focused on this will agree that these sorts of concerns are important when it comes to running a reliable data center.

The primary concern that the data center managers had at that particular time was the emerging “zero-day” (or “Day 0”) malware threat.  For those of you who aren’t aware of what this is, here is a brief explanation:

There is a very large community of malware producers (read: “the bad guys”) out in the world and on the Internet.  Between the time a new vulnerability is identified and the time affected systems can be patched to counter the threat, there is a window of opportunity (in fact, the window is “open” for as long as the world is oblivious to the vulnerability!).  Systems are most vulnerable during this period because corrective patches have not yet been developed, and aids such as anti-virus software haven’t been sensitized to detect the threat either.

For system administrators, system patching is probably the surest method available to maintain some defense against such rapidly emergent threats.

Beyond the question of whether a particular patch has even been created in response to a vulnerability, industry news of such vulnerabilities circulates slowly, and mostly in specialized communications channels.  Vendors create patches for discovered vulnerabilities and release them as quickly as possible; however, word of these fixes is often slow to emerge as well.

Returning to the situation I was describing, our team spent some time brainstorming about what we could do.  We knew that there were subscription sources that produced vulnerability updates, and in addition, many of the major server suppliers sent out alert emails whenever patches were released to the field.  Slowly we began to piece together a patching information management system to improve our odds of keeping systems patched.
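At its core, the kind of patching information management system described above is just a join between an advisory feed and a system inventory, grouped by the responsible administrator.  Here is a minimal sketch; the inventory, advisory entries, and numeric “patch level” fields are invented for illustration, and a real system would have to parse vendor alert emails or vulnerability bulletins into this shape:

```python
from collections import defaultdict

# Hypothetical inventory: hostname -> administrator, platform, installed patch level
inventory = {
    "web01": {"admin": "alice", "platform": "windows", "patch_level": 3},
    "db01":  {"admin": "bob",   "platform": "solaris", "patch_level": 7},
    "app01": {"admin": "alice", "platform": "linux",   "patch_level": 5},
}

# Hypothetical advisory feed: each entry names the platform affected and the
# minimum patch level that closes the vulnerability.
advisories = [
    {"id": "ADV-001", "platform": "windows", "fixed_in": 4},
    {"id": "ADV-002", "platform": "linux",   "fixed_in": 5},
    {"id": "ADV-003", "platform": "solaris", "fixed_in": 9},
]

def build_reports(inventory, advisories):
    """Group outstanding advisories by the responsible administrator."""
    reports = defaultdict(list)
    for host, info in inventory.items():
        for adv in advisories:
            # A host is exposed if it runs the affected platform and sits
            # below the patch level that fixes the issue.
            if adv["platform"] == info["platform"] and info["patch_level"] < adv["fixed_in"]:
                reports[info["admin"]].append((host, adv["id"]))
    return dict(reports)
```

Each administrator’s list could then be mailed out on a schedule, much like the per-SA scan reports described earlier.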

Fast-forward to today.  If you are a Windows user you know about “Patch Tuesday,” the second Tuesday of each month, when Microsoft releases its patches.  Each month patches are “pushed” out and the update tools on endpoint systems dutifully apply them.

Other systems, such as Linux hosts and Macs, perform on-demand updates, where patches are periodically fetched and applied.  There are also a good number of applications that watch over the ‘patch’ domain and provide the tools to apply those patches.

Problem solved, right?

Not really – there are a host of concerns involved with automatic patching systems.  Some patches are applied and work just fine.  Some patches cause things to break, and some just make things worse (or don’t work as advertised!).  The point being that “fire and forget” patching has its shortcomings, and for some, the shortcomings are threats in and of themselves.
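One common way to blunt the “fire and forget” risk is a staged (canary) rollout: patch a small pilot group first, verify those hosts still pass a health check, and only then patch the rest.  A minimal sketch follows; the actual patching and health-check steps are left as caller-supplied functions, since they are entirely environment-specific:

```python
def staged_rollout(hosts, apply_patch, health_check, pilot_fraction=0.1):
    """Apply a patch to a small pilot wave first; abort before touching the
    rest of the fleet if any pilot host fails its health check afterwards."""
    pilot_count = max(1, int(len(hosts) * pilot_fraction))
    pilot, rest = hosts[:pilot_count], hosts[pilot_count:]

    # Wave 1: patch only the pilot hosts.
    for host in pilot:
        apply_patch(host)

    # If any pilot host is unhealthy, stop and report what remains untouched.
    if not all(health_check(h) for h in pilot):
        return {"status": "aborted", "patched": pilot, "remaining": rest}

    # Wave 2: the pilot looks healthy, so patch the remainder.
    for host in rest:
        apply_patch(host)
    return {"status": "complete", "patched": pilot + rest, "remaining": []}
```

A bad patch then breaks a handful of machines instead of the whole data center, which is exactly the failure mode the paragraph above worries about.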

So the question to the audience is whether there are satisfactory patch management solutions these days.  I am aware that there are a number of companies that perform these services, so I’m curious about how effective they may be.

One last point.  In their 2014 Data Breach Investigations Report (DBIR), Verizon’s analysts pointed out that many of the exploits used by hackers were ‘old’, and that had those systems been properly patched, the penetrations would never have happened.  I believe that the Patching Dragon has yet to be slain.

Further Reading:

Here are a couple of links if you want to read a bit more on this:

Security Baseline

In this day and age, many organizations have already done multiple threat and security assessments of their existing IT environments.  For some busy organizations this becomes the endpoint: victory is declared, and the assessment’s work is filed away and forgotten until the next audit is called for, whenever that might be.

When security and threat management is treated as a discrete task, it inevitably blends into the continuum of everything else the demanding world of IT requires.  Sadly, this evolves into a serious problem.

When a threat assessment is done, it is a snapshot at that point in time.  Even the most aggressive assessments will fail to assess 100% of the threats, and in the end, subsequent work will mitigate only a smaller percentage of what was captured and analyzed.  Filing it all away for one, two, or three years simply compounds the problem.

The fact of the matter is that the threat environment is constantly evolving.  Aspects that were minor concerns two years ago may be headline issues now.  A security staff, or an organization, that feels it is “on top of the threat” may only be kidding itself.  Further complicating the picture, if mitigation steps aren’t taken, or are haphazard or weak, the erosion of the security posture continues unabated.  Eventually the negative effects appear.

In reality, a threat assessment or similar analyses can only be considered a baseline at that point in time.  Two seconds after that analysis is completed, the information is becoming stale!

The trick here is to recognize the perishable nature of the information in the assessment; instead of taking a single snapshot, the organization needs to change its practices to include periodic re-assessment of the threat environment.
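In practice, that re-assessment cadence can be enforced with something as simple as an age check on the date of the last assessment.  A trivial sketch follows; the 180-day cadence is an assumed value, so pick whatever interval fits your own risk appetite:

```python
from datetime import date, timedelta

def assessment_is_stale(last_assessed: date, today: date, max_age_days: int = 180) -> bool:
    """Flag a threat assessment for refresh once it exceeds the chosen age."""
    return (today - last_assessed) > timedelta(days=max_age_days)
```

A check like this can run in a scheduled job and open a ticket when it fires, turning “we should re-assess someday” into a concrete, recurring task.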

For a small to medium business, this is further complicated by the scarcity of security resources, leaving many organizations only weakly defended.  Indeed, “don’t tell me, I don’t want to know…” may seem like a strategy until a breach occurs and someone must answer for the data loss.

Like it or not, Cyber Security is becoming a way of life.  Any organization doing business on the Internet must reconcile itself to that fact.

Here are a few things that an organization can do to prevent this situation:

  1. Periodically review and refresh the threat and risk assessment (refresh the baseline).
  2. For those threats actively being mitigated, monitor the effectiveness of the mitigations.
  3. For those threats on the “wait list”, review whether the environment has changed.
  4. Consult one of the many online information resources focused on organizational cyber protection.  (For example, the National Institute of Standards and Technology (NIST) has a huge library of security standards – its SP 800 series.)
  5. Don’t be afraid to reach out for expertise in the industry; in fact, having an outside set of eyes look at your assessments and perhaps provide advice is a healthy security practice.

In the end, recognize that while assessing your organization’s risk is a good thing, it is what happens afterwards that determines whether it was worthwhile or simply a waste of resources.



New Blog – “First Light”

In the astronomical trade, “First Light” is a term used to describe the premiere of a particular telescope or system’s operation – its first fruits.  The term has been hijacked by writers such as myself to apply also to the establishment of things of smaller import – in this case, my humble blog.

The Risky Undertaking is a blog focused on the areas of Risk Management, Privacy Issues and more recently, the whole question of Cyber Security.  Essentially it will be the voice of my personal consultancy practice and it will be the place where I hope to inform, promote, and also “sound off” on topics of interest in this area.

About myself:

My name is Richard, and I am the owner and operator of this blog.

I have worked in the IT and Security business areas for several decades.  The bulk of my work has been done in the Southwestern Ohio area, but I have traveled and worked at times elsewhere in the US.

My technical interests are diverse, running the gamut from bread-and-butter security and compliance work down to more concrete areas such as programming ARM-based processors to do specific “gadget” tasks.

Besides my independent consulting work, I am also a Microsoft, Cisco, and Symantec partner, and as a result I am able to sell their products to my clients (present and future).

I should also point out that my personal humor is somewhat dry, along the lines of the old Monty Python skits of the 1970’s, so I apologize in advance if that comes through in my posts.

Why should you come here?

In anticipation of this question, here are a few reasons:

  1. You are looking for some alternative perspectives on security, cyber, and risk topics.
  2. You want a “little guy’s” view of the industry as it relates to my areas of interest.
  3. You appreciate views tempered by long involvement with this industry.

I could probably think of a few more, but that should be sufficient for the moment.

Please stay with us in the weeks and years to come and hopefully I’ll occasionally provide you with some good “stuff”.