Windows Server 2012, like its predecessor, features the Windows Server Update Services (WSUS) server role. A quick glance over the updated documentation indicates that, architecturally, it is not all that different from the previous version. One of the key new features is a set of PowerShell commands for the WSUS API. Here’s a collection of notes I made while working on a WSUS deployment.
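For instance, the new UpdateServices module wraps the WSUS API in cmdlets. A quick sketch, run in an elevated PowerShell session on the WSUS server itself (output depends entirely on your environment):

```powershell
# Load the WSUS cmdlets (UpdateServices module, Windows Server 2012)
Import-Module UpdateServices

# Connect to the local WSUS server
$wsus = Get-WsusServer

# List updates that clients need but that have not been approved yet
Get-WsusUpdate -UpdateServer $wsus -Approval Unapproved -Status Needed
```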
WSUS Deployment Types
WSUS can be deployed in a simple scenario, with one server pulling updates from Microsoft update servers and distributing patches internally. Or WSUS can be made up of several servers, with one WSUS server pulling patches from Microsoft and then distributing them locally to other WSUS servers.
Replica vs. Autonomous
In environments featuring more than one WSUS server, downstream update servers can be configured as a replica of the master WSUS server, or as an autonomous server. In both cases downstream servers synchronize with the upstream server(s).
WSUS Replica Servers
When configured as a replica, the downstream server mirrors most of the configuration from its master (upstream) server – for example, the patch categories you want to download. The master server becomes the only place where you manage things like patch approvals. Computer objects calling into downstream servers get synchronized (rolled up) into the master server, along with the relevant compliance / report information. Patches approved or downloaded on the master server get synchronized to the replica servers. Administration happens exclusively on the master server, and replicas act as local service mirrors. This setup is helpful when you are dealing with a distributed corporate infrastructure featuring low-bandwidth or high-utilization WAN links.
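Pointing a downstream server at its master as a replica can be done from the console, or with the new cmdlets. A minimal sketch, run on the downstream server – the upstream name and port below are placeholders for your environment:

```powershell
# On the downstream server: sync from an upstream WSUS server as a replica
$wsus = Get-WsusServer
Set-WsusServerSynchronization -UpdateServer $wsus `
    -UssServerName "wsus-master.corp.example.com" `
    -PortNumber 8530 `
    -Replica
```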
One replica downstream server can act as the master for another replica downstream server, essentially chaining WSUS servers. In theory there is no limit as to how deep this master-replica chain can go, but Microsoft documentation recommends that we keep it short to avoid approval and patch delays (each leg will introduce a delay based on the sync schedule – more on this later).
WSUS Autonomous Servers
When configured as an autonomous downstream server, WSUS allows regional administrators to approve or reject patches independently of the master server. Operators use the local WSUS console on the autonomous downstream server to perform administration tasks.
Autonomous servers can either store patches locally or refer connected clients to Microsoft Update for downloads – while still allowing local administrators to control which patches get approved for installation.
The autonomous approach is preferred in decentralized IT environments where local teams retain control of their servers.
Another situation where autonomous servers are useful is in remote locations where localized (non-English) Microsoft software is used. Instead of downloading all patches in multiple languages onto the central server, configure a satellite office running non-English software versions to connect directly to Microsoft for approved patch downloads (you cannot achieve this degree of flexibility with a replica downstream server).
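The autonomous variant uses the same synchronization cmdlet, just without the -Replica switch; the "clients download from Microsoft" behavior is a separate server configuration property. A sketch under the same placeholder names as before:

```powershell
# On the downstream server: sync update metadata from the upstream server,
# but keep approval decisions local (no -Replica switch)
$wsus = Get-WsusServer
Set-WsusServerSynchronization -UpdateServer $wsus `
    -UssServerName "wsus-master.corp.example.com" -PortNumber 8530

# Optionally have clients fetch approved binaries straight from Microsoft Update
# instead of storing them on this server
$config = $wsus.GetConfiguration()
$config.HostBinariesOnMicrosoftUpdate = $true
$config.Save()
```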
Automatic Patch Installation
I know what you are thinking: “Bad Idea” – on servers, anyway. Maybe so, but depending on staffing and environment size, this may be the only way to actually fit a patch run into the downtime window (ever tried to install Windows Server 2008 R2 SP1 on a Hyper-V server running a dozen guests?) Naturally, you would not do this without testing patches first in a controlled environment.
Automatic, but Controlled
Once the entire infrastructure is in a fairly up-to-date state, configure a GPO to download and install approved patches automatically. You still control what gets installed and when, because you need to approve the patches first. Once patches are approved, the GPO can be set to install everything on a weekend, overnight, or otherwise outside business hours (but in any case during a non-random time slot). This works best if your servers are up to date: if they are not, they may require several reboots to install everything that is approved, and if detection intervals are spaced out you may see some servers rebooting on the Monday following the patching weekend – this would be pretty bad.
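For reference, the "Configure Automatic Updates" and "Specify intranet Microsoft update service location" policies boil down to a handful of registry values on the clients. Shown here as the registry equivalent purely to illustrate what the GPO sets – in practice you would configure this through Group Policy, and the server URL is a placeholder:

```powershell
# Registry equivalent of the Windows Update client GPO settings
$wu = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
Set-ItemProperty -Path $wu -Name WUServer       -Value "http://wsus.corp.example.com:8530"
Set-ItemProperty -Path $wu -Name WUStatusServer -Value "http://wsus.corp.example.com:8530"

$au = "$wu\AU"
Set-ItemProperty -Path $au -Name UseWUServer          -Value 1
Set-ItemProperty -Path $au -Name AUOptions            -Value 4   # auto download and schedule the install
Set-ItemProperty -Path $au -Name ScheduledInstallDay  -Value 7   # Saturday
Set-ItemProperty -Path $au -Name ScheduledInstallTime -Value 3   # 03:00
```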
If necessary, you could break up patch targets into multiple computer groups and stagger approvals, or schedule them in a sequence. For example, you may want to target hypervisors to get patched on a Saturday morning, and then virtual machines – the same afternoon. All the operator has to do is check services/applications at the end of the run to make sure they came up after the reboots.
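Staggered approvals are easy to script with the new cmdlets. A sketch, assuming you have already created target groups named "Hypervisors" and "Virtual Machines" (both placeholders) in the WSUS console:

```powershell
# Grab the currently needed, unapproved updates once, so the same set
# can be approved for each group in turn
$updates = Get-WsusUpdate -Approval Unapproved -Status Needed

# Approve for the hypervisor group first...
$updates | Approve-WsusUpdate -Action Install -TargetGroupName "Hypervisors"

# ...and later (or from a separate scheduled task) for the VM group
$updates | Approve-WsusUpdate -Action Install -TargetGroupName "Virtual Machines"
```

The actual installation timing still comes from the clients' GPO schedule; the approvals only control which machines are offered the patches.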
Applications Requiring Manual Intervention
Applications not starting up properly is arguably the best reason to continue to exercise control over patch runs, so you need to know your environment and be sure that applications are behaving predictably (or configure service dependencies or delayed starts to make them more predictable). Services not starting in the right order is easy enough to fix, but an application that was written 20 years ago that requires an operator to press a button is probably a good case for manual patching.
Servers of strategic importance with very specific shutdown sequences (SAP, for example) can be excluded from the automatic installation policy and dealt with as exceptions.
Another example is Exchange or SQL clusters/DAG members, where you would want to patch nodes in a controlled sequence.
Still, I would argue that kicking off the patch run automatically on the majority of servers gets the work started while most people are away, without manual intervention or unnecessary waiting.
Monitoring Speeds Up Post-Patching Verification
If you have a monitoring system (and you should definitely have a monitoring system), unpausing it after the patch run could bring to light most if not all remaining issues requiring operator attention. This ought to speed things up, and is almost a requirement for automated patching, but this approach is only as good as your monitoring system.