Highly Available Windows Admin Center in Azure

Deploying Windows Admin Center in a high availability configuration, all hosted in Azure IaaS.

Windows Admin Center (formerly Project Honolulu) is a web-based management application for Windows 10 and Windows Server 2016 machines. The application allows you to RDP and PowerShell into machines, and it can be tied into Azure AD to introduce MFA before connecting to your servers. Although it doesn't currently allow direct administration of Active Directory (and many other Windows Server roles), which may change in future, we can use it as a gateway to remote desktop into management servers that can administer AD.

So, I am looking at deploying a highly available instance of the new Windows Admin Center; however, it must be based in Azure and it must be available in multiple regions. Windows Admin Center can be deployed on a standard Windows Failover Cluster, which is great apart from the fact that Azure doesn't have an option for shared storage. Windows Server 2016, however, can assist here with Storage Spaces Direct (S2D). So let's take a look at this setup and see how it all works.

I am not going to cover your vNet setup; this should already be in place. I am using the new Global Peering feature to connect my UK South and North Europe setups, and I am using my existing Active Directory environment. I have provisioned two B1ms machines, one in UK South and one in North Europe. Both machines are running Windows Server 2016 Core (1803 has a bug which prevents the Enable-ClusterStorageSpacesDirect cmdlet from running) and both machines have two 10GB data drives (S2D requires a minimum of three disks; this could be three nodes, or just multiple drives attached to a server). The first task is to install the required features. From a PowerShell prompt, use the following command:

Install-WindowsFeature Failover-Clustering -IncludeManagementTools

Once we have the features installed, we can work on the configuration. Let's start by creating our cluster; from one of the nodes, use the following:

New-Cluster -Name CLUSTERNAME -Node NODE1, NODE2 -NoStorage

The -NoStorage switch creates a bare cluster without a CSV; it will return a warning, which in the log reads as follows:

An appropriate disk was not found for configuring a disk witness. The cluster is not configured with a witness. As a best practice, configure a witness to help achieve the highest availability of the cluster. If this cluster does not have shared storage, configure a File Share Witness or a Cloud Witness.

That's fine; we didn't do anything with storage or specify a witness. So where are we up to? Well, we have the nodes, the cluster and the peering, so it looks something like this:

[Diagram: Stage 1 - cluster nodes with peering]

Next, we will add a witness; as this is all based in Microsoft Azure, it makes sense to use a Cloud Witness. Within Azure, create a storage account that is General Purpose, Standard performance and Locally Redundant. I created my storage account in West Europe, which means a regional storage issue shouldn't take out the whole cluster. Make sure you note the access key. Then from a PowerShell prompt on one of the cluster nodes, run the following:

Set-ClusterQuorum -CloudWitness -AccountName <StorageAccountName> -AccessKey <StorageAccountAccessKey>
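The witness storage account above can also be scripted rather than created in the portal. A sketch using the AzureRM module of the day; the resource group and account names are placeholders:

```powershell
# Create a general-purpose, locally redundant storage account for the Cloud Witness.
# "rg-witness" and "wacwitnesssa" are placeholder names - substitute your own.
New-AzureRmStorageAccount -ResourceGroupName "rg-witness" -Name "wacwitnesssa" `
    -Location "westeurope" -SkuName Standard_LRS -Kind Storage

# Retrieve the access key needed by Set-ClusterQuorum
(Get-AzureRmStorageAccountKey -ResourceGroupName "rg-witness" -Name "wacwitnesssa")[0].Value
```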

This should return the cluster name with the witness details if successful. Now we need to add some storage! This is where it gets fun, as we add Storage Spaces Direct (S2D) into the mix. Run the following PowerShell cmdlet on each node in your cluster to make sure you're ready:

Get-PhysicalDisk

This should return something along the lines of:

FriendlyName      SerialNumber MediaType   CanPool OperationalStatus HealthStatus Usage       Size
------------      ------------ ---------   ------- ----------------- ------------ -----       ----
Virtual HD                     Unspecified False   OK                Healthy      Auto-Select 30 GB
Virtual HD                     Unspecified False   OK                Healthy      Auto-Select  4 GB
Msft Virtual Disk              Unspecified True    OK                Healthy      Auto-Select 10 GB

Make sure the CanPool option for your data disks is True, then execute the following:

Enable-ClusterStorageSpacesDirect

This will run some basic checks, then you'll be asked to confirm that you want to add the disks. As long as you have at least three disks to go in the pool, this should go through fine. The next stage is to create a volume for use with Windows Admin Center:

New-Volume -FriendlyName "<<VOLUME NAME>>" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D on <<CLUSTER NAME>>" -UseMaximumSize

This creates the required Cluster Shared Volume for storing Windows Admin Center data. So now our infrastructure looks something like this:

[Diagram: Stage 3 - cluster with Cloud Witness and S2D]

Azure doesn't play nicely with the virtual IPs used within clusters, so the next thing we need to do is add two internal load balancers to provide the IP addresses required by the cluster: one load balancer per region, each with a NAT rule for HTTPS traffic. Assign the IP you wish to use in each region.
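One region's internal load balancer could be sketched like this with the AzureRM module (all names, the IP and the resource group are placeholders; repeat in the second region):

```powershell
# Look up the subnet the cluster nodes live in
$vnet   = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-uksouth" -Name "vnet-uksouth"
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "servers"

# Internal frontend holding the IP the cluster role will use in this region
$feip = New-AzureRmLoadBalancerFrontendIpConfig -Name "wac-frontend" `
    -PrivateIpAddress "10.0.1.100" -SubnetId $subnet.Id

# NAT rule passing HTTPS through to the active node
$nat = New-AzureRmLoadBalancerInboundNatRuleConfig -Name "wac-https" `
    -FrontendIpConfiguration $feip -Protocol Tcp -FrontendPort 443 -BackendPort 443

New-AzureRmLoadBalancer -ResourceGroupName "rg-uksouth" -Name "ilb-wac-uksouth" `
    -Location "uksouth" -FrontendIpConfiguration $feip -InboundNatRule $nat
```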

Before we deploy Windows Admin Center, we need to ensure that the cluster computer account has access to create and delete computer objects in Active Directory. The objects get created in the same OU that the cluster computer account resides in. From Active Directory Users and Computers, locate the desired OU and run the Delegate Control Wizard to grant the account access.
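If you prefer the command line to the Delegate Control Wizard, dsacls can grant the same rights; the OU distinguished name and cluster account below are placeholders:

```powershell
# CC = Create Child, DC = Delete Child, scoped to computer objects.
# Note the trailing $ on the cluster computer account.
dsacls "OU=Management,DC=contoso,DC=com" /G "CONTOSO\CLUSTERNAME$:CCDC;computer"
```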

Now we have the cluster built, it's time to deploy Windows Admin Center. The first job is to download the installer from Microsoft; you will also need the HA deployment scripts from the Windows Admin Center HA Setup Scripts zip file. I am doing this in a lab environment so won't use a PKI-issued certificate; for production use, you should ensure you have a trusted certificate. To install WAC in HA, use the following command:

.\Install-WindowsAdminCenterHA.ps1 -clusterStorage C:\ClusterStorage\Volume1 -clientAccessPoint <<Preferred URL Name>> -msiPath '.\WindowsAdminCenter1804.msi' -generateSslCert

The -generateSslCert switch tells the installation to generate the required self-signed certificate; if you're using PKI, you would instead specify the PFX file and password and leave this option out. The script installs the software on each node in the cluster and then adds a Generic Service cluster role. Monitor Failover Cluster Manager for the role being created; when it is, you will notice it's tied to the IP address of the NIC. Change this to the IP address of the internal load balancer in each region and you should now have an Azure-based, highly available deployment of Windows Admin Center. The architecture looks something like this:

[Diagram: Stage 4 - load balancers added]

Not covered in the above is the fact that from each virtual network I have a VPN connection back to my home, which is how I connect to my Azure estate. I can now fail my Windows Admin Center deployment over between regions in the event of an outage. With this in place, I can lock down RDP to only come from the cluster nodes, allow HTTPS via the VPN and block RDP from outside the cluster.
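The RDP lockdown could be expressed as NSG rules along these lines with the AzureRM module (the group, rule names and subnet range are placeholders):

```powershell
$nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName "rg-uksouth" -Name "nsg-servers"

# Allow RDP only from the subnet the cluster nodes sit in (placeholder range)
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "allow-rdp-cluster" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 200 `
    -SourceAddressPrefix "10.0.1.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 3389 | Out-Null

# Block RDP from everywhere else
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "deny-rdp-all" `
    -Access Deny -Protocol Tcp -Direction Inbound -Priority 4000 `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 3389 | Out-Null

# Push the updated rule set back to Azure
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg
```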

So this is where things got interesting for me. The cluster was up, I could fail over, fail back etc and all was happy. Except it wasn’t. If I added a server on the UK South node and then failed over, the server wasn’t there. Fail back and it was. The clustered role was obviously using a local copy of the data. After digging through the PowerShell scripts provided for the installation, I noticed the following:

$registryPath = "HKLM:\Software\Microsoft\ServerManagementGateway\Ha"
if (-Not (Test-Path $registryPath))
{
    New-Item -Path $registryPath | Out-Null
}
New-ItemProperty -Path $registryPath -Name IsHaEnabled -Value "true" -PropertyType String -Force | Out-Null
New-ItemProperty -Path $registryPath -Name StoragePath -Value $smePath -PropertyType String -Force | Out-Null
New-ItemProperty -Path $registryPath -Name Thumbprint -Value $certThumbprint -PropertyType String -Force | Out-Null
New-ItemProperty -Path $registryPath -Name Port -Value $portNumber -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $registryPath -Name ClientAccessPoint -Value $clientAccessPoint -PropertyType String -Force | Out-Null
$staticAddressValue = $staticAddress -join ','
New-ItemProperty -Path $registryPath -Name StaticAddress -Value $staticAddressValue -PropertyType String -Force | Out-Null

I checked the registry path from the first variable on the active node: empty! I checked the standby node: empty! DOH! The part that tells Windows Admin Center where the persistent storage lives hadn't actually been written. I manually added the keys IsHaEnabled, StoragePath, Thumbprint, Port and ClientAccessPoint as per the above on the primary node, exported the complete path with values, then imported it on the remaining node. After a restart of the clustered role, the list of servers was empty, as it was now using the CSV location for persistent storage. Needless to say, I can now fail over and maintain my settings.
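The export/import step can be done with reg.exe rather than hand-editing on the second node (the temp path is just an example):

```powershell
# On the node where you created the Ha key, export the whole path...
reg.exe export "HKLM\Software\Microsoft\ServerManagementGateway\Ha" C:\temp\wac-ha.reg /y

# ...copy wac-ha.reg to the other node, then import it there:
reg.exe import C:\temp\wac-ha.reg
```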

The last stage of the deployment is to add Azure authentication. This allows us to force MFA prior to connecting to servers. When you're logged in to Windows Admin Center, click the cog in the top right of the screen, then click the Gateway Access option on the left and change the access to Azure Active Directory. This will prompt you to download a PowerShell script. Once downloaded and extracted, make sure you unblock the script in the file properties, otherwise you will get the following error:

.\New-AadApp.ps1 : File C:\temp\WindowsAdminCenterAzureConnectScript-1806\New-AadApp.ps1 cannot be loaded. The file
C:\temp\WindowsAdminCenterAzureConnectScript-1806\New-AadApp.ps1 is not digitally signed. You cannot run this script on the current system. For more
information about running scripts and setting execution policy, see about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ .\New-AadApp.ps1
+ ~~~~~~~~~~~~~~~~
    + CategoryInfo          : SecurityError: (:) [], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess
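Rather than using the file properties dialog, the script can also be unblocked from PowerShell:

```powershell
# Unblock the single script...
Unblock-File -Path .\New-AadApp.ps1

# ...or everything in the extracted folder:
Get-ChildItem -Path . -Recurse | Unblock-File
```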

The script needs to be run from a machine that has the AzureRM and AzureAD PowerShell modules installed. If you're running it in a PowerShell window under a user without admin access to WAC, you will need to specify credentials; this is because the script calls an API on WAC to complete registration:

$creds = Get-Credential
.\New-AadApp.ps1 -GatewayEndpoint https://<<fqdnOfAccessnode>> -Credential $creds

You'll be prompted for your Azure AD admin credentials and the script will proceed to register the application. Once registered, you need to grant additional access; a URL will be provided along with very clear instructions. Once the permissions are granted and you refresh the page in WAC, you should be able to save the option to use Azure as your identity provider. Now that it's using Azure, you can use Conditional Access to force the use of MFA.

With that, you should now have a highly available deployment of Windows Admin Center, complete with MFA before you RDP into any servers, and without deploying additional applications to those servers. You can use host firewalls and Azure Network Security Groups to restrict RDP and WinRM traffic down to the nodes in the cluster, removing direct access.
