Microsoft Hyper-V Cluster Setup
I have been researching what it takes to run a Hyper-V cluster. My list of requirements is as follows:
- Systems must be the same level of hardware (you can't mix AMD and Intel CPUs)
- All Nodes/Hosts must have the same networks, listed in the same order (minimum of two)
- All Nodes/Hosts must have access to what will be the cluster storage
- You should have primary and secondary AD Controllers for the cluster (physical machines preferred)
- Network settings and IP addresses on the hosts/nodes must be unique; compare the settings between each network adapter and the switch port it connects to and make sure no settings are in conflict.
- The AD Controllers should be set up for DNS and DHCP failover (DHCP failover requires Server 2012 R2 or later). The servers in the cluster must use Domain Name System (DNS) for name resolution. It is recommended that the cluster Nodes/Hosts are just member servers.
- Domain role: all servers in the cluster must be in the same Active Directory domain. As a best practice, all clustered servers should have the same domain role.
- You need two or more Hosts/Nodes for a failover cluster (see the role install and validation sketch after this list)
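To make sure every node starts out identical, the role install and the validation check can be scripted instead of clicked through. A minimal PowerShell sketch, assuming two placeholder node names (HV-NODE1 and HV-NODE2, not my actual hosts):

    # Run on each node: install Hyper-V and Failover Clustering with the management tools
    Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

    # From any one node: validate the prospective members before building the cluster
    Test-Cluster -Node HV-NODE1, HV-NODE2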
I originally tried to make Server 2012 R2 run off SMB 3 shares served by Samba on FreeNAS, but found it impossible to get working, as the two implementations seem to be a bit different.
Current Hardware Setup:
- 2 x Intel Atom Systems for AD Controllers.
- 1 x AMD Blade System with 6 Nodes.
- 2 x 3.2 GHz 4-core nodes with 32 GB RAM, dual gigabit Intel i350 LACP LAGG network interface
- 4 x 2.6 GHz 8-core nodes with 32 GB RAM, dual gigabit Intel i350 LACP LAGG network interface
- 1 x 48 port Allied Telesis Websmart Switch
- 2 x FreeNAS NAS appliances configured for iSCSI target shares, each with a dual gigabit Intel i350 LACP LAGG network interface
Hyper-V Cluster Setup
Network Interfaces:
The cluster has a total of six networks that the virtual machines will use on the hosts; they are as follows.
Untagged VLAN 300
192.168.0.0/24 - Communication Network/NAS; also includes the AD Controllers for the cluster
- Cluster/Client Communication Permitted
Tagged VLAN 301
192.168.1.0/24 - Staff Domain Infrastructure Network
- No Cluster Communication Permitted
Tagged VLAN 302
192.168.2.0/24 - Infrastructure Network (access to switches etc.)
- Cluster/Client Communication Permitted
Tagged VLAN 303
192.168.3.0/24 - Primary Database Application
- No Cluster Communication Permitted
Tagged VLAN 304
192.168.4.0/24 - Contractor Network
- No Cluster Communication Permitted
Tagged VLAN 305
192.168.5.0/24 - Specific DMZ Communications Network
- No Cluster Communication Permitted
The Hyper-V cluster NICs are teamed using the Intel driver; FreeNAS does its teaming in software, and the switch ports are aggregated as well, all with the VLAN tags set up as stated above. It is important to have the networks in the same order on each of the Hyper-V cluster nodes, otherwise you will have communication issues. See my post on troubleshooting Hyper-V cluster communication errors.
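The Permitted/Not Permitted notes above map to the Role property on each cluster network once the cluster is up (0 = no cluster communication, 1 = cluster only, 3 = cluster and client). A sketch of checking and setting them; the network names below are examples, not necessarily what the cluster auto-generates:

    # List the cluster networks with their subnets and current roles
    Get-ClusterNetwork | Format-Table Name, Address, Role

    # Allow cluster and client traffic on the communication/NAS network (VLAN 300 here)
    (Get-ClusterNetwork -Name "Cluster Network 1").Role = 3

    # Block cluster communication entirely on a VM-only network (e.g. the contractor VLAN)
    (Get-ClusterNetwork -Name "Cluster Network 5").Role = 0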
Storage:
The FreeNAS systems are identical, with mirrored vdev setups following the best practices in the FreeNAS Guide linked below.
- 4 x 480 GB enterprise-level SSDs - mirrored vdevs, 1 TB capacity with multiple-disk redundancy
- 6 x 4 TB enterprise-level HDDs - mirrored vdevs, 12 TB capacity with multiple-disk redundancy
- 1 x 240 GB enterprise-level SSD - for the SLOG (ZFS intent log)
- 1 x 120 GB enterprise-level SSD - for the L2ARC cache
FreeNAS has been configured with two pools ("tanks"), each being a set of drives set up as mirrored vdevs as stated above. For more information about FreeNAS I recommend going through the FreeNAS Guide.
The NAS has been configured with the following iSCSI targets:
- 10 GB Cluster Witness Disk
- 1.5 TB High-Performance SSD Target
- 12 TB High-Density HDD Target
Cluster Nodes:
As stated before, the Cluster Nodes are simply member servers of the cluster's Active Directory domain, which is controlled by the two physical AD Controllers. The Nodes have the Microsoft Failover Clustering role installed, and all Nodes are connected to the FreeNAS iSCSI targets.
View my video on connecting Windows to FreeNAS iSCSI Targets
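For anyone who prefers the command line to the video, this is roughly the equivalent in PowerShell; the portal address and IQN are placeholders, so substitute the values from your own FreeNAS:

    # Make sure the iSCSI initiator service is running and starts automatically
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # Point the initiator at the FreeNAS portal on the storage network (placeholder address)
    New-IscsiTargetPortal -TargetPortalAddress 192.168.0.50

    # List the targets the portal advertises, then connect persistently so it survives reboots
    Get-IscsiTarget
    Connect-IscsiTarget -NodeAddress "iqn.2005-10.org.freenas.ctl:ssdtarget" -IsPersistent $true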
Once all the nodes are added (to get them added into a cluster you have to pass the Cluster Configuration Wizard), they will be joined to the cluster. The cluster shared storage is typically accessible at C:\ClusterStorage\$DISKNAME.
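The same steps can also be done from PowerShell for reference; the cluster name, IP address and disk name below are examples, not my production values:

    # Create the cluster from validated nodes, then join any remaining nodes
    New-Cluster -Name HVCLUSTER01 -Node HV-NODE1, HV-NODE2 -StaticAddress 192.168.0.60
    Add-ClusterNode -Name HV-NODE3

    # Add the iSCSI disks to the cluster and promote the data disks to Cluster Shared Volumes,
    # which is what shows up under C:\ClusterStorage on every node
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name "Cluster Disk 2"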
When you start importing your virtual machines, save them to the cluster shared disks; then you can use the Configure Role wizard to make the VM guests highly available.
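The Configure Role wizard step can also be scripted; a short sketch, assuming the VM's files already sit on a Cluster Shared Volume (the VM name is a placeholder):

    # Make an existing VM highly available (the equivalent of the Configure Role wizard)
    Add-ClusterVirtualMachineRole -VMName "APP-SERVER-01"

    # Confirm the new clustered role is listed
    Get-ClusterGroup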
Summary:
I've enjoyed learning about Hyper-V clustering. Getting it set up and working isn't too difficult, and maintaining it is pretty trivial as well. It's not perfect, but for what my organization needs it fits the bill quite nicely. I am very proud of the work I've done improving the current cluster; it has scaled pretty nicely, and a lot of the problems the organization had running off a single Hyper-V host have gone away. See below for some items I will need to do to improve the Hyper-V cluster setup and some good resources for reading about Hyper-V clustering.
Changes that need to be made:
- The iSCSI targets need to be on a separate/private network from the cluster communication network
- Set up a separate cluster quorum/witness disk (probably from a different NAS) for updates etc. (see the sketch after this list)
- Spread the cluster across multiple switches so a single switch failure can't take it down
- Fix some errors in the cluster validation report I ran recently
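For the witness item specifically, once a small dedicated disk (or a file share on separate storage) exists, the swap is a one-liner; the names below are placeholders:

    # Move the quorum witness to a dedicated disk presented from separate storage
    Set-ClusterQuorum -DiskWitness "Witness Disk"

    # Or, if the second NAS only offers SMB, use a file share witness instead
    Set-ClusterQuorum -FileShareWitness "\\NAS2\ClusterWitness"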
Good Reads
https://technet.microsoft.com/en-us/library/jj863389(v=ws.11).aspx
https://technet.microsoft.com/en-us/library/cc732181(v=ws.10).aspx
https://blogs.technet.microsoft.com/askcore/2014/02/19/configuring-windows-failover-cluster-networks/
https://technet.microsoft.com/en-us/library/hh127064.aspx
https://blogs.technet.microsoft.com/askpfeplat/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form/
http://www.altaro.com/hyper-v/19-best-practices-hyper-v-cluster/
https://channel9.msdn.com/Events/TechEd/Europe/2014/CDP-B335