Sunday, April 15, 2018

FreeNAS 11.1.4, Cisco Meraki 10Gig network and Server 2016

There is only so much data you can get through on a 1 Gig network.  Right now we have a 7-node cluster plus a couple of big, beefy Intel servers with 1 Gig LAGGs set up for throughput.  We purchased a piece of software called PathSolutions; I had heard about it on This Week in Enterprise Tech, and at the time we were having big problems with dropouts, Active Directory login times, and profiles.  Our infrastructure is currently set up with a minimum 2 Gig LACP LAGG on all cluster and NAS servers.  PathSolutions helped us optimize our network, but we were still having issues with switches dropping or being unresponsive for short periods of time.  This hasn't caused any issues to date, but it shows that we are at times maxing out our network, as you can see below.




As you can see, our network is doing OK; there aren't any really big issues, but we are completely maxing it out when we run our backups.  We are currently using Allied Telesis switches, but we are slowly migrating to Cisco Meraki switches because of the cloud-based network management features.  It is really quite impressive; we looked at the Allied Telesis management solutions, and it was simply much more cost effective to go with the Merakis.
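To put a rough number on "maxing it out", the kind of math a monitoring tool like PathSolutions is doing can be sketched in a few lines of Python: sample an interface's octet counter twice (ifHCInOctets over SNMP, for example) and turn the delta into percent utilization of the link.  This is only an illustration with made-up counter values, not our actual monitoring setup.

def utilization_pct(octets_t0, octets_t1, interval_s, link_speed_bps):
    """Percent utilization of a link between two octet-counter samples."""
    bits_moved = (octets_t1 - octets_t0) * 8
    return 100.0 * bits_moved / (interval_s * link_speed_bps)

# Hypothetical ifHCInOctets readings taken 60 seconds apart on a 1 Gig uplink.
before, after = 52_000_000_000, 59_000_000_000
print(f"{utilization_pct(before, after, 60, 1_000_000_000):.1f}% of the link used")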

Test Environment Setup:

FreeNAS:
  • OS 11.1-U4
  • AMD Phenom II X4 965 @ 3.0 GHz
  • 16 GB DDR3 RAM (non-ECC)
  • 4 x 3TB WD Red drives
  • 4 x 4TB WD Gold drives
  • Intel 320 80GB SSD ZIL (SLOG)
  • Intel 320 80GB SSD L2ARC cache
  • Intel X540-T2 Supermicro add-on card


Drives on the FreeNAS


Drive Setup Mirrored VDevs
iSCSI Connection Disk


Windows:
Test Server

Switches:
  • 2 x Cisco Meraki MS350-24X switches with stacking enabled
Cables:

The test server connected to the FreeNAS server via an iSCSI target, and we ran two different types of tests: one connecting directly to the zpool drives (a device extent) and the other connecting to a file in the mirrored vdev pool (a file extent).
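For anyone recreating the Windows side, the connection can be scripted with the built-in iscsicli utility; here is a rough sketch driven from Python.  The portal address and IQN are placeholders for whatever your FreeNAS extent advertises, and the iSCSI Initiator GUI does the exact same job.

import subprocess

PORTAL = "192.168.10.50"                         # hypothetical FreeNAS data IP
TARGET = "iqn.2005-10.org.freenas.ctl:bench"     # hypothetical target IQN

def iscsicli(*args):
    """Run one iscsicli command and print whatever it reports back."""
    result = subprocess.run(["iscsicli", *args], capture_output=True, text=True, check=True)
    print(result.stdout)

iscsicli("QAddTargetPortal", PORTAL)   # tell the initiator where FreeNAS lives
iscsicli("ListTargets")                # confirm the target shows up
iscsicli("QLoginTarget", TARGET)       # connect; the disk then appears in Disk Management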


iSCSI Target

With the setup done and the drive formatted, I proceeded with the testing.  We formatted the iSCSI target a few times during our testing, particularly looking at the difference between NTFS and ReFS, since the latter is supposed to be better for Veeam and most other applications.
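Since the target got reformatted several times, it was worth scripting that step too.  Here is a rough sketch using the Format-Volume cmdlet (available on Server 2012 R2 and later) called from Python; the drive letter and label are placeholders, and swapping ReFS for NTFS produces the other set of runs.

import subprocess

DRIVE_LETTER = "E"      # hypothetical letter the iSCSI disk mounted as
FILE_SYSTEM = "ReFS"    # change to "NTFS" for the NTFS benchmark passes

# Quick format without the interactive confirmation prompt.
command = (
    f"Format-Volume -DriveLetter {DRIVE_LETTER} "
    f"-FileSystem {FILE_SYSTEM} -NewFileSystemLabel BenchTest -Confirm:$false"
)
subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)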

Here are the different benchmarks we produced using CrystalDiskMark.

All the benchmarks.


NTFS iSCSI file target


NTFS iSCSI zpool target


ReFS iSCSI file target


ReFS iSCSI zpool target
The numbers I'm getting are fantastic, and I'm going to look into whether the 10Gig connection can be lagged with two 1Gig connections (more for redundancy than anything).  That way, if we lose the 10Gig connection, we can continue to "limp" along until we can fix the issue.
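FreeNAS sits on FreeBSD, and mixing a 10Gig port with a 1Gig port in one lagg generally only works with the failover protocol (LACP wants equal-speed members), so the fallback idea would look something like the sketch below.  The interface names and address are placeholders, and on FreeNAS itself this would be set up under Network > Link Aggregations rather than by hand.

import subprocess

def ifconfig(*args):
    """Small wrapper so each lagg step reads clearly."""
    subprocess.run(["ifconfig", *args], check=True)

# Hypothetical ports: ix0 = Intel X540 10Gig, igb0 = onboard 1Gig.
ifconfig("lagg0", "create")
# Failover keeps all traffic on the first (master) port and only moves to the
# 1Gig port if the 10Gig link goes down.
ifconfig("lagg0", "laggproto", "failover",
         "laggport", "ix0", "laggport", "igb0",
         "192.168.10.50/24", "up")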


Production iSCSI HDD Benchmarks

Production iSCSI SSD Benchmark
As you can see, the test server shows a 10x increase in Seq Q32T1 read performance and close to a 5x increase in write at minimum, with all the other benchmarks showing almost a 2x increase in performance.  The actual production servers have much better hardware than the test FreeNAS system, with the exception of the 3 and 4 TB spinning drives.  I am really looking forward to getting the new 10Gig network put in place; the trick is going to be doing this without breaking the infrastructure with the iSCSI targets.
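The sequential jump lines up with simple line-rate math: a 1Gig path tops out around 125 MB/s before overhead, while 10Gig allows roughly 1250 MB/s, so a tenfold Seq Q32T1 read gain is about what the wire permits.  A quick back-of-the-envelope check:

def line_rate_mb_s(gigabits_per_second):
    """Theoretical ceiling of a link in MB/s, ignoring protocol overhead."""
    return gigabits_per_second * 1000 / 8

for label, speed in (("1Gig", 1), ("2 x 1Gig LACP", 2), ("10Gig", 10)):
    print(f"{label:>14}: ~{line_rate_mb_s(speed):.0f} MB/s ceiling")

In practice a single iSCSI session also tends to ride just one member of an LACP lagg, which is part of why the gain over the 2 Gig laggs still looks closer to tenfold than fivefold.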
