How to set up VMFleet to stress test your Storage Spaces Direct deployment

26 May

Following Microsoft's demos of Storage Spaces Direct at the Ignite conference and the Intel Developer Forum, Microsoft recently published a collection of PowerShell scripts known as “VMFleet”.

VMFleet is basically a collection of PowerShell scripts that lets you easily create a large number of VMs and run stress tests inside them (mostly with the DISKSPD tool), either to test the performance of the underlying Storage Spaces Direct deployment or simply for demonstration purposes.

After you have downloaded the files, it is not quite obvious how to get started, as the included documentation does not give you simple step-by-step guidelines to get everything up and running.

So I decided to write my own short guide with the steps needed to get VMFleet up and running.

Update 03.01.2017: Since I wrote this post in May 2016, Microsoft has apparently extended VMFleet with some new scripts to set up the environment, so I have updated the setup steps below. Thanks to my colleague Patrick Mutzner for pointing that out!

Prerequisites:

  • A VHDX file with Windows Server 2012 R2 / 2016 Core installed and the password of the local Administrator account set (sysprep must not be run!).
    A fixed VHDX file is recommended to eliminate “warmup” effects when starting the test runs (see the Convert-VHD sketch after this list).
    Note: The VHDX should be at least 20 GB in size because the VMFleet scripts will create a 10 GB load test file inside the VMs.
  • A functional Storage Spaces Direct cluster (with an existing storage pool and no virtual disks configured)
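If your template VHDX is dynamically expanding, you can convert it to a fixed-size disk beforehand with the built-in Convert-VHD cmdlet. A minimal sketch, where the source and destination paths are just examples:

    # Convert a dynamically expanding template VHDX into a fixed-size VHDX
    # (paths are examples - adjust to your environment)
    Convert-VHD -Path C:\Source\template.vhdx `
                -DestinationPath C:\Source\template-fixed.vhdx `
                -VHDType Fixed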

VMFleet Setup:

  1. First you need to create one volume/CSV per node to store the test VMs (a consolidated command sketch covering these setup steps follows after this list).
  2. Then create an additional CSV to store the VMFleet scripts and to collect the test results from the VMs.
  3. Extract the ZIP file with the VMFleet scripts on one of the cluster nodes.
    (For example, extract the downloaded ZIP file to C:\Source on the first cluster node.)
  4. Run the install-vmfleet.ps1 script. This will create the needed folder structure on the “collect” CSV (the volume created in step 2).
  5. Copy DISKSPD.exe to C:\ClusterStorage\Collect\Control\Tools.

    Note: Everything in this folder will later automatically be copied into every test VM. So you can also copy other files to this folder if you need them later inside the test VMs.

  6. Copy the template VHDX to C:\ClusterStorage\Collect.
  7. Run the update-csv.ps1 script (under C:\ClusterStorage\Collect\Control) to rename the mount points of the CSV volumes and to distribute the CSVs evenly across the cluster so that every node owns one CSV.
  8. Now it’s time to create a bunch of test VMs on the per-node CSVs created in step 1. This is done by the create-vmfleet.ps1 script.

    As parameters you must specify the path to the template VHDX file, how many VMs per node you would like to create, the Administrator password of the template VHDX, and a username and password with access to the cluster nodes (HostAdmin in this example). This account is used inside the VMs to connect back to the cluster so that the scripts running inside the VMs get access to the collect CSV (C:\ClusterStorage\Collect).
  9. Because create-vmfleet.ps1 creates the VMs with the default values of the New-VM cmdlet, you should now run the set-vmfleet.ps1 script to change the vCPU count and memory size of the test VMs to your desired values.
  10. (Optional) Before you begin with the tests, you can check your Storage Spaces Direct cluster configuration with the test-clusterhealth.ps1 script. The script checks all nodes for RDMA configuration errors and performs some basic health checks of the Storage Spaces Direct setup.
  11. As the last preparation step, start the watch-cluster.ps1 script. It gives you a nice dashboard-like overview of the most interesting storage performance counters across all nodes in the cluster, so you can follow the storage load and performance at a glance.
  12. At this point you are ready to start your first test run.
    Simply start all test VMs manually or with the start-vmfleet.ps1 script. After the VMs have booted, they automatically look for the run.ps1 script on the collect CSV (at C:\ClusterStorage\Collect\Control). By default the test run is in a paused state, so to start the actual load test simply run the clear-pause.ps1 script. This kicks off DISKSPD.exe in every VM, and you can watch the numbers from watch-cluster.ps1 explode… 😉
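To tie the steps above together, here is a rough end-to-end command sketch as it could be run on one cluster node. All names, sizes, VM counts and credentials are just examples, and the exact parameter names of the VMFleet scripts (install-vmfleet.ps1, create-vmfleet.ps1, set-vmfleet.ps1) may differ slightly depending on the VMFleet version you downloaded, so check them with Get-Help before running:

    # Rough end-to-end sketch of the steps above, run on one cluster node.
    # All names, sizes, counts and credentials are examples.

    # Steps 1 + 2: one CSV per node plus the "collect" CSV
    foreach ($node in (Get-ClusterNode).Name) {
        New-Volume -StoragePoolFriendlyName S2D* -FriendlyName $node `
            -FileSystem CSVFS_ReFS -Size 1TB
    }
    New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Collect `
        -FileSystem CSVFS_ReFS -Size 200GB

    # Steps 3 - 6: install VMFleet, copy DISKSPD and the template VHDX
    C:\Source\VMFleet\install-vmfleet.ps1 -Source C:\Source\VMFleet
    Copy-Item C:\Source\diskspd.exe C:\ClusterStorage\Collect\Control\Tools
    Copy-Item C:\Source\template.vhdx C:\ClusterStorage\Collect

    # Step 7: rename the CSV mount points and balance the CSV ownership
    cd C:\ClusterStorage\Collect\Control
    .\update-csv.ps1

    # Steps 8 + 9: create the test VMs and set their vCPU count and memory size
    .\create-vmfleet.ps1 -BaseVHD C:\ClusterStorage\Collect\template.vhdx `
        -VMs 20 -AdminPass 'P@ssw0rd' -ConnectUser 'Domain\HostAdmin' -ConnectPass 'P@ssw0rd'
    .\set-vmfleet.ps1 -ProcessorCount 2 -MemoryStartupBytes 2GB `
        -MemoryMinimumBytes 2GB -MemoryMaximumBytes 2GB

    # Steps 10 - 12: health check, dashboard, start the fleet, release the pause
    .\test-clusterhealth.ps1
    .\watch-cluster.ps1        # best run in a separate PowerShell window
    .\start-vmfleet.ps1
    .\clear-pause.ps1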

To change the test pattern, simply change the parameters in the run.ps1 script and either restart all test VMs (with the stop-vmfleet.ps1 and start-vmfleet.ps1 scripts) or pause and resume the tests with the set-pause.ps1 and clear-pause.ps1 scripts.
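For reference, run.ps1 essentially wraps a DISKSPD command line, so the parameters you tune are the usual DISKSPD switches. The exact variable names inside run.ps1 depend on the VMFleet version; the following is only a representative DISKSPD invocation as it could be executed inside each VM (the file path and all values are examples):

    # Representative DISKSPD invocation inside a test VM (all values are examples):
    #   -b4k  = 4 KiB block size       -t2  = 2 threads
    #   -o20  = 20 outstanding I/Os    -w0  = 0% writes (use e.g. -w30 for a 70/30 read/write mix)
    #   -r    = random I/O             -d60 = 60 second duration
    #   -Sh   = disable software caching and hardware write caching
    C:\run\diskspd.exe -b4k -t2 -o20 -w0 -r -d60 -Sh C:\run\testfile.dat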

That’s it. VMFleet is ready. Have fun testing your S2D deployment and looking at (hopefully) some awesome IOPS numbers. 🙂

SMB Direct connectivity options with HP servers

22 Feb

Are there any 10GbE network cards from HP which you can use for SMB Direct?
Recently I did some research on the options for 10GbE/RDMA NICs in HP ProLiant rack servers (DL380/DL360) and found some interesting new options for the relatively new Gen9 servers:

The 556FLR and 544FLR are FlexibleLOM (FLR) adapters. This is a special option of the ProLiant rack servers to extend the onboard connectivity with additional network ports alongside the four built-in 1 Gbit/s onboard NICs. The CN1200E is a regular PCI-E card for installation in a normal PCI-E slot.

So the answer is yes, there are now some NICs available from HP which you can use for SMB Direct (and, thanks to the NVGRE offload capabilities, even for efficient VM connectivity with NVGRE).
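Regardless of the vendor, you can quickly check on a host whether a NIC actually exposes RDMA and whether SMB uses it, with a few built-in cmdlets (nothing HP specific involved here):

    # List NICs that report RDMA capability and whether RDMA is enabled on them
    Get-NetAdapterRdma

    # Check whether the SMB client sees RDMA-capable interfaces
    Get-SmbClientNetworkInterface

    # With active SMB traffic, verify that SMB Multichannel established RDMA connections
    Get-SmbMultichannelConnection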

Why are the HP branded adapters interesting?
Sure, you could use any RDMA-capable NIC without HP branding in an HP server, but the benefit of the HP-branded adapters is that you get firmware and driver upgrades included with HP SUM, just like for any other component of the HP servers. So you can update all drivers and firmware with one tool at once.
Furthermore, you don’t have to debate with HP about whether the network adapter is responsible for an issue when you open a hardware support case for something else. You know what I mean… 😉

But no iWARP with HP?
It looks like HP has no love for iWARP. There are no HP-branded NICs which support iWARP, or at least I couldn’t find any. This means that if you want to use HP adapters for SMB Direct, you have to go down the RoCE path, which requires some more investment in the networking part (DCB/PFC). MVP Didier Van Hoye has some great blog posts about this topic.
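To give you an idea of what this investment looks like on the Windows side, here is a minimal DCB/PFC sketch for SMB Direct over RoCE. The priority value (3) and the 50% bandwidth share are common examples rather than requirements, the adapter names are placeholders, and matching PFC/ETS settings also have to be configured on the physical switches:

    # Minimal DCB/PFC sketch for SMB Direct over RoCE (Windows side only).
    # Priority 3 and the 50% bandwidth share are common examples - adjust to your design.

    # Install the DCB feature
    Install-WindowsFeature Data-Center-Bridging

    # Tag SMB Direct traffic (port 445) with priority 3
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

    # Enable priority flow control only for the SMB priority
    Enable-NetQosFlowControl -Priority 3
    Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

    # Reserve a minimum bandwidth share for the SMB traffic class
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

    # Apply QoS/DCB on the RDMA-capable adapters (adapter names are placeholders)
    Enable-NetAdapterQos -Name "SLOT 1 Port 1","SLOT 1 Port 2"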

Update April 1, 2015:
As stated on the HP blog, there are now two new Mellanox ConnectX-3 Pro based adapters for Gen9 servers:

Thanks to the Mellanox chip, the new adapters support both RDMA (RoCE) and NVGRE offloading like the older 544FLR, but at a much lower price.
However, while the 544FLR adapter also supports 40GbE and InfiniBand, the new 546 adapters support “only” 10GbE.