Cisco HyperFlex Stretched Cluster Performance Testing – HCIBench

I previously performed some performance testing of my new HyperFlex Stretched Cluster using IOMeter. The feedback I received from some readers was that IOMeter is an “outdated” tool designed for measuring older storage systems and no longer suitable for modern hyper-converged platforms.

Therefore, I ran another performance test using HCIBench, which is what VMware recommends for testing vSAN. HCIBench is developed as a “fling” by VMware employees and is essentially a wrapper around Oracle’s VDBench. VDBench is also the tool used in Cisco’s published HyperFlex performance tests.

Deploying HCIBench is straightforward: deploy the HCIBench OVA into the vSphere cluster and then configure the performance test to run. VDBench does have to be downloaded manually from Oracle and uploaded to the controller VM. The controller VM then spawns as many worker VMs as required, and each worker VM can be “primed” with random data, which is recommended when testing a system that performs deduplication. Worker VMs are configured with 4 vCPUs, 8GB of memory and a 16GB OS disk; additional data disks are created according to the VDBench test options.

VDBench 5.04.07, the latest version available as of this writing, was used for this test.

As in my previous IOMeter test, the platform remains the same:

HyperFlex M5 40Gb Stretched Cluster Configuration

  • 2 sites – Site1 and Site2 stretched via VXLAN. Latency between sites is <1ms.
  • Each site has a pair of 40Gb UCS Fabric Interconnects (total 4 x UCS FI6332 – 2 per site).
  • Each site has 4 x HyperFlex HX240c-M5SX converged nodes (total 8 x HX240M5 – 4 per site).
  • Each server has 1 x 1.6TB cache SSD and 8 x 1.8TB capacity SAS HDDs.
  • Available storage capacity (before deduplication and compression savings) is 24TB.
  • Compression and deduplication are enabled by default on HyperFlex.
  • HyperFlex stretched clusters have a concept of datastore locality, where one of the sites is nominated as the master for each datastore.
  • 2 datastores are created with locality set in each site (HX-site1-DS01 and HX-site2-DS01).

Test Scenario 1 (Not Primed) (10 x 100GB VMs / 4K block size / 50% Read/Write / 50% Random) – Generic application workload

VDBench Options (a parameter-file sketch follows the list):

  • Number of VMs: 10
  • Number of Data Disks (per VM): 5
  • Size of Data Disk: 20GB
  • Working-Set Percentage: 100%
  • Number of Threads Per Disk: 2
  • Block Size: 4K
  • Read Percentage: 50%
  • Random Percentage: 50%
  • Test Duration: 20 mins

Results:

The performance graphs from HyperFlex show a maximum of approximately 59K combined read and write IOPS for the 1TB workload deployed.

The deduplication and compression ratios are also now much lower, at 52.8% storage optimization, compared to the previous test where cloned VMs were used with IOMeter.

The performance results from the middle of the test (the 10-minute mark) for two of the VMs, each deployed on a different datastore, are shown below:

Each VM is occasionally able to reach a peak of approximately 5,000 IOPS.

Results of VM vdbench-HX-site1-DS01-0-1

Aug 16, 2018    interval        i/o   MB/sec   bytes   read     resp
                               rate  1024**2     i/o    pct     time
18:10:42.003         631     4044.0    15.80    4096  49.31    2.466
18:10:43.002         632     4330.0    16.91    4096  50.05    2.296
18:10:44.004         633     3978.0    15.54    4096  49.47    2.502
18:10:45.003         634     4732.0    18.48    4096  49.58    2.124
18:10:46.002         635     4981.0    19.46    4096  49.87    1.997
18:10:47.002         636     3957.0    15.46    4096  48.98    2.521
18:10:48.002         637     3132.0    12.23    4096  49.62    3.187
18:10:49.002         638     4430.0    17.30    4096  49.21    2.250
18:10:50.002         639     2474.0     9.66    4096  49.92    4.014
18:10:51.002         640     4153.0    16.22    4096  49.48    2.411
18:10:52.003         641     3313.0    12.94    4096  50.23    3.019
18:10:53.002         642     3844.0    15.02    4096  50.18    2.594
18:10:54.002         643     3250.0    12.70    4096  49.94    3.042
18:10:55.002         644     3469.0    13.55    4096  50.99    2.888
18:10:56.002         645     3721.0    14.54    4096  48.94    2.678
18:10:57.002         646     3749.0    14.64    4096  50.28    2.671
18:10:58.002         647     5282.0    20.63    4096  51.38    1.856
18:10:59.003         648     4865.0    19.00    4096  50.63    2.088
18:11:00.002         649     2988.0    11.67    4096  50.33    3.331
18:11:01.002         650     4444.0    17.36    4096  50.86    2.237
18:11:02.002         651     3823.0    14.93    4096  50.41    2.624
18:11:03.002         652     4407.0    17.21    4096  50.99    2.241
18:11:04.002         653     3003.0    11.73    4096  49.58    3.330
18:11:05.002         654     4645.0    18.14    4096  51.24    2.144
18:11:06.003         655     3769.0    14.72    4096  49.59    2.669
18:11:07.002         656     1852.0     7.23    4096  49.14    5.391
18:11:08.002         657     2727.0    10.65    4096  50.50    3.623
18:11:09.002         658     3870.0    15.12    4096  51.50    2.605
18:11:10.003         659     4402.0    17.20    4096  49.73    2.261
18:11:11.002         660     4383.0    17.12    4096  49.14    2.276

Results of VM vdbench-HX-site2-DS01-0-1

Aug 16, 2018    interval        i/o   MB/sec   bytes   read     resp
                               rate  1024**2     i/o    pct     time
18:10:12.003         601     3608.0    14.09    4096  51.36    2.768
18:10:13.002         602     4705.0    18.38    4096  50.67    2.127
18:10:14.003         603     3287.0    12.84    4096  50.81    3.029
18:10:15.002         604     3981.0    15.55    4096  49.91    2.507
18:10:16.003         605     3750.0    14.65    4096  50.59    2.655
18:10:17.002         606     3198.0    12.49    4096  50.50    3.129
18:10:18.003         607     4035.0    15.76    4096  49.49    2.481
18:10:19.002         608     4293.0    16.77    4096  50.15    2.322
18:10:20.003         609     3887.0    15.18    4096  50.81    2.571
18:10:21.002         610     4132.0    16.14    4096  49.83    2.408
18:10:22.003         611     3440.0    13.44    4096  50.03    2.908
18:10:23.002         612     4795.0    18.73    4096  49.86    2.074
18:10:24.003         613     2812.0    10.98    4096  49.22    3.537
18:10:25.002         614     3616.0    14.13    4096  50.06    2.764
18:10:26.003         615     3711.0    14.50    4096  51.17    2.704
18:10:27.002         616     3798.0    14.84    4096  50.00    2.623
18:10:28.002         617     3155.0    12.32    4096  48.81    3.150
18:10:29.002         618     4012.0    15.67    4096  50.75    2.486
18:10:30.002         619     4111.0    16.06    4096  50.13    2.428
18:10:31.002         620     2762.0    10.79    4096  51.16    3.593
18:10:32.002         621     2910.0    11.37    4096  48.93    3.454
18:10:33.003         622     3160.0    12.34    4096  49.34    3.176
18:10:34.002         623     3167.0    12.37    4096  49.35    3.145
18:10:35.002         624     4843.0    18.92    4096  50.03    2.063
18:10:36.002         625     4214.0    16.46    4096  49.26    2.363
18:10:37.002         626     3844.0    15.02    4096  51.01    2.584
18:10:38.003         627     5668.0    22.14    4096  49.75    1.770
18:10:39.002         628     3904.0    15.25    4096  49.59    2.539
18:10:40.006         629     4348.0    16.98    4096  49.68    2.302
18:10:41.002         630     4114.0    16.07    4096  51.39    2.430


Test Scenario 2 (Not Primed) (10 x 100GB VMs / 8K block size / 50% Read/Write / 50% Random) – Exchange Server workload

VDBench Options (only the block size changes from Scenario 1; see the sketch after the list):

  • Number of VMs: 10
  • Number of Data Disks (per VM): 5
  • Size of Data Disk: 20GB
  • Working-Set Percentage: 100%
  • Number of Threads Per Disk: 2
  • Block Size: 8K
  • Read Percentage: 50%
  • Random Percentage: 50%
  • Test Duration: 20 mins
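
Relative to the Scenario 1 sketch, only the transfer size in the workload definition changes:

* Scenario 2: same layout as Scenario 1, with 8K transfers
wd=wd1,sd=sd*,xfersize=8k,rdpct=50,seekpct=50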

Results:

The performance graphs from HyperFlex show a maximum of approximately 55K combined read and write IOPS for the 1TB workload deployed.

The performance results from the middle of the test (the 10-minute mark) for two of the VMs, each deployed on a different datastore, are shown below:

Each VM is occasionally able to reach a peak of approximately 4,200 IOPS.

Results of VM vdbench-HX-site1-DS01-0-1

Aug 17, 2018    interval        i/o   MB/sec   bytes   read     resp
                               rate  1024**2     i/o    pct     time
02:01:25.002         571     2958.0    23.11    8192  50.34    3.362
02:01:26.002         572     3157.0    24.66    8192  49.07    3.165
02:01:27.003         573     2565.0    20.04    8192  49.47    3.880
02:01:28.002         574     4199.0    32.80    8192  50.15    2.386
02:01:29.003         575     3089.0    24.13    8192  51.12    3.177
02:01:30.002         576     3634.0    28.39    8192  50.03    2.773
02:01:31.003         577     2708.0    21.16    8192  49.00    3.698
02:01:32.002         578     2913.0    22.76    8192  49.95    3.422
02:01:33.002         579     3718.0    29.05    8192  50.24    2.675
02:01:34.002         580     3639.0    28.43    8192  48.72    2.744
02:01:35.002         581     2945.0    23.01    8192  47.81    3.390
02:01:36.002         582     3587.0    28.02    8192  50.93    2.776
02:01:37.002         583     3305.0    25.82    8192  49.74    3.019
02:01:38.002         584     3040.0    23.75    8192  49.14    3.282
02:01:39.004         585     3031.0    23.68    8192  49.95    3.200
02:01:40.003         586     2982.0    23.30    8192  48.86    3.447
02:01:41.002         587     3912.0    30.56    8192  50.23    2.533
02:01:42.003         588     3382.0    26.42    8192  49.59    2.961
02:01:43.002         589     2899.0    22.65    8192  49.12    3.430
02:01:44.002         590     3789.0    29.60    8192  50.49    2.643
02:01:45.003         591     3629.0    28.35    8192  49.52    2.744
02:01:46.003         592     3199.0    24.99    8192  49.89    3.124
02:01:47.002         593     3085.0    24.10    8192  48.95    3.221
02:01:48.003         594     4208.0    32.88    8192  50.12    2.374
02:01:49.002         595     3623.0    28.30    8192  49.93    2.731
02:01:50.010         596     3694.0    28.86    8192  50.54    2.718
02:01:51.002         597     3982.0    31.11    8192  49.60    2.505
02:01:52.002         598     4037.0    31.54    8192  49.89    2.465
02:01:53.002         599     3359.0    26.24    8192  48.85    2.966
02:01:54.002         600     3312.0    25.88    8192  48.52    2.998

Results of VM vdbench-HX-site2-DS01-0-1

Aug 17, 2018    interval        i/o   MB/sec   bytes   read     resp
                               rate  1024**2     i/o    pct     time
02:01:24.003         571     3105.0    24.26    8192  49.69    3.212
02:01:25.002         572     3256.0    25.44    8192  49.75    3.067
02:01:26.002         573     4143.0    32.37    8192  49.77    2.408
02:01:27.002         574     4310.0    33.67    8192  50.46    2.315
02:01:28.002         575     3513.0    27.45    8192  49.96    2.825
02:01:29.006         576     3818.0    29.83    8192  50.39    2.626
02:01:30.002         577     2869.0    22.41    8192  48.38    3.459
02:01:31.002         578     3190.0    24.92    8192  49.28    3.134
02:01:32.003         579     3727.0    29.12    8192  49.42    2.690
02:01:33.002         580     4064.0    31.75    8192  50.42    2.450
02:01:34.002         581     3363.0    26.27    8192  50.07    2.959
02:01:35.002         582     3593.0    28.07    8192  49.07    2.770
02:01:36.002         583     3708.0    28.97    8192  49.00    2.702
02:01:37.002         584     3419.0    26.71    8192  50.48    2.922
02:01:38.002         585     3456.0    27.00    8192  50.09    2.875
02:01:39.002         586     3499.0    27.34    8192  49.47    2.865
02:01:40.002         587     4048.0    31.63    8192  49.56    2.452
02:01:41.002         588     3514.0    27.45    8192  48.66    2.845
02:01:42.003         589     3058.0    23.89    8192  50.49    3.275
02:01:43.002         590     3561.0    27.82    8192  51.11    2.798
02:01:44.002         591     4306.0    33.64    8192  50.00    2.293
02:01:45.002         592     3214.0    25.11    8192  49.28    3.111
02:01:46.002         593     3272.0    25.56    8192  50.06    3.071
02:01:47.002         594     4504.0    35.19    8192  48.96    2.214
02:01:48.003         595     4455.0    34.80    8192  50.62    2.224
02:01:49.011         596     4294.0    33.55    8192  50.00    2.331
02:01:50.002         597     3787.0    29.59    8192  49.59    2.619
02:01:51.002         598     4692.0    36.66    8192  51.30    2.138
02:01:52.002         599     3803.0    29.71    8192  49.70    2.601
02:01:53.002         600     3311.0    25.87    8192  50.05    3.036


Test Scenario 3 (Priming) (10 x 2TB VMs / 64K block size / 100% Write / Sequential) – SQL Server Log workload

VDBench Options (a parameter-file sketch follows the list):

  • Number of VMs: 10
  • Number of Data Disks (per VM): 2
  • Size of Data Disk: 1TB
  • Working-Set Percentage: 100%
  • Number of Threads Per Disk: 2
  • Block Size: 64K
  • Read Percentage: 0%
  • Random Percentage: 0%
  • Test Duration: 20 mins
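
In parameter-file terms (again a sketch with assumed device names), a fully sequential write workload sets both the read percentage and the random-seek percentage to zero:

* Scenario 3: 2 x 1TB data disks per worker VM (assumed device names)
sd=sd1,lun=/dev/sdb,threads=2,openflags=o_direct
sd=sd2,lun=/dev/sdc,threads=2,openflags=o_direct
* 64K transfers, 0% reads (write-only), 0% random seeks (sequential)
wd=wd1,sd=sd*,xfersize=64k,rdpct=0,seekpct=0
rd=rd1,wd=wd1,iorate=max,elapsed=1200,interval=1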

Results:

The performance graphs from HyperFlex show a maximum of approximately 8K combined read and write IOPS for the 20TB workload deployed.

There is no deduplication or compression benefit, as all the workload data was generated as random data by VDBench.

The performance results from the middle of the test (the 10-minute mark) for two of the VMs, each deployed on a different datastore, are shown below:

Each VM is occasionally able to reach a peak of approximately 1,200 IOPS.

Results of VM vdbench-HX-site1-DS01-0-1

Aug 20, 2018    interval        i/o   MB/sec   bytes   read     resp
                               rate  1024**2     i/o    pct     time
00:20:43.002         661      588.0    36.75   65536   0.00    6.748
00:20:44.001         662      792.0    49.50   65536   0.00    5.020
00:20:45.002         663      579.0    36.19   65536   0.00    6.877
00:20:46.002         664      605.0    37.81   65536   0.00    6.584
00:20:47.001         665      645.0    40.31   65536   0.00    6.044
00:20:48.001         666      610.0    38.13   65536   0.00    6.597
00:20:49.002         667      703.0    43.94   65536   0.00    5.700
00:20:50.001         668      724.0    45.25   65536   0.00    5.504
00:20:51.002         669      694.0    43.38   65536   0.00    5.700
00:20:52.002         670      588.0    36.75   65536   0.00    6.782
00:20:53.001         671      758.0    47.38   65536   0.00    5.247
00:20:54.001         672      666.0    41.63   65536   0.00    5.971
00:20:55.002         673      810.0    50.63   65536   0.00    4.887
00:20:56.001         674     1542.0    96.38   65536   0.00    2.589
00:20:57.002         675     1269.0    79.31   65536   0.00    3.123
00:20:58.002         676     1207.0    75.44   65536   0.00    3.281
00:20:59.001         677      755.0    47.19   65536   0.00    5.258
00:21:00.002         678      933.0    58.31   65536   0.00    4.254
00:21:01.001         679      885.0    55.31   65536   0.00    4.479
00:21:02.001         680      655.0    40.94   65536   0.00    6.099
00:21:03.002         681      561.0    35.06   65536   0.00    7.090
00:21:04.001         682      730.0    45.63   65536   0.00    5.444
00:21:05.001         683      622.0    38.88   65536   0.00    6.367
00:21:06.002         684      647.0    40.44   65536   0.00    6.145
00:21:07.001         685      755.0    47.19   65536   0.00    5.266
00:21:08.001         686      533.0    33.31   65536   0.00    7.495
00:21:09.002         687      694.0    43.38   65536   0.00    5.733
00:21:10.002         688      539.0    33.69   65536   0.00    7.382
00:21:11.001         689      628.0    39.25   65536   0.00    6.315
00:21:12.002         690      615.0    38.44   65536   0.00    6.396

Results of VM vdbench-HX-site2-DS01-0-1

Aug 20, 2018    interval        i/o   MB/sec   bytes   read     resp
                               rate  1024**2     i/o    pct     time
00:20:43.002         661      631.0    39.44   65536   0.00    6.318
00:20:44.002         662      654.0    40.88   65536   0.00    6.053
00:20:45.001         663      621.0    38.81   65536   0.00    6.445
00:20:46.002         664      742.0    46.38   65536   0.00    5.365
00:20:47.002         665      758.0    47.38   65536   0.00    5.249
00:20:48.001         666      697.0    43.56   65536   0.00    5.705
00:20:49.001         667      682.0    42.63   65536   0.00    5.790
00:20:50.002         668      671.0    41.94   65536   0.00    5.972
00:20:51.001         669      557.0    34.81   65536   0.00    7.084
00:20:52.002         670     1056.0    66.00   65536   0.00    3.793
00:20:53.002         671      951.0    59.44   65536   0.00    4.188
00:20:54.001         672     1329.0    83.06   65536   0.00    2.981
00:20:55.002         673     1229.0    76.81   65536   0.00    3.229
00:20:56.001         674     1047.0    65.44   65536   0.00    3.786
00:20:57.002         675     1077.0    67.31   65536   0.00    3.689
00:20:58.002         676      698.0    43.63   65536   0.00    5.676
00:20:59.001         677      952.0    59.50   65536   0.00    4.189
00:21:00.001         678      716.0    44.75   65536   0.00    5.564
00:21:01.002         679      677.0    42.31   65536   0.00    5.878
00:21:02.001         680      753.0    47.06   65536   0.00    5.273
00:21:03.001         681      638.0    39.88   65536   0.00    6.241
00:21:04.002         682      657.0    41.06   65536   0.00    6.030
00:21:05.001         683      661.0    41.31   65536   0.00    6.037
00:21:06.001         684      616.0    38.50   65536   0.00    6.462
00:21:07.002         685      643.0    40.19   65536   0.00    6.175
00:21:08.001         686      652.0    40.75   65536   0.00    6.124
00:21:09.002         687      585.0    36.56   65536   0.00    6.795
00:21:10.002         688      659.0    41.19   65536   0.00    6.044
00:21:11.001         689      590.0    36.88   65536   0.00    6.726
00:21:12.002         690      835.0    52.19   65536   0.00    4.764


Test Scenario 4 (Primed) (8 x 2TB VMs / 8K block size / 50% Read/Write / 50% Random) – Exchange Server workload 2

This test is similar to Test Scenario 2, except that much larger data disks are used (2 x 1TB disks instead of 5 x 20GB disks), giving a total workload of 16TB (8 VMs x 2TB each), which is larger than the cluster's total cache of 12TB.

VDBench Options (a parameter-file sketch follows the list):

  • Number of VMs: 8
  • Number of Data Disks (per VM): 2
  • Size of Data Disk: 1TB
  • Working-Set Percentage: 100%
  • Number of Threads Per Disk: 2
  • Block Size: 8K
  • Read Percentage: 50%
  • Random Percentage: 50%
  • Test Duration: 20 mins
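
As a sketch, Scenario 4 combines the 8K mixed workload of Scenario 2 with the 1TB data disks of Scenario 3 (device names again assumed):

* Scenario 4: 8K 50/50 mixed workload on 2 x 1TB data disks
sd=sd1,lun=/dev/sdb,threads=2,openflags=o_direct
sd=sd2,lun=/dev/sdc,threads=2,openflags=o_direct
wd=wd1,sd=sd*,xfersize=8k,rdpct=50,seekpct=50
rd=rd1,wd=wd1,iorate=max,elapsed=1200,interval=1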

Results:

The performance graphs from HyperFlex show a maximum of approximately 11K combined read and write IOPS for the 16TB workload deployed.

The performance results from the middle of the test (the 10-minute mark) for two of the VMs, each deployed on a different datastore, are shown below:

Each VM is occasionally able to reach a peak of approximately 1,200 IOPS.

Results of VM vdbench-HX-site1-DS01-0-1

Aug 27, 2018    interval        i/o   MB/sec   bytes   read     resp
                               rate  1024**2     i/o    pct     time
04:51:36.002         631      912.0     7.13    8192  54.39    4.396
04:51:37.002         632      983.0     7.68    8192  51.27    4.067
04:51:38.002         633     1015.0     7.93    8192  50.25    3.929
04:51:39.001         634     1143.0     8.93    8192  52.84    3.492
04:51:40.002         635     1086.0     8.48    8192  48.16    3.683
04:51:41.002         636      967.0     7.55    8192  51.09    4.086
04:51:42.001         637     1062.0     8.30    8192  49.44    3.785
04:51:43.002         638      942.0     7.36    8192  51.17    4.252
04:51:44.003         639     1012.0     7.91    8192  48.32    3.948
04:51:45.001         640      992.0     7.75    8192  54.13    4.004
04:51:46.004         641      958.0     7.48    8192  50.94    4.191
04:51:47.001         642     1055.0     8.24    8192  50.14    3.769
04:51:48.002         643     1000.0     7.81    8192  51.30    4.007
04:51:49.002         644     1027.0     8.02    8192  52.00    3.890
04:51:50.001         645      987.0     7.71    8192  50.76    4.037
04:51:51.002         646      865.0     6.76    8192  49.71    4.615
04:51:52.002         647      969.0     7.57    8192  46.75    4.133
04:51:53.001         648      957.0     7.48    8192  49.01    4.148
04:51:54.002         649      960.0     7.50    8192  49.58    4.176
04:51:55.002         650      961.0     7.51    8192  49.53    4.150
04:51:56.001         651     1047.0     8.18    8192  48.62    3.816
04:51:57.002         652      806.0     6.30    8192  48.88    4.965
04:51:58.002         653     1092.0     8.53    8192  50.82    3.638
04:51:59.001         654     1097.0     8.57    8192  49.04    3.652
04:52:00.002         655     1167.0     9.12    8192  48.07    3.424
04:52:01.002         656     1071.0     8.37    8192  51.26    3.716
04:52:02.001         657     1034.0     8.08    8192  52.22    3.864
04:52:03.002         658     1057.0     8.26    8192  51.84    3.779
04:52:04.002         659     1115.0     8.71    8192  50.04    3.577
04:52:05.001         660     1050.0     8.20    8192  52.29    3.812

Results of VM vdbench-HX-site2-DS01-0-1

Aug 27, 2018    interval        i/o   MB/sec   bytes   read     resp
                               rate  1024**2     i/o    pct     time
04:51:35.002         631      981.0     7.66    8192  49.24    4.047
04:51:36.001         632      917.0     7.16    8192  53.11    4.370
04:51:37.002         633      914.0     7.14    8192  50.55    4.377
04:51:38.001         634      978.0     7.64    8192  49.39    4.077
04:51:39.002         635     1065.0     8.32    8192  48.08    3.755
04:51:40.002         636     1054.0     8.23    8192  52.37    3.765
04:51:41.002         637     1198.0     9.36    8192  48.75    3.351
04:51:42.002         638     1064.0     8.31    8192  49.91    3.736
04:51:43.002         639     1010.0     7.89    8192  49.21    3.972
04:51:44.002         640      942.0     7.36    8192  53.82    4.223
04:51:45.002         641      979.0     7.65    8192  50.56    4.087
04:51:46.002         642     1032.0     8.06    8192  50.00    3.871
04:51:47.001         643      969.0     7.57    8192  53.25    4.111
04:51:48.002         644     1279.0     9.99    8192  50.98    3.114
04:51:49.002         645     1035.0     8.09    8192  50.92    3.869
04:51:50.001         646      982.0     7.67    8192  51.22    4.057
04:51:51.002         647      916.0     7.16    8192  48.80    4.354
04:51:52.002         648      995.0     7.77    8192  47.54    4.018
04:51:53.001         649     1010.0     7.89    8192  50.79    3.822
04:51:54.002         650     1022.0     7.98    8192  46.97    4.025
04:51:55.010         651      997.0     7.79    8192  51.96    4.020
04:51:56.001         652     1013.0     7.91    8192  49.85    3.938
04:51:57.003         653      994.0     7.77    8192  49.30    3.987
04:51:58.002         654      889.0     6.95    8192  48.48    4.516
04:51:59.001         655     1077.0     8.41    8192  48.19    3.699
04:52:00.002         656     1103.0     8.62    8192  52.31    3.639
04:52:01.002         657     1100.0     8.59    8192  48.82    3.622
04:52:02.001         658     1011.0     7.90    8192  50.64    3.949
04:52:03.002         659     1106.0     8.64    8192  49.37    3.610
04:52:04.001         660     1139.0     8.90    8192  48.55    3.505


Conclusion

HyperFlex uses a cache that is distributed across the cluster, which allows for better performance when the cluster is large enough, compared to other hyper-converged products that use only the cache on the local host where the VM resides. The distributed cache design allows for more VMs per host and better scalability. This HyperFlex Stretched Cluster has 8 nodes, each with a 1.6TB cache SSD, for a total of approximately 12TB of cache.

Compared with the IOMeter test, HCIBench delivered higher performance: 59K IOPS for the 1TB workload deployed. However, as the deployed workload was only 1TB, it could not overwhelm the cluster's 12TB of distributed cache. HCIBench was also superior to IOMeter for testing a system that supports deduplication, as HCIBench can randomize the workload data so that the results are not skewed by deduplication.

The third test scenario demonstrates a worst case: 64K 100% sequential writes across the whole storage system. The system was still able to achieve 8,300 IOPS overall, with some VMs peaking at 1,500 IOPS. As the deployed storage in this scenario is 20TB, which is larger than the total cache of 12TB, some of the data had to be destaged to the SAS disks.

After I discussed these findings with the Cisco BU and their HyperFlex performance engineer, their explanation for the high IOPS in test scenarios 1 and 2 was that the storage system had not been primed first. “Priming” refers to filling the storage system with random sequential data prior to performing tests. Because the system was new and had never been primed, it knew there could be no data on the SAS disks, and all operations were handled by the cache.
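
Expressed as a VDBench run, a priming pass is just a sequential-write workload left running long enough to overwrite every block of every data disk before the measured tests begin (HCIBench's own disk-preparation option writes random data for the same purpose). A minimal sketch; the elapsed time here is an assumption that depends on disk size and write throughput:

* Hypothetical priming run: sequentially overwrite each data disk once
* so that later reads actually have to touch stored data
wd=prime,sd=sd*,xfersize=512k,rdpct=0,seekpct=0
* elapsed must be long enough to cover every disk end to end
rd=prime_rd,wd=prime,iorate=max,elapsed=7200,interval=30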

Therefore, the test results from scenarios 1 and 2 are not accurate and do not reflect a production system. The results from test scenario 4 are the most accurate, as that test was run after the storage was primed.

Performance statistics will need to be collected once this cluster is running production workloads, as these will better represent customer production environments.
