Best Practices for NBD Transport

Virtual Disk Development Kit (VDDK) | vSphere 6.x and 7.x | 6 March 2020

These performance tips apply to recent VDDK releases. Revised 9 April 2021.

NBD (network block device) is the most universal of the VDDK transport modes. It does not require dedicated backup proxy VMs like HotAdd, and works on all datastore types, not just SAN. Sections below give tips for improving NFC (network file copy) performance for NBD backups.

Parallel jobs on one NFC server

ESXi hosts run two NFC servers: one in hostd and the other in vpxa. For connections through vCenter Server, VDDK acts as an NFC client and connects to the NFC server in vpxa. For direct connections to ESXi hosts, VDDK connects to the NFC server in hostd.

If programs connect directly to ESXi hosts, the NFC server memory limit in hostd can be increased from its default of 48 MB by editing the /etc/vmware/hostd/config.xml file. If programs connect through vCenter Server, the NFC memory limit in vpxa is not configurable.
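For illustration only, the hostd configuration fragment involved is typically of the following form; the element names and byte values shown here are assumptions to verify against your ESXi build before editing, and hostd must be restarted for a change to take effect:

<nfcsvc>
  <path>libnfcsvc.so</path>
  <enabled>true</enabled>
  <!-- NFC memory limit in bytes; 50331648 is the 48 MB default, raised here to 96 MB -->
  <maxMemory>100663296</maxMemory>
</nfcsvc>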

If connecting through vCenter Server, VMware recommends backing up 50 or fewer disks in parallel on a host. The NFC server cannot handle too many requests at the same time; it queues excess requests until previous requests have completed.

NFC compress flags

In vSphere 6.5 and later, NBD performance can be significantly improved using data compression. Three types are available (zlib, fastlz, and skipz), specified as flags when opening virtual disks with the VixDiskLib_Open() call. Data layout may affect how well each algorithm performs. An example open call follows the list below.

  • VIXDISKLIB_FLAG_OPEN_COMPRESSION_ZLIB - zlib compression
  • VIXDISKLIB_FLAG_OPEN_COMPRESSION_FASTLZ - fastlz compression
  • VIXDISKLIB_FLAG_OPEN_COMPRESSION_SKIPZ - skipz compression
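As a hedged illustration (assuming a connection already obtained from VixDiskLib_ConnectEx and a placeholder datastore path), a disk could be opened read-only with fastlz compression as follows:

uint32 flags = VIXDISKLIB_FLAG_OPEN_READ_ONLY | VIXDISKLIB_FLAG_OPEN_COMPRESSION_FASTLZ;
VixDiskLibHandle diskHandle;
VixError vixError = VixDiskLib_Open(connection,                        // from VixDiskLib_ConnectEx()
                                    "[datastore1] testvm/testvm.vmdk", // placeholder remote disk path
                                    flags,
                                    &diskHandle);
if (VIX_FAILED(vixError)) {
   // on failure, consider retrying without the compression flag
}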

Asynchronous I/O

In vSphere 6.7 and later, asynchronous I/O is available for NBD transport mode and can greatly improve its data transfer speed. To implement asynchronous I/O for NBD, use the functions VixDiskLib_ReadAsync and VixDiskLib_WriteAsync, which take a completion callback, and call VixDiskLib_Wait to wait for all outstanding asynchronous operations to complete. In the development kit, see vixDiskLibSample.cpp for code examples, following the logic of the -readasyncbench and -writeasyncbench options.
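Below is a minimal sketch of the asynchronous read pattern, not a copy of the sample program: it assumes an already opened diskHandle, issues reads in batches of 16 so buffers are not reused while requests are in flight, and its signatures should be checked against vixDiskLib.h in your VDDK release.

#include <vector>
#include "vixDiskLib.h"

// Completion callback invoked by VDDK when one asynchronous read finishes.
static void ReadCompletion(void *cbData, VixError result)
{
   if (VIX_FAILED(result)) {
      // record the per-request error here
   }
}

static VixError ReadDiskAsync(VixDiskLibHandle diskHandle, VixDiskLibSectorType numSectors)
{
   const VixDiskLibSectorType chunk = 2048;   // 1 MB per request at 512-byte sectors
   const int depth = 16;                      // requests queued per batch
   std::vector<std::vector<uint8> > buffers(depth, std::vector<uint8>(chunk * VIXDISKLIB_SECTOR_SIZE));

   VixDiskLibSectorType start = 0;
   while (start < numSectors) {
      // Queue up to 'depth' reads, each into its own buffer.
      for (int i = 0; i < depth && start < numSectors; ++i) {
         VixDiskLibSectorType n = (start + chunk <= numSectors) ? chunk : numSectors - start;
         VixError vixError = VixDiskLib_ReadAsync(diskHandle, start, n, &buffers[i][0],
                                                  ReadCompletion, NULL);
         if (vixError != VIX_ASYNC) {          // the sample treats anything else as a submission error
            return vixError;
         }
         start += n;
      }
      // Block until the queued requests complete before reusing the buffers.
      VixError vixError = VixDiskLib_Wait(diskHandle);
      if (VIX_FAILED(vixError)) {
         return vixError;
      }
   }
   return VIX_OK;
}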

Many factors affect write performance, and network latency is not necessarily a significant one. Here are test results showing the improvement, measured with VDDK 6.5.3:

  • One stream read over a 10 Gbps network with asynchronous I/O: NBD speed ~210 MBps.
  • One stream read over a 10 Gbps network with block I/O: NBD speed ~160 MBps.
  • One stream write over a 10 Gbps network with asynchronous I/O: NBD speed ~70 MBps.
  • One stream write over a 10 Gbps network with block I/O: NBD speed ~60 MBps.

I/O buffering

New: As of VDDK 7.0.3, users can configure asynchronous I/O buffers for NBD(SSL) transport. With high-latency storage, backup and restore performance may improve after increasing the NFC AIO buffer size. If servers are capable of high concurrency, backup and restore throughput may improve with more NFC AIO buffers. The defaults are a buffer size of 64 KB (BufSizeIn64KB=1) and a buffer count of 4. In testing, the following InitEx configuration file settings performed best, though results depend on the hardware setup:
vixDiskLib.nfcAio.Session.BufSizeIn64KB=16
vixDiskLib.nfcAio.Session.BufCount=4

New: As of vSphere 7.0, changed block tracking (CBT) has an adaptable block size and configurable VMkernel memory limits for higher performance. This feature requires no developer intervention and is transparent to users. vSphere applies it automatically when a VM is created with, or upgraded to, hardware version 17, after CBT is set or reset.

In vSphere 6.7 and later, VDDK splits read and write buffers into 64 KB chunks, so changing the buffer size on the VDDK side does not change memory consumption on the NFC server side.

In vSphere 6.5 and earlier, the larger the buffer size on the VDDK side, the more memory is consumed on the NFC server side. If the buffer size is set to 1MB, VMware recommends backing up no more than 20 disks in parallel on an ESXi host. For a 2MB I/O buffer, no more than 10 disks, and so on.

Session limits and vCenter session reuse

In vSphere 6.7 and later, programs can reuse a vCenter Server session to avoid connection overflow. Set the credential type to VIXDISKLIB_CRED_SESSIONID and supply the SOAP session cookie from a live vCenter Server session. You can obtain the cookie from the VIM HTTP headers by looking for the vmware_soap_session cookie name.

if (appGlobals.isRemote) {
  cnxParams.vmName = vmxSpec;
  cnxParams.serverName = hostName;
  cnxParams.credType = VIXDISKLIB_CRED_SESSIONID;
  cnxParams.creds.sessionId.cookie = cookie;
  cnxParams.creds.sessionId.userName = userName;
  cnxParams.creds.sessionId.key = password;
  cnxParams.port = port;
}
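For completeness, here is a hedged continuation showing how the populated parameters would typically be passed to the connect call; the snapshotRef variable and the transport mode list are placeholders:

VixDiskLibConnection connection;
VixError vixError = VixDiskLib_ConnectEx(&cnxParams,
                                         TRUE,           // open read-only for backup
                                         snapshotRef,    // placeholder: moref of the backup snapshot
                                         "nbdssl:nbd",   // placeholder: restrict transport to NBD(SSL)
                                         &connection);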

Network bandwidth considerations

VMware recommends running NBD backups on a network with bandwidth of 10 Gbps or higher.

Operations such as VM cloning and offline migration also consume NFC server memory, so arrange the backup window to avoid conflicts with these operations.

Log analysis for performance issues

The VDDK sample code can be run to assist with I/O performance analysis.

In the proxy's VixDiskLib configuration file, set the NFC log level to its highest value, vixDiskLib.nfc.LogLevel=4. There is no need to set a log level on the server side for NFC asynchronous I/O. Then run the sample code and examine vddk.log and the vpxa log to assess performance.
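As a combined illustration, a proxy-side configuration file could contain the logging and AIO settings discussed above:

vixDiskLib.nfc.LogLevel=4
vixDiskLib.nfcAio.Session.BufSizeIn64KB=16
vixDiskLib.nfcAio.Session.BufCount=4

and be loaded at library initialization, for example as sketched below; the library directory, config file path, and NULL log callbacks are placeholders:

VixError vixError = VixDiskLib_InitEx(VIXDISKLIB_VERSION_MAJOR, VIXDISKLIB_VERSION_MINOR,
                                      NULL, NULL, NULL,               // log, warn, panic callbacks (optional)
                                      "/usr/lib/vmware-vix-disklib",  // placeholder library directory
                                      "/etc/vddk.conf");              // placeholder config file path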