Virtual Disk Development Kit 6.0.1 Release Notes

Release date: 10 SEP 2015 | Builds: ESXi 3039244, VDDK 2942432.
For vSphere 6.0 U1. Last document update: 5 JAN 2017.
Check frequently for additions and updates to these release notes.

Contents

  • About the Virtual Disk Development Kit
  • New in the VDDK 6.0.1 Release
  • Compatibility Notices for Partners
  • Recently Resolved Issues
  • Known Issues and Workarounds

About the Virtual Disk Development Kit

The Virtual Disk Development Kit (VDDK) 6.0.1 released with vSphere 6.0 U1 is a bug-fix update to VDDK 6.0. VDDK 6.0.1 supports the same operating systems for proxy backup as VDDK 6.0.

VDDK is used with vSphere Storage APIs for Data Protection (VADP) to develop backup and restore software. For general information about this development kit, how to obtain and install the software, programming details, and redistribution, see the VDDK documentation landing page.

The VMware policy concerning backward and forward compatibility is for VDDK to support N-2 and N+1 releases. In other words, VDDK 6.0 and all of its update releases support vSphere 5.1, 5.5, and 6.0, as well as vSphere 6.5 (excluding its new features).

New in the VDDK 6.0.1 Release

New setting to control NFC port. The VDDK 6.0.1 release provides a new variable, cnxParams.nfcHostPort, to set the NFC data copy port when connecting to ESXi hosts. The Virtual Disk Programming Guide section “Connect to VMware vSphere” shows six elements in the cnxParams structure, including port, but not the newly added nfcHostPort. The cnxParams.port is where vCenter Server and ESXi hosts listen for API queries; specifying null allows the library to select the port for SOAP requests, usually 443 (HTTPS). The data copy port is then determined by the NFC ticket resulting from your program's vmxspec. As of VDDK 6.0.1, cnxParams.nfcHostPort sets the host NFC data copy port; specifying null allows the library to select the host NFC port, usually 902. This setting governs the NFC data copy port only when your program connects directly to an ESX/ESXi host and never presents vmxspec. When connecting directly to an ESXi host (see the connection sketch after the following list):

  • If vmxspec is specified, all the port details are retrieved from the NFC ticket.
  • If vmxspec is not specified, the NFC port defaults to 902, unless customers set nfcHostPort explicitly.
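
The fragment below is a minimal sketch, in C against the VixDiskLib API, of a direct ESXi connection that sets the new field. The host name, credentials, and port values are placeholders, and error handling beyond the return value is omitted.

    #include <string.h>
    #include "vixDiskLib.h"

    /* Connect directly to an ESXi host and set the NFC data copy port
     * explicitly. No vmxspec is supplied, so no NFC ticket determines
     * the ports; all values here are placeholders. */
    VixError ConnectDirectToHost(VixDiskLibConnection *connection)
    {
       VixDiskLibConnectParams cnxParams;

       memset(&cnxParams, 0, sizeof(cnxParams));
       cnxParams.vmxSpec = NULL;                    /* no vmxspec: direct host connection */
       cnxParams.serverName = "esxi.example.com";   /* placeholder ESXi host */
       cnxParams.credType = VIXDISKLIB_CRED_UID;
       cnxParams.creds.uid.userName = "root";       /* placeholder credentials */
       cnxParams.creds.uid.password = "secret";
       cnxParams.port = 0;           /* 0 (null): library picks the SOAP port, usually 443 */
       cnxParams.nfcHostPort = 902;  /* new in 6.0.1: explicit NFC data copy port */

       return VixDiskLib_Connect(&cnxParams, connection);
    }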

For a list of new features in the VDDK 6.0 release, see the VDDK 6.0 Release Notes.

Compatibility Notices for Partners

Only VDDK 5.5.4 is forward compatible with vSphere 6.0. In the vSphere 6.0 release, only TLS connections are allowed. Connections from VDDK clients using SSLv3 are refused with an “SSL exception” error. Although partners can tell customers how to enable SSLv3 for vmacore, VMware strongly recommends upgrading old clients to VDDK 5.5.4, which uses TLS.

Avoid creating snapshots for restore on VVol datastores. Due to Virtual Volume (VVol) implementation in this vSphere release, VMware recommends against creating a snapshot (then reverting and deleting it) when restoring files on a VVol datastore. This is because writing to a non-leaf disk (such as a snapshot backing file) is considered a layering violation and throws an error on many VVol storage devices. Writing to the leaf disk of the snapshot is not a layering violation. The restore snapshot remains mandatory for SAN transport, which is not supported on VVol datastores anyway.

Redistributed C++ Linux library. To provide support for old Linux proxies such as RHEL 5.9 and SLES 10, VDDK 6.0 includes libstdc++.so.6.0.13 in the distribution. If your software does not use one of these older Linux proxies, you can delete libstdc++ from your proxy package.

Support for SEsparse disks. Virtual disk libraries can handle SEsparse disks as implemented in vSphere 5.5. SEsparse disks are used for snapshots of > 2TB disks, and for linked clones in the VMware View environment (VDI). Backup and restore of SEsparse disks is supported for the advanced transports (NBD, NBDSSL, HotAdd, and SAN) but not for the host-based file transport.

For older compatibility notices from the VDDK 5.5 release, see the VDDK 5.5 Release Notes.

Recently Resolved Issues

The VDDK 6.0.1 release resolves the following issues.

  • VDDK version number incremented.

    The VDDK version number is 6.0.1 for vSphere 6.0 U1.

  • Clone function fails connection with at-sign in password.

    If a user set the access password to include an at-sign (@), the VixDiskLib_Clone function failed with a “CnxAuthdConnect” or “NfcNewAuthdConnectionEx” error. The issue was seen only when connecting directly to an ESXi host, and when the virtual machine's MoRef was not specified. This behavior was a regression from VDDK 5.1 and is fixed in this release.

  • OpenSSL library updated.

    In this VDDK release, the OpenSSL library was updated to version openssl-1.0.1m (from openssl-1.0.1j) to remedy the CVE-2015-0204 vulnerability.

  • In SAN mode, VDDK 6.0 searched for virtual machines by BIOS UUID.

    When using SAN transport, VDDK 6.0.0 tried to find requested virtual machines by looking up their BIOS UUID, instead of by the MoRef provided, as in previous VDDK releases. Because a BIOS UUID, unlike a MoRef, is not guaranteed to be unique, this new behavior caused problems after cloning, out-of-place VM restore, and so forth. The problem happened with both SAN backup and restore when two VMs managed by the same vCenter Server had identical BIOS UUIDs, whether the VMs were powered on or off. The error occurred when VixDiskLib_Open(Ex) fetched the virtual disk's block map. This issue has been fixed in this release.

  • Avoiding application crash when retrieving ServiceInstance content.

    Backup vendors reported crashes when their applications attempted to retrieve ServiceInstance content from vCenter Server. The cause was apparently the failure to acquire a lock on the connection. This release creates a connection lock, which resolves the problem.

  • VDDK clients with OpenSSL before 0.9.8za fail to connect.

    When trying to use VDDK 5.5 advanced transport (HotAdd or SAN) for backup or restore against vSphere 6.0.x, the proxy connection fails with an “SSL Exception” error. This is because old VDDK clients do not support TLS, which vSphere 6 requires. The solution is to upgrade the client on the proxy virtual machine to VDDK 5.5.4.

For VDDK issues that were resolved before the VDDK 6.0.1 release, see the VDDK 6.0 Release Notes and the various VDDK 5.5.x Release Notes.

Known Issues and Workarounds

The following issue applies to VDDK 6.0.1 only.

  • Backup and restore using SAN transport fails with VDDK 6.0.1 for VMs residing on ESXi 5.5 or earlier.

    When VDDK 6.0.1 connects to an ESXi 5.5.x host using SAN transport, and tries to retrieve the virtual machine MoRef from the snapshot managed object, the operation fails with the vmodl.fault.InvalidRequest error. This backward compatibility failure is due to changes in the getVM method between vSphere 5.5, 6.0, and 6.0.1. The workaround is to use non-SAN transport when connecting to ESXi 5.5 and earlier hosts. A fix has been identified and will appear in VDDK 6.0.2.

The following issues were discovered in the field after VDDK 6.0 release.

  • Hosted virtual disk > 2TB is not supported.

    Although ESXi 5.5 and later support > 2TB managed disk, VMware Workstation does not support > 2TB hosted disk. Virtual disk of type MONOLITHIC_SPARSE created by VixDiskLib_Create cannot be extended past its maximum size of 2TB. To transfer a large virtual disk over to vSphere, programs must clone a < 2TB hosted disk to an ESXi host, then extend it past 2TB after cloning.

  • Incremental backups might lose data for CBT calls during snapshot consolidation.

    When consolidating a snapshot, changed block tracking (CBT) information gets lost on its way to VixDiskLib, so any sectors changed during snapshot consolidation do not get saved by VADP based incremental backups. This could result in data loss upon recovery, and is an ESXi 6.0 regression. A solution has been found and will appear in patch releases and in ESXi 6.0 U2. For workarounds, see KB 2136854.

  • Changed Block Tracking fails for virtual machines with IOfilter attached.

    For virtual disk on a VMFS datastore, backups may be initiated after enabling Changed Block Tracking (CBT) on a VM with no snapshots. Calling QueryChangedDiskAreas with changeID = * returns only modified disk sectors, which mimics full backup while avoiding unallocated sectors. However, on VMs using vSphere APIs for I/O Filtering (VAIO), the QueryChangedDiskAreas call fails with “IOFSendDate... Unrecoverable error... errno = 9”, so backup software must run a full backup without changeID = *. The cause has been identified but not yet fixed.

  • VDDK cannot HotAdd a > 2TB disk on VVol datastores.

    When trying to open a > 2TB virtual disk for writing on VVol datastores, the following error message appeared: “Failed to hot-add SCSI targets: Vmomi::MethodFault::Exception: vim.fault.GenericVmConfigFault.” No workaround is known for HotAdd, but programs can switch to NBD transport. A solution has been found and will appear in ESXi 6.0 U2.

  • VVol support and older virtual hardware versions.

    If VMs of virtual hardware version < 11 have memory snapshots, Storage vMotion from VVol to VMFS (or vice versa) fails. This is not a backup issue as such. A workaround is to collapse the snapshots, upgrade the hardware version, then migrate.

  • VDDK 6.0 generates unnecessary log files at temporary location.

    The VDDK logging subsystem places many log messages in /tmp/vmware-root or the Temp folder. These are redundant and will be created even if the logging functions are hooked in. A fix has been identified and will be available in a future release.

  • HotAdd fails with more than five concurrent backup operations.

    When a backup application uses more than five (5) concurrent processes to back up or restore virtual machines using the HotAdd transport mode, one or more of the operations may fail. Logs will contain errors such as “The directory is not empty” and “Error acquiring instance lock” followed by “HotAdd ManagerLoop caught an exception.” The workaround is to reduce the number of concurrent backup or restore processes to five (5) or fewer.

  • Intermittent SAN mode read or write failure due to SAN LUN busy.

    On rare occasions, a SAN mode read or write can fail because the SAN LUN is busy. In the VDDK log file, an error message will appear such as “SAN read failed, error 170 (The requested resource is in use).” The workaround is to retry the read or write operation. A fix has been identified and will be available in a future release.
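
    As a rough illustration, a retry wrapper along these lines can mask the transient failure; the retry count and delay are arbitrary placeholders, and a production program would inspect the specific error before retrying.

    #include <unistd.h>
    #include "vixDiskLib.h"

    /* Retry a SAN-mode read a few times before giving up; the attempt
     * count and delay below are placeholder values. */
    VixError ReadWithRetry(VixDiskLibHandle disk, VixDiskLibSectorType start,
                           VixDiskLibSectorType count, uint8 *buffer)
    {
       int attempt;
       VixError err = VIX_OK;

       for (attempt = 0; attempt < 5; attempt++) {
          err = VixDiskLib_Read(disk, start, count, buffer);
          if (err == VIX_OK) {
             break;                /* read succeeded */
          }
          sleep(2);                /* brief pause while the LUN is busy */
       }
       return err;
    }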

  • Failure to mount a logical volume spanned across multiple disks.

    If a logical volume (LVM) spans multiple disks, the disks are HotAdded to the proxy VM in read/write mode, and the logical volume is then mounted read/write using the VixMntapi library, the mount sometimes fails. A read-only mount succeeds with the same setup. The mount failure can occur with all releases of Windows Server, but the issue is not always reproducible.

  • Disk open in HotAdd mode can hang if manager loop fails to initialize.

    Very infrequently, the HotAdd manager loop fails to start. Normally it starts once and runs for the life of a VDDK program. However if the first VixDiskLib_Open or VixDiskLib_OpenEx call tries to start HotAdd manager on the proxy VM simultaneously with start-up of a guest operation (originating from another program), a race condition occurs. The VixDiskLib Open operation fails and the HotAdd manager loop does not start, which causes the second Open to hang in function HotAddManager::GetManager. The workaround is to kill the program, run VixDiskLib_Cleanup in a separate program, then restart the original VDDK program. A VDDK fix has been identified and will be available in a future release.
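
    A minimal sketch of such a separate cleanup program follows; the version numbers passed to VixDiskLib_Init and the server name and credentials are placeholders.

    #include <stdio.h>
    #include <string.h>
    #include "vixDiskLib.h"

    /* Stand-alone cleanup utility: releases leftover state after the
     * original VDDK program was killed. Connection values are placeholders. */
    int main(void)
    {
       VixDiskLibConnectParams cnxParams;
       uint32 cleanedUp = 0, remaining = 0;

       VixDiskLib_Init(6, 0, NULL, NULL, NULL, NULL);   /* default log handlers and library dir */

       memset(&cnxParams, 0, sizeof(cnxParams));
       cnxParams.serverName = "vcenter.example.com";    /* placeholder server */
       cnxParams.credType = VIXDISKLIB_CRED_UID;
       cnxParams.creds.uid.userName = "administrator";  /* placeholder credentials */
       cnxParams.creds.uid.password = "secret";

       VixDiskLib_Cleanup(&cnxParams, &cleanedUp, &remaining);
       printf("Cleaned up %u connections, %u remaining\n", cleanedUp, remaining);

       VixDiskLib_Exit();
       return 0;
    }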

  • VixMntapi for Linux fails after read-only mount is attempted.

    When a program tries to mount disk volumes read-only on Linux, it fails after VixMntapi_OpenDisks with the error message “Cannot read or parse the partition table on the virtual disk.” The workaround is to mount disk volumes read/write, or follow advice in the Virtual Disk Programming Guide (see section Read-Only Mount on Linux). VMware expects that Linux read-only mount will be allowed in a future release.

  • Failure to get volume information when mounting without loopback devices.

    VixMntapi for Linux requires loopback functionality on the backup proxy. When a Linux system is configured without loopback devices, and a volume mount is attempted, the following error appears in the log immediately following getVolumeInfo and the mount fails: “VixMntapi_MountVolume: Failed to mount partition 1 of disk... FUSE error 29: All loop devices.” The solution is to load the loop module manually, and/or create loop devices by running this command as root:
    # for i in `seq 0 8`; do mknod -m660 /dev/loop$i b 7 $i; done

  • Incremental restores using HotAdd transport can fail with deviceKey error.

    After the backup proxy has done an incremental backup and later tries to restore incrementally using HotAdd transport, the restore (or subsequent backups) may fail with the following error message: “A specified parameter was not correct: deviceKey.” If disk consolidation is done beforehand, the error message is instead: “You do not have access rights to this file.” The root cause is that Changed Block Tracking (CBT) is always disabled on hot-remove. CBT should be disabled on the restored virtual machine, but not on the HotAdd proxy. The workaround is to call QueryChangedDiskAreas("*") early in your restore program and remember results past hot-remove.

  • For HotAdd with VVols, proxy must be on same datastore.

    When writing virtual disk on a VVol (virtual volumes) datastore in HotAdd mode, the restore proxy must be on the same VVol datastore as the target virtual machines. If this is not possible, the restore proxy must use a different transport mode.

  • HotAdd mode does not work with a vSAN-sparse delta disk.

    ESXi hosts do not allow HotAdd of a vSAN-sparse delta disk on a proxy virtual machine whose datastore type is anything other than vSAN. Every time you snapshot a virtual machine residing on a vSAN datastore, a vSAN-sparse delta disk gets created. When the proxy then attempts to HotAdd a VMDK named diskname-NNNNNN.vmdk (where NNNNNN is a zero-filled integer from 000001 to 999999), the operation fails unless the proxy also resides on a vSAN datastore. To prevent this situation, one workaround is to ensure that a VM has no snapshots before moving it to vSAN, and to have the proxy create HotAdded VMDKs on the vSAN datastore.

Remaining issues are carried over from VDDK 5.5.x and still apply.

  • Slow read performance with NBD transport.

    This is not a regression; NBD was always slower than the advanced transports. When reading disk data using NBD transport, VDDK makes synchronous calls: it requests a block of data and waits for a response. The block is read from disk and copied into a buffer on the server side, then sent over the network. While the block is being read and buffered, no data moves over the network, adding to wait time. To some extent, you can overcome this issue by using multiple streams to simultaneously read from a single disk or from multiple disks, taking advantage of parallelism, as in the sketch below.
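
    The following sketch shows one way to split a single disk across several NBD read streams, with one connection and one handle per worker thread. The connection parameters, disk path, stream count, and chunk size are placeholders, and real code should also follow the multithreading guidelines in the Virtual Disk Programming Guide.

    #include <pthread.h>
    #include <stdlib.h>
    #include "vixDiskLib.h"

    #define NUM_STREAMS   4                      /* placeholder stream count */
    #define CHUNK_SECTORS 2048                   /* 1 MB per read at 512-byte sectors */

    typedef struct {
       VixDiskLibConnectParams *cnxParams;       /* shared placeholder connection parameters */
       const char *diskPath;                     /* e.g. "[datastore1] vm/vm.vmdk" */
       VixDiskLibSectorType start, end;          /* sector range assigned to this stream */
    } StreamArg;

    /* Worker: open its own NBD connection and handle, then read its range. */
    static void *ReadRange(void *opaque)
    {
       StreamArg *arg = opaque;
       VixDiskLibConnection conn;
       VixDiskLibHandle disk;
       uint8 *buf = malloc(CHUNK_SECTORS * VIXDISKLIB_SECTOR_SIZE);

       if (VixDiskLib_ConnectEx(arg->cnxParams, TRUE, NULL, "nbd", &conn) != VIX_OK) {
          free(buf);
          return NULL;
       }
       if (VixDiskLib_Open(conn, arg->diskPath, VIXDISKLIB_FLAG_OPEN_READ_ONLY, &disk) == VIX_OK) {
          VixDiskLibSectorType sector;
          for (sector = arg->start; sector < arg->end; sector += CHUNK_SECTORS) {
             VixDiskLibSectorType n = arg->end - sector;
             if (n > CHUNK_SECTORS) {
                n = CHUNK_SECTORS;
             }
             VixDiskLib_Read(disk, sector, n, buf);   /* hand buf off to the backup pipeline here */
          }
          VixDiskLib_Close(disk);
       }
       VixDiskLib_Disconnect(conn);
       free(buf);
       return NULL;
    }

    /* Divide the disk into equal sector ranges and run one stream per range. */
    void ParallelRead(VixDiskLibConnectParams *cnxParams, const char *diskPath,
                      VixDiskLibSectorType capacity)
    {
       pthread_t threads[NUM_STREAMS];
       StreamArg args[NUM_STREAMS];
       VixDiskLibSectorType slice = capacity / NUM_STREAMS;
       int i;

       for (i = 0; i < NUM_STREAMS; i++) {
          args[i].cnxParams = cnxParams;
          args[i].diskPath = diskPath;
          args[i].start = i * slice;
          args[i].end = (i == NUM_STREAMS - 1) ? capacity : (i + 1) * slice;
          pthread_create(&threads[i], NULL, ReadRange, &args[i]);
       }
       for (i = 0; i < NUM_STREAMS; i++) {
          pthread_join(threads[i], NULL);
       }
    }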

  • VixMntapi has mode selection issues on Linux.

    When the disklib plug-in libdiskLibPlugin.so has been previously loaded, even in local file or NBD/NBDSSL transport mode, VixMntapi_OpenDisks fails with the error message “Volume mounts using advanced transports not supported: Error 6.” The workaround is to start another process (using non-HotAdd non-SAN transport) without libdiskLibPlugin.so loaded.

  • Metadata write is not supported for HotAdd and SAN transport.

    When a program calls VixDiskLib_WriteMetadata with HotAdd or SAN transport, the function returns an error saying “The operation is not supported” and the supplied metadata never gets written to virtual disk. The workaround is to close the disk in HotAdd or SAN mode, reopen the disk using NBD or NBDSSL mode, then call VixDiskLib_WriteMetadata, as in the sketch below.
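
    A hedged sketch of that workaround follows; the connection parameters, snapshot reference, disk path, and metadata values are all placeholders supplied by the caller.

    #include "vixDiskLib.h"

    /* After HotAdd/SAN data copy is finished and the disk is closed,
     * reconnect restricted to NBD and write the key/value metadata.
     * All parameters are placeholders supplied by the caller. */
    VixError WriteMetadataOverNbd(VixDiskLibConnectParams *cnxParams,
                                  const char *snapshotRef,  /* snapshot MoRef, or NULL */
                                  const char *diskPath,
                                  const char *key, const char *value)
    {
       VixDiskLibConnection nbdConn;
       VixDiskLibHandle disk;
       VixError err;

       err = VixDiskLib_ConnectEx(cnxParams, FALSE, snapshotRef, "nbd", &nbdConn);
       if (err != VIX_OK) {
          return err;
       }
       err = VixDiskLib_Open(nbdConn, diskPath, 0, &disk);   /* open writable over NBD */
       if (err == VIX_OK) {
          err = VixDiskLib_WriteMetadata(disk, key, value);
          VixDiskLib_Close(disk);
       }
       VixDiskLib_Disconnect(nbdConn);
       return err;
    }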