Virtual Disk Development Kit Release Notes

Release date: 15 NOV 2016 | Builds: ESXi 4564106, VDDK 4604867
For vSphere 6.5 GA. Last document update: 30 OCT 2018
Check frequently for additions and updates to these release notes.


About the Virtual Disk Development Kit

The Virtual Disk Development Kit (VDDK) 6.5 is an update to support vSphere 6.5 and to resolve issues discovered in previous releases. VDDK 6.5 adds support for ESXi 6.5 and vCenter Server 6.5, and was tested for backward compatibility against vSphere 6.0 and 5.5.

VDDK is used with vSphere Storage APIs for Data Protection (VADP) to develop backup and restore software. For general information about this development kit, how to obtain the software, programming details, and redistribution, see the VDDK documentation landing page.

The VMware policy concerning backward and forward compatibility is for VDDK to support N-2 and N+1 releases. In other words, VDDK 6.5 and all its update releases support vSphere 5.5, 6.0, and the next major release (except new features).

VDDK 6.5 was tested with the following operating systems to perform proxy backup:

  • Windows Server 2012 and 2012 R2
  • Windows Server 2008 R2
  • Windows 10
  • Red Hat Enterprise Linux RHEL 6.7, 6.8, and 7.2
  • SUSE Linux Enterprise Server SLES 11 SP4 and 12 SP1

Changes and New Features

Backing up encrypted disks. The vSphere 6.5 release includes support for virtual machine encryption. VDDK 6.5 contains enhancements for backup and restore of encrypted data. On backup media, virtual disks are stored in the clear (not encrypted) unless separately encrypted by backup software. Portions of VM home may be stored encrypted on backup media. At restore time, encryption is governed by storage policy. See “Encrypted VM Backup and Restore” in the Virtual Disk Development Kit Programming Guide.

NBDSSL Compression. Performance of NBDSSL mode data transfer can be improved with this option. Three compression types are available – ZLIB, FASTLZ, and SKIPZ – specified as flags when opening files with the VixDiskLib_Open call. (In VDDK 6.5, NBD mode is switched to NBDSSL mode; this will be fixed in a future update release.)
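
The flag selection can be sketched as follows. This is a hypothetical Python stand-in, not a real binding: the constant names mirror the VIXDISKLIB_FLAG_OPEN_COMPRESSION_* flags in vixDiskLib.h, but the numeric values and the open_flags() helper are illustrative assumptions.

```python
# Hypothetical sketch of flag selection for NBDSSL compression. The names
# mirror the VIXDISKLIB_FLAG_OPEN_COMPRESSION_* flags in vixDiskLib.h; the
# numeric values below are assumed for illustration only.
VIXDISKLIB_FLAG_OPEN_READ_ONLY = 1 << 2          # assumed bit position
VIXDISKLIB_FLAG_OPEN_COMPRESSION_ZLIB = 1 << 4   # assumed bit position
VIXDISKLIB_FLAG_OPEN_COMPRESSION_FASTLZ = 1 << 5
VIXDISKLIB_FLAG_OPEN_COMPRESSION_SKIPZ = 1 << 6

def open_flags(read_only=True, compression=0):
    """Combine flags for a VixDiskLib_Open-style call."""
    flags = VIXDISKLIB_FLAG_OPEN_READ_ONLY if read_only else 0
    return flags | compression
```

In the real C API, the combined flags value is passed directly as the flags argument of VixDiskLib_Open.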

Back up First Class Disk. First Class Disk (FCD) is a named virtual disk unassociated with a VM. VDDK supports the backup of FCD in attached mode, but not in detached mode. This support works with any advanced transport mode.

Reuse vCenter Server session. To help surmount the vCenter Server session limitation, you can reuse an existing session by recycling its session cookie. In VixDiskLibConnectParams, set VixDiskLibCredType = 2 (VIXDISKLIB_CRED_SESSIONID), then set VixDiskLibSessionIdCreds for cookie, user name, and password key.
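
The steps above can be sketched as follows. These dataclasses are Python stand-ins for the C structs VixDiskLibConnectParams and VixDiskLibSessionIdCreds; the field names are assumptions modeled on the VDDK header, not a real binding.

```python
# Hedged sketch of session-cookie reuse; dataclasses stand in for the C
# structs VixDiskLibConnectParams and VixDiskLibSessionIdCreds.
from dataclasses import dataclass
from typing import Optional

VIXDISKLIB_CRED_SESSIONID = 2  # the credType value named in the note above

@dataclass
class SessionIdCreds:
    cookie: str      # session cookie recycled from an existing connection
    userName: str
    key: str         # password key

@dataclass
class ConnectParams:
    serverName: str
    credType: int = 0
    sessionId: Optional[SessionIdCreds] = None

def reuse_session(params: ConnectParams, cookie: str, user: str, key: str) -> ConnectParams:
    """Fill in connect params so a new connection recycles an existing session."""
    params.credType = VIXDISKLIB_CRED_SESSIONID
    params.sessionId = SessionIdCreds(cookie=cookie, userName=user, key=key)
    return params
```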

VSS improvements. This release includes support for additional Volume Shadow Copy Service (VSS) backup configurations. (1) The vssBackupType VSS_BT_COPY was previously used as the default for CreateSnapshot_Task, but now VSS_BT_FULL, VSS_BT_INCREMENTAL, VSS_BT_DIFFERENTIAL, and VSS_BT_LOG are available as well. (2) The vssBackupContext is introduced to enforce application or filesystem quiescing. (3) The timeout (default = 15 minutes) for quiescing VMs can be configured anywhere from 5 minutes to 4 hours. To support these new features and configurations, the function CreateSnapshotEx_Task was added to the vSphere API, superseding CreateSnapshot_Task.

Encryption and VSS quiescing. Because the VSS manifest file contains information that exposes guest OS data, when programs request VSS quiescing for backing up an encrypted VM, the system falls back to filesystem quiescing.

Compatibility Notices

Outdated statement in documentation. In the Virtual Disk Development Kit Programming Guide, section “Credentials and Privileges for VMDK Access” says the Global > License privilege is required for backup; however, this privilege has not been required since vSphere 6.0.

Statement about Virtual SAN (vSAN) support. VDDK 6.5 was tested for functionality and interoperability with Virtual SAN (vSAN) 6.2, and more recently with vSAN 6.5. Both 6.2 and 6.5 support on-disk format 3. ESXi hosts that support on-disk format 3 also support previous on-disk formats. To check the highest on-disk format version supported by all member hosts, call the public API RetrieveSupportedVsanFormatVersion.

Error in documentation. “Best Practices for HotAdd Transport” formerly recommended the LSI controller over the paravirtual SCSI controller (PVSCSI). The documentation has been corrected; PVSCSI is preferred for HotAdd in VDDK 6.5. The current recommendation is to use PVSCSI only and to upgrade to 6.5 or later. If problems occur, HotAdd a dummy disk first, in case the initial HotAdd found an unexpected target/unit on the guest OS.

VMFS-6 requires 6.5 for SAN transport. For customers to back up VMFS-6 datastores using SAN transport, the datastore must be connected to an ESXi 6.5 host.

Virtual machine encryption. With keys managed by the vCenter Server, ESXi hosts now offer virtual disk encryption support. This form of encryption is not compatible with VMware Fusion or Workstation hosted disk encryption.

VMware Tools 10.1.0. To support the VSS improvements in VDDK 6.5 (see above) CreateSnapshotEx_Task requires VMware Tools 10.1.0 or later to be installed on Windows machines being backed up. New versions of VMware Tools contain an upgraded VMware VSS service that coordinates with VMX, although the VMware VSS service may be uninstalled separately.

Backward compatibility of TLS with vSphere 5.5U3. If a vSphere 6.5 customer sets TLS v1.2 authentication as mandatory, backups fail on ESXi 5.5U3 and earlier hosts with “SSL Exception” errors. The fix is to upgrade those ESXi hosts to 5.5U3e or later. A workaround is to modify one of two configuration files on the VDDK proxy: /etc/vmware/config or CommonAppDataFolder\config.ini configures the entire proxy, while $USER/.vmware/config or %USERNAME%\AppData\config.ini configures a single user. Add the following line to the appropriate file:

Restoring VM with VAIO Filters. When applications restore a VM that had VAIO Filters attached when it was backed up, its VAIO Filters are detached, so the VM might not boot until the VAIO Filters are re-attached using a Storage Policy from vCenter Server. In some cases the VM may boot but VAIO Filters remain detached. A similar caveat applies to vSphere Replication (VR), also called host based replication: VAIO Filters must be re-attached after failover.

Avoid creating snapshots for restore on VVol datastores. Due to Virtual Volume (VVol) implementation in the vSphere 6.0 release, VMware recommends against creating a snapshot (then reverting and deleting it) when restoring files on a VVol datastore. This is because writing to a non-leaf disk (such as a snapshot backing file) is considered a layering violation and throws an error on many VVol storage devices. Writing to the leaf disk of the snapshot is not a layering violation. The restore snapshot remains mandatory for SAN transport, which is not supported on VVol datastores anyway.

Support for SEsparse disks. In VDDK 6.0, backup and restore of SEsparse disks was supported for advanced transports only (NBD, NBDSSL, HotAdd, and SAN). In this release, host-based file transport is also supported.

For older compatibility notices from the VDDK 6.0 release, see the VDDK 6.0 Release Notes.

Recently Resolved Issues

The VDDK 6.5 release resolves the following issues.

  • RHEL 7.2 proxy did not support LSI Logic SAS controller for HotAdd.

    If a RedHat Enterprise Linux (RHEL) 7.2 proxy has an LSI Logic SAS controller, HotAdd fails with this error: “VixDiskLib: VixDiskLib_Open: Cannot open disk *.vmdk. Error 13 (You do not have access rights to this file).” The LSI Logic SAS controller is now supported for HotAdd.

  • Incremental backups might lose data for CBT calls during snapshot consolidation.

    When consolidating a snapshot, changed block tracking (CBT) information gets lost on its way to VixDiskLib, so any sectors changed during snapshot consolidation do not get saved by VADP based incremental backups. This could result in data loss upon recovery, and is an ESXi 6.0 regression. To avoid this issue, upgrade to ESXi 6.0 U2 or higher. See also KB 2136854.

  • Error not returned when ESXi host reaches NFC connection limit.

    This issue may cause an apparent hang in backup, or slowness of vCenter Server. The VIX_E_OUT_OF_MEMORY error is returned when an ESXi host reaches the NFC memory limit (a fixed issue), but not for other NFC limits such as the number of connections. Before making a new VixDiskLib_Open(Ex) call, programs should check that the number of current connections is below the VMware recommended NFC session connection limit. This is not a defect but an NFC limitation. See section “NFC Session Limits” in the Virtual Disk Programming Guide. ESXi host limits depend on consumed buffer space (32 MB), not just the number of connections. VixDiskLib_Open(Ex) uses one connection for every virtual disk accessed on an ESXi host, and a connection cannot be shared across virtual disks. Connections made through vCenter Server are limited to a maximum of 52; beyond that, the ESXi host limits apply.
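
    The pre-check the note recommends can be sketched as follows. The vCenter Server limit of 52 comes from the text above; the helper itself is an illustration, not part of the VDDK API.

```python
# Sketch of the recommended pre-check before VixDiskLib_Open(Ex): track NFC
# connections and refuse to exceed the limit. The vCenter limit of 52 is
# taken from the release note; per-host limits vary with buffer space.
VCENTER_NFC_CONNECTION_LIMIT = 52

def may_open_disk(current_connections: int,
                  limit: int = VCENTER_NFC_CONNECTION_LIMIT) -> bool:
    """True if one more open (one NFC connection per disk) stays within the
    limit. Connections cannot be shared across virtual disks."""
    return current_connections + 1 <= limit
```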

  • Windows proxy backup could fail with IPv6 enabled.

    When IPv6 was enabled on one or more network interfaces in a Windows proxy, VM backup could fail when the proxy attempted to open a disk. The cause was stack-buffer overrun in the libcurl.dll library when trying to enumerate DNS servers having multiple IPv6 network interfaces. This has been fixed in this release.

  • Clone function failed connection with at-sign in password.

    If a user set the access password to include an at-sign (@), the VixDiskLib_Clone function failed with a “CnxAuthdConnect” or “NfcNewAuthdConnectionEx” error. The issue was seen only when connecting directly to an ESXi host, and when the VM's MoRef was not specified. This behavior was a regression from VDDK 5.1 and is fixed in this release.

  • SAN mode VDDK 6.0 searched for virtual machines by BIOS UUID.

    When using SAN transport, VDDK 6.0.0 tried to find requested VMs by looking up their BIOS UUID, instead of by the MoRef provided, as in previous VDDK releases. Because a BIOS UUID is not guaranteed unique, unlike a MoRef, this new behavior caused problems after cloning, out-of-place VM restore, and so forth. The problem occurred with both SAN backup and restore when two VMs managed by the same vCenter Server had identical BIOS UUIDs, whether the VMs were powered on or off. The error occurred when VixDiskLib_Open(Ex) fetched the virtual disk's block map. This issue has been fixed in this release.

  • VDDK could not HotAdd a > 2TB disk on VVol datastores.

    When trying to open a > 2TB virtual disk for writing on VVol datastores, the following error message appeared: “Failed to hot-add SCSI targets: Vmomi::MethodFault::Exception: vim.fault.GenericVmConfigFault.” The fix is actually not in VDDK but in ESXi hosts, which should be upgraded to version 6.0 U2 or 6.5 to resolve the issue.

  • Changed block tracking on NFS datastores with thin-provisioning.

    With changed block tracking (CBT) enabled for backups on NFS datastores, everything works without error. However, if backups are based on QueryChangedDiskAreas("*"), the common practice for full backup, then a thin-provisioned virtual disk may get restored as thick. When using NFS servers without mapped extents, here is a workaround. After creating a VM, activate CBT, and take a full backup before initial power-on. Record the ChangeID of this backup, labeling it the “original” ChangeID. Thereafter, continue with your usual backup policy, except that, instead of a full backup based on ChangeID "*", you have a “base” backup based on your recorded original ChangeID. Incremental backups continue to be based on the ChangeID of the previous backup, as usual. With this workaround, all restores are accomplished from base and incremental backups, so you avoid the problem of NFS thickening.
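
    The workaround's bookkeeping can be modeled in a short sketch. A backup here is just a dict of {block_index: data} captured since some recorded ChangeID; the function is illustrative, not a VDDK call.

```python
# Illustrative model of base-plus-incremental restore. The "base" backup is
# taken against the recorded original ChangeID, and each incremental against
# the previous backup's ChangeID, so no backup ever uses ChangeID "*" and
# thin-provisioned disks are not restored as thick.

def restore(base_backup, incrementals):
    """Rebuild disk contents from the base backup plus incrementals,
    applied in order."""
    disk = dict(base_backup)
    for inc in incrementals:   # each taken against the previous ChangeID
        disk.update(inc)
    return disk
```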

  • VDDK 6.0 generated unnecessary log files at temporary location.

    The VDDK logging subsystem placed many log messages in /tmp/vmware-root or the Temp folder. These were redundant and were created even if custom logging functions were hooked in. This issue has been fixed in this release.

  • HotAdd failed with more than five concurrent backup operations.

    When a backup application uses more than five (5) concurrent processes to back up or restore VMs using the HotAdd transport mode, one or more of the operations may fail. Logs will contain errors such as “The directory is not empty” and “Error acquiring instance lock” then “HotAdd ManagerLoop caught an exception.” This issue has been fixed in this release.

  • Intermittent SAN mode read or write failure due to SAN LUN busy.

    On rare occasions, a SAN mode read or write can fail because the SAN LUN is busy. In the VDDK log file, an error message will appear such as “SAN read failed, error 170 (The requested resource is in use).” This issue has been fixed in this release.

  • Disk open in HotAdd mode could hang if manager loop fails to initialize.

    Very infrequently, the HotAdd manager loop fails to start. Normally it starts once and runs for the life of a VDDK program. However if the first VixDiskLib_Open call tries to start HotAdd manager on the proxy VM simultaneously with start-up of a guest operation (originating from another program), a race condition occurs. The Open operation fails and the HotAdd manager loop does not start, which causes the second Open to hang in function HotAddManager::GetManager. A VDDK fix was identified and appears in this release.

  • VDDK clients with OpenSSL before 0.9.8za fail to connect.

    When trying to use VDDK 5.5 advanced transport (HotAdd or SAN) for backup or restore against vSphere 6.0.x, the proxy connection fails with “SSL Exception” error. This is because old VDDK clients do not support TLS, which vSphere 6 requires. The solution is to upgrade your client on the proxy VM to VDDK 5.5.4 or later.

  • Failure to get volume information when mounting without loopback devices.

    VixMntapi for Linux requires loopback functionality on the backup proxy. When a Linux system is configured without loopback devices, and a volume mount is attempted, the following error appears in the log immediately following getVolumeInfo and the mount fails: “VixMntapi_MountVolume: Failed to mount partition 1 of disk... FUSE error 29: All loop devices.” The solution is to load the loop module manually, and/or create loop devices by running this command as root:
    # for i in `seq 0 8`; do mknod -m660 /dev/loop$i b 7 $i; done

  • VixMntapi has mode selection issues on Linux.

    When the disklib plug-in has been previously loaded, even in local file or NBDSSL transport mode, VixMntapi_OpenDisks fails with the error message “Volume mounts using advanced transports not supported: Error 6.” The workaround is to start another process (using non-HotAdd, non-SAN transport) without the disklib plug-in loaded.

  • Read-only mount of Windows ReFS with VixMntapi.

    On Windows Server 2012 systems with Resilient File Systems (ReFS), the VixMntapi library could mount ReFS partitions read-only, but not read/write. In this release, ReFS partitions can be mounted read/write.

For VDDK issues that were resolved before the vSphere 6.5 release, see the VDDK 6.0 Release Notes.

Known Issues and Workarounds

The following issues were found in VDDK 6.5.

  • After guest OS issues unmap, CBT reports more changed blocks than expected.

    If a guest VM has automatic space reclamation (unmap) enabled, incremental backup sometimes takes extraordinarily long. When you check the blocks that were backed up, you find their total size is close to the total disk size, regardless of how many blocks were actually modified. This occurs in all versions of vSphere 6.5, vSphere 6.7, and VMware Cloud (VMC).

    When automatic space reclamation is enabled in the guest, the OS issues unmap requests to underlying storage. However, the requested blocks include not only unmapped blocks but also unallocated blocks. All those blocks are captured by CBT and considered “changed blocks,” then returned to backup software by the vSphere API queryChangedDiskAreas(changeId).

    Backup software can filter out those unallocated blocks if it calls VDDK 6.7 or later libraries. It can get the allocated areas of a virtual disk by calling the VDDK API VixDiskLib_QueryAllocatedBlocks(), then taking the intersection with the changed blocks reported by CBT. The result is the set of actually changed blocks.

    One workaround is to disable unmap in the guest OS, but that loses the valuable space reclamation feature. If your software can use VDDK 6.7 or later libraries, a solution is to take the intersection of VixDiskLib_QueryAllocatedBlocks() and queryChangedDiskAreas(changeId) to calculate the actually changed blocks. See the 6.7 VDDK Programming Guide for details.
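
    The intersection step can be sketched as follows. Extents are modeled as (offset, length) tuples in sorted order; the VDDK and vSphere API calls that produce them are omitted.

```python
# Sketch of the recommended filtering: intersect CBT's changed areas with the
# allocated areas from VixDiskLib_QueryAllocatedBlocks. Both inputs are
# sorted, non-overlapping lists of (offset, length) extents.

def intersect_areas(changed, allocated):
    """Return the overlap of two sorted extent lists as (offset, length)."""
    result = []
    i = j = 0
    while i < len(changed) and j < len(allocated):
        c_start, c_len = changed[i]
        a_start, a_len = allocated[j]
        start = max(c_start, a_start)
        end = min(c_start + c_len, a_start + a_len)
        if start < end:
            result.append((start, end - start))
        # Advance whichever extent ends first.
        if c_start + c_len <= a_start + a_len:
            i += 1
        else:
            j += 1
    return result
```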

  • Maximum 25 NFC sessions with ESXi 6.5 hosts.

    In all 6.5 releases of ESXi, including 6.5 U2, the number of NFC sessions is limited to 25, due to a lowered thread count limit. This issue reduces the concurrency of NBD backups to 25 from 48, as documented for vSphere 6. Upgrading to ESXi 6.7 is one solution. Updating VDDK does not help. The vSphere 6.7 solution may be backported to a future vSphere 6.5.x release.

  • SHA256 security certificate expired for VixMntapi.

    Although VDDK 6.5 supports Windows 10, the vstor2 driver used for VixMntapi is not WHQL certified, so after its SHA256 certificate expires on 23 August 2018, vstor2 will be rejected by driver signature verification in any Secure Boot enabled VM running Windows 10 or Server 2016. The solution is to upgrade VDDK libraries to 6.7 or later.

  • HotAdd proxy failure with Windows Server backups

    If there is a SATA controller in the Windows backup proxy, HotAdd mode might not work. The cause is that VDDK does not rescan SATA controllers after HotAdding, so if multiple SATA or AHCI controllers exist, VDDK might use the wrong controller ID and fail to find the HotAdded disk. Disk open fails, resulting in “HotAdd ManagerLoop caught an exception” and “Error 13 (You do not have access rights to this file)” errors. The workaround in this case is to remove the SATA controller from the Windows backup proxy. A fix has been found and will be provided in a future release. See KB 2151091.

  • Direct IPv6 connection to ESXi host fails during disk open.

    On an IPv6 network, VDDK can connect through vCenter Server to an ESXi host; however, when connecting directly to the ESXi host with NBD(SSL), VixDiskLib_Open fails with a “Failed to connect to peer” error. Two workarounds are to use HotAdd, or to connect through vCenter Server.

  • Intermittent hang observed when opening virtual disks.

    A customer observed intermittent hanging while opening virtual disks. The hang was found to occur inside the VixDiskLibVim module, in VixDiskLib_PrepareForAccess, VixDiskLib_EndAccess, or GetNfcTicket, and may be caused by network problems. Fixes have been identified and will appear in a future release. One fix is a timeout for HTTP requests, and another is a timeout for the open operation.

  • QueryChangedDiskAreas reports no error if VM has snapshot when enabling CBT.

    If a snapshot is present on a VM when Changed Block Tracking (CBT) is enabled, QueryChangedDiskAreas with ChangeId * returns an empty change set, probably resulting in a no-data backup. In previous releases, enabling CBT with existing snapshots caused an error. Backup software should check for pre-existing snapshots before enabling CBT. Customers should be advised to delete or consolidate snapshots before enabling CBT.
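
    The recommended guard can be sketched as follows. The vm argument is any pyVmomi-style object exposing a snapshot property, which the vSphere API leaves unset (None) when no snapshots exist; this helper is an illustration, not part of any API.

```python
# Sketch of the recommended guard before enabling CBT; vm stands in for a
# pyVmomi VirtualMachine-like object with a 'snapshot' property.

def safe_to_enable_cbt(vm) -> bool:
    """Refuse to enable CBT while snapshots exist, since a subsequent
    QueryChangedDiskAreas("*") would return an empty change set."""
    return getattr(vm, "snapshot", None) is None
```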

  • VDDK crashes after disk open call and session login attempt.

    When trying to retrieve the NFC ticket for a disk after VixDiskLib_OpenEx, VDDK crashes intermittently (due to a null pointer read in gvmomi) when it attempts a session login to vCenter Server. The fix will appear in a future release.

  • SAN mode access to vSAN or VVol datastore hangs ESXi host.

    VVol and vSAN datastores do not support SAN mode transport. If a program calls VixDiskLib_ConnectEx on a VVol or vSAN datastore, specifying SAN mode or nothing (auto-detected SAN mode), the ESXi host stops responding and the VDDK connection fails. One workaround is to explicitly specify another mode (NBDSSL or HotAdd). A harder workaround is to move the virtual disk to a VMFS datastore. The fix, avoiding SAN transport for VVol and vSAN, will appear in a future release.

  • Cannot clone local disks to remote datastore without VM credentials.

    Due to a vSphere 6.5 security enhancement, VixDiskLib_Clone can no longer clone local disks to remote disks that do not belong to any VM. Previous releases made use of a retained username and password to allow this, but now enhanced security verification is required. The procedure for VixDiskLib_Clone to make a remote connection is first to get an NFC ticket from the ESXi host based on the MoRef of the VM that manages the disks being cloned-to. Then build an authenticated NFC connection to the host using that ticket. The MoRef of the VM must be specified in vmxSpec of the remote connection parameter passed to VixDiskLib_Clone. The VM must be powered off, and the target disks must already exist before cloning, not necessarily with the same names. Afterwards the VM may be removed from the inventory with UnregisterVM, but it must remain on the datastore with its cloned virtual disks.

  • Cannot remote clone an encrypted virtual disk.

    If a virtual disk was encrypted (with vSphere virtual machine encryption) the VixDiskLib_Clone function does not allow cloning the virtual disk from a remote host to the local host. This restriction was imposed to avoid decrypting an encrypted disk by accident. Cloning from VM to VM is allowed on the local host.

  • First Class Disk (FCD) requires attachment for backup.

    In this release there is a limitation for backup of First Class Disk (FCD). VDDK does not support backing up a detached FCD. The workaround is to attach the FCD to a dummy VM, which you then back up. The dummy can be an empty VM without a guest OS. To attach an FCD, click Edit virtual machine settings > New device: Select and add the FCD as an Existing Hard Disk.

  • Application level quiescing not compatible with FT or encryption.

    When CreateSnapshotEx_Task is called and VirtualMachineWindowsQuiesceSpec calls for application-consistent quiescing, an error will result if the VM is encrypted, or if it is FT (fault tolerance) enabled. Backup software should check if a VM is encrypted or FT enabled before requesting application-consistent quiescing. The snapshot task will fall back to filesystem-consistent quiescing if QuiesceMode was set to application in these cases.
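
    The pre-check can be sketched with the relevant vSphere API property values: config.keyId is set on encrypted VMs, and runtime.faultToleranceState is "notConfigured" when FT is off. The helper function is an illustration, not part of any API.

```python
# Sketch of the pre-check before requesting application-consistent quiescing.
# key_id models vm.config.keyId (None unless the VM is encrypted); ft_state
# models vm.runtime.faultToleranceState from the vSphere API.

def can_request_app_quiescing(key_id, ft_state) -> bool:
    """Application-consistent quiescing fails on encrypted or FT-enabled VMs;
    request filesystem quiescing for those instead."""
    return key_id is None and ft_state == "notConfigured"
```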

  • NVMe controller should not be used on the HotAdd backup proxy.

    Using HotAdd mode you can back up disks on VMs with an NVMe controller, because virtual disks attached to an NVMe controller on the target VMs can be HotAdded using other types of SCSI controllers, such as PVSCSI (ParaVirtual SCSI). However the backup proxy always uses its SCSI controller for HotAdd backup and restore, so it does not require an NVMe controller, but must have a SCSI controller.

  • HotAdd skipped if proxy has same BIOS UUID as the target VM.

    The VixDiskLib_ConnectEx function contains a pre-check for HotAdd to validate that the proxy is a virtual machine. (HotAdd is not allowed on physical machines.) If the proxy VM and the target VM have the same BIOS UUID, validation fails because multiple VMs are returned from pre-check. This results in skipping HotAdd mode and reverting to another transport. The workaround is to make the BIOS UUID keys unique.

  • DLL conflict when VDDK installed on vCenter Server.

    Usually VDDK libraries are installed on a backup proxy (physical or virtual machine) but when they are installed on vCenter Server, libldap_r.dll and liblber.dll could be different versions, causing “One or more required subsystems failed to initialize” errors. The workaround is to copy both aforementioned DLL files from the VDDK distribution to the folder where you are running the backup executable.

  • Slow read performance with NBD transport.

    This is not a regression; NBD was always slower than advanced transports. When reading disk data using NBD transport, VDDK makes synchronous calls. That is, VDDK requests a block of data and waits for a response. The block is read from disk and copied into a buffer on the server side, then sent over the network. While the block is being read and copied, no data travels over the network, adding to wait time. To some extent, you can overcome this issue by using multiple streams to simultaneously read from a single disk or multiple disks, taking advantage of parallelism.
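
    The multi-stream approach can be sketched as follows. read_block is a placeholder for a per-stream synchronous read (for example, a wrapper around VixDiskLib_Read on its own connection); the helper itself is an illustration.

```python
# Sketch of multi-stream reads over a synchronous transport: several workers
# issue blocking reads concurrently so the network link stays busy.
from concurrent.futures import ThreadPoolExecutor

def read_ranges(read_block, ranges, streams=4):
    """Read (offset, length) ranges with up to 'streams' concurrent workers;
    results come back in the same order as 'ranges'."""
    with ThreadPoolExecutor(max_workers=streams) as pool:
        return list(pool.map(lambda r: read_block(*r), ranges))
```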

  • Incremental restores using HotAdd transport can fail with deviceKey error.

    After the backup proxy has done an incremental backup and later tries to restore incrementally using HotAdd transport, the restore (or subsequent backups) may fail with the following error message: “A specified parameter was not correct: deviceKey.” If disk consolidation is done beforehand, the error message is instead: “You do not have access rights to this file.” The root cause is that Changed Block Tracking (CBT) is always disabled on hot-remove. CBT should be disabled on the restored VM, but not on the HotAdd proxy. The workaround is to call QueryChangedDiskAreas("*") early in your restore program and remember results past hot-remove, or for the proxy to reset CBT according to the VM involved.

  • HotAdd mode does not work with a vSAN-sparse delta disk.

    ESXi hosts do not allow HotAdd of a vSAN-sparse delta disk on a proxy VM residing on a datastore type other than vSAN. Every time you snapshot a VM residing on a vSAN datastore, a vSAN-sparse delta disk is created. When the proxy then attempts to HotAdd a VMDK named diskname-NNNNNN.vmdk (where NNNNNN is a zero-filled integer from 000001 to 999999), the operation fails unless the proxy is also on a vSAN datastore. To prevent this situation, one workaround is to ensure that a VM has no snapshots before moving it to vSAN, and to have the proxy create HotAdded VMDKs on the vSAN datastore.

  • Always reboot after uninstalling VDDK on a Windows proxy.

    Attempts to install or uninstall the Windows vstor2 driver using the provided scripts can be unsuccessful under certain circumstances. When uninstalling VDDK from a Windows proxy, you should reboot after the uninstall. If this is not done, it may prove impossible to (re)install the vstor2 driver until the proxy is rebooted.

  • Metadata write is not supported for HotAdd and SAN transport.

    When a program calls VixDiskLib_WriteMetadata with HotAdd or SAN transport, the function returns an error saying “The operation is not supported” and supplied metadata never gets written to virtual disk. The workaround is to close the disk from HotAdd or SAN mode, reopen the disk using NBDSSL mode, then call VixDiskLib_WriteMetadata.

  • Cannot restore linked clones using SAN transport.

    VDDK 5.5 contained revised snapshot and linked clone libraries to allow read access anywhere in the snapshot chain on any platform. During testing VMware noticed errors when trying to restore multiple linked clones with SAN transport, which supports only a single snapshot. To avoid errors the SAN transport library was revised to explicitly disable writing to a disk that is not the base disk in a snapshot hierarchy. The libraries throw an error on open instead of waiting for fatal disk write errors. To restore linked clones, VMware recommends use of HotAdd or NBDSSL transport. SAN transport can still read (back up) any disk in a snapshot hierarchy.

  • Boot disk should be scsi0:0 for use with mount API.

    When a VM's boot disk is not on scsi0:0, the VixMntApi_GetVolumeInfo function does not correctly return the inGuestMountPoints (such as drive letter C:) or the numGuestMountPoints. On a scsi0:0 disk the correct information is returned, but not otherwise. For example with a scsi0:2 boot disk, the two properties are not populated. This issue was recently reported against VDDK 5.0 and later.

  • Error 87 with Windows 2012 NTFS.

    NTFS virtual disk from Windows Server 2012 might produce “Error 87” when a backup proxy mounts the disk read-only on earlier versions of Windows Server. The VDDK log contains the error “Volume/NTFS filesystem is corrupt (87).” This is a Windows backward compatibility issue. If customers encounter this error, one workaround is to mount these virtual disks read/write so that the system has a chance to repair them.

  • Possible segmentation violation if running multiple backup processes.

    When many simultaneous backup processes are running, some of them might crash with a SIGSEGV after many iterations. This is due to a possible race condition in VixDiskLib; the risk can be reduced by calling VixDiskLib_Exit() at the end of your program.