Virtual Disk Development Kit 6.0 Release Notes

Release date: 12 MAR 2015 | Builds: ESXi 2494585, VDDK 2498720.
For vSphere 6.0 GA. Last document update: 17 AUG 2017.
Check frequently for additions and updates to these release notes.

About the Virtual Disk Development Kit

The Virtual Disk Development Kit (VDDK) 6.0 is an update to support vSphere 6.0 and to resolve issues discovered in previous releases. VDDK 6.0 adds support for ESXi 6.0 and vCenter Server 6.0, and was tested for backward compatibility against vSphere 5.5 and 5.1. Because TLSv1 is required and SSLv3 connections are refused, VDDK 5.5.4 is the only previous release that is upward compatible with vSphere 6.0.

VDDK is used with vSphere Storage APIs for Data Protection (VADP) to develop backup and restore software. For general information about this development kit, how to obtain and install the software, programming details, and redistribution, see the VDDK documentation landing page.

The VMware policy concerning backward and forward compatibility is for VDDK to support N-2 and N+1 releases. In other words, VDDK 6.0 and all of its update releases support vSphere 5.1 and 5.5 (backward) and vSphere 6.5 (forward), except for new features in those releases.

VDDK 6.0 was tested with the following operating systems to perform proxy backup:

  • Windows Server 2008 R2
  • Windows Server 2012 and 2012 R2
  • Windows 10 (Technical Preview)
  • Red Hat Enterprise Linux (RHEL) 6.6 and 7.0
  • SUSE Linux Enterprise Server (SLES) 11.3 and 12

New in This Release

No SAN mode support for VMFS-6. VDDK 6.0 does not support backup of VMs residing on VMFS-6 datastores using SAN transport mode. VDDK 6.5 and higher can back up VMs residing on VMFS-6 datastores using SAN transport.

Virtual Volumes (VVols). The vSphere 6.0 release includes Virtual Volumes to support individual virtual machine storage at the VMDK level. Backup and restore of VVols is mostly transparent to the application. SAN transport is not supported on VVol datastores.

Internet Protocol version 6 (IPv6). The VDDK libraries are updated to handle IPv6, while remaining backward compatible with IPv4.

SAN multipathing. For SAN devices with multiple paths to LUNs, this release adds whitelist and blacklist settings for path preferences. See vixDiskLib.transport.san in the VDDK configuration.
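
A minimal sketch of what such a configuration entry might look like appears below. The key names and value syntax shown here are assumptions for illustration only; consult the vixDiskLib.transport.san entries documented for VDDK 6.0 for the exact form.

    # Hypothetical example: prefer these SAN device paths (whitelist)
    # and never use these paths (blacklist). Exact key names may differ.
    vixDiskLib.transport.san.whitelist = /dev/sdc,/dev/sdd
    vixDiskLib.transport.san.blacklist = /dev/sde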

Check and repair function for sparse disk. The new VixDiskLib_CheckRepair function is available to check the metadata of a sparse disk, and repair the metadata if necessary.
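
The sketch below shows one way a backup application might call this function after establishing a connection. The connection and path variables are placeholders, and error handling is abbreviated.

    #include <stdio.h>
    #include "vixDiskLib.h"

    /* Check the metadata of a sparse disk; pass TRUE as the third argument
       to repair damaged metadata, or FALSE to only report problems. */
    VixError CheckSparseDisk(VixDiskLibConnection cnx, const char *vmdkPath)
    {
        VixError err = VixDiskLib_CheckRepair(cnx, vmdkPath, TRUE);
        if (VIX_FAILED(err)) {
            char *msg = VixDiskLib_GetErrorText(err, NULL);
            fprintf(stderr, "CheckRepair failed: %s\n", msg);
            VixDiskLib_FreeErrorText(msg);
        }
        return err;
    }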

SSL certificate and thumbprint checking now mandatory. Different Windows and Linux mechanisms to enable SSL verification were unified in 6.0 Beta to have just one setting vixDiskLib.ssl.verifyCertificates in the VixDiskLib_Init configuration file. After that Beta release, SSL certificate verification became mandatory, and setting verifyCertificates = 0 has no effect.
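
For reference, a minimal sketch of the setting and how the configuration file reaches the library follows. The file paths are placeholders, and NULL is passed for the optional log callbacks.

    # In the VixDiskLib configuration file (for example /etc/vddk.conf):
    vixDiskLib.ssl.verifyCertificates = 1

    /* The configuration file is the last argument of VixDiskLib_InitEx. */
    VixError err = VixDiskLib_InitEx(6, 0, NULL, NULL, NULL,
                                     "/usr/lib/vmware-vix-disklib",
                                     "/etc/vddk.conf");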

Compatibility Notices for Partners

Outdated statement in documentation. In the Virtual Disk Programming Guide, the section “Credentials and Privileges for VMDK Access” says that the Global > License privilege is required for backup; however, as of vSphere 6.0 this privilege is not required.

Only VDDK 5.5.4 is forward compatible with vSphere 6.0. In the vSphere 6.0 release, only TLS connections are allowed. Connections from VDDK clients using SSLv3 are refused with an “SSL exception” error. Although partners can tell customers how to enable SSLv3 for vmacore, VMware strongly recommends upgrading old clients to VDDK 5.5.4, which uses TLS.

Avoid creating snapshots for restore on VVol datastores. Due to Virtual Volume (VVol) implementation in this vSphere release, VMware recommends against creating a snapshot (then reverting and deleting it) when restoring files on a VVol datastore. This is because writing to a non-leaf disk (such as a snapshot backing file) is considered a layering violation and throws an error on many VVol storage devices. Writing to the leaf disk of the snapshot is not a layering violation. The restore snapshot remains mandatory for SAN transport, which is not supported on VVol datastores anyway.

Redistributed C++ Linux library. To provide support for old Linux proxies such as RHEL 5.9 and SLES 10, VDDK 6.0 includes libstdc++.so.6.0.13 in the distribution. If your software does not use one of these older Linux proxies, you can delete libstdc++ from your proxy package.

Support for SEsparse disks. Virtual disk libraries can handle SEsparse disk as implemented in vSphere 5.5. SEsparse disks are used for snapshots of > 2TB disks, and for linked clones in the VMware View environment (VDI). Backup and restore of SEsparse is supported for the advanced transports (NBD, NBDSSL, HotAdd, and SAN) but not for the host-based file transport.

Disable SMB and RDP on Windows Server. Before installing VADP-based software such as VMware Data Protection, you should for security reasons disable the SMB/CIFS network protocol and Remote Desktop Protocol (RDP). If you need these protocols for other purposes, always install the latest Windows updates.

For older compatibility notices from the VDDK 5.5 release, see the VDDK 5.5 Release Notes.

Recently Resolved Issues

The VDDK 6.0 release resolves the following issues.

  • Changed Block Tracking problem at 128GB boundaries.

    The QueryChangedDiskAreas function returned incorrect sectors after extending a virtual machine's VMDK file with Changed Block Tracking (CBT) enabled. See KB 2090639 for details. This issue has been fixed in this release.

  • End Access call crashed if hostname did not resolve.

    Calls to VixDiskLib_EndAccess crashed at VixDiskLibVim_AllowVMotion+0x169 if the server name provided in connection parameters (cnxParams.serverName) did not resolve correctly as a valid DNS hostname. This has been fixed in this release.

  • Error returned when ESXi host reaches NFC memory limit.

    Previously, VDDK would hang during I/O operations if the host ran out of NFC memory. This has been fixed in this release. You must upgrade both VDDK and ESXi to get the fix. VDDK now returns the VIX_E_OUT_OF_MEMORY error, leaving the disk handle in a still-valid state. Upon receiving this error, the caller can try again with a smaller I/O buffer, retry at a later time, perform other operations on the disk, or close the disk.
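
    As a rough sketch (not part of the release itself), a read loop might shrink its request size when it sees this error. The handle, buffer, and chunk sizes below are illustrative placeholders.

      /* Read "total" sectors starting at "start", backing off to smaller
         requests whenever the host returns VIX_E_OUT_OF_MEMORY. */
      VixError ReadWithBackoff(VixDiskLibHandle disk,
                               VixDiskLibSectorType start,
                               VixDiskLibSectorType total,
                               uint8 *buf)
      {
          VixDiskLibSectorType chunk = 2048;   /* 1MB expressed in 512-byte sectors */
          VixDiskLibSectorType done = 0;
          while (done < total) {
              VixDiskLibSectorType n = (total - done < chunk) ? total - done : chunk;
              VixError err = VixDiskLib_Read(disk, start + done, n,
                                             buf + done * VIXDISKLIB_SECTOR_SIZE);
              if (err == VIX_E_OUT_OF_MEMORY && chunk > 16) {
                  chunk /= 2;                  /* try again with a smaller I/O buffer */
                  continue;
              }
              if (VIX_FAILED(err)) {
                  return err;                  /* handle remains valid; caller may retry later */
              }
              done += n;
          }
          return VIX_OK;
      }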

  • Cleanup function did not remove the HotAdd disks.

    After virtual disks were HotAdded with VixDiskLib_Open, they were not automatically cleaned up (hot-removed) by VixDiskLib_Cleanup. If a backup application crashed during backup (due to network failure, power outage, or any other issue) after a disk had been HotAdded or mounted, the leftover HotAdded disk had to be cleaned up manually. Restarting the backup application after a crash and then calling VixDiskLib_Cleanup would clean up the mount directory, but would not hot-remove the disk. The fix is to clean up (hot-remove) any HotAdd disk based on mount information when VixDiskLib_Cleanup is called.
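
    A minimal sketch of the post-crash cleanup call follows; the connection parameters are placeholders and error handling is omitted.

      /* After restarting a crashed backup application, remove leftover
         HotAdd state before reconnecting. cnxParams is a placeholder. */
      uint32 numCleanedUp = 0, numRemaining = 0;
      VixError err = VixDiskLib_Cleanup(&cnxParams, &numCleanedUp, &numRemaining);
      /* With this fix, leftover HotAdded disks are hot-removed based on
         mount information, not just the mount directory. */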

  • Snapshots not required during restore.

    Older VDDK documentation advised taking a snapshot for SAN restore, and sometimes recommended a restore snapshot for other transport methods. Documentation now states that the restore snapshot is optional except for SAN restore, and discouraged for VVol datastores, where writing to non-leaf disks is considered a layering violation. See Table 7-4 in the Virtual Disk Programming Guide. The next major release might enforce non-use of restore snapshots, except with SAN transport.

  • HotAdd transport not possible with SATA virtual disks.

    ESXi 5.5 added support for SATA disks in virtual machines created with, or upgraded to, virtual hardware version 10. However HotAdd transport did not work with SATA disks. ESXi 6.0 adds support for HotAdd transport for SATA disks.

  • Threads still running after VixDiskLib_Exit call.

    When the VDDK plugin was loaded, several threads were started that could not be shut down until the calling application exited. For instance on Windows Server 2008, VDDK libraries left threads running after the VixDiskLib_Exit call. Still-running threads were from the vCenter vmacore library and could not be unloaded until the application finished. The leftover threads should be harmless, but in some cases the calling application could terminate unexpectedly. Even with the fix in this release, threads will continue to run as before. This fix addresses only the logging attempts that caused problems, not thread termination, due to extensive code changes required for termination.

  • Error code may not be set correctly before VDDK exit.

    An application calling VDDK libraries should return its own exit code, but QuickExit (part of the vmacore library) sometimes terminated the VDDK application before it set the proper exit code. This issue began when VDDK 5.1 was modified to call QuickExit, but is fixed in this release.

  • Prepare-for-access did not detect Storage vMotion in progress.

    The VixDiskLib_PrepareForAccess function was unable to detect a Storage vMotion that had started before the function could disable RelocateVM_Task and was still in progress. This has been fixed by returning VIX_E_OBJECT_IS_BUSY for that condition. Programs must now check the return code from VixDiskLib_PrepareForAccess and continue calling it in a delay loop until VIX_OK is returned. The VixDiskLib_PrepareForAccess call that returns VIX_OK must be matched by VixDiskLib_EndAccess; this is not necessary for VixDiskLib_PrepareForAccess calls that return busy.
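
    A minimal sketch of such a delay loop follows; the connection parameters and identity string are placeholders.

      VixError err;
      do {
          err = VixDiskLib_PrepareForAccess(&cnxParams, "myBackupApp");
          if (err == VIX_E_OBJECT_IS_BUSY) {
              sleep(10);        /* Storage vMotion still in progress; sleep() from <unistd.h> */
          }
      } while (err == VIX_E_OBJECT_IS_BUSY);

      if (err == VIX_OK) {
          /* ... back up the virtual machine ... */
          VixDiskLib_EndAccess(&cnxParams, "myBackupApp");   /* required only after VIX_OK */
      }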

  • VDDK supports IPv6 networks.

    Although vSphere 5.0 had some support for IP version 6, the VDDK libraries were not updated for IPv6. In this release, VDDK is updated for IPv6, while remaining backward compatible with IPv4. Libraries will work in either protocol or in a mixture of both. VDDK supports IPv6 addresses of the form [fc00:10:118:66::92] or [fc00:10:118:66::92]:port. On IPv6 networks, the SSL thumbprint database is indexed by IPv6 address minus square brackets.

  • SAN asynchronous I/O returning wrong error.

    Due to a change in the error enumeration, SAN asynchronous I/O manager was returning FILEIO_CANCELED instead of FILEIO_ERROR. The issue is fixed in this release.

  • VixDiskLib should check SAN accessibility on disk open.

    In some cases, disk I/O failed after a disk was successfully opened in SAN transport when a program specified SAN or Auto transport modes. This happened when the disk was thin provisioned or thick lazy-zeroed with no data written to it (a typical use case for restore). The cause was that the disk was incorrectly identified as being accessible through the LUN, when in fact it was not. The issue is fixed in this release. Attempting to open such a disk when using SAN transport mode now correctly results in a VIX_E_FILE_ACCESS_ERROR fault.

For VDDK issues that were resolved before the vSphere 6.0 release, see the VDDK 5.5 Release Notes and the VDDK 5.5.1 Release Notes.

Known Issues and Workarounds

The following issues were discovered after the release of VDDK 6.0 U1.

  • Intermittent hang observed when opening virtual disks.

    A customer observed intermittent hanging while opening virtual disks. The hang was found to occur inside the VixDiskLibVim module, in VixDiskLib_PrepareForAccess, VixDiskLib_EndAccess, or GetNfcTicket, and may be caused by network problems. Fixes have been identified and will appear in a future release. One fix is a timeout for HTTP requests, and another is a timeout for the open operation.

  • Hosted virtual disk > 2TB is not supported.

    Although ESXi 5.5 and later support > 2TB managed disk, VMware Workstation does not support > 2TB hosted disk. Virtual disk of type MONOLITHIC_SPARSE created by VixDiskLib_Create cannot be extended past its maximum size of 2TB. To transfer a large virtual disk over to vSphere, programs must clone a < 2TB hosted disk to an ESXi host, then extend it past 2TB after cloning.
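
    As an illustrative sketch only (connections, paths, and capacities are placeholders, and error checks are omitted), the workflow might look like this:

      /* Create a hosted sparse disk just under 2TB, populate it, then
         clone it to an ESXi host as a managed disk. */
      VixDiskLibCreateParams params = {0};
      params.diskType    = VIXDISKLIB_DISK_MONOLITHIC_SPARSE;
      params.adapterType = VIXDISKLIB_ADAPTER_SCSI_LSILOGIC;
      params.capacity    = (VixDiskLibSectorType)2047 * 2097152;  /* 2047GB in 512-byte sectors */

      VixDiskLib_Create(localCnx, "/backup/staging.vmdk", &params, NULL, NULL);
      /* ... write the restored data into the hosted disk ... */

      params.diskType = VIXDISKLIB_DISK_VMFS_THIN;   /* managed copy uses a VMFS-backed type */
      VixDiskLib_Clone(esxCnx, "[datastore1] bigvm/bigvm.vmdk",
                       localCnx, "/backup/staging.vmdk",
                       &params, NULL, NULL, TRUE);
      /* Extend the managed disk past 2TB afterward on the vSphere side,
         for example with the VirtualDiskManager ExtendVirtualDisk_Task method. */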

  • Errata in VDDK documentation concerning offset into SAN.

    In the Virtual Disk Programming Guide, section “SAN Mode on Linux Uses Direct Mode,” the first bullet item is in error. There is no way for programmers to control offset into the SAN. For SAN transport, one major factor impacting performance is that the read buffer should be aligned with the sector size, currently 512 bytes. You can specify three parameters for VixDiskLib_Read: start sector, number of sectors to read, and the buffer to hold data. The proper read buffer size can be allocated using, for example, _aligned_malloc on Windows or posix_memalign on Linux. SAN mode performs best with about six concurrent streams per ESXi host. More than six streams usually results in slower total throughput.
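
    A minimal Linux-oriented sketch of a sector-aligned read follows (use _aligned_malloc and _aligned_free instead on Windows); the disk handle and sector arguments are placeholders.

      #include <stdlib.h>
      #include "vixDiskLib.h"

      /* Allocate a buffer aligned to the 512-byte sector size, then read. */
      VixError AlignedRead(VixDiskLibHandle disk,
                           VixDiskLibSectorType startSector,
                           VixDiskLibSectorType numSectors)
      {
          uint8 *buf = NULL;
          if (posix_memalign((void **)&buf, VIXDISKLIB_SECTOR_SIZE,
                             numSectors * VIXDISKLIB_SECTOR_SIZE) != 0) {
              return VIX_E_OUT_OF_MEMORY;
          }
          VixError err = VixDiskLib_Read(disk, startSector, numSectors, buf);
          /* ... consume the data in buf ... */
          free(buf);
          return err;
      }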

  • With large thin-provisioned disk, Map Disk Region events fill log files.

    During SAN backup of a thin-provisioned virtual disk with many scattered sectors, the otherwise harmless event “Map disk region” generates many log entries, consuming unnecessary space and overwhelming other entries in the log file. There are two workarounds. The first is to switch from SAN transport to NBDSSL mode for incremental backups of problematic virtual machines, when speed is not a big issue. The second is to change the provisioning type for the problematic virtual disk from thin to thick provisioning, then perform a Storage vMotion on its VM (thick consumes more space but avoids the logging issue).

  • VixDiskLib sometimes crashes with a core dump in HotAdd manager loop.

    When performing a backup using HotAdd transport mode, VixDiskLib might crash and may produce a core dump. The backtrace includes “VcbLib::HotAdd::HotAddMgr::ManagerLoop() from ... /lib64/libdiskLibPlugin.so” followed by libvmacore.so addresses. This crash occurs when the connection is marked NULL prematurely before ManagerLoop() exits. A fix is being tested and should appear in a future release.

  • VDDK programs can crash at connect time in unsupported environments.

    When backup-restore applications call VixDiskLib_ConnectEx without specifying transport mode, passing NULL to accept the default, a crash may result if the application is running outside the VMware environment. For example, if backup software runs on Citrix Xen Server, vSphere interprets it as a physical proxy, rather than a virtual machine proxy, and sets SAN mode as the default transport, instead of HotAdd mode. The workaround is to specify nbd or nbdssl as the transport mode; do not pass NULL to accept the default transport mode. SAN mode is the default for a physical proxy, while HotAdd is the default for a virtual proxy.
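
    A minimal sketch of the workaround follows; the connection parameters, snapshot reference, and transport preference string are placeholders.

      VixDiskLibConnection cnx = NULL;
      /* Name the transport explicitly instead of passing NULL; a
         colon-separated list expresses preference order. */
      VixError err = VixDiskLib_ConnectEx(&cnxParams,
                                          TRUE,            /* read-only, typical for backup */
                                          snapshotMoRef,   /* e.g. "snapshot-123" */
                                          "nbdssl:nbd",
                                          &cnx);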

  • Error not returned when ESXi host reaches NFC connection limit.

    This issue causes an apparent hang in backup, or slowness of vCenter Server. The VIX_E_OUT_OF_MEMORY error is returned when an ESXi host reaches the NFC memory limit (see the fixed issue above), but not for other NFC limits such as the number of connections. As a workaround, programs can check that the number of current connections is less than the VMware recommended NFC session connection limit before making another VixDiskLib_Open(Ex) call. See the section NFC Session Limits in the Virtual Disk Programming Guide. ESXi host limits depend on consumed buffer space (32MB), not just the number of connections. VixDiskLib_Open(Ex) uses one connection for every virtual disk accessed on an ESXi host; it is not possible to share a connection across virtual disks. Connections made through vCenter Server are limited to a maximum of 52, after which the ESXi host limits also apply.

  • Incremental backups might lose data for CBT calls during snapshot consolidation.

    When consolidating a snapshot, changed block tracking (CBT) data gets lost on its way to VixDiskLib, so any sectors changed during snapshot consolidation will not get saved by VADP based incremental backups. This could result in data loss upon recovery, and is an ESXi 6.0 regression. A solution has been found and will appear in patch releases and in ESXi 6.0 U2. For workarounds, see KB 2136854.

The following issues were discovered during VDDK 6.0 testing.

  • HotAdd skipped if proxy has same BIOS UUID as the target VM.

    The VixDiskLib_ConnectEx function contains a pre-check for HotAdd to validate that the proxy is a virtual machine. (HotAdd is not allowed on physical machines.) If the proxy VM and the target VM have the same BIOS UUID, validation fails because multiple VMs are returned from the pre-check. This results in skipping HotAdd mode and reverting to another transport. The workaround is to make the BIOS UUID keys unique.

  • Windows proxy backup could fail with IPv6 enabled.

    When IPv6 is enabled on one or more network interfaces in a Windows proxy, virtual machine backup can fail while attempting to open a disk with the proxy. The cause is a stack buffer overrun in the libcurl.dll library when trying to enumerate DNS servers having multiple IPv6 network interfaces. Although IPv6 is enabled in all versions of Windows 8 and Windows Server 2012, the only known workaround is to disable IPv6 on the backup proxy's network interfaces.

  • DLL conflict when VDDK installed on vCenter Server.

    Usually VDDK libraries are installed on a backup proxy (physical or virtual machine) but when they are installed on vCenter Server, libldap_r.dll and liblber.dll could be different versions, causing “One or more required subsystems failed to initialize” errors. The workaround is to copy both aforementioned DLL files from the VDDK distribution to the folder where you are running the backup executable.

  • Clone function fails connection with at-sign in password.

    If a user sets the access password to include an at-sign (@), the VixDiskLib_Clone function fails with a “CnxAuthdConnect” or “NfcNewAuthdConnectionEx” error. The issue is seen only when connecting directly to an ESXi host, and when the virtual machine's MoRef was not specified. This behavior is a regression from VDDK 5.1. The workaround is to set the root (or other login) password to one that avoids the at-sign.

  • SAN mode VDDK 6.0 searches for virtual machines by BIOS UUID.

    When using SAN transport, VDDK 6.0.0 tries to find requested virtual machines by looking up their BIOS UUID, instead of by the MoRef provided, as in previous VDDK releases. Because a BIOS UUID is not guaranteed unique, unlike a MoRef, this new behavior may cause problems after cloning, out of place VM restore, and so forth. The problem happens with both SAN backup and restore when two VMs managed by the same vCenter Server have identical BIOS UUID, whether the VMs are powered on or off. The error occurs when VixDiskLib_Open(Ex) fetches the virtual disk's block map. A fix has been identified and will be available in a future release.

  • VVol support and older virtual hardware versions.

    If VMs of virtual hardware version < 11 have memory snapshots, Storage vMotion from VVol to VMFS fails, and vice versa. This is not a backup issue as such. A workaround is to collapse the snapshots, upgrade the hardware version, then migrate.

  • VDDK cannot HotAdd a > 2TB disk on VVol datastores.

    When trying to open a > 2TB virtual disk for reading or writing on VVol datastores, the following error message appears: “Failed to hot-add SCSI targets: Vmomi::MethodFault::Exception: vim.fault.GenericVmConfigFault.” No workaround is known for HotAdd, but programs can switch to NBD transport.

  • VDDK 6.0 generates unnecessary log files at temporary location.

    The VDDK logging subsystem places many log messages in /tmp/vmware-root or the Temp folder. These are redundant and will be created even if the logging functions are hooked in. A fix has been identified and will be available in a future release.

  • HotAdd fails with more than five concurrent backup operations.

    When a backup application uses more than five (5) concurrent processes to back up or restore virtual machines using the HotAdd transport mode, one or more of the operations may fail. Logs contain errors such as “The directory is not empty” and “Error acquiring instance lock” followed by “HotAdd ManagerLoop caught an exception.” The workaround is to reduce the number of concurrent backup or restore processes to five (5) or fewer.

  • Intermittent SAN mode read or write failure due to SAN LUN busy.

    On rare occasions, a SAN mode read or write can fail because the SAN LUN is busy. In the VDDK log file, an error message will appear such as “SAN read failed, error 170 (The requested resource is in use).” The workaround is to retry the read or write operation. A fix has been identified and will be available in a future release.

  • Slow read performance with NBD transport.

    This is not a regression; NBD was always slower than advanced transports. When reading disk data using NBD transport, VDDK makes synchronous calls: it requests a block of data and waits for a response. The block is read from disk and copied into a buffer on the server side, then sent over the network. While the block is being read and buffered, no data moves over the network, which adds to the wait time. To some extent, you can overcome this by using multiple streams to read simultaneously from a single disk or from multiple disks, taking advantage of parallelism.

  • Failure to mount a logical volume spanned across multiple disks.

    If a logical volume (LVM) spans multiple disks, the disks are HotAdded to the proxy VM in read/write mode, and the logical volume is then mounted read/write using the VixMntapi library, the volume sometimes fails to mount. A read-only mount succeeds with the same setup. The mount failure can occur with all releases of Windows Server, but the issue is not always reproducible.

  • Disk open in HotAdd mode can hang if manager loop fails to initialize.

    Very infrequently, the HotAdd manager loop fails to start. Normally it starts once and runs for the life of a VDDK program. However if the first VixDiskLib_Open or VixDiskLib_OpenEx call tries to start HotAdd manager on the proxy VM simultaneously with start-up of a guest operation (originating from another program), a race condition occurs. The VixDiskLib Open operation fails and the HotAdd manager loop does not start, which causes the second Open to hang in function HotAddManager::GetManager. The workaround is to kill the program, run VixDiskLib_Cleanup in a separate program, then restart the original VDDK program. A VDDK fix has been identified and will be available in a future release.

  • VDDK clients with OpenSSL before 0.9.8za fail to connect.

    When trying to use VDDK 5.5.x advanced transport (HotAdd or SAN) for backup or restore against vSphere 6.0, the proxy connection fails with an “SSL Exception” error. This is because old VDDK clients do not support TLS, which vSphere 6 requires. The solution is to upgrade the client on the proxy virtual machine to VDDK 5.5.4. A workaround is to temporarily allow SSLv3, but this is poor security practice.

  • VixMntapi for Linux fails after read-only mount is attempted.

    When a program tries to mount disk volumes read-only on Linux, it fails after VixMntapi_OpenDisks with the error message “Cannot read or parse the partition table on the virtual disk.” The workaround is to mount disk volumes read/write, or follow advice in the Virtual Disk Programming Guide (see section Read-Only Mount on Linux). VMware expects that Linux read-only mount will be allowed in a future release.

  • Failure to get volume information when mounting without loopback devices.

    VixMntapi for Linux requires loopback functionality on the backup proxy. When a Linux system is configured without loopback devices, and a volume mount is attempted, the following error appears in the log immediately following getVolumeInfo and the mount fails: “VixMntapi_MountVolume: Failed to mount partition 1 of disk... FUSE error 29: All loop devices.” The solution is to load the loop module manually, and/or create loop devices by running this command as root:
    # for i in `seq 0 8`; do mknod -m660 /dev/loop$i b 7 $i; done

  • Incremental restores using HotAdd transport can fail with deviceKey error.

    After the backup proxy has done an incremental backup and later tries to restore incrementally using HotAdd transport, the restore (or subsequent backups) may fail with the following error message: “A specified parameter was not correct: deviceKey.” If disk consolidation is done beforehand, the error message is instead: “You do not have access rights to this file.” The root cause is that Changed Block Tracking (CBT) is always disabled on hot-remove. CBT should be disabled on the restored virtual machine, but not on the HotAdd proxy. The workaround is to call QueryChangedDiskAreas("*") early in your restore program and remember results past hot-remove.

  • For HotAdd with VVols, proxy must be on same datastore.

    When writing virtual disk on a VVol (virtual volumes) datastore in HotAdd mode, the restore proxy must be on the same VVol datastore as the target virtual machines. If this is not possible, the restore proxy must use a different transport mode.

  • HotAdd mode does not work with a vSAN-sparse delta disk.

    ESXi hosts do not allow HotAdd of a vSAN-sparse delta disk on a proxy virtual machine whose datastore type is not vSAN. Every time you snapshot a virtual machine residing on a vSAN datastore, a vSAN-sparse delta disk gets created. When the proxy then attempts to HotAdd a delta VMDK named diskname-NNNNNN.vmdk (where NNNNNN is a zero-filled integer from 000001 to 999999), the operation fails unless the proxy also resides on a vSAN datastore. To prevent this situation, one workaround is to ensure that a VM has no snapshots before moving it to vSAN, and to have the proxy create HotAdded disks on the vSAN datastore.

Remaining issues are carried over from VDDK 5.5.x and still apply.

  • VixMntapi has mode selection issues on Linux.

    When the disklib plug-in libdiskLibPlugin.so has been previously loaded, even in local file or NBD/NBDSSL transport mode, VixMntapi_OpenDisks fails with the error message “Volume mounts using advanced transports not supported: Error 6.” The workaround is to start another process (using non-HotAdd non-SAN transport) without libdiskLibPlugin.so loaded.

  • Always reboot after uninstalling VDDK on a Windows proxy.

    Attempts to install or uninstall the Windows vstor2 driver using the provided scripts can be unsuccessful under certain circumstances. When uninstalling VDDK from a Windows proxy, you should reboot after the uninstall. If this is not done, it may prove impossible to (re)install the vstor2 driver until the proxy is rebooted.

  • Metadata write is not supported for HotAdd and SAN transport.

    When a program calls VixDiskLib_WriteMetadata with HotAdd or SAN transport, the function returns an error saying “The operation is not supported” and supplied metadata never gets written to virtual disk. The workaround is to close the disk from HotAdd or SAN mode, reopen the disk using NBD or NBDSSL mode, then call VixDiskLib_WriteMetadata.
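
    A minimal sketch of the workaround follows; the handles, connection parameters, snapshot reference, disk path, and key/value strings are placeholders, and error checks are omitted.

      VixDiskLib_Close(advDisk);                 /* handle that was opened via HotAdd or SAN */

      VixDiskLibConnection nbdCnx = NULL;
      VixDiskLib_ConnectEx(&cnxParams, FALSE, snapshotMoRef, "nbd", &nbdCnx);

      VixDiskLibHandle nbdDisk = NULL;
      VixDiskLib_Open(nbdCnx, diskPath, 0 /* flags */, &nbdDisk);
      VixDiskLib_WriteMetadata(nbdDisk, "somekey", "somevalue");
      VixDiskLib_Close(nbdDisk);
      VixDiskLib_Disconnect(nbdCnx);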

  • Read-only mount of Windows ReFS with VixMntapi.

    On Windows Server 2012 systems with Resilient File Systems (ReFS), the VixMntapi library can mount ReFS partitions read-only, but not read/write.

  • Cannot restore linked clones using SAN transport.

    VDDK 5.5 contained revised snapshot and linked clone libraries to allow read access anywhere in the snapshot chain on any platform. During testing VMware noticed errors when trying to restore multiple linked clones with SAN transport, which supports only a single snapshot. To avoid errors the SAN transport library was revised to explicitly disable writing to a disk that is not the base disk in a snapshot hierarchy. The libraries throw an error on open instead of waiting for fatal disk write errors. To restore linked clones, VMware recommends use of HotAdd, NBDSSL, or NBD transport. SAN transport can still read (back up) any disk in a snapshot hierarchy.

  • Boot disk should be scsi0:0 for use with mount API.

    When a virtual machine's boot disk is not on scsi0:0, the VixMntApi_GetVolumeInfo function does not correctly return the inGuestMountPoints (such as drive letter C:) or the numGuestMountPoints. On a scsi0:0 disk the correct information is returned, but not otherwise. For example with a scsi0:2 boot disk, the two properties are not populated. This issue was recently reported against VDDK 5.0 and later.

  • Error 87 with Windows 2012 NTFS.

    NTFS virtual disk from Windows Server 2012 might produce “Error 87” when a backup proxy mounts the disk read-only on earlier versions of Windows Server. The VDDK log contains the error “Volume/NTFS filesystem is corrupt (87).” This is a Windows backward compatibility issue. If customers encounter this error, one workaround is to mount these virtual disks read/write so that the system has a chance to repair them.

  • Possible segmentation violation if running multiple backup processes.

    When many simultaneous backup processes are running, some of them might crash with a SIGSEGV after many iterations. This is due to a possible race condition in VixDiskLib, which can be reduced by calling VixDiskLib_Exit() at the end of your program.