10 changes: 9 additions & 1 deletion documentation/modules/known-issues-2-10.adoc
@@ -11,7 +11,7 @@

.Raw Device Mapping files prevent `copyoffload` migration of NFS-based VMs

If you try to migrate an NFS-based VM by using the `copyoffload` method, the `vmkfstools` command-line utility attempts to use the fallback method because NFS is file-based. However, NFS datastores do not support Raw Device Mapping (RDM) pointer files, and `vmkfstools` creates RDM files as cloning targets.

*Workaround:* You can use one of the following migration methods to address the limitation:

@@ -24,5 +24,13 @@ If you try to migrate an NFS-based VM by using the `copyoffload` method, the `vm

When you create an Open Virtual Appliance (OVA) provider in the MTV UI, `ConnectionTestFailed` error messages are displayed before the provider status changes to `Ready`. The error messages are misleading and do not accurately reflect in-progress status of the connection. link:https://issues.redhat.com/browse/MTV-3613[(MTV-3613)]

.Storage-offload scheduling ignores the `max_vm_inflight` value when limiting active migrations to 2

Copy-offload migration plans might fail when more than two migrations are triggered simultaneously on a single ESXi host. This was observed in tests that migrated 10 and 50 VMs, each with two 50 GB disks, from a single ESXi host running vSphere 7.0.3. The root cause is an incorrect scheduling cost function during storage-offload migrations. This miscalculation allows multiple populate pods to start concurrently on the same ESXi host, even when the configured internal limit (`max_vm_inflight`) is set to *2*. Exceeding the number of disk copy operations that the host supports leads to errors and overall migration instability. This is especially relevant when migrating multiple VMs, each potentially having multiple disks, from a single ESXi host.

*Impact:*

Because VMware's underlying disk utility, `vmkfstools`, enforces a strict limit of two active copy operations per ESXi host, exceeding this limit results in errors and plan failure. Consequently, customers who attempt single-host storage-offload migrations for multiple VMs at once might experience parallel migration failures, negatively impacting overall migration stability and user experience. link:https://issues.redhat.com/browse/MTV-3630[(MTV-3630)]
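The per-host cap described above can be modeled as a counting semaphore per ESXi host. The following sketch is illustrative only, not MTV scheduler code; the class, host name, and disk names are hypothetical, and it assumes only the documented fact that `vmkfstools` permits at most two concurrent copy operations per host:

```python
# Illustrative model (NOT MTV code): gate each disk copy on a per-host
# semaphore so that no ESXi host ever runs more than two copies at once,
# which is the limit vmkfstools enforces.
import threading
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

MAX_COPIES_PER_HOST = 2  # vmkfstools limit of active copy operations per host

class HostGate:
    """Caps concurrent disk copies per host and records the observed peak."""

    def __init__(self):
        self._slots = defaultdict(
            lambda: threading.BoundedSemaphore(MAX_COPIES_PER_HOST))
        self._lock = threading.Lock()
        self._active = defaultdict(int)
        self.peak = defaultdict(int)

    def copy_disk(self, host, disk):
        with self._slots[host]:  # blocks while two copies are already active
            with self._lock:
                self._active[host] += 1
                self.peak[host] = max(self.peak[host], self._active[host])
            # ... the real disk copy would run here ...
            with self._lock:
                self._active[host] -= 1
            return f"copied {disk} via {host}"

gate = HostGate()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(lambda d: gate.copy_disk("esxi-01", d),
                            [f"disk{i}" for i in range(8)]))

# gate.peak["esxi-01"] never exceeds MAX_COPIES_PER_HOST
```

A scheduler that computes its cost function this way would queue the extra populate pods instead of starting them concurrently on the same host.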

This applies only to specific storage when doing XCOPY; when we use the fallback, we are capable of doing more migrations at once. @TzahiAshkenazi @rgolangh


@TzahiAshkenazi & @rgolangh - could I please ask you to help with @mnecas's comment?

thanks


@rgolangh rgolangh Nov 17, 2025


Theoretically the fallback could be less limited, but we see more factors causing the failures, such as rescans and VIB-related commands, so we are unsure yet. That comment as it is is clear and correct for now.


//For a complete list of all known issues in this release, see the list of link:https://issues.redhat.com/issues/?filter=12472621[Known Issues] in Jira.