@@ -5,7 +5,7 @@ ifdef::context[:parent-context: {context}]

[id="assembly_planning-migration-osp_{context}"]

= Planning migration of virtual machines from {osp}
= Planning a migration of virtual machines from {osp}

//:context: planning-osp

@@ -5,7 +5,7 @@ ifdef::context[:parent-context: {context}]

[id="assembly_planning-migration-vmware_{context}"]

= Planning migration of virtual machines from VMware vSphere
= Planning a migration of virtual machines from VMware vSphere

//:context: planning-vmware

@@ -27,6 +27,10 @@ include::../modules/creating-yaml-based-network-maps-ui.adoc[leveloffset=+1]

include::../modules/creating-form-based-storage-maps-ui-vmware.adoc[leveloffset=+1]

include::../modules/about-storage-copy-offload.adoc[leveloffset=+1]

include::../modules/proc_storage-copy-offload.adoc[leveloffset=+2]

include::../modules/creating-yaml-based-storage-maps-ui.adoc[leveloffset=+1]

include::../modules/adding-source-provider.adoc[leveloffset=+1]
3 changes: 0 additions & 3 deletions documentation/modules/about-cold-warm-migration.adoc
@@ -66,9 +66,6 @@ You can start the cutover stage manually by using the {project-short} console or

The table that follows offers a more detailed description of the advantages and disadvantages of cold migration and warm migration. It assumes that you have installed Red Hat Enterprise Linux (RHEL) 9 on the {ocp} platform on which you installed {project-short}:

[cols="1,1,1",options="header"]
.Detailed description of advantages and disadvantages

[cols="1,1,1",options="header"]
.Advantages and disadvantages of cold and warm migrations
|===
69 changes: 69 additions & 0 deletions documentation/modules/about-storage-copy-offload.adoc
@@ -0,0 +1,69 @@
// Module included in the following assemblies:
//
// * documentation/doc-Planninig_your_migration/assembly_planning-migration-vmware.adoc

:_content-type: CONCEPT
[id="about-storage-copy-offload_{context}"]
= About migrating {vmw} virtual machines by using storage copy offload

[role="_abstract"]
You can migrate {vmw} virtual machines (VMs) that are in a storage area network (SAN) more efficiently by using a method called _storage copy offload_. Use this method to speed up migration and reduce the load on your network.

{vmw}'s vSphere Storage APIs-Array Integration (VAAI) includes a command named `vmkfstools`. This command sends the `XCOPY` command, which is part of the SCSI protocol. The `XCOPY` command lets you copy data inside a SAN more efficiently than copying the data over a network.

{project-first} 2.10.0 leverages this command as the basis for storage copy offload, allowing you to delegate the cloning of your VMs' data to the storage hardware instead of having the data transit between {project-short} and {virt}. You do this by configuring the storage map in your migration plan to point to your storage array instead of the network you usually use for migration. When you start the migration plan, {project-short} migrates your VMs by copying them to the storage array you choose and using `XCOPY` to copy them directly to {virt}, instead of transmitting the contents of your VMs to {virt} over the network.

The storage copy offload feature has some unique configuration prerequisites, which are discussed in xref:proc_storage-copy-offload_vmware[Migrating {vmw} vSphere VMs by using storage copy offload]. After you configure your system, you can run migration plans that use storage copy offload from either the {project-short} UI or its CLI. Instructions for using storage copy offload have been integrated into the procedures for migrating {vmw} VMs for both the UI and the CLI.

[IMPORTANT]
====
A migration plan cannot mix VDDK mappings with copy-offload mappings. Because the migration controller copies disks either through CDI volumes (VDDK) or through Volume Populators (copy-offload), either all the storage pairs in the plan must include copy-offload details (a secret and a storage product) or none of them must. Otherwise, the plan fails.
====
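As an illustrative fragment, the following storage map mixes a copy-offload pair with a plain pair and therefore causes the plan to fail. All names and IDs are hypothetical; the field structure follows the `StorageMap` example later in this document:

[source,yaml]
----
spec:
  map:
  - destination:
      storageClass: fast-san            # pair 1: includes copy-offload details
    offloadPlugin:
      vsphereXcopyConfig:
        secretRef: vantara-credentials  # hypothetical Secret name
        storageVendorProduct: vantara
    source:
      id: datastore-11                  # hypothetical datastore ID
  - destination:
      storageClass: standard            # pair 2: no copy-offload details - invalid mix
    source:
      id: datastore-12
----

To fix a plan like this one, either add copy-offload details to every pair or remove them from every pair.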

Storage copy offload is available as a Technology Preview feature for {project-first} 2.10.0 for cold migration and as a Developer Preview feature for warm migration.

[IMPORTANT]
====
Storage copy offload for cold migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them
in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview
features, see https://access.redhat.com/support/offerings/techpreview/.
====

[IMPORTANT]
====
Storage copy offload for warm migration is a Developer Preview feature only. Developer Preview software is not supported by Red{nbsp}Hat in any way and is not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red{nbsp}Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software might not have any documentation, is subject to change or removal at any time, and has received limited testing. Red{nbsp}Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.

For more information about the support scope of Red{nbsp}Hat Developer Preview software, see link:https://access.redhat.com/support/offerings/devpreview/[Developer Preview Support Scope].
====

[id="how-storage-copy-offload-works_{context}"]
== How storage copy offload works

Without storage copy offload, {project-short} migrates a virtual disk as follows:

. {project-short} reads the disk from the source storage.
. {project-short} sends the data over a network to {virt}.
. {virt} writes the data to its storage.
+
This method can be slow and consume significant network and host resources.

With storage copy offload, the process is streamlined:

. {project-short} initiates a disk transfer request.
. Instead of sending the data, {project-short} instructs the storage array that backs the vSphere Virtual Machine File System (VMFS) datastore holding the source VMs to perform a direct copy from the source storage to the target volume, on the same array, in the correct storage class.
+
The storage array handles the cloning of the VM disk internally, often at a much higher speed than a network-based transfer.

The Forklift project, the upstream project of {project-short}, includes a specialized `vsphere-xcopy-volume-populator` that interacts directly with {vmw}'s VAAI. This allows {project-short} to trigger the high-speed, array-level data copy operation for supported storage systems.

[IMPORTANT]
====
The storage arrays must be among the supported storage systems listed in xref:proc_storage-copy-offload_vmware[Migrating {vmw} vSphere VMs by using storage copy offload]. Otherwise, `XCOPY` performs a network copy on the ESXi host. Although a network copy on the ESXi host is usually considerably faster than a standard migration using a VDDK image, neither is as fast as a properly configured storage copy offload migration.
====

[role="_additional-resources"]
.Additional resources

* xref:proc_storage-copy-offload_vmware[Migrating {vmw} vSphere VMs by using storage copy offload]
@@ -28,15 +28,13 @@ You can create ownerless storage maps by using the form page of the {project-short} UI

. Optional: If this is a storage map for a migration using storage copy offload, specify the following offload options:

* *Offload plugin*: Select from the list.
* *Offload plugin*: Select `vSphere XCOPY` from the list.
* *Storage secret*: Select from the list.
* *Storage product*: Select from the list.
+
[IMPORTANT]
[NOTE]
====
Storage copy offload is Developer Preview software only. Developer Preview software is not supported by Red{nbsp}Hat in any way and is not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red{nbsp}Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software might not have any documentation, is subject to change or removal at any time, and has received limited testing. Red{nbsp}Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.

For more information about the support scope of Red{nbsp}Hat Developer Preview software, see link:https://access.redhat.com/support/offerings/devpreview/[Developer Preview Support Scope].
Storage copy offload is a feature that allows you to migrate {vmw} virtual machines (VMs) that are in a storage area network (SAN) more efficiently. This feature makes use of the `vmkfstools` command on the ESXi host, which invokes the `XCOPY` command on the storage array using an Internet Small Computer Systems Interface (iSCSI) or Fibre Channel (FC) connection. Storage copy offload lets you copy data inside a SAN more efficiently than copying the data over a network. Storage copy offload is available as a Technology Preview feature for {project-first} 2.10 for cold migration and as a Developer Preview feature for warm migration. For more information, see xref:about-storage-copy-offload_vmware[About migrating {vmw} virtual machines by using storage copy offload].
====

. Optional: Click *Add mapping* to create additional storage maps, including mapping multiple storage sources to a single target storage class.
12 changes: 12 additions & 0 deletions documentation/modules/preparing-storage-copy-offload.adoc
@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// * documentation/doc-Migration_Toolkit_for_Virtualization/master.adoc

:_content-type: PROCEDURE
[id="preparing-storage-copy-offload_{context}"]
= Preparing to use storage copy offload

[role="_abstract"]
To use storage copy offload, you must complete the preparation tasks described in this section.
14 changes: 13 additions & 1 deletion documentation/modules/proc_migrating-virtual-machines-cli.adoc
@@ -618,7 +618,7 @@ spec:
- destination:
    storageClass: <storage_class>
    accessMode: <access_mode> <1>
  source:
    id: <source_datastore> <2>
provider:
  source:
@@ -627,10 +627,22 @@ spec:
  destination:
    name: <destination_provider>
    namespace: <namespace>
offloadOptions:
  offloadPlugin: <offload_plugin> <3>
  storageSecret:
    name: <storage_secret_name> <4>
    namespace: <namespace>
  storageVendorProduct: <storage_product> <5>
EOF
----
<1> Allowed values are `ReadWriteOnce` and `ReadWriteMany`.
<2> Specify the {vmw} vSphere datastore moRef. For example, `f2737930-b567-451a-9ceb-2887f6207009`. To retrieve the moRef, see xref:retrieving-vmware-moref_vmware[Retrieving a {vmw} vSphere moRef].
<3> Storage copy offload feature only: Name of the offload plugin you are using for this migration. Currently, the only valid value is `vsphereXcopyConfig`.
<4> Storage copy offload feature only: Name of the Kubernetes Secret used in this migration.
<5> Storage copy offload feature only: Name of the storage product used in the migration. For example, `vantara` for Hitachi Vantara.
+
Storage copy offload is a feature that allows you to migrate {vmw} virtual machines (VMs) that are in a storage area network (SAN) more efficiently. This feature makes use of the `vmkfstools` command on the ESXi host, which invokes the `XCOPY` command on the storage array using an Internet Small Computer Systems Interface (iSCSI) or Fibre Channel (FC) connection. Storage copy offload lets you copy data inside a SAN more efficiently than copying the data over a network. Storage copy offload is available as a Technology Preview feature for {project-first} 2.10 for cold migration and as a Developer Preview feature for warm migration. For more information, see https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.10/html/planning_your_migration_to_red_hat_openshift_virtualization/assembly_planning-migration-vmware#about-storage-copy-offload_vmware[About migrating {vmw} virtual machines by using storage copy offload].

endif::[]

ifdef::rhv[]
219 changes: 219 additions & 0 deletions documentation/modules/proc_storage-copy-offload.adoc
@@ -0,0 +1,219 @@
// Module included in the following assemblies:
//
// * documentation/doc-Migration_Toolkit_for_Virtualization/master.adoc

:_content-type: PROCEDURE
[id="proc_storage-copy-offload_{context}"]
= Migrating {vmw} vSphere VMs by using storage copy offload

[role="_abstract"]
You can use the storage copy offload feature of {project-first} to migrate {vmw} vSphere virtual machines (VMs) faster than by other methods.

.Prerequisites

In addition to the regular xref:vmware-prerequisites_mtv[{vmw} prerequisites], storage copy offload has the following prerequisites:

* One of the following storage systems:
** A Hitachi Vantara storage system
** A NetApp ONTAP storage system
** A configured Dell PowerFlex Storage Data Client (SDC)
** A configured Internet Small Computer System Interface (iSCSI) storage system
** A configured Fibre Channel storage system
* A working Container Storage Interface (CSI) driver connected to the storage system and to {virt}
* A configured {vmw} vSphere provider
* vSphere users must have a role that includes the following privileges (suggested role name: `StorageOffloader`):

** Global
*** Settings
** Datastore
*** Browse datastore
*** Low level file operations
** Host Configuration
*** Advanced settings
*** Query patch
*** Storage partition configuration

.Procedure

. In the {project-short} Operator, set the value of `feature_copy_offload` to `true` in `forklift-controller` by running the following command:
+
[source, terminal]
----
oc patch forkliftcontrollers.forklift.konveyor.io forklift-controller --type merge -p '{"spec": {"feature_copy_offload": "true"}}' -n openshift-mtv
----
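+
To confirm that the setting was applied, you can inspect the CR. This check is a suggestion; the `jsonpath` expression assumes the field name used in the patch:
+
[source,terminal]
----
oc get forkliftcontrollers.forklift.konveyor.io forklift-controller -n openshift-mtv -o jsonpath='{.spec.feature_copy_offload}'
----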

. Create a `Secret` in the namespace in which the migration provider is set up, usually `openshift-mtv`. Include the following credentials in the `Secret`. Note that Hitachi has a different set of credentials:
+
[cols="4", options="header"]
.Credentials for a non-Hitachi storage copy offload Secret
|===
|Key| Description| Mandatory?| Default

| `STORAGE_HOSTNAME`
| IP or URL of the host (string)
| Yes
| NA

| `STORAGE_USERNAME`
| The user's name (string)
| Yes
| NA

| `STORAGE_PASSWORD`
| The user's password (string)
| Yes
| NA

| `STORAGE_SKIP_SSL_VERIFICATION`
| If set to `true`, SSL verification is not performed (`true`, `false`).
| No
|`false`
|===
+
[cols="4", options="header"]
.Credentials for a Hitachi storage copy offload Secret
|===
|Key| Description| Mandatory?| Default

| `GOVMOMI_HOSTNAME`
| hostname or URL of the vSphere API (string)
| Yes
| NA

| `GOVMOMI_USERNAME`
| User name of the vSphere API (string)
| Yes
| NA

| `GOVMOMI_PASSWORD`
| Password of the vSphere API (string)
| Yes
| NA

| `STORAGE_HOSTNAME`
| The hostname or URL of the storage vendor API (string)
| Yes
| NA

| `STORAGE_USERNAME`
| The username of the storage vendor API (string)
| Yes
| NA

| `STORAGE_PASSWORD`
| The password of the storage vendor API (string)
| Yes
| NA

| `STORAGE_PORT`
| The port of the storage vendor API (string)
| Yes
| NA

| `STORAGE_ID`
| Storage array serial number (string)
| Yes
| NA

| `HOSTGROUP_ID_LIST`
| List of I/O ports and host group IDs, for example, `CL1-A,1:CL2-B,2:CL4-A,1:CL6-A,1`
| Yes
| NA
|===
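+
As an illustrative sketch, a `Secret` for a non-Hitachi storage system using the keys above might look like the following. The name and all values are hypothetical:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: san-credentials  # hypothetical name; referenced from the storage map
  namespace: openshift-mtv
type: Opaque
stringData:
  STORAGE_HOSTNAME: "192.0.2.10"
  STORAGE_USERNAME: "admin"
  STORAGE_PASSWORD: "example-password"
  STORAGE_SKIP_SSL_VERIFICATION: "false"
----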

. In either the UI or the CLI, complete the following steps:

.. In the UI, complete the following steps:

... Create an ownerless storage map by using the procedure in xref:creating-form-based-storage-maps-ui-vmware_vmware[Creating ownerless storage maps using the form page of the {project-short} UI]. Use the *Offload plugin* named `vSphere XCOPY`.
... Create a migration plan by using the procedure in xref:creating-plan-wizard-vmware_vmware[Creating a VMware vSphere migration plan by using the MTV wizard].


.. In the CLI, complete the following steps:

... Create a `StorageMap` custom resource (CR) according to the following example:
+
[source,yaml,subs="attributes+"]
----
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: copy-offload
  namespace: openshift-mtv
spec:
  map:
  - destination:
      accessMode: ReadWriteMany <1>
      storageClass: <storage_class> <2>
    offloadPlugin:
      vsphereXcopyConfig:
        secretRef: <Secret_for_the_storage_vendor_product> <3>
        storageVendorProduct: <storage_vendor_product> <4>
    source:
      id: <datastore_ID> <5>
  provider:
    destination:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: host
      namespace: openshift-mtv
      uid: <ID_of_provider_host>
    source:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <name_of_vSphere_provider>
      namespace: openshift-mtv
      uid: <ID_of_vSphere_provider>
----
<1> Optional. Allowed values are `ReadWriteOnce` and `ReadWriteMany`.
<2> The storage class for the target Persistent Volume Claim (PVC) of the VM.
<3> `Secret` that contains the storage provider credentials.
<4> String that identifies the storage product. Valid strings are listed in the table that follows this CR.
<5> Datastore ID as set by {vmw} vSphere.
+
[cols="1,1",options="header"]
.Supported storage vendors and their identifying strings in the CLI
|===
|Vendor
|Identifying string (Value of `storageVendorProduct` label)

|Hitachi Vantara
|`vantara`

|NetApp
|`ontap`

|Hewlett Packard Enterprise
|`primera3par`

|Pure Storage
|`pureFlashArray`

| Dell (PowerFlex)
|`powerflex`

| Dell (PowerMax)
|`powermax`

| Dell (PowerStore)
|`powerstore`

| Infinidat
|`infinibox`

| IBM
|`flashsystem`
|===

... Create a migration plan using the procedure in link:https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.10/html/migrating_your_virtual_machines_to_red_hat_openshift_virtualization/assembly_migrating-from-vmware_mtv#proc_migrating-virtual-machines-cli_vmware[Running a {vmw} vSphere migration from the command-line].
... In the `Plan` CR, modify the `spec:map:storage` portion of the CR as follows:
+
[source,yaml,subs="attributes+"]
----
spec:
  map:
    storage:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: StorageMap
      name: <storage_map_in_StorageMap_CR>
      namespace: <namespace>
----
