VMware has long been a pioneer and a benchmark in the field of virtualisation, whether for workstations with VMware Workstation or for server environments with VMware ESXi. Since the acquisition by Broadcom and the new policies that followed, such as the end of perpetual licences and cost increases driven by subscriptions based on the number of CPU cores, organisations have been re-evaluating their infrastructures and the solutions that comprise them. In this search for alternatives, OpenStack is a strategic and highly competitive choice for companies of all sizes.
Compared to VMware ESXi Server, OpenStack offers many advantages:
- Greater flexibility in terms of software and hardware
- Significant reduction in costs (especially in terms of software licences)
- No vendor lock-in (open-source solution with open technical standards)
- Guaranteed interoperability
Now let’s get to the heart of the matter, namely how to migrate one (or several) VMs from VMware ESXi to an OpenStack infrastructure.
1. Technical prerequisites
- An instance on the destination OpenStack infrastructure running Rocky Linux (or an equivalent), referred to throughout this article as the “migration appliance”. During migrations, this instance acts as a pass-through and needs direct access to the destination volume(s) of the VMs we are going to migrate. Personally, I will use Infomaniak’s Public Cloud, which is extremely competitive compared to AWS, GCP or Azure.
- SSH access (port 22 only) to the ESXi servers from the migration appliance (either with a username and password or with an SSH key).
- All actions performed in this article are to be carried out as the root user.
💡 Two more important points before moving on to the practical stage:
- This procedure is only valid for VMware ESXi and not vSphere. By exploring the documentation of the solutions used in this article, you can easily adapt it if necessary.
- Back up your virtual machines before starting the migration process; I don’t want to be responsible for your next sleepless nights if something goes wrong!
That’s it! Now let’s move on to the configuration of our migration appliance 🚀
2. Installing the migration appliance
In order for our migration appliance to be functional, we need to install a few packages once connected to it over SSH:
dnf install centos-release-openstack-caracal
dnf install python-openstackclient virt-v2v
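To quickly check that the installation went through, you can ask virt-v2v for its version:
virt-v2v --version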
3. Authentication and login
We still have two things to do before we can start our first migration, so that our migration appliance can connect to our ESXi server and our OpenStack platform:
- For authentication on the ESXi server, two solutions are possible: either with a key pair or with a username and password.
- For authentication with the OpenStack platform, you will need to retrieve the openrc profile file and copy it to the migration appliance.
3.1 Authentication with ESXi server
I assume that the firewall rules between your OpenStack instance and your ESXi server are configured and functional, and that the SSH service is enabled on the ESXi server.
For those who do not wish to implement key pair authentication, I invite you to skip straight to the next section on authentication with the Openstack platform.
We will now set up our key pair authentication. Nothing complicated: on our migration appliance, we run the following commands:
- We start by generating our key pair (the key pair uses RSA to maximise compatibility with older versions of ESXi):
ssh-keygen -t rsa -b 4096
- Once your key pair is generated, we will deploy it on our ESXi server:
ssh-copy-id root@IP_SERVEUR_ESXI
This command will ask you to enter the root password for your virtualisation server, and will copy your public key to it. This will mean that you no longer need to enter a password to connect to your server.
I’ll let you check that everything worked correctly by trying to connect to your ESXi server directly from the migration appliance.
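For example, the following command should now print the ESXi version without prompting for a password:
ssh root@IP_SERVEUR_ESXI "vmware -v"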
For those who want to use username and password authentication, simply create, on your migration appliance, a text file containing the password of your ESXi server:
touch passwordfile
echo "PASSWORD_ESXI" > passwordfile
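Since this file contains your password in plain text, it is wise to restrict its permissions:
chmod 600 passwordfile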
Of course, I’ll leave you to check that the login credentials are correct by using them to establish a connection to your ESXi server.
3.2 Authentication with the OpenStack platform
We still need to authenticate our appliance with the OpenStack platform. To do this, download the openrc profile file from the Horizon dashboard, copy it to your instance, and then execute it.
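For example, assuming the file you downloaded is called my-project-openrc.sh (the actual name depends on your project), you would run the following command; the script will prompt you for your OpenStack password:
source my-project-openrc.sh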
To confirm that you are authenticated on the OpenStack platform, you can run the following command:
openstack token issue
4. Launching VM migrations
To migrate our virtual machines, we will use the virt-v2v utility developed by Red Hat, which is designed specifically for this type of task and already incorporates all the necessary functions and options. Without going into detail, virt-v2v will create a new volume in the OpenStack project where your migration appliance is located and copy your ESXi VM into it in its entirety. Once the data has been copied, virt-v2v will prepare the volume so that it can be booted on OpenStack.
One last important point before we start (I promise, this is the last one 🙈): the VMs to be migrated must be switched off. Remember also to check your volume quotas on the OpenStack platform so you don’t get stuck during the migration.
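As a side note, if you prefer the ESXi shell to the web interface, you can list your VMs and power one off with vim-cmd (VMID being the identifier returned by the first command):
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.off VMID
And on the OpenStack side, the following command displays the quotas (including volumes and gigabytes) of your current project:
openstack quota show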
That’s it, now we can get going 😎
Here’s the command that will allow us to migrate our virtual machines. Make sure you adapt it according to your information!
With key pair authentication
virt-v2v -i vmx -it ssh ssh://root@IP_SERVEUR_ESXI/vmfs/volumes/NOM_DATASTORE/NOM_DE_VM/NOM_DE_VM.vmx -o openstack -oo server-id=ID_INSTANCE_OPENSTACK
With authentication by username and password
virt-v2v -i vmx -it ssh -ip FICHIER_MOT_DE_PASSE ssh://root@IP_SERVEUR_ESXI/vmfs/volumes/NOM_DATASTORE/NOM_DE_VM/NOM_DE_VM.vmx -o openstack -oo server-id=ID_INSTANCE_OPENSTACK
💡 A little explanation of these options
- In the previous commands, virt-v2v is configured to read the virtual machine’s VMX file directly (-i vmx). To access this file, the SSH protocol is used (-it ssh).
- The following options allow you to specify where the VMX file is located:
- IP_SERVEUR_ESXI: the IP address of the ESXi server
- NOM_DATASTORE: the name of the datastore where the virtual machine is stored
- NOM_DE_VM: the name of the folder that contains the virtual machine files
- NOM_DE_VM.vmx: the name of the VMX file
- The -ip FICHIER_MOT_DE_PASSE option allows you to specify the file that contains the password for your ESXi server if you have chosen username and password authentication.
- The -o openstack option indicates that the destination is an OpenStack platform, and -oo server-id=ID_INSTANCE_OPENSTACK tells virt-v2v which instance is our migration appliance.
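To make things concrete, here is what the key pair variant could look like with purely hypothetical values (ESXi server 203.0.113.10, datastore datastore1, a VM named web01 and a fictitious appliance ID):
virt-v2v -i vmx -it ssh ssh://root@203.0.113.10/vmfs/volumes/datastore1/web01/web01.vmx -o openstack -oo server-id=2b0c8f21-7e5d-4f39-9c1a-0d4e5f6a7b8c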
To retrieve the ID of your migration appliance, you can enter the following command directly into it:
dmidecode -s system-serial-number
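Alternatively, assuming your appliance is named migration-appliance, you can retrieve its ID with the OpenStack CLI:
openstack server show migration-appliance -c id -f value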
That’s it, our migration is in progress and you just have to wait ⏱️ The tool will take care of retrieving the virtual machine’s information directly from the ESXi server, and will automatically perform the actions needed for the VM to work correctly with OpenStack.
5. Starting migrated virtual machines
Once our migration is complete, all that remains is to start our instance via OpenStack’s boot from volume function. This feature allows an instance to boot directly from a bootable volume rather than from an image injected into a new volume.
To begin with, we will retrieve the ID of the volume that was created during our migration. Let’s start by listing the volumes of our OpenStack project:
openstack volume list
In the list of returned volumes, you should see one with the name of your virtual machine followed by -sda (or -sdb, -sdc, etc. if your VM has multiple disks). This is the volume we are interested in.
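If your project contains many volumes, you can narrow the list down, for example:
openstack volume list | grep NOM_DE_VM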
Copy its ID (the ID column in the output of the previous command) and then enter the following command, replacing ID_VOLUME with it and NOM_DE_VM with the name you want to give the instance:
openstack server create --flavor a2-ram4-disk0 --volume ID_VOLUME --network ext-net1 NOM_DE_VM
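You can then follow the boot process, for example by checking the status of the new instance:
openstack server show NOM_DE_VM -c status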
That’s it, your instance is starting 😊 You can check that everything is working correctly via the VNC console by entering the URL returned by the following command in your browser:
openstack console url show NOM_DE_VM
You can also connect to it using the usual protocol (SSH, RDP, etc.), making sure you have opened the necessary ports in your security group.
6. Specificity with Infomaniak’s Public Cloud
There is one small specificity with Infomaniak’s Public Cloud to get the volume we’ve just created to work. When creating the volume, virt-v2v automatically adds metadata to it. This metadata specifies how the hypervisor should configure the virtual machine that will use the volume, but some of these properties are not supported in all regions of the platform (notably the dc3-a region).
To fix this, simply remove the property that is currently not compatible with the platform, namely hw_machine_type=q35. We will therefore remove it from our volume with the following command:
openstack volume unset --image-property hw_machine_type ID_VOLUME
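You can then verify that the property no longer appears in the volume’s image metadata:
openstack volume show ID_VOLUME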
Once this is done, you can start your instance as described above with the platform’s boot from volume function.
More
- Discover Infomaniak Public Cloud
- Why choose OpenStack for your Public Cloud
- Discover how Infomaniak’s sovereign cloud works
Kevin Allioli is a Cloud & System Architect at Infomaniak and an OpenStack and AWS expert. With more than 10 years of experience in cloud computing, he contributes to the development of Infomaniak’s Public Cloud.