Hi Jonathan,
Thank you very much for your feedback. Unfortunately, I only have access to one host in this environment, and I can't get away with nested instances: the purpose of the lab is to test a real-world single-host portable deployment, so the test environment needs to accurately reflect the eventual production environment.
I understand that making changes without rollback is risky, but is there a method that *should* work, or am I guaranteed to lock myself out? If the latter occurred, I assume I would be locked out of the host GUI too, correct?
In the worst case scenario, how bad is it to have to use the network recovery option through the host console (I do have IPMI access), and what state would that leave the networking in?
Thanks!
------------------------------
Zac Pollock
Engineer/Specialist
Technical Systems Integrators, Inc.
Apopka FL
------------------------------
Original Message:
Sent: 06-26-2020 09:57 AM
From: Install/Upgrade Expert - Jonathan Ebenezer
Subject: vSphereAMA - Moving vSwitch to DVswitch with Single Uplink
Hello Zac,
Although this is a lab environment, this method is not recommended; the rollback you are seeing is expected behavior when the host has a single NIC. The recommended approach is to migrate the vCenter to another host first and then move the host to the DVS. If rollback is disabled, there is a high chance of locking yourself out of the GUI.
If you are limited to a single host and it has enough resources, you can use nested virtualization: run multiple ESXi hosts as VMs on the parent bare-metal host to cover your lab setup and testing needs (a rough sketch of one piece of that follows the KB link below).
Support for running ESXi/ESX as a nested virtualization solution -
https://kb.vmware.com/s/article/2009916?lang=en_us
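If you do go the nested route, a rough pyVmomi sketch of enabling nested hardware virtualization on a VM that will run ESXi is below. The VM name and connection details are placeholders, and you would also typically need to allow promiscuous mode and forged transmits on the port group the nested hosts use:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Rough sketch only: expose hardware-assisted virtualization to a VM so ESXi
    # can run inside it. "nested-esxi-01" and the vCenter details are placeholders.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "nested-esxi-01")
        view.Destroy()

        # Equivalent to the "Expose hardware assisted virtualization to the
        # guest OS" checkbox in the vSphere Client.
        spec = vim.vm.ConfigSpec(nestedHVEnabled=True)
        vm.ReconfigVM_Task(spec=spec)
    finally:
        Disconnect(si)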
------------------------------
Install/Upgrade Expert - Jonathan Ebenezer
Original Message:
Sent: 06-26-2020 09:24 AM
From: Zac Pollock
Subject: vSphereAMA - Moving vSwitch to DVswitch with Single Uplink
We are setting up a small temporary lab with a single host vSphere environment on a hosted bare metal server. VCSA resides on the host it is controlling, and because it is a hosted solution we do not have direct access to, or control over, the physical NIC cabling.
The host is currently deployed with the single default vSwitch0, using eth0 on the host. The VMkernel port and management network reside on this switch. Since eth0 is the only physical connection to the private network our host provider allows, we would like to migrate the NIC (eth0), the VMkernel port, and the VCSA port group to a distributed vSwitch.
This process is typically straightforward, and we have attempted the move using the standard procedure: create the DVswitch, uplink port, and port groups, then use host management to migrate the physical NIC, VMkernel port, and VCSA port group at the same time. The changes are applied on the host and all appear successful, but after the networkTimeout period they are rolled back due to a loss of connectivity between vCenter and the host.
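For reference, this is roughly how we have been confirming where the management VMkernel port and the uplink end up after each attempt. It is just a pyVmomi sketch with our connection details replaced by placeholders, and it is what shows everything back on vSwitch0 once the rollback fires:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details for our single-host lab.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]  # only one host in this environment
        view.Destroy()

        net = host.config.network
        for vnic in net.vnic:
            if vnic.spec.distributedVirtualPort:
                # VMkernel port is connected to a distributed switch port
                print(vnic.device, "-> DVS portgroup key",
                      vnic.spec.distributedVirtualPort.portgroupKey)
            else:
                # VMkernel port is still on a standard switch port group
                print(vnic.device, "-> standard port group", vnic.portgroup)
        for proxy in net.proxySwitch:
            # Shows which physical NICs are currently claimed by the DVS
            print("DVS", proxy.dvsName, "uplinks:", list(proxy.pnic or []))
    finally:
        Disconnect(si)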
Some articles have suggested that this is typical behavior in a single NIC/uplink environment since there is no redundancy, and that disabling the automatic rollback feature will resolve the issue and allow completion. Is that a correct assessment, or would that result in locking ourselves out of the GUI? Is there a better procedure for performing a VMkernel control port migration in a single NIC scenario?
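To be specific, the rollback control we are considering is the vCenter advanced setting config.vpxd.network.rollback. Below is a rough pyVmomi sketch of what we would run; I am assuming OptionManager.UpdateOptions is the right call and that the key accepts the string value "false", so please correct me if that is off:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; the setting key is config.vpxd.network.rollback.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    try:
        opt_mgr = si.RetrieveContent().setting  # vCenter-level OptionManager

        # QueryOptions raises InvalidName if the key has never been set.
        try:
            current = opt_mgr.QueryOptions(name="config.vpxd.network.rollback")
            print("current:", [(o.key, o.value) for o in current])
        except vim.fault.InvalidName:
            print("config.vpxd.network.rollback not set yet (rollback defaults to enabled)")

        # Assumed call: disable automatic network rollback vCenter-wide.
        opt_mgr.UpdateOptions(changedValue=[
            vim.option.OptionValue(key="config.vpxd.network.rollback", value="false")])
    finally:
        Disconnect(si)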
Thanks!
#vSphereAMA