
I recently encountered an error in VMware vLCM: “Feature cannot be enabled on this cluster”. This was followed by another message: “Failed to fetch vSAN witness host associated with the cluster.”

I have recently encountered the following error on several Lenovo VX systems, which prevents vLCM from initiating the upgrade from ESXi 7 to ESXi 8:
Error: Version iavmd-3.0.0.1038 of the manually added Component Intel VMD driver with VROC support is an unsupported downgrade from version iavmd-3.5.1.1002 on the host.
Replace the Component in the image with one of the same or higher version.
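Before replacing the component in the image, it can help to confirm which iavmd version is actually installed on the host. A minimal check from the ESXi shell or an SSH session (the grep pattern is just an assumption based on the component name in the error message):
esxcli software vib list | grep -i iavmd   # shows the installed Intel VMD driver VIB and its version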
If you encounter the error mentioned above while using VMware vLCM together with the “HPE OneView for vCenter Plugin,” there is a good chance you can resolve the issue by running a single command directly on the ESXi host. To do this, open the ESXi shell or SSH into the host:
sut -set mode=AutoDeploy
This command sets the mode to “AutoDeploy,” which is required for VMware vLCM and the HPE OneView integration to work properly.
To verify the current mode on the ESXi host, run the following command:
sut -exportconfig
I recently encountered the above error in VMware vLCM while working in a multi-site vCenter environment. The issue was initially identified by a site administrator who had full administrative rights over the datacenter object they managed.
The root cause of this issue lies in the site administrator’s restricted access. Although they had full permissions on their own datacenter, they lacked global administrative privileges across the entire vCenter. For vLCM to function correctly, broader access rights are required.
After investigating the vCenter roles and permissions, I was able to identify the minimal privileges needed to resolve the issue without granting excessive access.
The solution:
If disabling the Efficient Cores in the BIOS doesn’t help you install ESXi 8 on your new NUC – then read on!
You are probably not going to use the virtual USB controller anyway (after you have installed VMware Tools).
The big “why”? Because if you remove it, you don’t have to worry about current and future vulnerabilities in the virtual USB controller (there have been a few).
Take a look at my colleague’s blog post for some more details 🙂 : VMware VMs and USB Controller – Virtual Allan (virtual-allan.com)
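If you want to check whether a VM still has a virtual USB controller before (or after) removing it, you can list the VM’s devices from the ESXi shell. A rough sketch, assuming you run it on the host where the VM lives and replace <vmid> with the numeric ID reported by the first command:
vim-cmd vmsvc/getallvms                                  # note the VM ID in the first column
vim-cmd vmsvc/device.getdevices <vmid> | grep -i usb     # any hit means a USB controller/device is still attached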
You are not very likely to bump into physical Windows 2003 servers anymore – but that is exactly what happened to me a week ago. The task was clear – this server needed to be virtualized into a vSphere 7 environment running vSAN.
The problem with this task is that to convert (P2V) a 2003 server you need to install vCenter Converter 6.2 on it, since the latest release, 6.3, simply doesn’t work on 2003 servers (it won’t install).
The next problem is that vCenter Converter 6.2 doesn’t work with vSAN 7 – only “traditional storage” can be used as the target – but in this case there was no storage other than vSAN available as a target.
What to do? – read on…
If you run into the above error when installing Windows 11 as a virtual machine on ESXi (or other virtual platforms), be aware that there is a workaround.
I recently ran into the above problem with a customer while trying to upgrade ESXi from the Prism interface. KB6470 mentioned that it might be related to the ESXi scratch partition not having enough available space, but that wasn’t the issue here.
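If you want to rule the scratch partition in or out yourself, a quick check from the ESXi shell looks something like this (standard ESXi commands, nothing Prism-specific):
esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation   # shows where scratch currently points
df -h   # check free space on that location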
After I upgraded my home lab from ESXi 7.0 to 7.0 Update 1c (build 17325551), I ran into an issue updating VMware Tools on my VMs.
I tried both update options (“automatically” and “manually”), but both failed with a VIX error.
Automatic update output = “vix error code = 21004”
Manual update output = “vix error code = 21009”