A few days ago, I decided to update my vCenter Server to version 6.7 U2c – normally this is an easy task via the update section in the VAMI interface. But this time I encountered the following error message when I tried to search for updates:
I’ve just seen this in the release notes for VMware vCenter Server Appliance 6.7 Update 1b:
Removing a virtual machine folder from the inventory by using the vSphere Client might delete all virtual machines
In the vSphere Client, if you right-click on a folder and select Remove from Inventory from the drop-down menu, the action might delete all virtual machines in that folder from the disk and underlying datastore, and cause data loss.
This issue is resolved in this release.
I’ve just checked this in my lab on the 6.7 U1b release – when I delete a VM folder with a VM inside, the VM gets removed from the inventory but is not deleted from the datastore!
If I delete a VM folder containing a VM in vCenter 6.5, however, the VM does get deleted from the datastore!
The following error is seen when you open the Veeam Backup and Replication console: “Failed to check certificate expiration date”.
It seems there is a bug in Veeam Backup and Replication that hits you 11 months after you install or upgrade to 9.5 U3 – luckily, Veeam is already aware of it, and some helpful URLs that can assist in solving this error have been shared in Anton Gostev’s weekly newsletter:
For quite some time I have observed a LOM warning in the VMware health status tab on an HPE ProLiant ML350 Gen10 server. It seems to report the warning on two NICs that are down, even though they are unused by ESXi.
A couple of days ago I was visiting a customer to set up their Lenovo host to run vSAN – after the initial setup of the vSAN kernel IPs, disk groups and so on, I took a look at the “vSAN Health check” to make sure that everything was healthy and supported.
Under the “hardware compatibility” part all checkmarks were green, but the controller firmware version was not detected – so I found it a bit strange that it reported the disk controller as supported without knowing which firmware version it was actually running.
This issue is not new to me, as I have seen it a couple of times before, but this time it turned out differently after all.
This morning I faced a strange issue in my vSphere lab when I wanted to log in to the VAMI interface – of course, to install the newly released “vSphere 6.7 U1” update.
I opened the VAMI URL for my Platform Services Controller (PSC), https://<FQDN>:5480, and typed in my root credentials as I normally would. However, the only thing that showed on the screen was a message saying: “Unable to login”.
After this I typed in my password multiple times to make sure that I was actually entering the correct one, but I still just got the same error message.
This blog post is not about the L1 Terminal Fault itself (L1TF – see VMware KB56931) but about the HTAware Mitigation tool version 220.127.116.11 (HTAwareMitigation-18.104.22.168.zip), which seems to have issues when used on single hosts (instead of clusters) – here is the problem that I have observed.
So, you find yourself in a situation where you have lost the root password for your ESXi host(s). Luckily, there are multiple ways of resetting it – but the best method depends on the exact situation. I’ll try to outline three different scenarios (of course, more exist) – maybe you are in a completely different situation, but perhaps this post can help you anyway.
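One community-documented (and officially unsupported) approach works by booting the host from a Linux live CD, extracting etc/shadow from the bootbank’s state.tgz, blanking the root password hash, and repacking the archive. The sketch below only demonstrates the core substitution on a sample shadow line – the hash shown is made up for illustration:

```shell
# Unsupported, community-documented approach: boot a Linux live CD,
# extract etc/shadow from the ESXi bootbank's state.tgz, blank the
# root hash, repack, and reboot. Demonstrated here on a sample line
# (the hash below is made up):
echo 'root:$6$example$madeuphash:18000:0:99999:7:::' > shadow

# Blank the password hash so root logs in with an empty password:
sed -i 's/^root:[^:]*:/root::/' shadow

cat shadow
# -> root::18000:0:99999:7:::
```

On a real host the file lives inside state.tgz on the bootbank partition; the exact partition layout varies between ESXi versions, so treat this as a sketch of the idea rather than a step-by-step recipe.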
I’d like to highlight that after updating “Veeam Backup and Replication” to “version 9.5 update 3a” you might start to see warnings like this in your Veeam status reports:
Warning: [TDB]Unable to update SQL backupset for instance : Code = 0x80040e09 Code meaning = IDispatch error #3081 Source = Microsoft OLE DB Provider for SQL Server Description = The UPDATE permission was denied on the object ‘backupset’, database ‘msdb’, schema ‘dbo’.
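The warning points at missing UPDATE rights on msdb.dbo.backupset for the account the Veeam services connect with. A fix along these lines should clear it – note that the login name below is a placeholder, to be replaced with your actual Veeam service account:

```sql
USE msdb;
-- Placeholder login: substitute the account your Veeam services run as
GRANT UPDATE ON dbo.backupset TO [DOMAIN\svc_veeam];
```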