Error again. Some googling led me to techbrainblog’s excellent page on using these utilities, along with solutions to some common but cryptic errors. Very useful. The solution to this particular error is to simply shut down the old PSC; it needs to be offline before the command is run.
Problem: A fresh install of HPE-branded ESXi 6.5 U1 cannot see the LUNs on the SAN during installation. The server boots from SAN, which means I need to be able to connect to the remote LUNs during installation; there is no local storage. The server is currently on 5.5 U3 and working fine, but the HPE-branded 6.5 U1 installer does not see the LUNs presented by my SAN. A quick boot into the 5.5 installer confirms it can see the LUNs with no problems, ruling out zoning issues, physical issues, etc.
The HPE ESXi 6.5 image appears to lack support for the QLogic BR-815/QLogic BR-825/Brocade 415/Brocade 825 FC cards, which are all mostly the same card. After verifying compatibility of the server and of the BR-815 FC cards, I determined that the driver simply is not included in the HPE image.
Here are the steps I took to roll my own installer, using the HPE-branded image as a base, with the VMware Image Builder toolset:
Customizing installations with Image Builder: https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.install.doc/GUID-48AC6D6A-B936-4585-8720-A1F344E366F9.html
HPE VMWare Support and Certification Matrices: http://h17007.www1.hpe.com/us/en/enterprise/servers/supportmatrix/vmware.aspx
Info on HPE Custom Images: https://www.hpe.com/us/en/servers/hpe-esxi.html
Supported driver firmware versions for I/O devices: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2030818
Identify OEM’s software depot URL, in this case the HPE ESXi 6.5U1 image http://vibsdepot.hpe.com/index-ecli-650.xml
Identify where the VIB is available for the driver. In my case, the Brocade BR-815 driver was downloaded via the VMware compatibility site: https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=5346 — Note that the VIB is actually inside a zip file inside the zip you download. Image Builder will be looking for an index.xml file in the root of the zip.
Use esxi-image-creator.ps1 to generate a new image with the newly included software: https://github.com/vmware/PowerCLI-Example-Scripts/blob/master/Scripts/esxi-image-creator.ps1
Use Export-EsxImageProfile to generate an ISO for installation.
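In PowerCLI terms, the steps above amount to roughly the following. This is a sketch, not the linked script: the offline-bundle path, profile match, and VIB name (`scsi-bfa`) are assumptions for illustration.

```powershell
# Attach the HPE depot and the locally extracted driver offline bundle (paths are examples)
Add-EsxSoftwareDepot http://vibsdepot.hpe.com/index-ecli-650.xml
Add-EsxSoftwareDepot C:\drivers\brocade-bfa-offline_bundle.zip   # the inner zip from the download

# Clone the HPE profile and add the Brocade driver package (VIB name assumed)
$base   = Get-EsxImageProfile | Where-Object { $_.Name -like '*HPE*' } | Select-Object -First 1
$custom = New-EsxImageProfile -CloneProfile $base -Name "$($base.Name)-bfa" -Vendor Custom
Add-EsxSoftwarePackage -ImageProfile $custom -SoftwarePackage (Get-EsxSoftwarePackage -Name 'scsi-bfa')

# Export a bootable installer ISO
Export-EsxImageProfile -ImageProfile $custom -ExportToIso -FilePath C:\iso\ESXi-6.5U1-HPE-bfa.iso
```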
<msg>('http://vcenter.redacted.lan:9084/vum/repository/hostupdate/csco/csco-VEM-5.5.0-metadata.zip','/tmp/tmp6q7F56','[Errno 4] IOError: <urlopen error [Errno -2] Name or service not known>')</msg>
I recently needed to change the IP address of my PSC. Unfortunately, it was already inaccessible, so I was unable to do it via the standard GUI methods. I SSH’d into the box and had a look, but it pretty immediately becomes apparent that you can’t just update things the way you would on a normal Linux box. Enter vami_config_net. I believe this utility is available on any of the VMware appliances that utilize VAMI/Photon, but I could be wrong. As you may notice, the article refers to this being for the vCenter Support Assistant, but it worked just the same for me on my external PSC.
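On my PSC the utility lived at the path below; it presents a simple text menu for viewing and changing the IP address, netmask, gateway, and DNS settings (the path may vary between appliance versions):

```shell
# Menu-driven network configuration tool on VAMI-based appliances
/opt/vmware/share/vami/vami_config_net
```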
In preparation for migrating from vCenter 6.5 with an embedded PSC to an external PSC, I needed to validate replication between my new external PSC and the embedded Platform Services Controller. To validate PSC replication partners, the vdcrepadmin utility can be used. For more information, see https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2127057
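Per the KB, the checks look roughly like this when run on one of the PSCs (the SSO administrator account and password below are placeholders):

```shell
# List this node's vmdir replication partners
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartners \
  -h localhost -u administrator -w 'sso_password'

# Show replication status (reachable/in-sync) for each partner
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus \
  -h localhost -u administrator -w 'sso_password'
```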
I have used the below commands to recover from a failed PSC deployment. When trying to redeploy after the failed deployment, I encountered the error:
“Failed to run vdcpromo”
Following the below steps on the current PSC resolved the error and I was then able to successfully restart the PSC deployment.
Also, protip: to avoid having to keep redeploying the appliance, take a snapshot right after phase 1 completes. Then you can simply restore the snap and access your VM via the web interface to try again.
VMware vCenter Server Appliance188.8.131.5200
Type: vCenter Server with an embedded Platform Services Controller
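The cleanup boils down to removing the failed node from the SSO federation with vdcleavefed, run on the surviving PSC (the failed node’s FQDN below is an example; the command prompts for the SSO administrator password):

```shell
# Remove the failed PSC from the vmdir federation
/usr/lib/vmware-vmdir/bin/vdcleavefed -h failed-psc.domain.lan -u administrator
```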
Additional info: I also ran into this when trying to deploy an additional PSC after a failed installation, but got a completely different error (see below). Going to Administration -> System Configuration in the Flash vSphere Web Client also displays the failed PSC. Log in to the live PSC and use the above commands to clean up, then restart the new PSC deployment. Refreshing the System Configuration page once the vdcleavefed command was run confirms the cleanup is complete and the failed install is no longer listed.
The error I received when deploying this PSC was:
Could not connect to VMware Directory Service via LDAP. Verify VMware Directory Service is running on the appropriate system and is reachable from this host.
Removing the failed deployment via vdcleavefed did not resolve the issue.
I decided to test LDAP connectivity to the live PSC from the failed PSC deployment. I SSH’d into the box and did the following:
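The checks were along these lines (the PSC hostname is an example, and the ldapsearch path is the Likewise copy found on the appliance):

```shell
# Confirm the LDAP port is reachable at all
curl -v telnet://psc.domain.lan:389

# Anonymous base-level query against the VMware Directory Service
/opt/likewise/bin/ldapsearch -h psc.domain.lan -p 389 -x -s base -b '' 'objectclass=*'
```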
The VCSA has its own CA built in. It uses that CA to generate certs for all the various services. There are two options available to ensure that the certificate is trusted in the browser:
Generate a CSR for the cert and submit to a CA who can generate the cert.
Use Microsoft Active Directory GPO to push out the VCSA’s root CA cert, thereby allowing the workstations to trust the cert already installed.
I went with the second option because the VCSA is using vcenter.mydomain.lan and is only accessible from inside my network, which means only machines on the domain will be connecting to the web interface. This was very simple to make happen…
On the DC:
To distribute certificates to client computers by using Group Policy
On a domain controller in the forest of the account partner organization, start the Group Policy Management snap-in.
Find an existing Group Policy Object (GPO) or create a new GPO to contain the certificate settings. Ensure that the GPO is associated with the domain, site, or organizational unit (OU) where the appropriate user and computer accounts reside.
Right-click the GPO, and then click Edit.
In the console tree, open Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies, right-click Trusted Root Certification Authorities, and then click Import.
On the Welcome to the Certificate Import Wizard page, click Next.
On the File to Import page, type the path to the appropriate certificate files (for example, \\fs1\c$\fs1.cer), and then click Next.
On the Certificate Store page, click Place all certificates in the following store, and then click Next.
On the Completing the Certificate Import Wizard page, verify that the information you provided is accurate, and then click Finish.
Repeat steps 2 through 6 to add additional certificates for each of the federation servers in the farm.
Once the policy is set up, you will need to either wait for machine reboots or for Group Policy to update. As an alternative, you can run gpupdate /force to cause the update to occur immediately. Once complete, you can verify the cert was installed by running certmgr.msc and inspecting the Trusted Root Certification Authorities tree for the cert. In my experience, the machine still required a reboot because the browser still did not recognize the new root CA and therefore still displayed the ugly SSL error. After a reboot it was good to go.
The issues I ran into with the Migration Assistant involved complaints about the SSL certs not matching. Upon inspecting the certs, I found all were issued for domain.lan except for one, which was issued to domain.net. I followed these articles to generate a new vCenter cert and install it:
Generate SSL cert using openssl: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2074942
Install and activate cert: https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2061973
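KB 2074942 drives openssl through a config file (to include SANs and the like); stripped to its essence, the key and CSR generation looks like this (all subject values below are placeholders):

```shell
# Generate a 2048-bit private key and a CSR for the vCenter FQDN (subject fields are placeholders)
openssl req -new -nodes -newkey rsa:2048 \
  -keyout rui.key -out rui.csr \
  -subj "/C=US/ST=State/L=City/O=Example/OU=IT/CN=vcenter.mydomain.lan"

# Sanity-check the CSR before submitting it to the CA
openssl req -in rui.csr -noout -verify -subject
```

The resulting rui.csr goes to the CA; the signed cert plus rui.key are then installed per the second KB above.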
As the Appliance Installer reached Stage 2 of the install, where it copies the data to the new VCSA, I received the following error (note the yellow warning in the background along with the details in the foreground):
To resolve this error, I followed these articles:
Upgrading to VMware vCenter 6.0 fails with the error: Error attempting Backup PBM Please check Insvc upgrade logs for details (2127574): https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2127574
Resetting the VMware vCenter Server 5.x Inventory Service database (2042200): https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2042200#3
These essentially had me reset the Inventory Service’s database due to corruption. I had noticed the vSphere client running slowly in recent weeks; this may have been a side effect.
Additional, more generic docs for troubleshooting vCenter upgrades: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2106760
Attempting to join a freshly deployed VCSA server to an AD domain can be problematic if SMB1 is disabled. In my case it was 5.5, but I believe this issue persists in 6.x. SMB1 was disabled on the DC, as it should be, since the protocol is broken and insecure. The problem lies in the fact that the VCSA doesn’t support SMB2 by default, and this causes the error. The VAMI (web interface) might report something like the following when attempting to join the domain:
Error: Enabling Active Directory failed.
Additionally, on the VCSA, /var/log/vmware/vpx/vpxd_cfg.log contains entries like the following:
2017-08-16 14:30:07 26987:ERROR:Enabling active directory failed: Joining to AD Domain: domain.lan
With Computer DNS Name: vcenter-server.domain.lan
Of course, DNS resolution of the VCSA’s hostname should be validated before continuing, but assuming everything else is in working order, the fix is to enable SMB2 on the VCSA.
Verify SMB2 is disabled (note the Smb2Enabled key is 0):
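On my VCSA this went through the Likewise registry tools; a sketch, with paths as found on my 5.5 appliance:

```shell
# Dump the AD provider settings; Smb2Enabled shows 0 when SMB2 support is off
/opt/likewise/bin/lwregshell list_values '[HKEY_THIS_MACHINE\Services\lsass\Parameters\Providers\ActiveDirectory]'

# Enable SMB2 and restart lsass so the change takes effect
/opt/likewise/bin/lwregshell set_value '[HKEY_THIS_MACHINE\Services\lsass\Parameters\Providers\ActiveDirectory]' Smb2Enabled 1
/opt/likewise/bin/lwsm restart lsass
```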