The VCSA has its own CA built in. It uses that CA to generate certs for all the various services. There are two options available to ensure that the certificate is trusted in the browser:
Generate a CSR for the cert and submit it to a CA that can issue the cert.
Use Microsoft Active Directory GPO to push out the VCSA’s root CA cert, thereby allowing the workstations to trust the cert already installed.
I went with the second option because the VCSA uses vcenter.mydomain.lan and is only accessible from inside my network, which also means only machines on the domain will be connecting to the web interface. This was very simple to make happen…
On the DC:
To distribute certificates to client computers by using Group Policy
On a domain controller in the forest, start the Group Policy Management snap-in.
Find an existing Group Policy Object (GPO) or create a new GPO to contain the certificate settings. Ensure that the GPO is associated with the domain, site, or organizational unit (OU) where the appropriate user and computer accounts reside.
Right-click the GPO, and then click Edit.
In the console tree, open Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies, right-click Trusted Root Certification Authorities, and then click Import.
On the Welcome to the Certificate Import Wizard page, click Next.
On the File to Import page, type the path to the appropriate certificate files (for example, \\fs1\c$\fs1.cer), and then click Next.
On the Certificate Store page, click Place all certificates in the following store, and then click Next.
On the Completing the Certificate Import Wizard page, verify that the information you provided is accurate, and then click Finish.
Repeat steps 2 through 6 for any additional certificates you need to distribute.
Once the policy is set up, you will need to either wait for machine reboots or for Group Policy to refresh. As an alternative, you can run gpupdate /force to cause the update to occur immediately. Once complete, you can verify the cert was installed by running certmgr.msc and inspecting the Trusted Root Certification Authorities tree for the cert. In my experience the machine still required a reboot because the browser did not yet recognize the new root CA and therefore still displayed the ugly SSL browser error. After a reboot it was good to go.
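As an aside, a minimal sketch of doing the same check from a command prompt on a client (certutil -store Root dumps the machine's Trusted Root store, so you can eyeball it for the VCSA's CA):
gpupdate /force
certutil -store Root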
The migration assistant complained of the SSL certs not matching. Upon inspecting the certs I found all were issued for domain.lan except for one, which was issued to domain.net. I followed these articles to generate a new vCenter cert and install it:
Generate SSL cert using openssl: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2074942
Install and activate cert: https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2061973
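For reference, the CSR generation from the first KB boils down to a single openssl request. The filenames and config file below are just placeholders; the KB covers the actual openssl.cfg contents (subjectAltName entries, key size, etc.):
openssl req -new -nodes -keyout rui.key -out rui.csr -config openssl.cfg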
As the Appliance Installer reached Stage 2 of the install, where it copies the data to the new VCSA, I received the following error (note the yellow warning in the background along with the details in the foreground):
To resolve this error, I followed these articles:
Upgrading to VMware vCenter 6.0 fails with the error: Error attempting Backup PBM Please check Insvc upgrade logs for details (2127574): https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2127574
Resetting the VMware vCenter Server 5.x Inventory Service database (2042200): https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2042200#3
These essentially had me reset the Inventory Service's database due to corruption. I had noticed the vSphere Client being slow in recent weeks; this could have been a side effect.
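From memory, the reset on a Windows vCenter 5.x amounts to something like the following; the service display name, path, and createDB.bat script are as I recall them from KB 2042200 (which also has you clear out the Inventory Service data directory first), so verify against the KB before running anything:
net stop "VMware vCenter Inventory Service"
cd /d "C:\Program Files\VMware\Infrastructure\Inventory Service\scripts"
createDB.bat
net start "VMware vCenter Inventory Service"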
Additional, more generic docs for troubleshooting vCenter upgrades: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2106760
Attempting to join a freshly deployed VCSA server to an AD domain can be problematic if SMB1 is disabled. In my case it was VCSA 5.5, but I believe this issue persists in 6.x. SMB1 was disabled on the DC, as it should be, since it is broken and insecure. The problem lies in the fact that the VCSA has SMB2 disabled by default, and this causes the error. The VAMI (web interface) might report something like the following when attempting to join the domain:
Error: Enabling Active Directory failed.
Additionally, on the VCSA, /var/log/vmware/vpx/vpxd_cfg.log contains entries like the following:
2017-08-16 14:30:07 26987: ERROR: Enabling active directory failed: Joining to AD Domain: domain.lan
With Computer DNS Name: vcenter-server.domain.lan
Of course DNS resolution of the VCSA’s hostname should be validated before continuing, but assuming everything else is in working order, the fix is to enable SMB2 on the VCSA.
Verify SMB2 is disabled (note the Smb2Enabled key is 0):
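The key lives in the Likewise registry on the appliance. From memory (treat the exact path as an assumption and confirm against VMware's KB), checking and enabling it looks like this, followed by a restart of the lwio service:
/opt/likewise/bin/lwregshell list_values '[HKEY_THIS_MACHINE\Services\lwio\Parameters\Drivers\rdr]'
/opt/likewise/bin/lwregshell set_value '[HKEY_THIS_MACHINE\Services\lwio\Parameters\Drivers\rdr]' Smb2Enabled 1
/opt/likewise/bin/lwsm restart lwio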
Using the sas2ircu utility from LSI, we can blink the drive LED to help correctly identify the failed drive. Of course, this requires an LSI card, and some LSI cards may need the sas3ircu utility instead. There have been some reports from the interwebs of this utility failing to blink the correct drive, but I have not experienced this myself.
As always use the supercomputer between your ears to ensure the physical serial and the serial reported by the system match, etc etc.
[root@jetstore]~# sas2ircu list
LSI Corporation SAS2 IR Configuration Utility.
Copyright (c) 2008-2014 LSI Corporation. All rights reserved.
Back to the sas2ircu utility in a moment. We first need to acquire the serial number of the failed disk. For a system that is multipath, we can find the actual device names by running the following to locate a disk in the failed state:
[root@jetstore]~# gmultipath list | grep -i -B 10 fail
Now we can see da16 has failed (or da43; they are the same disk, just different paths). Time to get the serial number of that disk.
[root@jetstore]~# smartctl -a /dev/da16 | grep Serial
Save that serial number for the next step.
Smartctl also outputs other useful information about the drive, statistics, etc. Worth checking out, but not relevant here.
Next, we can display the disks attached to one of those controllers. Be sure to input the correct serial number in the grep command:
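Something like the following, assuming controller 0 from the list output above and pasting in the serial captured earlier (the -B 10 pulls in the Enclosure # and Slot # lines that precede the serial number in the display output):
[root@jetstore]~# sas2ircu 0 display | grep -i -B 10 'PASTE_SERIAL_HERE'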
Get the enclosure and slot # of the failed drive and turn the LED on:
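For example, assuming controller 0 and a failed drive reported at enclosure 2, slot 5:
[root@jetstore]~# sas2ircu 0 locate 2:5 ON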
Turn the LED off:
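Using the same assumed controller and enclosure:slot values:
[root@jetstore]~# sas2ircu 0 locate 2:5 OFF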
NOTE: If you are replacing a disk that is multipath, e.g. you see something like the following when you offline and remove a disk, ensure that the LED above is OFF or GEOM_MULTIPATH will not pick up the new disk as multipath. See the log below for what happens when a disk is inserted with the LED blinking vs. not blinking: