I’ve written about this in past posts. Here is an updated article straight from VMware: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006371
Power off the virtual machine.
Increase the virtual disk’s provisioned size in the virtual machine’s settings.
Power on the virtual machine.
Identify the device name, which is by default /dev/sda, and confirm the new size by running the command:
# fdisk -l
Create a new primary partition:
- Run the command:
# fdisk /dev/sda (adjust the device name based on the results of step 4)
- Press p to print the partition table to identify the number of partitions. By default, there are 2: sda1 and sda2.
- Press n to create a new primary partition.
- Press p for primary.
- Press 3 for the partition number, depending on the output of the partition table print.
- Press Enter two times.
- Press t to change the partition’s system ID.
- Press 3 to select the newly created partition.
- Type 8e to set the partition’s type to Linux LVM.
- Press w to write the changes to the partition table.
- Restart the virtual machine.
Run this command to verify that the changes were saved to the partition table and that the new partition has an 8e type:
# fdisk -l
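As a sketch of what that verification is checking: the partition type is the fifth column of fdisk -l’s output. The sample line below is made up, but matches the expected shape:

```shell
# Sample fdisk -l line for the new partition (values are illustrative)
sample='/dev/sda3        1045        2610    12578895   8e  Linux LVM'
# The fifth whitespace-separated column is the partition type; it should be 8e (Linux LVM)
ptype=$(printf '%s\n' "$sample" | awk '{print $5}')
echo "$ptype"   # prints: 8e
```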
Run this command to convert the new partition to a physical volume:
Note: The partition number can vary depending on the system setup. Use the partition number that was created in step 5.
# pvcreate /dev/sda3
Run this command to extend the physical volume:
# vgextend VolGroup00 /dev/sda3
Note: To determine which volume group to extend, use the command vgdisplay.
Run this command to verify how many physical extents are available to the Volume Group:
# vgdisplay VolGroup00 | grep "Free"
Run the following command to extend the Logical Volume:
# lvextend -L+#G /dev/VolGroup00/LogVol00
Where # is the amount of free space available, in GB, as reported by the previous command. Use the full number from the output of step 10, including any decimals.
Note: To determine which logical volume to extend, use the command lvdisplay.
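To make the relationship between the vgdisplay output and the lvextend argument concrete, here is a sketch using a made-up free-space line; on a live system the line would come from vgdisplay VolGroup00 | grep "Free":

```shell
# Sample "Free PE / Size" line from vgdisplay (values are illustrative)
sample='  Free  PE / Size       2560 / 10.00 GiB'
# The free size in GB is the seventh whitespace-separated field
free_gb=$(printf '%s\n' "$sample" | awk '{print $7}')
# Build the lvextend command exactly as described above
echo "lvextend -L+${free_gb}G /dev/VolGroup00/LogVol00"
# prints: lvextend -L+10.00G /dev/VolGroup00/LogVol00
```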
Run the following command to expand the ext3 filesystem online, inside of the Logical Volume:
# ext2online /dev/VolGroup00/LogVol00
- Use resize2fs instead of ext2online if it is not a Red Hat virtual machine.
- Red Hat Enterprise Linux 7 and CentOS 7 use the XFS filesystem by default; grow an XFS filesystem by running the xfs_growfs command instead.
- Run the following command to verify that the / filesystem has the new space available:
# df -h /
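Pulling the filesystem-specific advice above together, here is a minimal sketch of choosing the right grow command based on the root filesystem type. The fstype value is hard-coded for illustration; on a live system it could be read from /proc/mounts, and older Red Hat releases would use ext2online instead of resize2fs as noted above:

```shell
# Hard-coded for illustration; a live system could use:
#   fstype=$(awk '$2 == "/" {print $3}' /proc/mounts)
fstype=ext3
case "$fstype" in
  ext2|ext3|ext4) echo "resize2fs /dev/VolGroup00/LogVol00" ;;
  xfs)            echo "xfs_growfs /" ;;
  *)              echo "unhandled filesystem type: $fstype" ;;
esac
```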
Problem: A fresh install of HPE-branded ESXi 6.5 U1 cannot see the LUNs on the SAN during installation. The server boots from SAN, which means I need to be able to connect to the remote LUNs during installation; there is no local storage. The current 5.5u3 install works fine, but the HPE-branded 6.5 U1 installer does not see the LUNs presented by my SAN. A quick boot into the 5.5 installer confirms it can see the LUNs with no problems, ruling out zoning issues, physical issues, etc.
The HPE ESXi 6.5 image seems to be lacking support for the Qlogic BR-815/Qlogic BR-825/Brocade-415/Brocade-825 FC cards which are all mostly the same card. After verifying compatibility of the server, and of the BR-815 FC cards, I determined that the driver simply is not included in the HPE image.
Here are the steps I took to roll my own installer, using the HPE-branded one as a base, with the VMware Image Builder toolset:
- Customizing installations with Image Builder: https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.install.doc/GUID-48AC6D6A-B936-4585-8720-A1F344E366F9.html
- Add VIBs to an image profile: pubs.vmware.com/vsphere-51/index.jsp#com…
- Export an image profile to an ISO: pubs.vmware.com/vsphere-51/index.jsp#com…
- HPE vibs Depot: http://vibsdepot.hpe.com
- Using vibsdepot with Image Builder: http://vibsdepot.hpe.com/getting_started.html
- Applying VIBs to an image walkthrough: https://blogs.vmware.com/vsphere/2017/05/apply-latest-vmware-esxi-security-patches-oem-custom-images-visualize-differences.html
- VMware Compatibility Guide: https://www.vmware.com/resources/compatibility/search.php
- HPE VMware Support and Certification Matrices: http://h17007.www1.hpe.com/us/en/enterprise/servers/supportmatrix/vmware.aspx
- Info on HPE Custom Images: https://www.hpe.com/us/en/servers/hpe-esxi.html
- Supported driver firmware versions for I/O devices: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2030818
- Identify the OEM’s software depot URL; in this case, the HPE ESXi 6.5 U1 depot: http://vibsdepot.hpe.com/index-ecli-650.xml
- Identify where the VIB is available for the driver. In my case, the Brocade BR-815 driver was downloaded via the VMware compatibility site: https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=5346. Note that the VIB is actually inside a zip file within the zip you download; Image Builder will be looking for an index.xml file in the root of the zip.
- Use the esx-image-creator.ps1 to generate a new image with the newly included software: https://github.com/vmware/PowerCLI-Example-Scripts/blob/master/Scripts/esxi-image-creator.ps1
- Use Export-EsxImageProfile to generate an ISO for installation.
Booting the server with the newly built ISO enables me to see the LUNs so I can complete my boot-from-san installation.
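The esx-image-creator.ps1 script wraps much of this, but the steps above condense to roughly the following raw PowerCLI session. This is a sketch, not my exact transcript: the cloned profile name, vendor, VIB package name, and file paths are all placeholders to be replaced with whatever Get-EsxImageProfile and Get-EsxSoftwarePackage actually report on your system.

```powershell
# Point Image Builder at the HPE depot and the driver's offline bundle
Add-EsxSoftwareDepot http://vibsdepot.hpe.com/index-ecli-650.xml
Add-EsxSoftwareDepot C:\depot\brocade-bfa-driver-bundle.zip    # placeholder path

# Clone the HPE profile so it can be modified (profile/vendor names are placeholders)
$base = Get-EsxImageProfile | Where-Object { $_.Name -like 'HPE*6.5*' } | Select-Object -First 1
New-EsxImageProfile -CloneProfile $base -Name 'HPE-6.5-BR815' -Vendor 'custom'

# Add the Brocade driver VIB to the cloned profile (package name is a placeholder)
Add-EsxSoftwarePackage -ImageProfile 'HPE-6.5-BR815' -SoftwarePackage 'scsi-bfa'

# Export the modified profile to a bootable installer ISO
Export-EsxImageProfile -ImageProfile 'HPE-6.5-BR815' -ExportToIso -FilePath C:\depot\HPE-6.5-BR815.iso
```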
I received a fairly generic error when running VMware Update Manager against some hosts:
No really useful information there. The actual log is available on the VCSA 6.5 at: /var/log/vmware/vmware-updatemgr/vum-server/vmware-vum-server-log4cpp.log
In my case it was as simple as the DNS being set incorrectly on the ESXi hosts due to some networking changes:
Other threads that might be related:
Resetting the VMware Update Manager Database: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2147284
PowerCLI snippets to get a VM’s disks
This command will retrieve the specified VM’s attached disk paths:
But we can also focus on the filename:
We can also see the other columns available:
We could also do something like get the Disk paths for all guests:
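The snippets themselves didn’t survive the formatting of this post, but based on the descriptions above they would look roughly like the following, assuming PowerCLI’s Get-VM and Get-HardDisk cmdlets (the VM name is a placeholder):

```powershell
# Attached disk paths for a single VM
Get-VM -Name 'MyGuest' | Get-HardDisk

# Focus on just the filename (the datastore path of the vmdk)
Get-VM -Name 'MyGuest' | Get-HardDisk | Select-Object -ExpandProperty Filename

# See the other columns available
Get-VM -Name 'MyGuest' | Get-HardDisk | Select-Object Parent, Name, Filename, CapacityGB

# Disk paths for all guests
Get-VM | Get-HardDisk | Select-Object @{N='VM'; E={$_.Parent.Name}}, Filename
```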
I recently needed to change the IP address of my PSC. Unfortunately, it was already inaccessible, so I was unable to do it via the standard GUI methods. I SSH’d into the box and had a look, but it quickly became apparent that you can’t just update things the way you would on a normal Linux box. Enter vami_config_net. I believe this utility is available on any of the VMware appliances that utilize VAMI/Photon, but I could be wrong. As you may notice, the article refers to this being for the vCenter Support Assistant, but it worked just the same for me on my external PSC.
In preparation for migrating from vCenter 6.5 with an embedded PSC to an external PSC, I needed to validate the replication between my new external PSC and the embedded Platform Services Controller. To validate PSC replication partners, the vdcrepadmin utility can be used. For more information, see https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2127057
Note: in the above commands, for the -w parameter, non-alphanumeric characters must be escaped with a \, otherwise you may get authentication failures.
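As an illustration of that escaping (the password here is made up), every non-alphanumeric character gets a leading backslash before being passed to -w:

```shell
# Hypothetical password containing shell-special characters
pw='Pa$$w0rd!'
# Backslash-escape every non-alphanumeric character
escaped=$(printf '%s' "$pw" | sed 's/[^a-zA-Z0-9]/\\&/g')
echo "$escaped"   # prints: Pa\$\$w0rd\!
```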
I am now able to continue with the external PSC migration as detailed here: docs.vmware.com/en/VMware-vSphere/6.5/co…
And finally, from the external PSC we can verify replication partners again to see that the embedded PSC has been decommissioned, and the external PSC is the only one listed:
I used the below commands to recover from a failed PSC deployment. When trying to redeploy after the failed deployment, I encountered the error:
“Failed to run vdcpromo”
Following the below steps on the current PSC resolved the error and I was then able to successfully restart the PSC deployment.
Also, pro tip: to avoid having to keep redeploying the appliance, take a snapshot right after phase 1 completes. Then you can simply restore the snapshot and access your VM via the web interface to try again.
Additional info: I also ran into this when trying to deploy an additional PSC that had a failed installation, but got a completely different error (see below). Going to Administration -> System Configuration in the Flash vSphere Web Client also displays the failed PSC. Log in to the live PSC and use the above commands to clean up, then restart the new PSC deployment. Refreshing the System Configuration page once the vdcleavefed command was run confirms the cleanup is complete and the failed install is no longer listed.
The error I received when deploying this PSC was:
Removing the failed deployment via vdcleavefed did not resolve the issue.
I decided to test LDAP connectivity to the PSC from the failed PSC deployment. I SSH’d into the box and did the following:
Edit: Additional semi-related data
Get machine’s guid
Get machine’s pnid (machine/host name?)
Get services in the directory
The VCSA has its own CA built in. It uses that CA to generate certs for all the various services. There are two options available to ensure that the certificate is trusted in the browser:
- Generate a CSR for the cert and submit to a CA who can generate the cert.
- Use Microsoft Active Directory GPO to push out the VCSA’s root CA cert, thereby allowing the workstations to trust the cert already installed.
I went with the second option because the VCSA is using vcenter.mydomain.lan and is only accessible from inside my network, which also means only machines on the domain will be connecting to the web interface. This was very simple to make happen…
On the DC:
To distribute certificates to client computers by using Group Policy
- On a domain controller in the forest of the account partner organization, start the Group Policy Management snap-in.
- Find an existing Group Policy Object (GPO) or create a new GPO to contain the certificate settings. Ensure that the GPO is associated with the domain, site, or organizational unit (OU) where the appropriate user and computer accounts reside.
- Right-click the GPO, and then click Edit.
- In the console tree, open Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies, right-click Trusted Root Certification Authorities, and then click Import.
- On the Welcome to the Certificate Import Wizard page, click Next.
- On the File to Import page, type the path to the appropriate certificate files (for example, \\fs1\c$\fs1.cer), and then click Next.
- On the Certificate Store page, click Place all certificates in the following store, and then click Next.
- On the Completing the Certificate Import Wizard page, verify that the information you provided is accurate, and then click Finish.
- Repeat steps 2 through 6 to add any additional root certificates you need to distribute.
Once the policy is set up, you will need to either wait for machine reboots or for Group Policy to update. As an alternative, you can run gpupdate /force to make the update occur immediately. Once complete, you can verify the cert was installed by running certmgr.msc and inspecting the Trusted Root Certification Authorities tree for the cert. In my experience the machine still required a reboot, as the browser did not yet recognize the new root CA and therefore still displayed the ugly SSL browser error. After a reboot it was good to go.
Ran into some issues with the SSL certs on the vCenter server when trying to run the Migration Assistant. Notes on that will follow, but first, links to articles on the actual upgrade:
The issues I ran into with the Migration Assistant complained of the SSL certs not matching. Upon inspecting the certs, I found all were issued for domain.lan except for one, which was issued to domain.net. I followed these articles to generate a new vCenter cert and install it:
- Generate SSL cert using openssl: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2074942
- Install and activate cert: https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2061973
As the Appliance Installer reached Stage 2 of the install, where it copies the data to the new VCSA, I received the following error (note the yellow warning in the background along with the details in the foreground):
To resolve this error, I followed the following articles:
- Upgrading to VMware vCenter 6.0 fails with the error: Error attempting Backup PBM Please check Insvc upgrade logs for details (2127574): https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2127574
- Resetting the VMware vCenter Server 5.x Inventory Service database (2042200): https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2042200#3
Which essentially had me reset the Inventory Service’s database due to corruption. I had noticed the vSphere client being slow in recent weeks; this could have been a side effect.
- Additional more generic docs for tshooting vCenter upgrades: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2106760
Attempting to join a freshly deployed VCSA server to an AD domain can be problematic if SMB1 is disabled. In my case it was 5.5, but I believe this issue persists in 6.x. SMB1 was disabled on the DC, as it should be, since SMB1 is broken and insecure. The problem lies in the fact that the VCSA doesn’t support SMB2, and this causes the error. The VAMI (web interface) might report something like the following when attempting to join the domain:
Additionally, on the VCSA, /var/log/vmware/vpx/vpxd_cfg.log contains entries like the following:
Of course DNS resolution of the VCSA’s hostname should be validated before continuing, but assuming everything else is in working order, the fix is to enable SMB2 on the VCSA.
Verify SMB2 is disabled (note the Smb2Enabled key is 0):
Restart the lwio service:
Log out of VAMI web interface, log back in and retry joining to the domain.
I was tasked with moving a client’s vmdk’s off our vSphere4 farm and off to their ESXi server.
Apparently they were running ESXi, which won’t support USB disks; I learned this only after I had already moved the data to an ext3-formatted USB disk and shipped it off. Anyway, the process is supposed to go like this:
Once the vmdk’s are copied over to the new server use vmkfstools to clone the vmdk:
[root@esx01 hercules]# vmkfstools -i Hercules.vmdk Hercules-new.vmdk
Destination disk format: VMFS thick
Cloning disk 'Hercules.vmdk'...
Clone: 100% done.
Now create a new virtual machine in ESX, use Custom configuration. When you’re at the disk configuration window, select “use an existing virtual disk” and select the newly created vmdk file. Complete the wizard.
Check if all properties (network interface, …) are correct and boot up the machine.
I have a bunch of data/references on the subject:
- Main Cacti ESX forum thread: forums.cacti.net/about3730-0-asc-0.html
- ESX scripts for monitoring the VM host: bable.cybermarshall.com/2008/12/14/track…
- More scripts (not as good as the above): www.it-slav.net/blogs/?p=262
I’ve been playing with VirtualBox 2.0.6 on Ubuntu 8.04 for some sandbox testing. XP installed in minutes on my 2.4GHz, 1GB memory system. It runs pretty well, even with a lot of other apps running in the background. I need to look into the CLI tools… I don’t think it can come anywhere near the robustness of VMware.