Posted by & filed under Server Admin, VMWare.

I’ve written about this in past posts. Here is an updated article straight from VMWare: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006371

 

  1. Power off the virtual machine.
  2. Edit the virtual machine settings and extend the virtual disk size. For more information, see Increasing the size of a virtual disk (1004047).
  3. Power on the virtual machine.
  4. Identify the device name, which is by default /dev/sda, and confirm the new size by running the command:

    # fdisk -l

  5. Create a new primary partition:
    1. Run the command:

      # fdisk /dev/sda (depending on the results of step 4)

    2. Press p to print the partition table to identify the number of partitions. By default, there are 2: sda1 and sda2.
    3. Press n to create a new primary partition.
    4. Press p for primary.
    5. Press 3 for the partition number, depending on the output of the partition table print.
    6. Press Enter two times.
    7. Press t to change the partition’s system ID.
    8. Press 3 to select the newly created partition.
    9. Type 8e to change the Hex Code of the partition for Linux LVM.
    10. Press w to write the changes to the partition table.
  6. Restart the virtual machine.
  7. Run this command to verify that the changes were saved to the partition table and that the new partition has an 8e type:

    # fdisk -l

  8. Run this command to convert the new partition to a physical volume:

    Note: The partition number on sda can change depending on the system setup. Use the partition number that was created in step 5.

    # pvcreate /dev/sda3

  9. Run this command to extend the physical volume:

    # vgextend VolGroup00 /dev/sda3

    Note: To determine which volume group to extend, use the command vgdisplay.

  10. Run this command to verify how many physical extents are available to the Volume Group:

    # vgdisplay VolGroup00 | grep "Free"

  11. Run the following command to extend the Logical Volume:

    # lvextend -L+#G /dev/VolGroup00/LogVol00

    Where # is the amount of free space in GB reported by the previous command. Use the full number output from step 10, including any decimals.

    Note: To determine which logical volume to extend, use the command lvdisplay.

  12. Run the following command to expand the ext3 filesystem online, inside of the Logical Volume:

    # ext2online /dev/VolGroup00/LogVol00

    Notes:

    • Use resize2fs instead of ext2online if it is not a Red Hat virtual machine.
    • By default, Red Hat Enterprise Linux 7 and CentOS 7 use the XFS file system; in that case, grow the file system by running the xfs_growfs command instead (see the sketch after this list).
  13. Run the following command to verify that the / filesystem has the new space available:

    # df -h /
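
For reference, a default Red Hat/CentOS 7 install names the volume group and logical volume centos and root rather than VolGroup00 and LogVol00, and formats / as XFS, so the tail end of the procedure would look roughly like this (the names below are the distro defaults and may differ on your system):

    # pvcreate /dev/sda3
    # vgextend centos /dev/sda3
    # lvextend -l +100%FREE /dev/centos/root
    # xfs_growfs /

Note that xfs_growfs takes the mount point rather than the logical volume device, and XFS can only be grown, never shrunk.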

Posted by & filed under Virtualization, VMWare.

Problem: A fresh install of HPE branded ESXi 6.5 U1 cannot see the LUNs on the SAN during the installation. The server boots from SAN, which means I need to be able to connect to the remote LUNs during installation; there is no local storage. The server is currently on 5.5 U3 and working fine. The HPE branded 6.5 U1 installer does not see the LUNs presented by my SAN. A quick boot into the 5.5 installer confirms it can see the LUNs with no problems, ruling out zoning issues, physical issues, etc.

The HPE ESXi 6.5 image seems to be lacking support for the Qlogic BR-815/Qlogic BR-825/Brocade-415/Brocade-825 FC cards which are all mostly the same card. After verifying compatibility of the server, and of the BR-815 FC cards, I determined that the driver simply is not included in the HPE image.

Here are the steps I took to roll my own installer, using the HPE branded one as a base, with the VMWare Image Builder toolset:

Resources:

  • Customizing installations with Image Builder: https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.install.doc/GUID-48AC6D6A-B936-4585-8720-A1F344E366F9.html
  • Add VIBs to an image profile: pubs.vmware.com/vsphere-51/index.jsp#com…
  • Export image profile to a ISO: pubs.vmware.com/vsphere-51/index.jsp#com…
  • HPE vibs Depot: http://vibsdepot.hpe.com
  • Using vibsdepot with Image Builder: http://vibsdepot.hpe.com/getting_started.html
  • Applying VIBS to a image walkthrough: https://blogs.vmware.com/vsphere/2017/05/apply-latest-vmware-esxi-security-patches-oem-custom-images-visualize-differences.html
  • VMWare Compatibility Guide: https://www.vmware.com/resources/compatibility/search.php
  • HPE VMWare Support and Certification Matrices: http://h17007.www1.hpe.com/us/en/enterprise/servers/supportmatrix/vmware.aspx
  • Info on HPE Custom Images: https://www.hpe.com/us/en/servers/hpe-esxi.html
  • Supported driver firmware versions for I/O devices: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2030818

Basic steps:

  • Identify OEM’s software depot URL, in this case the HPE ESXi 6.5U1 image http://vibsdepot.hpe.com/index-ecli-650.xml
  • Identify where the VIB is available for the driver. In my case, the Brocade BR-815 driver was downloaded via the VMWare compatibility site: https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=5346. Note the VIB is actually inside a zip file inside the zip you download. Image Builder will be looking for an index.xml file in the root of the zip.
  • Use the esxi-image-creator.ps1 script to generate a new image with the newly included software: https://github.com/vmware/PowerCLI-Example-Scripts/blob/master/Scripts/esxi-image-creator.ps1
  • Use Export-EsxImageProfile to generate an ISO for installation, as sketched below.
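
If you would rather drive Image Builder by hand than use the esxi-image-creator.ps1 script, the PowerCLI session looks roughly like the following. This is a sketch, not a transcript: the profile name, the driver VIB name, and the local paths are placeholders that will differ depending on the exact HPE bundle and driver version you downloaded.

    # Attach the HPE online depot and the extracted driver offline bundle (local path is a placeholder)
    Add-EsxSoftwareDepot http://vibsdepot.hpe.com/index-ecli-650.xml
    Add-EsxSoftwareDepot C:\depot\brocade-driver-offline-bundle.zip

    # Find the HPE 6.5 U1 profile, clone it, and inject the FC driver package (VIB name is a placeholder)
    Get-EsxImageProfile | Select-Object Name
    New-EsxImageProfile -CloneProfile "HPE-ESXi-6.5.0-Update1" -Name "HPE-6.5U1-BR815" -Vendor "custom"
    Add-EsxSoftwarePackage -ImageProfile "HPE-6.5U1-BR815" -SoftwarePackage "scsi-bfa-driver"

    # Export the result as a bootable ISO
    Export-EsxImageProfile -ImageProfile "HPE-6.5U1-BR815" -ExportToIso -FilePath C:\iso\HPE-6.5U1-BR815.iso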

 

Booting the server with the newly built ISO enables me to see the LUNs, so I can complete my boot-from-SAN installation.

Posted by & filed under Uncategorized, Virtualization, VMWare.

I received a fairly generic error when running VMWare Update Manager against some hosts:

No real useful information. The actual log is available on the VCSA 6.5 at: /var/log/vmware/vmware-updatemgr/vum-server/vmware-vum-server-log4cpp.log

In my case it was as simple as the DNS being set incorrectly on the ESXi hosts due to some networking changes:
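
For reference, checking and correcting the DNS servers from the ESXi shell is quick; something along these lines (the addresses are placeholders):

    # esxcli network ip dns server list
    # esxcli network ip dns server remove --server=10.0.0.53
    # esxcli network ip dns server add --server=10.0.0.10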

Other threads that might be related:

communities.vmware.com/thread/546976

Resetting the VMWare Update Manager Database: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2147284

Posted by & filed under Virtualization, VMWare.

PowerCLI snippets to get a VM’s disks

This command will retrieve the specified VM’s attached disk paths:
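
Something like the following should do it (MyVM is a placeholder VM name):

    Get-VM "MyVM" | Get-HardDisk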

But we can also focus on the filename:
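
Selecting just the Filename property gives the datastore path to each VMDK:

    Get-VM "MyVM" | Get-HardDisk | Select-Object Name, Filename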

We can also see the other columns available:
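
Piping the disk objects to Get-Member lists every property the HardDisk object exposes:

    Get-VM "MyVM" | Get-HardDisk | Get-Member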

We could also do something like get the Disk paths for all guests:
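
Dropping the VM filter and selecting the Parent property (the owning guest) gives a quick inventory-wide report:

    Get-VM | Get-HardDisk | Select-Object Parent, Filename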

 

Posted by & filed under Virtualization, VMWare.

I recently needed to change the IP address of my PSC. Unfortunately it was already inaccessible, so I was unable to do it via the standard GUI methods. I SSH’d into the box and had a look, but it quickly becomes apparent that you can’t just update things the way you would on a normal Linux box. Enter vami_config_net. I believe this utility is available on any of the VMWare appliances that utilize VAMI/Photon, but I could be wrong. As you may notice, the article below refers to this being for the vCenter Support Assistant, but it worked just the same for me on my external PSC.
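
On the appliances I have used, the utility lives under /opt/vmware/share/vami and walks you through a simple text menu for the IP address, gateway, DNS, and hostname:

    # /opt/vmware/share/vami/vami_config_net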

kb.vmware.com/selfservice/microsites/sea…

Posted by & filed under Virtualization, VMWare.

In preparation for migrating from vCenter 6.5 with an embedded PSC to an external PSC, I needed to validate the replication between my new external PSC and the embedded Platform Services Controller. To validate PSC replication partners, the vdcrepadmin utility can be used. For more information see https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2127057
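
The commands in question look roughly like this, run from a shell on one of the PSCs (the FQDN and password below are placeholders):

    # /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showservers -h psc01.domain.lan -u administrator -w Passw0rd\!
    # /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartners -h psc01.domain.lan -u administrator -w Passw0rd\!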

Note: in the above commands, for the -w parameter, non-alphanumeric characters must be escaped with a \, otherwise you may get authentication failures.

I am now able to continue with the external PSC migration as detailed here: docs.vmware.com/en/VMware-vSphere/6.5/co…

And finally, from the external PSC we can verify replication partners again to see that the embedded PSC has been decommissioned, and the external PSC is the only one listed:

Posted by & filed under Virtualization, VMWare.

I have used the below commands to recover from a failed PSC deployment. When trying to redeploy after the failed deployment, I encountered the error:

“Failed to run vdcpromo”

Following the below steps on the current PSC resolved the error and I was then able to successfully restart the PSC deployment.
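
The cleanup centers on vdcleavefed, run from the healthy PSC to remove the failed node from the federation; roughly the following (the FQDN is a placeholder, and it should prompt for the SSO administrator password):

    # /usr/lib/vmware-vmdir/bin/vdcleavefed -h failed-psc.domain.lan -u administrator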

Also, a protip to avoid having to keep redeploying the appliance: take a snapshot right after phase 1 completes. Then you can simply restore the snapshot and access your VM via the web interface to try again.

 

docs.vmware.com/en/VMware-vSphere/6.5/co…

Additional info: I also ran into this when trying to deploy an additional PSC that had a failed installation, but got a completely different error (see below). Going to Administration -> System Configuration in the Flash vSphere web client also displays the failed PSC. Log in to the live PSC and use the above commands to clean up, then restart the new PSC deployment. Refreshing the System Configuration page once the vdcleavefed command was run confirms the cleanup is complete and the failed install is no longer listed.

The error I received when deploying this PSC was:

Removing the failed deployment via vdcleavefed did not resolve the issue.

I decided to test LDAP connectivity to the PSC from the failed PSC deployment. I SSH’d into the box and did the following:
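
Roughly the following, assuming the ldapsearch that ships with the Likewise tools on the appliance behaves like a standard OpenLDAP ldapsearch (the FQDN is a placeholder); a successful anonymous rootDSE query at least proves port 389 is reachable and vmdir is answering:

    # /opt/likewise/bin/ldapsearch -h psc01.domain.lan -p 389 -s base -b "" "objectclass=*"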

Edit: Additional semi-related data

Get machine’s guid
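
On the 6.x appliances this comes from vmafd-cli:

    # /usr/lib/vmware-vmafd/bin/vmafd-cli get-machine-id --server-name localhost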

Get machine’s pnid (machine/host name?)
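
Also via vmafd-cli:

    # /usr/lib/vmware-vmafd/bin/vmafd-cli get-pnid --server-name localhost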

Get services in the directory
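
The lookup service tool can dump every service registration; on 6.5, if I recall the KBs correctly, it is invoked against the internal lookup service URL like so:

    # python /usr/lib/vmidentity/tools/scripts/lstool.py list --url http://localhost:7080/lookupservice/sdk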

Posted by & filed under Active Directory, Server Admin, Virtualization, VMWare.

The VCSA has its own CA built in. It uses that CA to generate certs for all the various services. There are two options available to ensure that the certificate is trusted in the browser:

  1. Generate a CSR for the cert and submit to a CA who can generate the cert.
  2. Use Microsoft Active Directory GPO to push out the VCSA’s root CA cert, thereby allowing the workstations to trust the cert already installed.

I went with the second one because the VCSA is using vcenter.mydomain.lan and is only accessible from inside my network, which also means only machines on the domain will be connecting to the web interface. This was very simple to make happen…

On the DC:

To distribute certificates to client computers by using Group Policy

  1. On a domain controller in the forest of the account partner organization, start the Group Policy Management snap-in.
  2. Find an existing Group Policy Object (GPO) or create a new GPO to contain the certificate settings. Ensure that the GPO is associated with the domain, site, or organizational unit (OU) where the appropriate user and computer accounts reside.
  3. Right-click the GPO, and then click Edit.
  4. In the console tree, open Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies, right-click Trusted Root Certification Authorities, and then click Import.
  5. On the Welcome to the Certificate Import Wizard page, click Next.
  6. On the File to Import page, type the path to the appropriate certificate files (for example, \\fs1\c$\fs1.cer), and then click Next.
  7. On the Certificate Store page, click Place all certificates in the following store, and then click Next.
  8. On the Completing the Certificate Import Wizard page, verify that the information you provided is accurate, and then click Finish.
  9. Repeat steps 2 through 6 to add additional certificates for each of the federation servers in the farm.

Once the policy is set up, you will need to either wait for machine reboots or for Group Policy to update. As an alternative, you can also run gpupdate /force to cause the update to occur immediately. Once complete, you can verify the cert was installed by running certmgr.msc and inspecting the Trusted Root Certification Authorities tree for the cert. In my experience the machine still required a reboot, as the browser still did not recognize the new root CA and therefore kept displaying the ugly SSL browser error. After a reboot it was good to go.
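
If you prefer PowerShell over certmgr.msc for the verification, something like this lists the machine's trusted roots so you can confirm the VCSA CA made it in (the subject filter is just an example; match it to your CA's name):

    Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -like "*CA*" } | Select-Object Subject, NotAfter, Thumbprint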

Reference: https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/deployment/distribute-certificates-to-client-computers-by-using-group-policy

Posted by & filed under Server Admin, Virtualization, VMWare.

Ran into some issues with the SSL certs on the vCenter server when trying to run the Migration Assistant. Notes on those will follow, but first, links to articles on the actual upgrade:

The issues I ran into with the Migration Assistant complained of the SSL certs not matching. Upon inspecting the certs, I found all were issued for domain.lan except for one, which was issued to domain.net. I used the following articles to generate a new vCenter cert and install it:

  • Generate SSL cert using openssl: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2074942
  • Install and activate cert: https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2061973
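
For the inspection step above, a quick way to see which name a given endpoint's certificate was actually issued to is openssl (the hostname and port are examples):

    # echo | openssl s_client -connect vcenter.domain.lan:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates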

As the Appliance Installer reached Stage 2 of the install, where it copies the data to the new VCSA, I received the following error (note the yellow warning in the background along with the details in the foreground):

To resolve this error, I followed the following articles:

  • Upgrading to VMware vCenter 6.0 fails with the error: Error attempting Backup PBM Please check Insvc upgrade logs for details (2127574): https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2127574
  • Resetting the VMware vCenter Server 5.x Inventory Service database (2042200): https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2042200#3

This essentially had me reset the Inventory Service’s database due to corruption. I had noticed the vSphere client being slow in recent weeks; this could have been a side effect.

  • Additional, more generic docs for troubleshooting vCenter upgrades: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2106760

 

Posted by & filed under Active Directory, Server Admin, Virtualization, VMWare.

Attempting to join a freshly deployed VCSA server to an AD domain can be problematic if SMB1 is disabled. In my case it was 5.5, but I believe this issue persists in 6.x. SMB1 was disabled on the DC, as it should be, since it is broken and insecure. The problem lies in the fact that the VCSA does not have SMB2 enabled by default, and this causes the error. The VAMI (web interface) might report something like the following when attempting to join the domain:

Additionally, on the VCSA, /var/log/vmware/vpx/vpxd_cfg.log contains entries like the following:

Of course DNS resolution of the VCSA’s hostname should be validated before continuing, but assuming everything else is in working order, the fix is to enable SMB2 on the VCSA.

Verify SMB2 is disabled (note the Smb2Enabled value is 0):
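
On my appliance, the Likewise registry shell reads this from the rdr driver key (the path below is as I recall it from the VMware KB; adjust if your build differs):

    # /opt/likewise/bin/lwregshell list_values '[HKEY_THIS_MACHINE\Services\lwio\Parameters\Drivers\rdr]'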

Enable SMB2:
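
Flip the same value to 1:

    # /opt/likewise/bin/lwregshell set_value '[HKEY_THIS_MACHINE\Services\lwio\Parameters\Drivers\rdr]' Smb2Enabled 1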

Restart the lwio service:
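
Using the Likewise service manager:

    # /opt/likewise/bin/lwsm restart lwio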

Log out of VAMI web interface, log back in and retry joining to the domain.

Posted by & filed under Hardware.

Using the sas2ircu utility from LSI, we can blink the drive LED to help ID the failed drive correctly. Of course this requires an LSI card. Some LSI cards may need to use the sas3ircu utility instead. There have been some reports from the interwebs that this utility failed to blink the correct drive, but I have not experienced this myself.

As always use the supercomputer between your ears to ensure the physical serial and the serial reported by the system match, etc etc.

Back to the sas2ircu utility in a moment. We need to first acquire the serial number of the failed disk. For a system that is multipath, we can find the actual dev names by running the following to locate a disk in the fail state:
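
On a FreeBSD/FreeNAS box this is GEOM multipath; grepping the status output for the failed provider looks something like this:

    # gmultipath status | grep -B 1 FAIL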

Now we can see da16 has failed. Time to get the serial number of that disk (or da43; they are the same disk, just reached via two multipath paths).
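
smartctl will report it (da16 is the example device from above):

    # smartctl -i /dev/da16 | grep -i serial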

Save that serial number for the next step.

Smartctl also outputs other useful information about the drive, statistics, etc. Worth checking out, but not relevant here.

Next, we can display the disks attached to one of those controllers. Be sure to input the correct serial number in the grep command:
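
sas2ircu LIST shows the controller numbers, and DISPLAY dumps every attached device; grepping a few lines back from the serial number exposes the Enclosure # and Slot # fields (the controller number, serial, and -B count below are placeholders, adjust as needed):

    # sas2ircu list
    # sas2ircu 0 display | grep -B 10 "XYZ123SERIAL"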

Get the enclosure and slot # of the failed drive and turn the led on:
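
Using the controller number and the enclosure:slot pair from the previous output (0 and 2:5 here are examples):

    # sas2ircu 0 locate 2:5 ON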

Turn the led off:
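
Same command with OFF once the swap is done:

    # sas2ircu 0 locate 2:5 OFF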

NOTE: If you are replacing a disk that is multipath, e.g. you see something like the following when you offline and remove a disk, ensure that the LED above is OFF or GEOM_MULTIPATH will not pick up the new disk as multipath. See the below log for what happens when a disk is inserted with the LED blinking vs. not blinking:

 

Posted by & filed under Arduino, Hardware, Hardware Development, Programming.

PlatformIO is an open source ecosystem for IoT development
Cross-platform build system. Continuous and IDE integration. Arduino and ARM mbed compatible

 

Came across this cool IDE, built on top of Atom, for IoT development. There is also a commercially supported offering. http://platformio.org/