Wednesday, August 22, 2018

VMware VirtualCenter Operational Dashboard

There is a not-so-popular feature in vCenter that gives you a lot of details and stats. It's called the VMware VirtualCenter Operational Dashboard.

Browse to the following URL, replacing the vCenter name with your own. Authentication is required. 

  https://vCENTER_SERVER_FQDN/vod/index.html

On the Home page, you can get detailed stats about - 
  • vCenter Uptime
  • Virtual Machine & Host Operations (invocations/min)
  • Client Communication 
  • Agent Communication
There are 6 detail pages available.

Example - If you click "Host Status", you get detailed info about the hostname, IPs, MOID, last heartbeat time, etc.



Monday, June 25, 2018

The case of a forgotten Site Recovery Manager (SRM) DB Admin password

While getting ready to upgrade our vCenter from 6.0 to 6.5 and SRM from 6.1.1 to 6.5, we ran into an issue where the SRM embedded DB admin password had never been documented.

Follow these steps to reset the forgotten password - 

1) You will need to edit the pg_hba.conf file under -

C:\ProgramData\VMware\VMware vCenter Site Recovery Manager Embedded Database\data\pg_hba.conf

Note: Our install was on the E:\ drive, but the data folder was nowhere to be found there. After searching for pg_hba.conf, it turned up under C:\ProgramData\VMware\VMware vCenter Site Recovery Manager Embedded Database\data even though the application itself lives on E:\.

2) Make a backup of pg_hba.conf.

3) Stop the service -
       Display Name -  VMware vCenter Site Recovery Manager Embedded Database
       Service Name - vmware-dr-vpostgres
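      If you prefer the command line, the service can also be stopped from an elevated command prompt using the service name above -
       net stop vmware-dr-vpostgres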

4) Edit pg_hba.conf in WordPad.

5) Locate the following in the file - 

             # TYPE DATABASE USER ADDRESS METHOD
             # IPv4 local connections:
                 host all all 127.0.0.1/32 md5
             # IPv6 local connections:
                 host all all ::1/128 md5

6) Replace md5 with trust so the changes look like the following - 

             # TYPE DATABASE USER ADDRESS METHOD
             # IPv4 local connections:
                 host all all 127.0.0.1/32 trust
             # IPv6 local connections:
                 host all all ::1/128 trust

7) Save the file and start the VMware vCenter Site Recovery Manager Embedded Database service.
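     From an elevated command prompt, this can again be done with the service name -
       net start vmware-dr-vpostgres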

8) Open a command prompt as Administrator and navigate to - 
     E:\ProgramData\VMware\VMware vCenter Site Recovery Manager Embedded Database\bin
  Note: You might have installed on a different drive or path. 

9) Connect to the postgres database using - 
        psql -U postgres -p 5678 
     This will bring you to a prompt - postgres=#
     Note: 5678 is the default port. If you chose a different port during installation, replace it accordingly. If unsure, open ODBC and check under System DSN.
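     Note: If you are unsure of the SRM DB user name at this point, you can also list the database roles from the postgres=# prompt with the standard psql meta-command -
        \du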

10) Run the following to change the password - 
        ALTER USER "enter srm db user here" PASSWORD 'new_password';
     Note: The SRM DB user can be found under ODBC - System DSN. The new password must be enclosed in single quotes.

11) If the command is successful, you will see the following output - ALTER ROLE


12) Your password has now been reset, and you can now take a backup of the database.
       Note: You could not take a backup earlier because pg_dump prompts for a password you did not have 😊

13) To take a backup, run the following - 
       pg_dump.exe -Fc --host 127.0.0.1 --port 5678 --username=dbaadmin srm_db > e:\destination_location
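     Note (optional, not part of the original steps): pg_restore can list the contents of a -Fc custom-format dump, which is a quick way to confirm the backup file is readable -
       pg_restore.exe --list e:\destination_location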
 

14) Revert the changes made to the pg_hba.conf file (replace trust with md5, or simply restore the backup copy of the file you made earlier).

15) Restart the VMware vCenter Site Recovery Manager Embedded Database service.

16) Go to Add/Remove Programs, select VMware vCenter Site Recovery Manager and click Change.


17) Be patient, it takes a while to load. On the next screen select Modify and click Next.

18) Enter PSC address and the username/password.


19) Accept the Certificate

20) Enter information to register the SRM extension

21) Next, choose whether to create a new certificate or use the existing one. Mine was still valid, so I used the existing one.

22) And finally, enter the new password you set in Step 10. 

23) Once the setup is complete, make sure the VMware vCenter Site Recovery Manager Server service has started. If not, start it manually. 

Reference: A quick Google search brought up the following post, and the case we had open with VMware pointed to the exact same blog - http://www.virtuallypeculiar.com/2018/01/resetting-site-recovery-managers.html

Tuesday, May 8, 2018

docker: Error response from daemon: Server error from portlayer

docker: Error response from daemon: Server error from portlayer:

I got this error while running a new container in VIC.


I tried to move the VCH to a different ESXi Host, but no luck.

It is not advisable to reboot the VCH from vCenter, so I had to skip that.

Searching the internet turned up a similar error with a different cause - https://github.com/vmware/vic/issues/5782

This did not get me anywhere, so I started looking at it from a higher level, and this is what I noticed.


It turned out I had upgraded VIC from 1.2.1 to 1.3 the previous weekend and completely forgot to upgrade this last VCH.

As you can see in the image above, the VCH I was trying to run a container against was still on 1.2.1. Duh!
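If the plugin view isn't handy, vic-machine also has an inspect command that should report the VCH version from the CLI; something along these lines (placeholder values) -

./vic-machine-linux inspect --target vcenter_server_username:password@vcenter_server_address --thumbprint certificate_thumbprint --id vch_id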

So I upgraded the VCH by running the following command -

./vic-machine-linux upgrade --id xxxxxx
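Note: the --id value is specific to my environment. In general, upgrade also needs the vCenter target and certificate thumbprint, roughly like this (placeholder values) -

./vic-machine-linux upgrade --target vcenter_server_username:password@vcenter_server_address --thumbprint certificate_thumbprint --id vch_id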

And here is the result -


And ta-da, I was able to run the new nginx container I had been trying to deploy earlier.

Thursday, March 29, 2018

How to find the disk size for an Avamar BMR restore

We had a scenario where a BMR (Bare Metal Restore) system state restore was required.

A skeleton VM was created with drives matching the original server and was booted from the Avamar BMR ISO. The restore was kicked off, and at around 90% it failed. Why?

Because the disk sizes did not match. The original VM had several disks, some configured as striped volumes and some as spanned volumes inside the OS. When the restore skeleton VM was created, the admins simply matched the volumes of the old VM, not the underlying disks. Avamar has no visibility into how the disks are laid out within the OS.

Example of how the disks were laid out in VMware and inside the OS - 

So the question is: how do you find the disk sizes if the VM is down and you need a BMR restore?

Avamar ships with a very handy command-line utility called avtar. I could not find an official document from EMC, but there are bits and pieces of information on the web.

avtar.exe --help will give you a wide range of options.

Steps to find the drive size for a BMR restore -
  • On your Windows workstation, install the Avamar client software and do not register it with the Avamar console.
  • The default installation path will be - C:\Program Files\avs\
  • Run the following command - 
C:\Program Files\avs\bin>avtar.exe -x --server=IP_of_Avamar_Server --id=Username --ap=Password --path=/clients/servername_FQDN --labelnum=label_number_of_the_backup_to_be_restored --internal --target=.\tmp\ .system_info

--target=.\tmp will create a tmp directory under avs\bin and the output of the command will be under this directory.
  • There will be numerous XML files under \tmp.
  • The useful files are CriticalVolumesMapping.xml and partitiontables.xml
CriticalVolumesMapping.xml will give you the details of how the disks are laid out within the OS. 

Example - 
<VolumeMappings Version="2.0">
<Volume DiskNumbers="0" SubwidIdx="1" DisplayName="c:\" UniqueID="\\?\Volume{6e1e483e-8e40-11e1-a235-806e6f6e6963}\"/>
<Volume DiskNumbers="8,9" SubwidIdx="2" DisplayName="i:\" UniqueID="\\?\Volume{106195a0-5f22-11e4-b3e6-005056bc0037}\"/>
<Volume DiskNumbers="3,5,6,4" SubwidIdx="3" DisplayName="e:\" UniqueID="\\?\Volume{69f9f6c7-8e4f-11e1-a46f-005056bc0037}\"/>
<Volume DiskNumbers="1" SubwidIdx="4" DisplayName="g:\" UniqueID="\\?\Volume{a2f72c19-d096-11e5-a54e-005056bc0037}\"/>
<Volume DiskNumbers="0" SubwidIdx="5" DisplayName="\\?\volume{6e1e483d-8e40-11e1-a235-806e6f6e6963}\" UniqueID="\\?\Volume{6e1e483d-8e40-11e1-a235-806e6f6e6963}\"/>
</VolumeMappings>

We can clearly see that -

Disks 3,4,5,6 make up Logical volume E:
Disks 8,9 make up Logical volume I:

partitiontables.xml will provide you with the disk sizes

Example - 

<PhysicalDisk NumPartitions="4" DiskSize_bytes="274872407040" PartioningScheme="MBR" MBRSignature="1720029347" DiskType="Fixed" DiskNumber="8" DiskSize_Gbytes="255" SectorSize_bytes="512">
<PartitionList>
<Partition Size_bytes="274876826112" Start_bytes="32256" partStyle="MBR" SerialNumber_dec="0" SerialNumber_hex="0" PartitionNumber="0" Size_Gbytes="255" Type="Alternate Linux swap" Bootable="false"/>
<Partition Size_bytes="0" Start_bytes="0" partStyle="MBR" SerialNumber_dec="0" SerialNumber_hex="0" PartitionNumber="1" Size_Gbytes="0" Type="Empty" Bootable="false"/>
<Partition Size_bytes="0" Start_bytes="0" partStyle="MBR" SerialNumber_dec="0" SerialNumber_hex="0" PartitionNumber="2" Size_Gbytes="0" Type="Empty" Bootable="false"/>
<Partition Size_bytes="0" Start_bytes="0" partStyle="MBR" SerialNumber_dec="0" SerialNumber_hex="0" PartitionNumber="3" Size_Gbytes="0" Type="Empty" Bootable="false"/>
</PartitionList>
</PhysicalDisk>

<PhysicalDisk NumPartitions="4" DiskSize_bytes="274872407040" PartioningScheme="MBR" MBRSignature="1720029346" DiskType="Fixed" DiskNumber="9" DiskSize_Gbytes="255" SectorSize_bytes="512">
<PartitionList>
<Partition Size_bytes="274876826112" Start_bytes="32256" partStyle="MBR" SerialNumber_dec="0" SerialNumber_hex="0" PartitionNumber="0" Size_Gbytes="255" Type="Alternate Linux swap" Bootable="false"/>
<Partition Size_bytes="0" Start_bytes="0" partStyle="MBR" SerialNumber_dec="0" SerialNumber_hex="0" PartitionNumber="1" Size_Gbytes="0" Type="Empty" Bootable="false"/>
<Partition Size_bytes="0" Start_bytes="0" partStyle="MBR" SerialNumber_dec="0" SerialNumber_hex="0" PartitionNumber="2" Size_Gbytes="0" Type="Empty" Bootable="false"/>
<Partition Size_bytes="0" Start_bytes="0" partStyle="MBR" SerialNumber_dec="0" SerialNumber_hex="0" PartitionNumber="3" Size_Gbytes="0" Type="Empty" Bootable="false"/>
</PartitionList>
</PhysicalDisk>

In the above scenario, disks 8 and 9 together make up volume I:, and from partitiontables.xml you can see that each disk is 255 GB, so the total size of I: will be 255 GB + 255 GB = 510 GB. While creating the skeleton VM, you will create a 510 GB drive for it.

Once you create the exact size and number of drives required, Avamar will perform a restore without any hiccups. 

Problem solved!

Thursday, March 15, 2018

VMware Stack Upgrade

I’m working on a plan to upgrade our existing VMware Stack and wanted to write a detailed post about all components involved and the order of upgrade.

Current State –

·        Primary and Recovery site with XtremIO Arrays on both ends with physical RecoverPoint appliances for array based replication.
·        vCenter Servers on both sites (with embedded Platform Services Controller) running on Windows Server 2012 R2 with an external SQL Database.
·         ESXi 6.0.0, 3825889
·         Site Recovery Manager (SRM) 6.1.1.13825
·         Storage Replication Adapter (SRA) 2.2.0.3
·         XtremIO (XIOS) 4.0.15-24, XMS 4.2.1
·         RecoverPoint (RPA) 4.4.1
·         Avamar 7.3

Future State –
·         vCenter Server Appliance (VCSA) 6.5 U1f
·         ESXi 6.5, 7388607
·         SRM 6.5.1, 6014840
·         SRA 2.2.0.3
·         XIOS 6.0.1, XMS 6.0.1
·         RecoverPoint (RPA) 5.1
·         Avamar 7.5

 There are numerous inter-dependencies to get to the future state.

·         Avamar 7.3 is not compatible with vCenter 6.5
·         RPA 4.4.1 is not compatible with ESXi 6.5
·         Upgrading SRM from 6.0.x to 6.5 is not supported. You have to upgrade to 6.1.x before you upgrade to 6.5 (in my case that’s not required since already running on 6.1.1.x)
·         Any vCenter upgrade will break SRM until both sides are at the same level. Array-based replication stays active, so we are still covered in case of a disaster while SRM is being upgraded.

Prerequisites –

·         Backup of vCenter Database on primary and recovery site.
·         Backup of the SRM vPostgres database on the primary and recovery site (explained in detail below).
·         Primary and Recovery Site Platform Services Controller and vCenter server instances must be running.

Order of Upgrade –

·         Upgrade Avamar to 7.5
·         Upgrade RPA from 4.4.1 to 5.1 (4.4.1 is not compatible with ESXi 6.5).
·         Upgrade vCenter Server to VCSA 6.5 GA at primary site.
·         Upgrade SRM to 6.5 at primary site. Note: SRM cannot be upgraded directly from 6.1.1 to 6.5.1, hence the intermediate 6.5 step.
·         Upgrade SRA at primary site – Not required since running on latest
·         Upgrade vCenter Server to VCSA 6.5 GA at recovery site.
·         Upgrade SRM to 6.5 at recovery site.
·         Upgrade SRA at recovery site – Not required since running on latest.
·         Upgrade vCenter from 6.5 GA to 6.5U1g at primary site.
·         Upgrade SRM from 6.5 to 6.5.1 at primary site.
·         Upgrade vCenter Server to VCSA 6.5U1g at recovery site.
·         Upgrade SRM from 6.5 to 6.5.1 at recovery site.
·         Verify the connection between the SRM sites. Verify protection groups and recovery plans are valid.
·         Upgrade ESXi to 6.5, 7388607 at recovery site.
·         Upgrade ESXi to 6.5, 7388607 at primary site.
·         Upgrade virtual hardware and then VMtools on Virtual Machines – Can be scheduled during the next available outage window.
·         Upgrade XIOS and XMS to 6.0.1

Backup & Restore (if required) the SRM Embedded vPostgres Database -

1)      Log into the system on which you installed Site Recovery Manager Server.
2)      Stop the Site Recovery Manager service.
3)      Navigate to the folder that contains the vPostgres commands.
4)      If you installed Site Recovery Manager Server in the default location, you find the vPostgres commands in C:\Program Files\VMware\VMware vCenter Site Recovery Manager Embedded Database\bin.
5)      Create a backup of the embedded vPostgres database by using the pg_dump command -
pg_dump -Fc --host 127.0.0.1 --port port_number --username=db_username srm_db > srm_backup_name
To create the backup you need the DB admin password. We did not have the admin password documented; here is a link on how to reset it - http://virtuallycurious.blogspot.com/2018/06/the-case-of-forgotten-site-recovery.html
You set the port number, username, and password for the embedded vPostgres database when you installed Site Recovery Manager. The default port number is 5678. The database name is srm_db and cannot be changed.
6)      Start the Site Recovery Manager service.
7)      Restore (if things go south) by using the pg_restore command -
pg_restore -Fc --host 127.0.0.1 --port port_number --username=db_username --dbname=srm_db srm_backup_name
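Note: Steps 2 and 6 can also be done from an elevated command prompt. The SRM server Windows service is typically named vmware-dr (verify the actual name in services.msc if unsure) -
net stop vmware-dr
net start vmware-dr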

References –

·         VMware Product Interoperability Matrices - http://partnerweb.vmware.com/comp_guide2/sim/interop_matrix.php
·         Update sequence for vSphere 6.5 and its compatible VMware products (2147289) - https://kb.vmware.com/s/article/2147289
·         Backup and Restore the embedded vPostgres Database - https://docs.vmware.com/en/Site-Recovery-Manager/6.5/srm-install-config-6-5.pdf
·        EMC Recoverpoint SRA compatibility Matrix - https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=sra&productid=39129
·        Compatibility Matrix for SRM 6.5 - https://www.vmware.com/support/srm/srm-compat-matrix-6-5.html

Sunday, November 26, 2017

Sharing NFS-backed Volumes Between Containers

vSphere Integrated Containers supports two types of volumes, each of which has different characteristics.
  • VMFS virtual disks (VMDKs), mounted as formatted disks directly on container VMs. These volumes are supported on multiple vSphere datastore types, including NFS, iSCSI and VMware vSAN. They are thin, lazy zeroed disks. 
  • NFS shared volumes. These volumes are distinct from a block-level VMDK on an NFS datastore. They are Linux guest-level mounts of an NFS file-system share.
VMDKs are locked while a container VM is running and other containers cannot share them.

NFS volumes on the other hand are useful for scenarios where two containers need read-write access to the same volume.

To use container volumes, a volume store must first exist.

You create a volume store at the time of VCH creation by using the vic-machine create --volume-store option.

You can add a volume store to an existing VCH by using the vic-machine configure --volume-store option. If you are adding volume stores to a VCH that already has one or more volume stores, you must specify each existing volume store in a separate instance of --volume-store.

Note: If you do not specify a volume store, no volume store is created by default and container developers cannot create or run containers that use volumes.

In my example, I have assigned a whole vSphere datastore as a volume store and would like to add a new NFS volume store to the VCH. The syntax is as follows - 

$ vic-machine-operating_system configure \
--target vcenter_server_username:password@vcenter_server_address \
--thumbprint certificate_thumbprint --id vch_id \
--volume-store datastore_name/datastore_path:default \
--volume-store nfs://datastore_name/path_to_share_point:nfs_volume_store_label

The nfs://datastore_name/path_to_share_point:nfs_volume_store_label store above adds an NFS datastore in vSphere as the volume store, which means that when a volume is created on it, it will be a VMDK file that cannot be shared between containers.

To share a volume between containers, add an NFS share point instead. You need to specify the URL, UID, GID and access protocol. 

Note: You cannot specify the root folder of an NFS server as a volume store.

The syntax is as follows - 

--volume-store nfs://datastore_address/path_to_share_point?uid=1234&gid=5678&proto=tcp:nfs_volume_store_label

If you do not specify the UID and GID, the default for both is 1000. Read more about UID and GID here.

Before adding NFS Volume Store

After adding NFS Volume Store


Two things to note in my example - 

1) I am running NFS from a RHEL-based VM, and the UID and GID settings did not work for me. 
2) The workaround was to manually change permissions on the test2 folder on the NFS share point by using chmod 777 test2

Now that the NFS volume store is added, let's create a volume - 
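For reference, the volume was created roughly like this (a sketch; VolumeStore points at the volume store label defined above, and depending on your Docker client version you may need --name instead of a trailing volume name) -

docker volume create --opt VolumeStore=nfs_volume_store_label My_NFS_Volume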


Next, deploy two containers with My_NFS_Volume mounted on both.
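Something along these lines works, assuming busybox as a placeholder image and /mydata as an arbitrary mount path inside the containers -

docker run -d --name container1 -v My_NFS_Volume:/mydata busybox sleep 3600
docker run -d --name container2 -v My_NFS_Volume:/mydata busybox sleep 3600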


To check whether the volume was mounted, run docker inspect containername and you will see the details under Mounts, including the volume name as well as the read/write mode.



Now let's create a .txt file from one container and check from the other whether it can be seen.
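Assuming the container names and mount path from the sketch above, the test looks like this -

docker exec container1 sh -c "echo hello from container1 > /mydata/test.txt"
docker exec container2 cat /mydata/test.txt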



This confirms that we can share NFS-backed volumes between containers.

Friday, November 17, 2017

Enable SSH and pings to PhotonOS

In the previous post, we saw how to configure a static IP for PhotonOS.

Let's take a look at how to enable SSH and set it to start at boot.

Two simple commands -

# Start Service - systemctl start sshd

# Configure SSH service to automatically start at boot - systemctl enable sshd

PhotonOS uses the iptables firewall, which by default blocks everything except SSH.

Let's allow pings using the following commands -

iptables -A OUTPUT -p icmp -j ACCEPT

iptables -A INPUT -p icmp -j ACCEPT

Note: This change is not persistent. 

So how do we make this change persistent? Let's see - 

/etc/systemd/scripts/iptables is the script that gets executed when the iptables service starts, so we can add our rules at the end of that script and the ICMP rules will be persistent.
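For example, appended to the end of /etc/systemd/scripts/iptables, the addition is just the same two rules from above, with a comment so it is obvious later why they are there -

# Allow ICMP (ping) in and out - added manually, not part of the default script
iptables -A OUTPUT -p icmp -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT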



Reboot and check it out yourself !