Thursday, March 15, 2018

VMware Stack Upgrade

I’m working on a plan to upgrade our existing VMware stack and wanted to write a detailed post about all of the components involved and the order in which to upgrade them.

Current State –

·         Primary and recovery sites with XtremIO arrays on both ends and physical RecoverPoint appliances for array-based replication.
·         vCenter Servers on both sites (with embedded Platform Services Controller) running on Windows Server 2012 R2 with an external SQL database.
·         ESXi 6.0.0, 3825889
·         Site Recovery Manager (SRM) 6.1.1.13825
·         Storage Replication Adapter (SRA) 2.2.0.3
·         XtremIO (XIOS) 4.0.15-24, XMS 4.2.1
·         RecoverPoint (RPA) 4.4.1
·         Avamar 7.3

Future State –
·         vCenter Server Appliance (VCSA) 6.5 U1f
·         ESXi 6.5, 7388607
·         SRM 6.5.1, 6014840
·         SRA 2.2.0.3
·         XIOS 6.0.1, XMS 6.0.1
·         RecoverPoint (RPA) 5.1
·         Avamar 7.5

There are numerous interdependencies to work through to get to the future state.

·         Avamar 7.3 is not compatible with vCenter 6.5
·         RPA 4.4.1 is not compatible with ESXi 6.5
·         Upgrading SRM from 6.0.x to 6.5 is not supported. You have to upgrade to 6.1.x before you upgrade to 6.5 (in my case that's not required since we are already running 6.1.1.x).
·         Any vCenter upgrade will break SRM until both sites are on the same version. Array-based replication stays active while SRM is being upgraded, so we are still protected in case of a disaster.

Prerequisites –

·         Backup of vCenter Database on primary and recovery site.
·         Backup of SRM vPostgres database on primary and recovery site. (Explained in detail below.)
·         Primary and Recovery Site Platform Services Controller and vCenter server instances must be running.

Order of Upgrade –

·         Upgrade Avamar to 7.5
·         Upgrade RecoverPoint from RPA 4.4.1 to 5.1, since 4.4.1 is not compatible with ESXi 6.5.
·         Upgrade vCenter Server to VCSA 6.5 GA at primary site.
·         Upgrade SRM to 6.5 at primary site. Note: SRM cannot be upgraded directly from 6.1.1 to 6.5.1, which is why 6.5 is an intermediate step.
·         Upgrade SRA at primary site – Not required since running on latest
·         Upgrade vCenter Server to VCSA 6.5 GA at recovery site.
·         Upgrade SRM to 6.5 at recovery site.
·         Upgrade SRA at recovery site – Not required since running on latest.
·         Upgrade vCenter from 6.5 GA to 6.5U1g at primary site.
·         Upgrade SRM from 6.5 to 6.5.1 at primary site.
·         Upgrade vCenter Server to VCSA 6.5U1g at recovery site.
·         Upgrade SRM from 6.5 to 6.5.1 at recovery site.
·         Verify the connection between the SRM instances, and verify that protection groups and recovery plans are valid.
·         Upgrade ESXi to 6.5, 7388607 at recovery site.
·         Upgrade ESXi to 6.5, 7388607 at primary site.
·         Upgrade VMware Tools and then virtual hardware on the virtual machines – can be scheduled during the next available outage window.
·         Upgrade XIOS and XMS to 6.0.1

Backup & Restore (if required) the SRM Embedded vPostgres Database -

1)      Log into the system on which you installed Site Recovery Manager Server.
2)      Stop the Site Recovery Manager service.
3)      Navigate to the folder that contains the vPostgres commands.
4)      If you installed Site Recovery Manager Server in the default location, you will find the vPostgres commands in C:\Program Files\VMware\VMware vCenter Site Recovery Manager Embedded Database\bin.
5)      Create a backup of the embedded vPostgres database by using the pg_dump command.
pg_dump -Fc --host 127.0.0.1 --port port_number --username=db_username srm_db > srm_backup_name
To create a backup you need the admin password, which we did not have documented. Here is a link on how to reset the admin password - http://virtuallycurious.blogspot.com/2018/06/the-case-of-forgotten-site-recovery.html
You set the port number, username, and password for the embedded vPostgres database when you installed Site Recovery Manager. The default port number is 5678. The database name is srm_db and cannot be changed.
6)      Start the Site Recovery Manager service.
7)      Restore (if things go south) by using the pg_restore command (see the worked example after these steps):
pg_restore -Fc --host 127.0.0.1 --port port_number --username=db_username --dbname=srm_db srm_backup_name
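
For illustration, here is what the backup and restore might look like with the default port and a hypothetical username and backup file name (replace these with your own values; only the port 5678 default comes from the SRM installer):

pg_dump -Fc --host 127.0.0.1 --port 5678 --username=srm_user srm_db > srm_backup_2018_03.bak
pg_restore -Fc --host 127.0.0.1 --port 5678 --username=srm_user --dbname=srm_db srm_backup_2018_03.bak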

References –

·         VMware Product Interoperability Matrices - http://partnerweb.vmware.com/comp_guide2/sim/interop_matrix.php
·         Update sequence for vSphere 6.5 and its compatible VMware products (2147289) - https://kb.vmware.com/s/article/2147289
·         Backup and Restore the embedded vPostgres Database - https://docs.vmware.com/en/Site-Recovery-Manager/6.5/srm-install-config-6-5.pdf
·        EMC Recoverpoint SRA compatibility Matrix - https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=sra&productid=39129
·        Compatibility Matrix for SRM 6.5 - https://www.vmware.com/support/srm/srm-compat-matrix-6-5.html

Sunday, November 26, 2017

Sharing NFS-backed Volumes Between Containers

vSphere Integrated Containers supports two types of volumes, each of which has different characteristics.
  • VMFS virtual disks (VMDKs), mounted as formatted disks directly on container VMs. These volumes are supported on multiple vSphere datastore types, including NFS, iSCSI and VMware vSAN. They are thin, lazy zeroed disks. 
  • NFS shared volumes. These volumes are distinct from a block-level VMDK on an NFS datastore. They are Linux guest-level mounts of an NFS file-system share.
VMDK volumes are locked while a container VM is running, so other containers cannot share them.

NFS volumes, on the other hand, are useful for scenarios where two containers need read-write access to the same volume.

To use container volumes, you must first declare or create a volume store.

You can do this at the time of VCH creation by using the vic-machine create --volume-store option.

You can add a volume store to an existing VCH by using the vic-machine configure --volume-store option. If you are adding volume stores to a VCH that already has one or more volume stores, you must specify each existing volume store in a separate instance of --volume-store.

Note: If you do not specify a volume store, no volume store is created by default and container developers cannot create or run containers that use volumes.

In my example, I have assigned a whole vSphere datastore as a volume store and would like to add a new NFS volume store to the VCH. The syntax is as follows - 

$ vic-machine-operating_system configure \
--target vcenter_server_username:password@vcenter_server_address \
--thumbprint certificate_thumbprint --id vch_id \
--volume-store datastore_name/datastore_path:default \
--volume-store nfs://datastore_name/path_to_share_point:nfs_volume_store_label

nfs://datastore_name/path_to_share_point:nfs_volume_store_label adds an NFS datastore in vSphere as the volume store, which means that when a volume is created on it, it will be a VMDK file that cannot be shared between containers.

To share volumes between containers, add an NFS mount point instead. You need to specify the URL, UID, GID, and access protocol.

Note: You cannot specify the root folder of an NFS server as a volume store.

The syntax is as follows - 

--volume-store nfs://datastore_address/path_to_share_point?uid=1234&gid=5678&proto=tcp:nfs_volume_store_label

If you do not specify the UID and GID, the default for both is 1000. Read more about UID and GID here.
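
Putting it together, a configure command that adds an NFS share as a shareable volume store might look like the sketch below. The NFS server address, export path, and the shared-nfs label are placeholder values, any existing volume stores are re-specified as noted above, and the URL is quoted so the shell does not interpret the & characters:

$ vic-machine-linux configure \
--target vcenter_server_username:password@vcenter_server_address \
--thumbprint certificate_thumbprint --id vch_id \
--volume-store datastore_name/datastore_path:default \
--volume-store 'nfs://10.0.0.50/exports/vic-volumes?uid=1000&gid=1000&proto=tcp:shared-nfs'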

Before adding NFS Volume Store

After adding NFS Volume Store


Two things to note in my example -

1) I am running an NFS server on a RHEL VM, and the UID and GID settings did not work for me.
2) The workaround was to manually change permissions on the test2 folder on the NFS share point by using chmod 777 test2.

Now that the NFS volume store is added, let's create a volume -
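
A sketch of what the volume creation could look like against the VIC Docker endpoint; the shared-nfs label matches the hypothetical label from the sketch above, and My_NFS_Volume is the volume name used in the rest of this example:

docker volume create --opt VolumeStore=shared-nfs My_NFS_Volume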


Next, deploy two containers with My_NFS_Volume  mounted on both.


To check whether the volume was mounted, run docker inspect containername and you will see the details under Mounts, including the volume name as well as the read/write mode.



Now let's create a .txt file from one container and check whether it can be seen from the other.
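
A minimal sketch of this test; the busybox image and the container names are assumptions for illustration:

docker run -d --name web1 -v My_NFS_Volume:/data busybox sleep 3600
docker run -d --name web2 -v My_NFS_Volume:/data busybox sleep 3600
docker exec web1 sh -c 'echo hello from web1 > /data/test.txt'
docker exec web2 cat /data/test.txt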



This confirms that we can share NFS-backed volumes between containers.

Friday, November 17, 2017

Enable SSH and pings to PhotonOS

In the previous post we saw how to configure a static IP on Photon OS.

Let's take a look at how to enable SSH and set it to start at boot.

Two simple commands -

# Start Service - systemctl start sshd

# Configure SSH service to automatically start at boot - systemctl enable sshd

Photon OS uses an iptables firewall, which by default blocks everything except SSH.

Let's allow pings using the following commands -

iptables -A OUTPUT -p icmp -j ACCEPT

iptables -A INPUT -p icmp -j ACCEPT

Note: This change is not persistent. 

So how do we make this persistent? Let's see -

/etc/systemd/scripts/iptables is the script that gets executed when the iptables service starts, so we can add our rules at the end of this script and the ICMP rules will be persistent.
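
For example, the end of /etc/systemd/scripts/iptables might look like this after appending the two rules (the comment line is just a marker I added):

# Allow ICMP echo in and out so pings survive a reboot
iptables -A OUTPUT -p icmp -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT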



Reboot and check it out yourself !

Configure Static IP on PhotonOS

To obtain the name of your Ethernet link, run the following command: networkctl




If this is the first time you are using Photon OS, you will only see the first 2 links. The others got created because I ran some Docker swarms and created custom network bridges.

The network configuration file is located at -

                   /etc/systemd/network/



You might see the file 10-dhcp-eth0.network. I renamed this file to 10-static-eth0.network.

You can do this by running the following command -

root@photon [ ~ ]# mv /etc/systemd/network/10-dhcp-eth0.network  /etc/systemd/network/10-static-eth0.network

Use vi editor to edit the file and add your static IP, Gateway, DNS, Domain and NTP.

This is what the file would look like:

root@photon [ ~ ]# cat /etc/systemd/network/10-static-eth0.network
[Match]
Name=eth0   <<< Make sure to change this to your adapter; use networkctl or ip addr to check the adapter name.

[Network]
Address=10.xx.xx.xx/24
Gateway=10.xx.xx.1
DNS=10.xx.xx.xx 10.xx.xx.xx
Domains=na.xx.com
NTP=time.nist.gov

Apply the changes by running -

systemctl restart systemd-networkd

Try to ping out from the OS.
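
A quick way to sanity-check the configuration, assuming the interface is eth0 and pinging the gateway from the example above:

networkctl status eth0
ping -c 3 10.xx.xx.1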

Note: You will not be able to ping this VM, as by default the iptables firewall blocks everything except SSH. In my next blog I will explain how to allow pings through iptables.

Friday, November 10, 2017

Pull and Push Images from a Private Repository (Project Repository) inside a dch-photon container

vSphere Integrated Containers v1.2 includes the ability to provision native Docker container hosts (DCH). The DCH is distributed by VMware through Docker Hub.

Refer to my previous blog about Docker Swarm

Each node in this swarm is a native Docker container host. A service that is deployed on the swarm is nothing but containers deployed on top of individual swarm nodes, which means you can run docker ps on individual swarm nodes and get a list of the containers running on each of them.

Swarm_test is our VCH.

Manager1 is the swarm manager and worker1, worker2, worker3 are the worker nodes.


As you can see the swarm is running 2 services - portainer and web

There are 4 replicas of the web service. docker service ps web shows you the 4 instances with their IDs.

These are nothing but 4 individual containers running on each of the nodes.


Note the service IDs on the swarm, which run as containers on manager1 and worker1.

You can either deploy images directly from Docker Hub or use a docker-compose.yml file to define an application made up of multiple containers.
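
As a hypothetical illustration of the compose route (the service definition below is made up, not the actual web service shown above), a stack file could look like:

# docker-compose.yml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    deploy:
      replicas: 4

which you would then deploy with:

docker stack deploy -c docker-compose.yml web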

What if a developer wants to pull and push images from a private repository inside a project created in your VIC management portal?

So I tried to connect to my private repository and got a certificate error.

I am trying to log in from worker2 to vic.xx.xxx.com, which is my VIC manager.


Let's copy the right certificate so we can log in to our private registry. To get the certificate, log in to your VIC management portal - https://vicmanagerip:8282 - using an admin account. I logged in with administrator@vsphere.local.



Download the certificate. It needs to be copied to the worker2 node at /etc/docker/certs.d/yourFQDN_VIC_manager_name/

You need to create the certs.d and yourFQDN_VIC_manager_name directories. Here is how -
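
Boiled down, the commands look something like this on the worker2 node, assuming the downloaded certificate is named ca.crt and has already been copied over, and using vic.xx.xxx.com from earlier as the registry FQDN:

mkdir -p /etc/docker/certs.d/vic.xx.xxx.com
cp ca.crt /etc/docker/certs.d/vic.xx.xxx.com/ca.crt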



Let's try to connect once again -


Success! Similarly, you can copy the cert to all of the other nodes, letting you push and pull images from the private registry and deploy them straight from, or to, any swarm node.

Thursday, November 9, 2017

Testing a MySQL restore in a container using VIC

We had a requirement yesterday where we had to lift and shift a static website from a major cloud provider to our in-house datacenter (why someone would want to do that is another topic ;-) ). It was a typical 3-tier application, and the database was MySQL. The customer shipped us a database dump, and our system admins got busy building a new RHEL VM for it. I got curious to see whether the dump could be tested before restoring it on the RHEL VM. Docker containers to the rescue!

With limited MySQL knowledge I started digging deeper. Following was the thought process -

1) Can the backup be copied inside a container running the MySQL image?
2) That would require a volume store in the VCH that can be mounted inside the MySQL container.
3) Test whether the database is restored and running - locally as well as from a remote client.

Let's dive into how this was done using vSphere Integrated Containers.

I have already deployed the VCH.

Let's create a volume store where the DB backup can be copied.
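
A sketch of how a volume store like this could be added to the existing VCH with vic-machine configure (every value here is a placeholder, and any volume stores the VCH already has must be specified again):

$ vic-machine-linux configure \
--target vcenter_server_username:password@vcenter_server_address \
--thumbprint certificate_thumbprint --id vch_id \
--volume-store datastore_name/existing_path:default \
--volume-store datastore_name/db_backups:db_backups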

Next, create and mount the volume in a MySQL container (a sketch of the full command follows the option descriptions below).

--name: The name of the container.

-v: Mount a volume, in the form source_volume:container_destination:options.

-e: Set an environment variable.

MYSQL_ROOT_PASSWORD: A password option is mandatory when running a MySQL container. You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD, or MYSQL_RANDOM_ROOT_PASSWORD.
Refer to "How to use this image" in the official repository.

--network: Connect the container to a container network defined when the VCH was created. This network was defined with an IP block, so the container will be assigned an IP from that block.

mysql: The image name.
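
Here is the sketch promised above, using docker create so the container exists but is not yet started. The volume name and network name are assumptions; the password and the mysqlrestore container name are the ones referenced later in this post:

docker create --name mysqlrestore \
-v mysql_backup_volume:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=verysecurepassword \
--network ContainerNetwork \
mysql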

Now that the container is created, let's copy the mysql_db_dump.sql dump file to the volume that is mounted in the container.
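
On a regular Docker host this could be done with docker cp against the stopped container, along the lines of the sketch below; whether docker cp is available depends on the VIC Engine version, and the local path to the dump file is an assumption:

docker cp ./mysql_db_dump.sql mysqlrestore:/var/lib/mysql/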

Now that the database dump is copied into the container, let's start it. Once the container is running, let's confirm that the dump file is present at /var/lib/mysql.
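
A minimal sketch of those two steps (the container name is the one used throughout this post):

docker start mysqlrestore
docker exec mysqlrestore ls -l /var/lib/mysql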

Next, let's check what databases currently exist inside the container. There are multiple ways you can achieve this -

Opening a bash shell to the container:



As you can see we land directly at the root prompt inside the container.

mysql -uroot -pverysecurepassword

Log in to mysql with root as the username and verysecurepassword as the password (the password was set at the time of container creation).
This gives you a mysql prompt, from which you can run all your queries against the database.

Using Docker exec: 

You can also use docker exec -i to pass commands to the container interactively instead of opening a Bash shell. Here is how it's done -
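
For example, a one-liner like this (a sketch, using the same credentials) lists the databases without opening a shell:

docker exec -i mysqlrestore mysql -uroot -pverysecurepassword -e 'show databases;'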

Let's create a new database named TEST_RESTORE and restore mysql_db_dump.sql into it.
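
From inside the container, this boils down to something like the following sketch (the dump path matches where we copied it; the password is the one set at creation):

mysql -uroot -pverysecurepassword -e 'CREATE DATABASE TEST_RESTORE;'
mysql -uroot -pverysecurepassword TEST_RESTORE < /var/lib/mysql/mysql_db_dump.sql
mysql -uroot -pverysecurepassword -e 'USE TEST_RESTORE; show tables;'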

As you can see, the database has been restored and "show tables;" lists all of its tables.

Let's create a client container to see if we can connect to the database from it.

Note: We did not use the legacy --link option. We ran the client container on a container network that is able to communicate with our mysqlrestore container.
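
A hypothetical version of the client test (the container network name and the mysqlrestore container's IP are placeholders):

docker run -it --network ContainerNetwork mysql mysql -h 10.xx.xx.xx -uroot -pverysecurepassword -e 'show databases;'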

This confirms that we can connect to the restored database locally as well as from a remote client.

Sunday, October 29, 2017

Upgrading Virtual Container Host (VCH) to a newer version

VMware recently released VIC 1.2.1 with some bug fixes. The process of upgrading vSphere Integrated Containers from 1.1.x to 1.2.x, or from 1.2.x to 1.2.y, is pretty straightforward. You can refer to the documentation and follow the pre- and post-upgrade tasks.

This post demonstrates how to upgrade an existing VCH to a newer version using the vic-machine upgrade command-line utility.

Assumptions -

1) You have upgraded VIC successfully.
2) You have downloaded the latest VIC Engine bundle which provides the vic-machine utility.

Let's start by listing all running VCHs -

             ./vic-machine-linux ls

As you can see, Project_A and Project_B are on version 1.2.0, while swarm_test and vch_test are on v1.2.1.

Let's upgrade Project_A to v1.2.1.

       ./vic-machine-linux upgrade --name Project_A --compute-resource XXX
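
In practice the command also needs the vCenter target and certificate thumbprint, just like the other vic-machine operations; a sketch with placeholder values might look like:

./vic-machine-linux upgrade \
--target vcenter_server_username:password@vcenter_server_address \
--thumbprint certificate_thumbprint \
--name Project_A --compute-resource XXX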

Here is what is happening in the background - 
  • Validate that the configuration of the existing VCH is compatible with the new version.
  • Upload the new appliance.iso and bootstrap.iso files to the VCH.
  • Create a snapshot.
  • Power off the VCH.
  • Boot from the new appliance.iso.
  • Delete the snapshot.
Here is what you see in the vSphere client - 



Note: I upgraded several VCHs and the upgrades completed successfully, but the snapshot was never deleted. This seems to be a bug that may be fixed in a later version. Manually deleting the snapshot had no adverse effect on the VCH.

Also note that if you are mapping container ports to the VCH, you will have an outage. If the containers are running on a container network, there will be no outage. 

After you upgrade the VCH any new containers will boot from the new bootstrap.iso.

For further troubleshooting please refer to the documentation.