
Sunday, November 26, 2017

Sharing NFS-backed Volumes Between Containers

vSphere Integrated Containers supports two types of volumes, each of which has different characteristics.
  • VMFS virtual disks (VMDKs), mounted as formatted disks directly on container VMs. These volumes are supported on multiple vSphere datastore types, including NFS, iSCSI and VMware vSAN. They are thin, lazy zeroed disks. 
  • NFS shared volumes. These volumes are distinct from a block-level VMDK on an NFS datastore. They are Linux guest-level mounts of an NFS file-system share.
VMDK volumes are locked while a container VM is running, so other containers cannot share them.

NFS volumes, on the other hand, are useful for scenarios where two or more containers need read-write access to the same volume.

To use container volumes, you must first create a volume store. Use the vic-machine create --volume-store option to create one at the time of VCH creation.

You can add a volume store to an existing VCH by using the vic-machine configure --volume-store option. If you are adding volume stores to a VCH that already has one or more volume stores, you must specify each existing volume store in a separate instance of --volume-store.

Note: If you do not specify a volume store, no volume store is created by default and container developers cannot create or run containers that use volumes.

In my example, I have assigned a whole vSphere datastore as a volume store and would like to add a new NFS volume store to the VCH. The syntax is as follows - 

$ vic-machine-operating_system configure \
--target vcenter_server_username:password@vcenter_server_address \
--thumbprint certificate_thumbprint --id vch_id \
--volume-store datastore_name/datastore_path:default \
--volume-store nfs://datastore_name/path_to_share_point:nfs_volume_store_label

The second option, nfs://datastore_name/path_to_share_point:nfs_volume_store_label, adds an NFS datastore that is mounted in vSphere as the volume store. This means that when a volume is created on it, it will be a VMDK file, which cannot be shared between containers.

To be able to share volumes between containers, add an NFS mount point instead. You need to specify the URL of the share point and, optionally, the UID, GID, and access protocol.

Note: You cannot specify the root folder of an NFS server as a volume store.

The syntax is as follows - 

--volume-store 'nfs://datastore_address/path_to_share_point?uid=1234&gid=5678&proto=tcp:nfs_volume_store_label'

If you do not specify the UID and GID, the default for both is 1000.
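
For illustration, here is a hedged, concrete version of the configure command. The NFS server address (192.168.1.50), export path (/exports/vic_share), and label (My_NFS_VS) are hypothetical. The URI is quoted so the shell does not interpret the & characters, and the existing volume store is specified again, as required when reconfiguring:

$ vic-machine-linux configure \
--target vcenter_server_username:password@vcenter_server_address \
--thumbprint certificate_thumbprint --id vch_id \
--volume-store datastore_name/datastore_path:default \
--volume-store 'nfs://192.168.1.50/exports/vic_share?uid=1000&gid=1000&proto=tcp:My_NFS_VS'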

Before adding NFS Volume Store

After adding NFS Volume Store

Two things to note in my example - 

1) I am running a VM-based, RHEL-backed NFS server, and the UID and GID settings did not work for me.
2) The workaround was to manually change permissions on the test2 folder on the NFS share point by running chmod 777 test2.

Now that the NFS volume store is added, let's create a volume - 
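
A minimal sketch, assuming the NFS volume store was labelled My_NFS_VS (a hypothetical label) and the docker client points at the VCH endpoint:

$ docker volume create --opt VolumeStore=My_NFS_VS My_NFS_Volume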

Next, deploy two containers with My_NFS_Volume mounted on both.
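
A sketch of the two deployments; busybox as the image and /mydata as the mount point are assumptions:

$ docker run -d -it --name container1 -v My_NFS_Volume:/mydata busybox
$ docker run -d -it --name container2 -v My_NFS_Volume:/mydata busybox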

To check whether the volume was mounted, run docker inspect containername and look at the details under Mounts. You will see the volume name as well as the read/write mode.
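
For example, using one of the containers from the sketch above:

$ docker inspect --format '{{json .Mounts}}' container1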

Now let's create a .txt file from one container and check from the other whether it can be seen.
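
A sketch of the test, using the hypothetical names from above:

# create a file on the shared volume from the first container
$ docker exec container1 touch /mydata/test.txt

# verify that it is visible from the second container
$ docker exec container2 ls /mydata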

This confirms that we can share NFS-backed volumes between containers.

Thursday, November 9, 2017

Testing a MySQL restore in a container using VIC

We had a requirement yesterday where we had to lift and shift a static website from a major cloud provider to our in-house datacenter (why someone would want to do that is another topic ;-) ). It was a typical 3-tier application, and the database was MySQL. The customer shipped us a database dump, and our system admins got busy building a new RHEL VM for it. I got curious to see whether the dump could be tested before restoring it in the RHEL VM. Docker containers to the rescue!

With limited MySQL knowledge, I started digging deeper. The thought process was as follows -

1) Can the backup be copied inside a container running the MySQL image?
2) That would require a volume store in the VCH which can be mounted inside the MySQL container.
3) Test whether the database is restored and running - locally as well as from a remote client.

Let's dive into how this was done using vSphere Integrated Containers.

I have already deployed the VCH.

Let's create a volume store where the DB backup can be copied.
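
A sketch of the two steps, with hypothetical names (datastore1 as the backing datastore, mysql_data as the volume store label, and mysql_volume as the volume); remember to re-specify any existing volume stores when reconfiguring:

$ vic-machine-linux configure \
--target vcenter_server_username:password@vcenter_server_address \
--thumbprint certificate_thumbprint --id vch_id \
--volume-store datastore1/volumes:mysql_data

$ docker volume create --opt VolumeStore=mysql_data --opt Capacity=2GB mysql_volume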

Next, create and mount the volume in a MySQL container. The flags used are described below, followed by a sketch of the full command.

--name: Name of the container.

-v: Bind mount a volume, in the format source_volume:container_destination[:options].

-e: Sets an environment variable.

MYSQL_ROOT_PASSWORD: A password option is mandatory when running a MySQL container. You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD, or MYSQL_RANDOM_ROOT_PASSWORD. Refer to "How to use this image" in the official repository.

--network: Connects the container to a container network defined while creating the VCH. This network was defined with an IP block, so the container will be assigned an IP from that block.

mysql: image name.
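
Putting the flags together, here is a sketch of the full command. mysql_volume and container-net are hypothetical names, while mysqlrestore matches the container name referenced later in this post; docker create is used rather than docker run so the dump can be copied in before the container starts:

$ docker create --name mysqlrestore \
-v mysql_volume:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=verysecurepassword \
--network container-net \
mysql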

Now that the container is created, let's copy the mysql_db_dump.sql dump file to the volume that is mounted in the container.
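
A sketch, assuming the dump file sits in the current working directory on the docker client machine:

$ docker cp mysql_db_dump.sql mysqlrestore:/var/lib/mysql/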

Now that the database dump is copied into the container, let's start it. Once the container is running, let's confirm that the dump file is present at /var/lib/mysql.
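
A sketch of both steps:

$ docker start mysqlrestore
$ docker exec mysqlrestore ls -l /var/lib/mysql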

Next, let's check what databases currently exist inside the container. There are multiple ways you can achieve this -

Opening a bash shell to the container:
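
A sketch of opening the shell:

$ docker exec -it mysqlrestore /bin/bash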

As you can see, we land directly at the root prompt inside the container.

mysql -uroot -pverysecurepassword

Log in to mysql with root as the username and verysecurepassword as the password. The password was set at the time of container creation. 
This gives you a mysql prompt, from which you can run all your queries against the database. 

Using Docker exec: 

You can also use docker exec -i to pass commands to the container interactively instead of opening a Bash shell. Here is how it's done - 
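
A sketch, passing a single statement straight to the mysql client inside the container:

$ docker exec -i mysqlrestore mysql -uroot -pverysecurepassword -e 'show databases;'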

Let's create a new database named TEST_RESTORE and restore mysql_db_dump.sql into it.
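
A sketch of the restore; since the dump was copied to /var/lib/mysql inside the container, the redirect is run inside the container as well:

$ docker exec -i mysqlrestore mysql -uroot -pverysecurepassword -e 'CREATE DATABASE TEST_RESTORE;'
$ docker exec -i mysqlrestore sh -c 'mysql -uroot -pverysecurepassword TEST_RESTORE < /var/lib/mysql/mysql_db_dump.sql'
$ docker exec -i mysqlrestore mysql -uroot -pverysecurepassword TEST_RESTORE -e 'show tables;'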

As you can see, the database has been restored, and "show tables;" lists all the tables.

Let's create a client container to see if we can connect to the database from it.
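
A sketch; container-net is the hypothetical container network from earlier, and 192.168.100.10 stands in for the IP assigned to the mysqlrestore container from the network's IP block:

$ docker run -it --network container-net mysql \
mysql -h 192.168.100.10 -uroot -pverysecurepassword -e 'show databases;'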

Note: We did not use the legacy --link option. We ran the client container on a container network that is able to communicate with our mysqlrestore container.

This confirms that we can connect to the restored database locally as well as from a remote client.