In the previous post we saw how to configure static IP for PhotonOS.
Let's take a look at how to enable SSH and set it to start at boot.
Two simple commands -
# Start Service - systemctl start sshd
# Configure SSH service to automatically start at boot - systemctl enable sshd
PhotonOS uses the iptables firewall, which by default blocks everything except SSH.
Let's allow pings using the following commands -
iptables -A OUTPUT -p icmp -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
Note: This change is not persistent.
So how do we make these rules persistent? Let's see -
/etc/systemd/scripts/iptables is the script that gets executed when the iptables service starts, so we can add our rules at the end of this script and the ICMP rules will be persistent.
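For example, the tail of the script could end up looking like this (only the two ICMP rules are the actual addition; the comment is just a marker):
# Appended to the end of /etc/systemd/scripts/iptables
# Allow ICMP (ping) in both directions so the rules survive a reboot
iptables -A OUTPUT -p icmp -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT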
Reboot and check it out yourself !
Friday, November 17, 2017
Configure Static IP on PhotonOS
To obtain the name of your Ethernet link run the following command: networkctl
If this is the first time you are using Photon OS, you will only see the first 2 links. The others got created because I ran some docker swarms and created custom network bridges.
The network configuration file is located at -
/etc/systemd/network/
You might see the file 10-dhcp-eth0.network. I renamed it to 10-static-eth0.network.
root@photon [ ~ ]# mv /etc/systemd/network/10-dhcp-eth0.network /etc/systemd/network/10-static-eth0.network
Use vi editor to edit the file and add your static IP, Gateway, DNS, Domain and NTP.
This is what the file would look like.
root@photon [ ~ ]# cat /etc/systemd/network/10-static-eth0.network
[Match]
Name=eth0
(Make sure to change this to your adapter name. Use networkctl to check it.)
[Network]
Address=10.xx.xx.xx/24
Gateway=10.xx.xx.1
DNS=10.xx.xx.xx 10.xx.xx.xx
Domains=na.xx.com
NTP=time.nist.gov
Apply the changes by running -
systemctl restart systemd-networkd
Try to ping out from the OS.
Note: You will not be able to ping this VM, as by default the iptables firewall blocks everything except SSH. In my next blog I will explain how to allow ping through iptables.
Friday, November 10, 2017
Pull and Push Images from Private Repository (Project Repository) inside a dch-photon container
Refer to my previous blog about Docker Swarm
Each node in this swarm is a native Docker container host. A service that is deployed on the swarm is nothing but containers deployed on top of individual swarm nodes, which means you can run docker ps on individual swarm nodes and get a list of the containers running on each of them.
Swarm_test is our VCH.
Manager1 is the swarm manager and worker1, worker2, worker3 are the worker nodes.
As you can see the swarm is running 2 services - portainer and web
There are 4 replicas of the web service. docker service ps web shows you the 4 instances with their IDs.
These are nothing but 4 individual containers running on each of the nodes.
Note the service IDs on the swarm which runs as a container on manager1 and worker1.
You can either deploy images from Docker Hub or use a docker-compose.yml to define your application made up of multiple containers.
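For illustration only, a minimal docker-compose.yml for a stack like the web service above might look something like this (the file name, the stack name web_stack and the <manager-IP> placeholder are assumptions, not the exact stack from my screenshots):
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    deploy:
      replicas: 4
EOF
# Deploy the stack against the swarm manager
docker -H <manager-IP> stack deploy -c docker-compose.yml web_stack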
What if a Developer wants to pull and push images from a private repository inside a Project created in your VIC Management Portal?
So I tried to connect to my private repository and got a certificate error.
I am trying to login from worker2 to vic.xx.xxx.com which is my VIC Manager.
Let's copy the right certificate so we can log in to our private registry. To get the certificate, log in to your VIC management portal at https://vicmanagerip:8282 using an Admin account. I logged in with administrator@vsphere.local.
Download the certificate. The certificate needs to be copied to the worker2 node at /etc/docker/certs.d/yourFQDN_VIC_manager_name/
You need to create the certs.d and yourFQDN_VIC_manager_name directories. Here is how -
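A sketch of those steps on worker2, assuming the certificate downloaded from the portal is saved as ca.crt (your file name may differ):
# Create the directory Docker checks for registry certificates
mkdir -p /etc/docker/certs.d/vic.xx.xxx.com
# Copy the downloaded certificate into it
cp ca.crt /etc/docker/certs.d/vic.xx.xxx.com/
# Log in to the private registry again
docker login vic.xx.xxx.com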
Success! Similarly you can copy the cert to all other nodes thus letting you push and pull images from the private registry and deploy them straight from or to a swarm node.
Thursday, November 9, 2017
Testing a MySQL restore in a container using VIC
We had a requirement yesterday where we had to lift and shift a static website from a major Cloud provider to our in-house Datacenter (why someone would want to do that is another topic ;-) ). It was a typical 3-tier application. The database was MySQL. The customer shipped us a database dump and our System Admins got busy building a new RHEL VM for it. I got curious to see whether the dump could be tested before restoring it in the RHEL VM. Docker containers to the rescue!
With limited MySQL knowledge I started digging deeper. The thought process was as follows -
1) Can the backup be copied inside a container running MySQL image?
2) That would require a volume store in VCH which can be mounted inside the MySQL container.
3) Test if the database is restored and running - locally as well as using a remote client.
Let's dive into how this was done using vSphere Integrated Containers.
I have already deployed the VCH.
Let's create a volume store where the DB backup can be copied.
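The exact command isn't reproduced here; creating a volume against the VCH might look something like this (the volume name, capacity and the <VCH-IP> placeholder are assumptions):
# Create a volume on the VCH to hold the dump and the MySQL data
docker -H <VCH-IP> volume create --opt Capacity=2GB --name dbdata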
Next, create and mount the volume in a MySQL container (a sketch of the full command follows the option list below).
--name: Name of the container.
-v: Bind mount a volume, in the form source_volume:container_destination:options.
-e: Sets an environment variable.
MYSQL_ROOT_PASSWORD: A password option is mandatory when running a MySQL container. You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD or MYSQL_RANDOM_ROOT_PASSWORD.
Refer to "How to use this image" in the official repository.
--network: Connects the container to a container network defined when creating the VCH. That network was defined with an IP block, so the container will be assigned an IP from that block.
mysql: The image name.
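Putting those options together, the create command might look something like this (the dbdata volume and the <VCH-IP> and <container-network> placeholders are assumptions; mysqlrestore and verysecurepassword are the container name and password used later in this post):
docker -H <VCH-IP> create -v dbdata:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=verysecurepassword \
--network <container-network> \
--name mysqlrestore \
mysql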
Now that the container is created, let's copy the mysql_db_dump.sql dump file to the volume that is mounted to the container.
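One way to do the copy, assuming your Docker endpoint supports docker cp (container name as above):
docker -H <VCH-IP> cp mysql_db_dump.sql mysqlrestore:/var/lib/mysql/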
Now that the database dump has been copied into the container, let's start it. Once the container is running, let's confirm the dump file is present at /var/lib/mysql.
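For example (again with <VCH-IP> as a placeholder for your VCH endpoint):
docker -H <VCH-IP> start mysqlrestore
docker -H <VCH-IP> exec mysqlrestore ls /var/lib/mysql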
Next, let's check which databases are currently running inside the container. There are multiple ways you can achieve this -
Opening a bash shell to the container:
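For example:
docker -H <VCH-IP> exec -it mysqlrestore /bin/bash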
As you can see we land directly at the root prompt inside the container.
mysql -uroot -pverysecurepassword
Log in to mysql with root as the username and verysecurepassword as the password. The password was set at the time of container creation.
This gives you a mysql prompt. From here you can run all your queries against the database.
Using docker exec:
You can also use docker exec -i to just pass commands to the container in an interactive mode instead of the Bash shell. Here is how it's done -
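A sketch of both the restore and a quick check, using the dump path and password from earlier (depending on your dump, the database name may need to be passed to mysql explicitly):
# Replay the dump that was copied into the container
docker -H <VCH-IP> exec -i mysqlrestore sh -c 'mysql -uroot -pverysecurepassword < /var/lib/mysql/mysql_db_dump.sql'
# Then list the databases
docker -H <VCH-IP> exec -i mysqlrestore mysql -uroot -pverysecurepassword -e 'show databases;'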
As you can see the database has been restored and "show tables;" lists all tables.
Let's create a client container to see if we can connect to the database from it.
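A sketch of such a client, run on the same container network so it can reach the restored database (the network name, the mysqlrestore container's IP and the client container name are placeholders):
docker -H <VCH-IP> run -it --net <container-network> --name mysqlclient mysql \
mysql -h <mysqlrestore-IP> -uroot -pverysecurepassword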
Note: We did not use the legacy --link option. We ran the client container on a container network that is able to communicate with our mysqlrestore container.
This confirms that we can connect to the restored database locally as well as from a remote client.
Sunday, October 29, 2017
Upgrading Virtual Container Host (VCH) to a newer version
VMware recently released VIC 1.2.1 with some bug fixes. The process of upgrading vSphere Integrated Containers from 1.1.x to 1.2.x, or from 1.2.x to 1.2.y, is pretty straightforward. You can refer to the documentation and follow the pre- and post-upgrade tasks.
This post will demonstrate how to upgrade an existing VCH to a newer version using the vic-machine upgrade command-line utility.
Assumptions -
1) You have upgraded VIC successfully.
2) You have downloaded the latest VIC Engine bundle which provides the vic-machine utility.
Let's start by listing all running VCHs -
./vic-machine-linux ls
As you can see Project_A and Project_B are on version 1.2.0 and swarm_test and vch_test are on v1.2.1.
Let's upgrade Project_A to v1.2.1.
./vic-machine-linux upgrade --name Project_A --compute-resource XXX
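In a real environment vic-machine also needs to know which vCenter to talk to; a fuller (still hedged) form of the same command might look like this, with placeholders for the environment-specific values:
./vic-machine-linux upgrade \
--target <vcenter-FQDN> \
--user administrator@vsphere.local \
--thumbprint <vcenter-cert-thumbprint> \
--name Project_A \
--compute-resource <cluster-name>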
Here is what is happening in the background -
- Validation of whether the configuration of existing VCH is compatible with the new version.
- Uploading the new appliance.iso and bootstrap.iso files to VCH
- Create a snapshot
- Power off VCH
- Boot from new appliance.iso
- Delete snapshot
Here is what you see in the vSphere client -
Note: I tried to upgrade several VCHs and the upgrade went successfully, but the snapshot was never deleted. This seems to be some kind of a bug that may be fixed in a later version. Manually deleting the snapshot had no adverse effect on the VCH.
Also note that if you are mapping container ports to the VCH, you will have an outage. If the containers are running on a container network, there will be no outage.
After you upgrade the VCH any new containers will boot from the new bootstrap.iso.
For further troubleshooting please refer to the documentation.
Saturday, October 21, 2017
vSphere Integrated Containers (Docker Swarm) - Part 2
In my last post we created a Docker Swarm using DCH. Now let's look at a few cool things you can do with it.
The setup remains the same as the last post.
Scaling up and down
In our original example we had 6 replicas running.
Run the following to check the running processes -
docker -H 10.156.134.141 service ps web
Scale the web service up to 10 replicas -
docker -H 10.156.134.141 service scale web=10
And then check to see if the instances have been scaled to 10
Scale the service back down -
docker -H 10.156.134.141 service scale web=6
And check to see if the instances have been scaled down back to 6.
Draining a node
Before draining a node, let us confirm the number of running processes/instances and the corresponding worker nodes they are running on.
- 1 on manager1
- 1 on worker1
- 2 on worker2
- 2 on worker3
Run the following command to drain worker3
docker -H 10.156.134.141 node update --availability drain worker3
As you can see web.8 and web.9 are shut down on worker3 and rescheduled on manager1 and worker1 respectively.
Applying Rolling Updates
The simplest of all commands. If the nginx image you deployed needs to be updated, just run -
docker -H 10.156.134.141 service update --image <imagename>:<version> web
Removing a service
Be careful using this command. It does not ask for confirmation.
docker service rm web
vSphere Integrated Containers (Docker Swarm)
vSphere Integrated Containers v1.2 includes the ability to provision native Docker container hosts (DCH). The DCH is distributed by VMware through Docker Hub.
VIC v1.2 does not support docker build and docker push, so developers can use this dch-photon image to perform these operations as well as to deploy a swarm (which is not natively supported in VIC).
dch-photon is also pre-loaded in the default-project in vSphere Integrated Containers Registry. If you are not familiar with dch-photon, please read the VIC Admin Guide for reference.
Let's jump right in to see how a Developer can use DCH to create a Docker Swarm and deploy an application.
You can write a simple shell script that deploys Docker swarm manager node and then create and join worker nodes to the swarm. In this example I will deploy the manager and worker nodes manually.
Create a Virtual Container Host (VCH) - This will act as an endpoint for deploying the master and worker DCH. I have deployed a VCH named swarm_test with a container network IP range so the containers will pick an IP from the range provided. The VCH IP is 10.156.134.35.
This is where I spent most of the time trying to figure out why my worker nodes were not talking to the manager. I highly recommend reading the network use cases in the documentation. As per my network setup I had to combine bridge networks with a container network.
Creating a docker volume for the master image cache -
docker -H 10.156.134.35 volume create --opt Capacity=10GB --name registrycache
Creating a volume for each worker image cache -
docker -H 10.156.134.35 volume create --opt Capacity=10GB --name worker1
docker -H 10.156.134.35 volume create --opt Capacity=10GB --name worker2
docker -H 10.156.134.35 volume create --opt Capacity=10GB --name worker3
Let's create a master instance -
docker -H 10.156.134.35 create -v registrycache:/var/lib/docker \
--net VIC-Container \
--name manager1 \
--hostname=manager1 \
vmware/dch-photon:17.06
Create the worker instances worker1, worker2 and worker3 -
docker -H 10.156.134.35 create -v worker1:/var/lib/docker \
--net VIC-Container \
--name worker1 \
--hostname=worker1 \
vmware/dch-photon:17.06
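The create for the remaining workers follows the same pattern, only the volume, name and hostname change:
docker -H 10.156.134.35 create -v worker2:/var/lib/docker \
--net VIC-Container \
--name worker2 \
--hostname=worker2 \
vmware/dch-photon:17.06
docker -H 10.156.134.35 create -v worker3:/var/lib/docker \
--net VIC-Container \
--name worker3 \
--hostname=worker3 \
vmware/dch-photon:17.06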
Here is what the deployed setup looks like -
Connect the master and worker nodes to the appropriate Bridge network.
docker -H 10.156.134.35 network connect bridge manager1
docker -H 10.156.134.35 network connect bridge worker1
docker -H 10.156.134.35 network connect bridge worker2
docker -H 10.156.134.35 network connect bridge worker3
Now, start the master and all worker nodes.
docker -H 10.156.134.35 start manager1
docker -H 10.156.134.35 start worker1 worker2 worker3
Create a Swarm on the master (note my manager node IP in the screenshot above) -
docker -H 10.156.134.141 swarm init --advertise-addr 10.156.134.141
I am advertising the manager IP so the nodes can communicate with it. This is your eth0 in the Docker networking world. The output of the above command will give you a token.
This token is required for the worker nodes to join the swarm. Don't panic if you missed copying the token string. You can get it back by running the following command -
docker -H 10.156.134.141 swarm join-token worker
(Make sure you get the worker token. If you replace worker with manager you will get the manager token. This is useful if you want to have more than one manager in your swarm.)
Add each worker to the swarm -
docker -H 10.156.134.142 swarm join --token 4stkz3wrziufhq8qjwkszxpzet6o3tlut1lf9o9ijqkhsvb5va-dtopt9a6bp9r03q52q2ea6mo4 10.156.134.141:2377
Once all worker nodes are added, run the following to list the nodes -
docker -H 10.156.134.141 node ls
Now that the nodes are up, let's create a simple nginx web service.
Create a service -
docker -H 10.156.134.141 service create --replicas 6 -p 80:80 --name web nginx
Check the status of the service -
docker -H 10.156.134.141 service ls
docker -H 10.156.134.141 service ps web
You can see the replicas are preparing and not ready yet.
What's happening in the background is that the manager node is distributing the nginx image to the worker nodes and the orchestration layer is scheduling containers on the manager and worker nodes. If you run docker service ps web again you will see the service is now running, which means the nginx daemon has been launched and is ready to serve requests.
Go to your favorite browser and hit the IP of any worker or manager node. Even if a container is not scheduled on that node, you should still be able to get to the "Welcome to nginx!" webpage, because the swarm routing mesh forwards the request to a node that is running the service. That's the whole idea of a swarm.
In my next blog I will talk about Scaling up and down, inspecting nodes, draining nodes, removing a service and applying rolling updates to a service.