Platforms

Docker, Docker, Docker… where do I start? We’ve been using the technology here at mydev-ops.com since 2012. Let’s give some background and then dive into where we have used it and what we’ve learned.

Introductions

From its first appearance in the marketplace, Docker had two key advantages over its competition. First, a name that was easy to understand, and second, a logo that described what it does very effectively: a whale to express that it is enterprise-ready, the ocean expressing portability, and shipping containers reflecting security, compartmentalization, and uniformity (on the outside, at least). The logo is brilliant, but what does it all mean?

Docker has become a standard among companies looking to package their applications and solutions so that they can be migrated anywhere in the world and operate identically.

Take Amazon Web Services (AWS), for instance. I could take the time to provision a host in AWS and then log in and install MySQL. This will work perfectly fine, but the moment I want to take my infrastructure elsewhere (Rackspace, for argument’s sake), it falls apart. I have no easy way to export my application with its settings and place it somewhere else; I have to take the time to re-install and configure MySQL all over again. Note also that I didn’t mention anything about the data that this MySQL host serves, nor did I talk about other parameters like HA (high availability) or scaling.

Enter Docker. With the containers that Docker offers, we can pull an existing MySQL image from Docker Hub, change it to our liking, rename it, and then push it into our own Docker Hub account. That’s it. Each time you tweak the container you version it. You can go back in time to pick up an old version, or you can branch to set up a development-only or an extra-secure version of the container.
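As a minimal sketch of that pull/modify/push cycle (the image name mydevops/mysql and the password are hypothetical placeholders):

    # Pull the stock MySQL image from Docker Hub
    docker pull mysql:5.7

    # Run it and make our changes inside (config, users, hardening)
    docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=changeme mysql:5.7
    docker exec -it mydb bash    # edit my.cnf, add users, etc.

    # Commit the modified container as a new, versioned image
    docker commit mydb mydevops/mysql:1.0

    # Push the versioned image to our own Docker Hub account (after docker login)
    docker push mydevops/mysql:1.0

The same versioning can be had more reproducibly with a Dockerfile checked into GitHub, but the commit/push flow above is the shortest path.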

What about the data, you ask? That can be tricky if you don’t realize it’s not part of the container. You must back up and restore your data yourself (SQL statements are my preference). You can check that into your private GitHub account if you like.
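For example (the container, database, and repository names are hypothetical), a plain-SQL backup can be taken with mysqldump and versioned alongside everything else:

    # Dump the database to a plain SQL file on the host
    docker exec mydb mysqldump -uroot -pchangeme wordpress > wordpress-backup.sql

    # Check the dump into a private GitHub repository
    git add wordpress-backup.sql
    git commit -m "WordPress database backup"
    git push origin master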

Now, if at any time I need to move my database, I simply pull the image, start it up, and then restore my data into it. As a matter of fact, that’s how the mydev-ops.com website was built and is maintained. All of the site’s data and the MySQL and WordPress containers (that I have modified) are in GitHub and Docker Hub.
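The move itself then reduces to a pull, a run, and a restore; a sketch using the same hypothetical names as above:

    # On the new host: pull our versioned image and start it
    docker pull mydevops/mysql:1.0
    docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=changeme mydevops/mysql:1.0

    # Recreate the database and restore the data from the SQL backup
    docker exec mydb mysql -uroot -pchangeme -e "CREATE DATABASE IF NOT EXISTS wordpress"
    docker exec -i mydb mysql -uroot -pchangeme wordpress < wordpress-backup.sql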

Usage

Let’s dive into the nuts and bolts of this website, mydev-ops.com…

We needed a location on the interwebs that would house our content and be easy to update and maintain. WordPress was a natural fit: lots of plugins, and easy to do just about anything. We knew Docker had a WordPress container ripe enough to use. We also knew there was a pretty reliable MySQL container that could be linked up easily.
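On plain Docker, before any hosting platform enters the picture, linking those two stock images looks roughly like this (names and passwords are placeholders):

    # Start MySQL with a root password and a database for WordPress
    docker run -d --name wpdb \
      -e MYSQL_ROOT_PASSWORD=changeme \
      -e MYSQL_DATABASE=wordpress \
      mysql:5.7

    # Start WordPress linked to the database container; the link
    # exposes the database's address to WordPress as environment variables
    docker run -d --name wp --link wpdb:mysql -p 80:80 wordpress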

Next, we needed hosting. The infrastructure was AWS for the sake of familiarity, but a home for the Docker containers was not an easy find at first. We settled on Tutum. It just so happens that Docker later purchased Tutum in 2015.

Tutum is integrated with AWS, so we just plugged in our API key and off we went. We built one node (an AWS host stood up by Tutum on our behalf) and added two services (MySQL and WordPress). First, we stood up MySQL, taking mostly defaults but adding some tighter security. Next, we stood up the WordPress container. The nice part about Tutum is that the parameters for each of the services are available to one another. This means we never have to enter actual passwords, host names, or database names.
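That cross-service visibility mirrors what plain Docker links do: the linked service’s connection details show up inside the consumer’s environment. A hedged illustration (the exact variable names Tutum injected are an assumption based on Docker’s link conventions):

    # From inside the WordPress service, list the injected MySQL parameters
    docker exec wp env | grep MYSQL
    # MYSQL_PORT_3306_TCP_ADDR=10.7.0.2        (database host)
    # MYSQL_ENV_MYSQL_ROOT_PASSWORD=changeme   (database password)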

Next is content. We had the content, but how do we get it alongside, inside, or at least accessible by the MySQL and WordPress containers? This is where it gets interesting. Now we needed access to the AWS node itself. Since Tutum handled the provisioning with its own private key, we were screwed: we didn’t have access to that key (it’s a Tutum thing).

Learning

So, start over. We checked the Docker containers (as we made changes to them for the mydev-ops.com site) into our Docker Hub account, blew away the AWS node, and opted to “bring your own” via Tutum. This let us provision our own machine from AWS with our own private key pair. Tutum attached to the node when it booted and we were back in business. We provisioned the two services and then SSHed into the node to drop our content into the shared directory for the WordPress site. We then used Navicat to import our content into the MySQL server.
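Under the bring-your-own model, that last mile is ordinary SSH and a copy (the key name, user, and paths below are hypothetical):

    # SSH into the node with our own key pair
    ssh -i ~/.ssh/mydevops.pem ubuntu@<node-ip>

    # Copy the site content into the host directory the WordPress
    # container mounts as its shared volume (path is an assumption)
    scp -i ~/.ssh/mydevops.pem -r wp-content/ ubuntu@<node-ip>:/wordpress/wp-content/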

We also installed a backup plugin for WordPress to automatically back up all content to our Dropbox.

It sounds like a bit of overkill, but I cannot stress enough how important the term “backup” becomes when using containers. If you make a change to the application in the container and that container goes away, you lose your configuration. If you make a change to your content in WordPress and the WordPress container goes down, you lose the application, not the content. However, if the node goes away in AWS, the files that made up your site are gone with it. GitHub, Docker Hub, and Dropbox are our best friends when it comes to containers.
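Two habits keep those failure modes survivable, sketched here with the hypothetical names used earlier: commit and push every configuration change, and keep content on a volume that outlives the container (though not the node, hence the off-host backups):

    # Persist a configuration change by committing and pushing a new image version
    docker commit wp mydevops/wordpress:1.1
    docker push mydevops/wordpress:1.1

    # Keep content on a host-mounted volume so it survives container restarts
    docker run -d --name wp -p 80:80 --link wpdb:mysql \
      -v /srv/wp-content:/var/www/html/wp-content \
      mydevops/wordpress:1.1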


