
“There are many ways to provide for failover and clustered services.  One very specific way to achieve high availability that allows for leap-frog release cycles is blue-green deployments.  The idea is to have a mirrored setup where release code runs on one environment and pre-release code runs on the other.  This creates an environment that provides rollback, reliability, failover and a host of other capabilities.”

“Blue-green deployment allows you to upgrade production software without downtime. You deploy the new version into a copy of the production environment and change routing to switch.”

Source: BlueGreenDeployment
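
The “change routing to switch” step is the heart of it.  Here is a sketch of one way to do that switch – the nginx paths and file names are purely illustrative, not from the source:

# point the front end at the green environment; blue stays warm for rollback
ln -sfn /etc/nginx/upstreams/green.conf /etc/nginx/upstreams/active.conf
nginx -s reload

Rolling back is the same move with the colors reversed.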


Posted:

Categories: Uncategorized

“Another quick reference card on Continuous Integration Patterns.  Automation of builds is not a new practice, so why not learn how to describe and develop it consistently?”

Continuous Integration is the process of building software with every change committed to a project’s version control repository. CI can be explained via patterns (i.e., effective solutions to a problem in a particular context) and anti-patterns (i.e., ineffective approaches sometimes used to “fix” the problem) associated with the process. This DZone Refcard will walk you through 40 different Patterns and Anti-patterns associated with CI and expand the notion of CI to include concepts such as Deployment and Provisioning. The end result is learning whether you are capable of delivering working software with every source change.

Source: Continuous Integration – DZone – Refcardz


Posted:

Categories: Uncategorized

I have never been a big believer in patterns for software development.  As I have grown in my understanding of tools and how consistency plays a part in velocity, it has become more apparent to me that practical/pragmatic patterns are a good thing.  Here is one article I have read recently regarding SCM patterns.  Not rocket science, but it’s undeniable how much redundant “discovery” takes place when teams believe they are reinventing the wheel.

“Software Configuration Management can mean the difference between clarity and confusion in the development cycle. The 16 patterns discussed in this card together serve as potential ways to increase team agility, and they are further enhanced by the inclusion of some general guidelines for improved SCM, as well as a list of useful resources. These patterns are described in further detail in Steve Berczuk’s book, Software Configuration Management Patterns: Effective Teamwork, Practical Integration.”

Source: Software Configuration Management Patterns – DZone – Refcardz


Posted:

Categories: Uncategorized

What is BusyBox?  The Swiss Army Knife of Embedded Linux.  Coming in somewhere between 1 and 5 MB in on-disk size (depending on the variant), BusyBox is a very good ingredient to craft space-efficient distributions. BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, t…

During the creation of the preceding posts on Kubernetes, CoreOS, and Docker, I kept encountering something called ‘BusyBox’.

Many sites assume you know what this is.  Let me explain, just in case you’re not entirely sure.

Wouldn’t it be great to have a Linux box that fits inside a single executable and is less than 5 MB in size?  That’s BusyBox.

The beautiful thing about where we are in container education is that we have found an even easier and more consistent way to get, use and extend BusyBox… watch carefully…

docker run -it --rm busybox

That’s it – you’re dropped into a ‘shell’ and can run most Linux commands (with a limited subset of switches for some).  It sure beats downloading Cygwin for quick and dirty jobs.
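
For a taste, here are a few things you can do once inside (the “/ #” prompt is BusyBox’s shell; these commands are just examples of what is available):

/ # ls /bin | head -n 5
/ # wget -q -O - http://example.com
/ # exit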


Posted:

Categories: Uncategorized

Guestbook Example. This example shows how to build a simple multi-tier web application using Kubernetes and Docker. The application consists of a web front-end, a Redis master for storage, and a replicated set of Redis slaves, for all of which we will create Kubernetes replication controllers, pods, and services. If you are running a cluster in Google Container Engine (GKE), see instead the Guestbook Example for Google Container Engine.

Source: Guestbook Example

I had wanted to work with Kubernetes for a long time, and the wait is finally over.  I got my hands on instructions to build out a local Kubernetes cluster and then load Redis and an app onto it.

Very cool stuff.  I was impressed with the simplicity of the controller and service definitions.  I don’t fully understand it all yet, but it was very easy to put the concepts and their relationships together.
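
To give a flavor of the workflow, bringing up the Redis master went roughly like this (a sketch – I believe these are the file names the walkthrough uses, but treat them as illustrative):

kubectl create -f redis-master-controller.yaml   # the replication controller keeps the pod running
kubectl create -f redis-master-service.yaml      # the service gives it a stable cluster address
kubectl get pods,services                        # watch everything come up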

You start out with an empty Kubernetes cluster.  It’s essentially built atop Fedora with SaltStack as its delivery mechanism for all components.  It was nice to see plenty of Python and CherryPy (an old friend).

As with any SaltStack deployment you have a master and a minion with secured keys exchanged.  I had 3 nodes appear in my VirtualBox (e1, c1, w1): an etcd node (for key/value pairs), a controller and a worker.  I think the master was pushed out onto the controller and the minion was the worker?  Not sure of the mappings yet.  I also saw ‘flannel’ mentioned, which turns out to be the overlay network for the pods rather than the controller itself.

Getting Kubernetes installed was difficult due to a bug whereby SaltStack couldn’t be found via curl.  I found the answer to this in a forum – replace the curl statement in the provision-master and provision-minion scripts and re-run kube-up.sh.

Then, following the demo, I found a missing instruction.  To bring this up on localhost you need to forward an IP from your local host to the guest app pod.  This might be wrong, but it was the only way I could get access to the guest app from a browser.
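
One way to do that forwarding (a sketch – “frontend-abcde” stands for whatever pod name “kubectl get pods” reports):

kubectl port-forward frontend-abcde 8080:80   # local port 8080 -> port 80 on the pod

Then browse to http://localhost:8080.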

Kubernetes claims to be inherently more secure than Docker.  I don’t completely understand that claim yet (other than through the documentation they offer).  I plan to find out soon.

One of the items I am not fully understanding here is how to serve up content from an external file system/mount.  I need to work on this because I use Apache from time to time and I like having mounted content.  The problem I am seeing with Kubernetes is “how does the content in a path get shared amongst pods/members?”  I will have to test this out more.
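
From what I have read so far, a hostPath volume mounts a directory from whichever node the pod lands on, so it is node-local; sharing the same content across pods on different machines would need a network-backed volume such as NFS.  A sketch of the hostPath flavor (the image, names and paths are hypothetical):

cat <<'EOF' > web-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: apache
    image: httpd
    volumeMounts:
    - name: content
      mountPath: /usr/local/apache2/htdocs   # where httpd serves from
  volumes:
  - name: content
    hostPath:
      path: /var/www/content                 # a directory on the node itself
EOF
kubectl create -f web-pod.yaml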

All-in-all, very cool technology.  I am enjoying experimenting with it.


Posted:

Categories: PaaS

Getting Started with etcd. etcd is an open-source distributed key value store that provides shared configuration and service discovery for CoreOS clusters. etcd runs on each machine in a cluster and gracefully handles master election during network partitions and the loss of the current master. Application containers running on your cluster can read and write data into etcd. Common examples are storing database connection details, cache settings, feature flags, and more. This guide will walk you through a…

Source: Getting Started with etcd on CoreOS

 

Today I got a better understanding of how to use the etcd tool for CoreOS.  As soon as I read the description of the component I thought “not another ZooKeeper!”  Well, I am happy to say this is a much easier tool to use than ZooKeeper.  The use case is the same – storage of key/value pairs.

I followed the tutorial and learned how to set, get, remove and watch a variable.  There was really no effort needed to figure out how to use this across the three-node cluster we have.
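
The basic verbs, roughly as the tutorial presents them:

etcdctl set /message Hello    # create or update a key
etcdctl get /message          # read it back -> Hello
etcdctl watch /message        # block until the key next changes
etcdctl rm /message           # delete it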

In one window I logged into core-01 and in another window, core-02.  I executed “etcdctl set /message Hello” on core-01.  This sets the key/value pair in the shared directory service for the cluster.  From core-02 I issued “etcdctl get /message” and “Hello” was returned.  Very neat and easy.  Essentially, etcdctl is a shortcut for using curl with all its switches and values; it makes this very specific purpose much simpler.
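
For the curious, this is roughly what etcdctl is saving you from typing (etcd’s v2 HTTP API – the port may be 4001 on older CoreOS releases):

curl -L http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello"   # etcdctl set
curl -L http://127.0.0.1:2379/v2/keys/message                          # etcdctl get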

One of the more advanced uses of etcdctl is watching a key for a change and then taking some action based upon it.  Once again, very easy.  On core-01 issue:

etcdctl exec-watch --recursive /foo-service -- sh -c 'echo "\"$ETCD_WATCH_KEY\" key was updated to \"$ETCD_WATCH_VALUE\" value by \"$ETCD_WATCH_ACTION\" action"'

On core-02:

etcdctl set /foo-service/container3 localhost:2222

You will see on core-01, instantly, the message:

"/foo-service/container3" key was updated to "localhost:2222" value by "set" action

 

Couple etcdctl with systemd via fleet and now you have an interactive cluster that can take action based on values as they are set – parameters indicating increased load, say, or slow response time from an app.  The possibilities are endless.

Next I want to get to rkt (rocket).


Posted:

Categories: PaaS

Launching Containers with fleet. fleet is a cluster manager that controls systemd at the cluster level. To run your services in the cluster, you must submit regular systemd units combined with a few fleet-specific properties. If you’re not familiar with systemd units, check out our Getting Started with systemd guide. This guide assumes you’re running fleetctl locally from a CoreOS machine that’s part of a CoreOS cluster. You can also control your cluster remotely. All of the units referenced in this blog…

Source: Launching Containers with fleet

Here is the second in the series of gearing up for Kubernetes.  I wanted to learn how systemd and fleet work together to get containers spread throughout a cluster.  If you understand how containers run in Docker (required reading) then you should see the power of this simple toolset.

First we build a service file (a unit).  The service file is essentially a case statement with pre-reqs and conditionals.  If you have ever started or stopped a service on a Linux client, then it’s easy to relate to what this service file does – it is essentially building a Linux service.  Instead of the service running other binaries (which it can also do), it executes docker commands (kill, rm, pull, run).
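
A minimal sketch of such a unit, modeled on the busybox example from the fleet guide (names are illustrative):

[Unit]
Description=MyApp
After=docker.service
Requires=docker.service

[Service]
# the leading '-' tells systemd to ignore failures (nothing to kill or rm on the first run)
ExecStartPre=-/usr/bin/docker kill busybox1
ExecStartPre=-/usr/bin/docker rm busybox1
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name busybox1 busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop busybox1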

When you are finished writing a service, you use fleet to distribute it.  Either you ask fleet to spawn one or more instances of the service anywhere it likes, or you can instruct fleet to filter where it runs based on metadata retrieved from the machines that are in the CoreOS cluster.  In addition to this control feature, you can create a global service that, when run, spreads out onto each of the machines in the cluster, providing high availability.
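
Distribution itself is a couple of fleetctl commands; an [X-Fleet] section in the unit is where a MachineMetadata filter or Global=true would go:

fleetctl start myapp.service   # schedule the unit somewhere in the cluster
fleetctl list-units            # see which machine it landed on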

Neat, simple and powerful.  I am going to move on to playing with rkt (rocket) next.  Stay tuned.


Posted:

Categories: PaaS

CoreOS creates and maintains open source projects for Linux Containers.

Source: CoreOS is Linux for Massive Server Deployments

 

I have started to experiment with Kubernetes in my off time.  First things first, I need to know a little more about CoreOS.  The folks at CoreOS have made it exceedingly easy to pull the system down and get it running.

Simply change config.rb to set the number of instances, change the user-data to include the etcd discovery URL for the shared directory where your key/value pairs will exist for the cluster, and away you go.

I now have a 3 node CoreOS cluster running inside vagrant/virtualbox.  I plan to test out some container distribution to see how quickly one can recover from loss of a system.

If you would like to follow along with me, you need VirtualBox and Vagrant to start with, and then clone the CoreOS Vagrant repository.  Tweak the user-data and config.rb files and then “vagrant up” as always.
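
The whole warm-up, as I remember it (a sketch – grab a fresh discovery token for your own cluster):

git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant
cp config.rb.sample config.rb     # set $num_instances=3 for a 3-node cluster
cp user-data.sample user-data     # paste in a token from https://discovery.etcd.io/new
vagrant up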


Posted:

Categories: Uncategorized


