Getting Started with etcd

etcd is an open-source distributed key value store that provides shared configuration and service discovery for CoreOS clusters. etcd runs on each machine in a cluster and gracefully handles master election during network partitions and the loss of the current master. Application containers running on your cluster can read and write data into etcd. Common examples are storing database connection details, cache settings, feature flags, and more.

Source: Getting Started with etcd on CoreOS


Today I got a better understanding of how to use the etcd tool for CoreOS.  As soon as I read the description of the component I thought, "not another ZooKeeper!"  Well, I am happy to say this is a much easier tool to use than ZooKeeper.  The use is the same: storage of key/value pairs.

I followed the tutorial and learned how to set, get, remove and watch a variable.  There was really no effort needed to figure out how to use this across the three node cluster we have.

In one window I logged into core-01 and in another window, core-02.  On core-01 I executed "etcdctl set /message Hello", which sets the key/value pair in the shared directory service for the cluster.  From core-02 I issued "etcdctl get /message" and "Hello" was returned.  Very neat and easy.  Essentially, etcdctl is a shortcut for using curl with all its switches and values, which makes this very specific task much simpler.
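The steps above, as a terminal session (this is a sketch assuming a running etcd cluster, using the etcdctl v2-era syntax from the CoreOS tutorial):

```shell
# On core-01: write a key into the cluster-wide store
etcdctl set /message Hello

# On core-02: the same key is visible from any node in the cluster
etcdctl get /message

# When finished, remove the key
etcdctl rm /message
```

Because every node runs etcd and they share a consistent view, it does not matter which machine you read or write from.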

One of the more advanced uses of etcdctl is watching a key for a change and then taking some action based upon it.  Once again, very easy.  On core-01 issue:

etcdctl exec-watch --recursive /foo-service -- sh -c 'echo "\"$ETCD_WATCH_KEY\" key was updated to \"$ETCD_WATCH_VALUE\" value by \"$ETCD_WATCH_ACTION\" action"'

On core-02:

etcdctl set /foo-service/container3 localhost:2222

You will see on core-01, instantly, the message:

"/foo-service/container3" key was updated to "localhost:2222" value by "set" action


Couple etcdctl with systemd via fleet and you have an interactive cluster that can take action based on stored values, such as parameters indicating increased load or slow response time from an app.  The possibilities are endless.

Next I want to get to rkt (rocket).


Categories: PaaS

Launching Containers with fleet

fleet is a cluster manager that controls systemd at the cluster level. To run your services in the cluster, you must submit regular systemd units combined with a few fleet-specific properties. If you're not familiar with systemd units, check out our Getting Started with systemd guide. This guide assumes you're running fleetctl locally from a CoreOS machine that's part of a CoreOS cluster. You can also control your cluster remotely.

Source: Launching Containers with fleet

Here is the second post in the series of gearing up for Kubernetes.  I wanted to learn how systemd and fleet work together to get containers spread throughout a cluster.  If you understand how containers run in Docker (required reading), then you should see the power of this simple toolset.

First we build a service file (a unit).  The service file is essentially a case statement with prerequisites and conditionals.  If you have ever started or stopped a service on a Linux client, then it's easy to see what this service file does: it is essentially building a Linux service.  Instead of the service running other binaries (which it can also do), it executes docker commands (kill, rm, pull, run).
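A minimal sketch of such a unit, in the style of the CoreOS fleet guide (the name myapp and the busybox image are illustrative, not from this post):

```ini
[Unit]
Description=MyApp container
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
# Remove any stale container (the leading "-" tolerates failure),
# pull the image, then run it in the foreground
ExecStartPre=-/usr/bin/docker kill myapp
ExecStartPre=-/usr/bin/docker rm myapp
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name myapp busybox /bin/sh -c "while true; do echo Hello; sleep 1; done"
ExecStop=/usr/bin/docker stop myapp
```

The kill/rm/pull/run sequence mentioned above maps directly onto the ExecStartPre and ExecStart directives.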

When you are finished writing a service, you use fleet to distribute it.  Either you ask fleet to spawn one or more instances of the service anywhere it likes, or you instruct fleet to filter where it runs based on metadata from the machines in the CoreOS cluster.  In addition to this control feature, you can create a global service that, when run, spreads onto each machine in the cluster, providing high availability.
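A sketch of that workflow with fleetctl (assumes a unit file like myapp.service exists locally; the [X-Fleet] options shown in comments are fleet's scheduling properties):

```shell
# Let fleet schedule the unit on any machine in the cluster
fleetctl start myapp.service

# To filter where a unit runs, its file can carry an [X-Fleet] section, e.g.:
#   [X-Fleet]
#   MachineMetadata=region=us-east
#
# To run a copy on every machine (high availability):
#   [X-Fleet]
#   Global=true

# See which machines the units landed on
fleetctl list-units
```

The same unit file works unchanged; only the [X-Fleet] section changes how fleet places it.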

Neat, simple and powerful.  I am going to move onto playing with rkt (rocket) next.  Stay tuned.





myDev-Ops