20 Jan 13

Be The Master Of Your Minions (An Introduction To Salt)

Happy new year, fzysqr readers! What better way could we possibly celebrate the new year than by trying a new infrastructure automation system? Late last year, I began rolling out Salt, a.k.a. SaltStack (for SEO help, I’m told).

Background

Having spent quite a bit of time writing Chef recipes, I realized that my least favorite part of Chef is dealing with Chef Server. Managing a Chef Server install seems like a full-time job. I suppose that is why Opscode is in business. But hosted Chef subscriptions cost money, and I’m cheap. So instead, I just wailed in anguish every time Chef became unresponsive and required fiddling with RabbitMQ or CouchDB to get going again.

When the time came to again choose an infrastructure management tool, I decided to look around. After a friend recommended Salt, I took it for a spin and was immediately hooked. Who doesn’t like having an army of minions to boss around (“slave” nodes are called minions in Salt)? Salt still has a centralized server, but the Master is much simpler to install and maintain. It also communicates securely with minions by default (I’m looking at you, Chef!). Perhaps the biggest draw for me was that Salt is written in Python, and I simply find writing custom modules and new Salt states to be easier. The community around Salt is fired up too! Lots of good folks willing to help on IRC.

As I mentioned above, Salt is easy to install and the docs are solid. So I’ll just skip to some fun examples of how you can use it to do useful things.

Salt States

Salt works by having a collection of states. Each state is a collection of Salt State Files (SLS files). You use one or more SLS files to build out a state tree. The top.sls file provides the root node of the tree and binds the states together. A simple version might look something like this:

base:
  '*':
    - ntp
    - users
    - newrelic
    - ec2
  'jenkins*':
    - git
    - node
    - tools.build
  'appserver*':
    - node
    - runit
    - myApp.web
  'dbserver*':
    - myApp.database

In this case, base is the root node of the state tree and we only have one environment. Salt can also have multiple environments, letting you target states separately in development and production. By using globs or regular expressions, you specify which states each minion should enforce. In my example, I have a core set of services shared across my network (ntp, users, newrelic, and ec2) and then layer on more specific states for a few of the minion types.
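
A multi-environment top.sls might look something like this (just a sketch; it assumes the master’s file_roots config defines matching dev and prod environments):

base:
  '*':
    - ntp
dev:
  'appserver*':
    - myApp.web
prod:
  'appserver*':
    - myApp.web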

Each state is represented by a directory in your configuration root with one or more files inside. If there is an init.sls file in a directory, it will be used automatically when you include the state by name (ntp). Other files in the directory can be referenced using dot notation; tools.build calls tools/build.sls.
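
To make that concrete, here is roughly how the configuration root for the example above might be laid out (assuming the default file_roots of /srv/salt):

/srv/salt/
    top.sls
    ntp/
        init.sls
        ntp.conf.ubuntu.jinja
        ntp.conf.redhat.jinja
    tools/
        build.sls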

Example State

Let’s check out the ntp state. The purpose of my ntp state is to install ntpd on each minion and configure it to use the same set of time servers:

ntp:
  pkg:
    - installed
  service:
    {% if grains['os'] == 'CentOS' or grains['os'] == 'RedHat' %}
    - name: ntpd
    {% endif %}
    - running
    - watch:
      - file: /etc/ntp.conf
  file.managed:
    {% if grains['os'] == 'Debian' or grains['os'] == 'Ubuntu' %}
    - name: /etc/ntp.conf
    - source: salt://ntp/ntp.conf.ubuntu.jinja
    {% elif grains['os'] == 'CentOS' or grains['os'] == 'RedHat' %}
    - name: /etc/ntp.conf
    - source: salt://ntp/ntp.conf.redhat.jinja
    {% endif %}
    - mode: 644
    - template: jinja
    - defaults:
          servers: ['0.pool.ntp.org',
                  '1.pool.ntp.org',
                  '2.pool.ntp.org',
                  '3.pool.ntp.org']
    - require:
      - pkg: ntp

SLS states look a bit funky, but they are quite easy to read once you understand how they work. First off, each one is actually a jinja template that is processed at runtime and rendered down to yaml. This lets you use python constructs to dynamically generate parts of the yaml as needed. In this case, the template generates tweaked configs for Debian vs. RedHat.
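
For instance, on a CentOS minion the jinja conditionals render away before Salt evaluates the yaml, so the service block above effectively becomes:

  service:
    - name: ntpd
    - running
    - watch:
      - file: /etc/ntp.conf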

Let’s dissect this template line by line.

ntp:

Here we have the ID of the state. This value will automatically get passed to each of the called modules as the name parameter (unless we override it, as we do below).

  pkg:

The next level is the module. In this case, we are referencing the package module, pkg.

    - installed

Inside the module declaration, the first word is the method we want to call, installed. This is the same as specifying the method using dot notation (pkg.installed).
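
The same declaration can also be written using dot notation directly; this sketch is equivalent to the short form above:

ntp:
  pkg.installed: []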

  service:
    - running

Here we have a new module, service, and we are calling the running method. Notice that we did not have to specify the ntp name again. We could have, but the yaml hierarchy does it for us.
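
Had we spelled the name out explicitly, the equivalent would be:

  service:
    - running
    - name: ntp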

    {% if grains['os'] == 'CentOS' or grains['os'] == 'RedHat' %}
    - name: ntpd
    {% endif %}

Next, we dynamically change the name from ntp to ntpd if we are on a RedHat-based OS. This is because the service name is different from the yum package name on RedHat.

    - watch:
      - file: /etc/ntp.conf

Finally, we instruct the service to watch a specific file. The service will restart whenever the contents of this file change.

  file.managed:
    {% if grains['os'] == 'Debian' or grains['os'] == 'Ubuntu' %}
    - name: /etc/ntp.conf
    - source: salt://ntp/ntp.conf.ubuntu.jinja
    {% elif grains['os'] == 'CentOS' or grains['os'] == 'RedHat' %}
    - name: /etc/ntp.conf
    - source: salt://ntp/ntp.conf.redhat.jinja
    {% endif %}
    - template: jinja
    - mode: 644

Our final module call is the most complicated. It specifies that Salt should create a file based on one of two templates, depending on the OS. The templates are ntp.conf files, parameterized with jinja.

    - defaults:
          servers: ['0.pool.ntp.org',
                  '1.pool.ntp.org',
                  '2.pool.ntp.org',
                  '3.pool.ntp.org']

Here we actually pass the target ntp servers to the template engine so that the final output is our custom ntp.conf.
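
The conf templates themselves are not shown here, but a minimal sketch of what ntp.conf.ubuntu.jinja might contain (hypothetical; a real file would include the rest of a standard ntp.conf) is:

# hypothetical sketch of ntp/ntp.conf.ubuntu.jinja
driftfile /var/lib/ntp/ntp.drift
{% for server in servers %}
server {{ server }}
{% endfor %}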

    - require:
      - pkg: ntp

Finally, we tell Salt that this state cannot be enforced until the package has been installed. Because Salt’s execution order is not guaranteed to be linear, you really need to pay attention to adding appropriate require statements to your state declarations.

A word about infrastructure automation

It can be hard to decide whether a specific component of your infrastructure is “worth” automating. Sometimes the investment to automate a trivial setup can seem disproportionately high. I always automate anything in production (or destined for it); you never know when you will be rebuilding a production box in a hurry. The one exception to this rule is when my required recovery time objectives dictate the need for a fully preconfigured image (or when I am using AWS auto scaling). If the system is temporary or for test/dev, I will only invest in automation if there is an immediate ROI on my time.

What’s next?

Salt is great. I love it and will continue to recommend it to anyone who asks. However, if you are already invested in Chef or Puppet, it is probably not worth dumping all your code and starting over. If you are in the process of selecting a config management framework, I recommend reading through the docs, downloading Salt, and giving it a spin. I think you’ll like what you find.