[lug] CM for a small sysadmin.

Tyler Cipriani tyler at tylercipriani.com
Wed Oct 28 16:01:28 MDT 2015


I just did a quick bug search to verify, and I haven't heard of any
problems with the salt-minion or salt-master services causing any
serious resource consumption. Our deployment is fair-sized (close to
1000 nodes).

Also, anecdotally, I've run the salt-minion service on an old (first
model) raspberry pi and it cranked along.

I will say that, while we have salt deployed on all of these nodes for
remote parallel command execution (and some deployment tooling), we
actually run the puppet agent via cron for configuration management,
so we're not driving salt very hard the vast majority of the time.

The problems I've had mostly stem from the fact that, past a certain
point, commands run "asynchronously", except sometimes they don't, and
the circumstances under which you get one behavior vs. the other are
undocumented.

As an example, if I try to restart a service on all nodes, some
minions will check back in, while other minions unilaterally decide,
based on a set of opaque criteria, that the restart is taking too long
and **don't** check back in, yet the salt command returns and tells me
everything is fine (zero exit status). Building custom salt returners
helps, but it's been frustrating: from the salt-master, a node that is
simply off and a node that has decided a restart is taking too long
look functionally the same.
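
For what it's worth, a custom returner is just a python module that
salt syncs to minions and calls with each job's return data. Here's a
minimal sketch (the module name and log path are hypothetical, and
the exact keys in the return dict can vary by salt version):

    # _returners/file_logger.py -- hypothetical minimal returner
    # sync with: salt '*' saltutil.sync_returners
    # use with:  salt '*' service.restart nginx --return file_logger
    import json
    import time

    def returner(ret):
        # ret is the per-minion job return dict; 'id' is the minion
        # that reported back, 'success' whether the call succeeded
        record = {
            'time': time.time(),
            'minion': ret.get('id'),
            'fun': ret.get('fun'),
            'jid': ret.get('jid'),
            'success': ret.get('success'),
            'return': ret.get('return'),
        }
        with open('/var/log/salt/returns.log', 'a') as log:
            log.write(json.dumps(record) + '\n')

Comparing the minion ids logged per jid against the full minion list
is one way to spot the nodes that silently gave up.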

On Wed, Oct 28, 2015 at 2:08 PM, Quentin Hartman <qhartman at gmail.com> wrote:
> Have you had any performance issues using saltstack? I was using a
> monitoring framework for Ceph (their Calamari tool) to monitor my ceph
> cluster and it's based around Salt. It utterly crushed the node I was trying
> to use as my monitoring host, and it had a measurable impact on my ceph
> nodes as well. It appeared that salt itself was the culprit, so I've been
> curious if that sort of overhead is something anyone else has seen.
>
> On Wed, Oct 28, 2015 at 2:02 PM, Tyler Cipriani <tyler at tylercipriani.com>
> wrote:
>>
>> tl;dr: go with saltstack or ansible, my opinion, YMMV.
>>
>> I would advise staying away from chef and puppet unless you're using
>> this as a learning exercise. They're both ruby-based tools that
>> require some level of pre-deploy setup on the target node. Also, even
>> as a learning exercise it's worth noting that many larger organizations
>> are starting to move away from chef and puppet towards smaller, more
>> agile tooling.
>>
>> Having said that, this could easily be done in a serverless manner by
>> using chef-solo or puppet apply. Of those two options, I would tend to
>> choose puppet: it's a bit more declarative and (seemingly) harder to
>> break than chef (YMMV). The flip side is that puppet can be more of a
>> headache at the catalog "compile" stage.
>>
>> Today, if I were deploying a new configuration management system, I
>> would choose either ansible or saltstack. Both can be run without a
>> server, but there is a huge difference between their execution
>> models, even in serverless mode.
>>
>> Ansible, seemingly, dynamically generates python code that is then
>> scp'd to the target node and executed via
>> `python /tmp/[generated-temp-file].py`.
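>>
>> You can actually watch this happen: run an ad-hoc command with
>> `-vvv` and the verbose output shows the generated module file being
>> copied to a temp path on the target and executed with python:
>>
>>     ansible all -i inventory -m ping -vvv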
>>
>> Saltstack, by virtue of being installed on the target machine, does
>> not need any dynamic scripts scp'd to it: it can simply be passed a
>> "module" (a python script, written in a special way, that lives in a
>> special location) or an `sls` file, which is a yaml file full of
>> module calls. You can run something like `sudo salt-call --local
>> cmd.run 'echo "hi"'` and it will use the default shell to run that
>> command locally on the target.
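>>
>> A custom module is nothing fancy. A minimal sketch (module and
>> function names are hypothetical):
>>
>>     # /srv/salt/_modules/greeting.py -- custom execution module
>>     # sync to minions with: salt '*' saltutil.sync_modules
>>     def hello(name='world'):
>>         '''
>>         CLI Example: salt-call --local greeting.hello name=BLUG
>>         '''
>>         return 'hi {0}'.format(name)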
>>
>> Both Saltstack and Ansible are reasonable choices for a
>> configuration management system in 2015.
>>
>> Ansible is easier to set up by far (there is nothing to install on
>> the target and all that is needed is ssh access); however, it is
>> slower and less scalable (although I know they use it at Microsoft and
>> Twitter in some capacity). Also noteworthy is that it was acquired by
>> RedHat recently, so if you're looking for tight integration with any
>> RedHat-related things, that may be a factor.
>>
>> Saltstack is more difficult to get set up correctly. I've had some
>> problems using it in server mode with ZeroMQ. That being said, it is
>> (in theory) much more scalable, and it's _actually possible_ to define
>> sudoer rules that give access to certain salt modules (say, a
>> deployment system or a package update system), whereas this is
>> impossible to do with Ansible since it is just running a python
>> script. That is, if you tell ansible to use sudo, it will run:
>> `ssh [host] -- "sudo python [generated-scp'd-python-script].py"`
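>>
>> Concretely, the sudoers approach looks something like this (the user
>> and state names are hypothetical):
>>
>>     # /etc/sudoers.d/deploy
>>     # 'deploy' may run exactly this salt state and nothing else
>>     deploy ALL=(root) NOPASSWD: /usr/bin/salt-call state.sls deploy
>>
>> You can't write an equivalent rule for ansible, because the command
>> being sudo'd is a freshly generated temp file on every run.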
>>
>> Having said all this, for a single server, I'd use ansible. If I
>> didn't know ansible, I'd use python (and if I didn't care about
>> ssh-super-secure-lockdown, I'd probably use Fabric with python). If I
>> didn't know python, I'd use bash :)
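>>
>> For the Fabric route, a fabfile is just python (Fabric 1.x API; the
>> task name is hypothetical):
>>
>>     # fabfile.py -- run with: fab -H web1,web2 restart_nginx
>>     from fabric.api import run, sudo
>>
>>     def restart_nginx():
>>         # executes over plain ssh as your user; sudo() escalates
>>         sudo('service nginx restart')
>>         run('service nginx status')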
>>
>> A couple of good blog posts:
>> - http://jensrantil.github.io/salt-vs-ansible.html
>> - http://ryandlane.com/blog/2014/08/04/moving-away-from-puppet-saltstack-or-ansible/
>>
>>
>> On Wed, Oct 28, 2015 at 12:33 PM, Kevin Fenzi <kevin at scrye.com> wrote:
>> > On Wed, 28 Oct 2015 13:07:52 -0600
>> > Dan Ferris <dan at usrsbin.com> wrote:
>> >
>> >> Ansible.  It's by far the easiest of the bunch.
>> >
>> > I'd second this. It's very easy to ramp up and understand and pretty
>> > powerful.
>> >
>> > You might try some simple example service or setup and test out the
>> > various options yourself though and see which one fits best with how
>> > you work.
>> >
>> > kevin
>> >
>
>
>
> _______________________________________________
> Web Page:  http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
> Join us on IRC: irc.hackingsociety.org port=6667 channel=#hackingsociety

