Goals


Before Starting

Assumptions:

Wording:

Installation

Installing a Puppet Agent

Install sysutils/puppet7.

If you are completely new to Puppet, take some time to run puppet resource and inspect how Puppet sees your system:

% # Show all packages
% puppet resource package

% # Show a specific package
% puppet resource package zsh

% # Same for services
% puppet resource service
% puppet resource service sshd

% # Same for files (you can't list all files for obvious reason)
% puppet resource file ~/.zshrc

% # Same for users, note that when you are root you may have access to more data (e.g. password)
% puppet resource user $USER
% sudo puppet resource user $USER

The output of the puppet resource command uses the Puppet syntax. This command is convenient if you do not know all parameters by heart, and quicker than locating the exact part of the documentation.
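
If you want the reference documentation for a resource type without leaving the terminal, puppet describe can print it:

% # List all known resource types
% puppet describe --list

% # Show the documentation and parameters of a specific type
% puppet describe service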

The puppet resource command can also be used to alter the state of resources:

% sudo puppet resource service sshd ensure=stopped

Managing resources one by one is not very handy. You can copy the output of puppet resource into a file, update it, and run puppet apply to apply the changes in a single command run:

% cat > manifest.pp << EOF
service { 'sshd':
  ensure => 'stopped',
}

user { 'romain':
  ensure => 'present',
  shell  => '/usr/local/bin/zsh',
}
EOF
% sudo puppet apply manifest.pp

Current status

We have installed a Puppet Agent on a node and played with it, but nothing more will happen.

Installing a Puppet Server

Install sysutils/puppetserver7.

Enable and start the puppetserver service. Either do this manually as you usually do on a FreeBSD host, or use puppet resource as seen above:

# puppet resource service puppetserver ensure=running enable=true
Notice: /Service[puppetserver]/ensure: ensure changed 'stopped' to 'running'
service { 'puppetserver':
  ensure => 'running',
  enable => 'true',
}

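For reference, doing the same thing the usual FreeBSD way would look like this (assuming the rc script installed by the port is named puppetserver):

# sysrc puppetserver_enable=YES
# service puppetserver start
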
The Puppet Server is now listening on port 8140. Ensure your firewall allows your nodes to reach this port on the Puppet Server.

Current status

We have installed a Puppet Server on a node. The service is started, but for now, nothing will happen.

Connecting a Puppet Agent to the Puppet Server

Let's start the Puppet Agent in test mode, telling it to attempt to get a signed certificate every 10 seconds. Since the agent is starting for the first time, it will generate a private key and a certificate signing request (CSR). This CSR will be sent to the Puppet Server, and the agent will regularly check the Puppet Server for a signed certificate:

# puppet agent --test --waitforcert 10
Info: Creating a new SSL key for agent.example.com
Info: Caching certificate for ca
Info: csr_attributes file loading from /usr/local/etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for agent.example.com
Info: Certificate Request fingerprint (SHA256): 32:33:46:79:75:DB:96:81:36:69:0D:9E:03:5C:68:71:52:EC:C5:CD:25:36:5A:6F:9C:44:A4:35:19:D3:5E:AE
Info: Caching certificate for ca
Notice: Did not receive certificate
Notice: Did not receive certificate
[...]

While the agent is waiting for its signed certificate in a loop, let's examine the pending requests on the Puppet Server:

# puppetserver ca list
   "agent.example.com" (SHA256) 32:33:46:79:75:DB:96:81:36:69:0D:9E:03:5C:68:71:52:EC:C5:CD:25:36:5A:6F:9C:44:A4:35:19:D3:5E:AE

We can check the certificate common name (CN) and fingerprint. They match what puppet(8) reported above, so let's generate the node certificate by signing the CSR:

# puppetserver ca sign --certname agent.example.com
Signing Certificate Request for:
  "agent.example.com" (SHA256) 32:33:46:79:75:DB:96:81:36:69:0D:9E:03:5C:68:71:52:EC:C5:CD:25:36:5A:6F:9C:44:A4:35:19:D3:5E:AE
Notice: Signed certificate request for agent.example.com
Notice: Removing file Puppet::SSL::CertificateRequest agent.example.com at '/var/puppet/ssl/ca/requests/agent.example.com.pem'

A few seconds later, our Puppet Agent gets its signed certificate, fetches its (empty) catalog, applies it (which does nothing) and terminates. Success!

Now, if we do not want to run the agent interactively, it might be a good idea to enable and start the puppet service:

# puppet resource service puppet ensure=running enable=true

Current status

Our Puppet Agents regularly query the Puppet Server for their configuration, but the Puppet Server does not deliver useful catalogs yet.

Discovering Puppet's Manifests

Managing Resources

Configuration Management is all about managing resources. Resources are described in manifests. Puppet has a Domain Specific Language (DSL) to write manifests.

You may wonder what to start with, so here are a few hints:

Good examples for starting:

Bad examples:

In this guide, we will start very small: let's configure the Message Of The Day (MOTD, motd(5)) to greet users when they log in.

The MOTD is configured in a single file, /etc/motd. Puppet has native support for a bunch of resource types, one of them being the file resource.

Let's create a file /usr/local/etc/puppet/code/environments/production/manifests/site.pp containing:

file { '/etc/motd':
  ensure  => file,
  owner   => 'root',
  group   => 'wheel',
  mode    => '644',
  content => "This is ${facts['fqdn']}\n\nThis node is managed by Puppet, changes may be overwritten.\n",
}

Notice how we are using variable interpolation to customize each node's MOTD with its name by using the fact fqdn.
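
You can list the facts available on a node with facter(8), either all of them at once or one at a time:

% facter
% facter fqdn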

All files in the /usr/local/etc/puppet/code/environments/production/manifests/ directory are called manifests; they are used by the Puppet Server to compute the catalog for a node.

When an agent gets its catalog, it will now contain rules to enforce the /etc/motd file's ownership, permissions and content. Instead of waiting for the agent to run (which by default happens every half hour), you can run it in test mode on an agent:

# puppet agent --test

See how the file is managed if you modify it, remove it, or change its permissions or ownership. Also note that if the file is as expected, nothing happens. With Puppet, you describe the state you want your resources to be in, not the steps required to bring these resources to that state. The idea that nothing happens when your resources are as expected is known as idempotency.

Before going on, let's manage another resource. All our nodes should be running the openssh daemon, so let's describe a service resource:

service { 'sshd':
  ensure => running,
  enable => true,
}

Current status

The Puppet Server provides basic but useful catalogs to the Puppet Agents. So far, all nodes get the same catalog and enforce the same configuration.

Distinguishing Nodes

Enforcing the same resources on all your nodes has limited value. What if we want to manage Bob's user account on only one of our nodes, alpha.example.com? Resources can be grouped by node. Let's add a user resource to our manifest, but in a way that it only gets enforced on the machine named alpha.example.com:

node 'alpha.example.com' {
  user { 'bob':
    ensure     => present,
    shell      => '/bin/tcsh',
    home       => '/home/bob',
    managehome => true,
    password   => '*',
  }
}

We could also have used the host part of the FQDN to identify the node, or a regular expression. For example, in a university, if each node is named according to its physical location (e.g. '<room-number>-<machine-number>'), it is possible to ensure that mono is installed on all nodes in room 'B21':

node /^B21-\d+$/ {
  package { 'mono':
    ensure => installed,
  }
}

You can start managing real things in your infrastructure to get at ease with these concepts. Just keep in mind that a node will only ever match a single 'node' section of the configuration file: we will see later how to share configuration across multiple machines. When you start to see repetition, move on to the next section where we will refactor our code so that it scales.

Current status

Our Puppet Agents regularly query the Puppet Server for their configuration, and the Puppet Server gives each of them a catalog suited to it.

Puppet Modules and the Forge

When we added configuration to manage the openssh server, we only ensured the sshd daemon was enabled and running. In the real world, we would also want to customize the service configuration, and while we could use a file resource to manage /etc/ssh/sshd_config, let's face it: you are not the first person interested in this, and smart people may already have packaged together resources to manage the whole openssh service in a convenient way. The Puppet Forge is a repository of contributed Puppet modules. Searching for ssh returns more than 400 modules, so we should find something that matches our needs.

By using a module rather than managing a bunch of resources, you add a layer of abstraction in your manifests, making them easier to handle. For example, the sshd service may be named ssh on another operating system, yet it is the same service you want to manage and you do not want different code to manage it.

In my setup, I use the awesome zleslie/ssh module by another FreeBSD developer. I will use it for the rest of the section, but feel free to choose another module if you find a better match for your environment.

Installing a module is super easy:

puppet module install zleslie-ssh
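
You can check which modules are installed, and where, with:

puppet module list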

The module is installed in /usr/local/etc/puppet/code/environments/production/modules/ssh, and we can adapt our manifest by removing the sshd service resource and adding the following to enforce strong cryptography and only permit key-based authentication:

class { 'ssh::server::config':
  authenticationmethods  => 'publickey',
  ciphers                => 'chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr',
  kexalgorithms          => 'curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256',
  log_level              => 'VERBOSE',
  macs                   => 'hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com',
  passwordauthentication => 'no',
  permitrootlogin        => 'without-password',
  useprivilegeseparation => 'sandbox',
}

class { 'ssh::server':
}

Explore the forge and find out which modules could be useful for your environment. In the next section, we will see how to handle your code (puppet manifests) and dependencies (puppet modules) efficiently. This will also help you and the other members of your team to work together more efficiently.

Control-repo

Since you are writing code to describe your configuration, it makes sense to use a Version Control System to track the changes you make. Let's use git(1) for this purpose! But instead of starting from an empty repository, let's have a look at the control-repo project.

This project is a template we can use as a base to handle our code. You should recognize the manifests directory. We have not yet used all the other directories present in the control-repo, but the site directory will help manage Roles & Profiles (next section) and the data directory will be used with Hiera, covered in another section. For now, let's clone this repository and add our code into the manifests directory of the control-repo.

% git clone https://github.com/puppetlabs/control-repo.git
% cd control-repo
% cp /usr/local/etc/puppet/code/environments/production/manifests/site.pp manifests
% git add .
% git commit -m'Import our configuration'
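
The control-repo also ships a Puppetfile, which r10k (introduced below) uses to install your module dependencies into each environment. To keep the zleslie-ssh module from the previous section, the Puppetfile should contain a line like this (you can optionally pin a version as a second argument):

mod 'zleslie-ssh'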

Note that the default branch is named production, not main. In fact, each branch in the control-repo will be extracted into the /usr/local/etc/puppet/code/environments/<branch name>/ directory by a tool named r10k. You will need r10k on the Puppet Server:

# pkg install rubygem-r10k

Create a git repository where convenient (where your team will pull and push your code), and add a post-commit hook that runs r10k (in this example, the git repository is on the same machine as the Puppet Server):

#!/bin/sh

branch=$(git rev-parse --abbrev-ref HEAD)

echo "Deploying environments ${branch} with r10k…"
sudo -u r10k r10k deploy environment ${branch} -vp

echo "Caching types for environments ${branch}"
sudo -u r10k puppet generate types --environment ${branch}

The last missing piece is the r10k configuration file. Create /usr/local/etc/r10k/r10k.yaml. Don't forget to update the control-repo git repository location:

# The location to use for storing cached Git repos
:cachedir: '/var/puppet/r10k/cache'

# A list of git repositories to create
:sources:
  # This will clone the git repository and instantiate an environment per
  # branch in /usr/local/etc/puppet/code/environments
  :code:
    remote: '/path/to/the/control-repo/.git'
    basedir: '/usr/local/etc/puppet/code/environments'

If everything is fine, changes pushed into the control-repo should be automatically deployed into sub-directories of /usr/local/etc/puppet/code/environments/.

Current status

You are exactly at the same point as before, but each change in your Puppet code can be tracked. You can also collaborate easily with your team by creating branches in git.
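
Since each branch of the control-repo becomes an environment of its own, you can test a change without touching production. A typical workflow looks like this (the branch name is just an example):

% git checkout -b testing
% $EDITOR manifests/site.pp
% git commit -am 'Try a change'
% git push origin testing

Once r10k has deployed the branch, run the agent once against that environment on a test node:

# puppet agent --test --environment testing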

Roles and Profiles

The Roles & Profiles pattern adds two layers of abstraction to your Puppet code. Each machine will have a single "role", and roles are namespaced in the "role" namespace, so example roles are role::development_machine, role::student_computer, role::laptop, role::mediacenter, role::backup_server. These names should match the purpose of the machines.

Each role will include one or more "profiles". A profile configures a given piece of technology, and is namespaced in the "profile" namespace, for example profile::openssh, profile::webserver, profile::java.

Last, each profile will describe your site-specific configuration for the piece of technology.

Previously, we configured SSH to use some enforced security settings. If we want to follow the Role & Profile pattern, this configuration should be put into a profile named profile::openssh. Create a profile/manifests/openssh.pp file in your control-repo, and move the ssh configuration there. Your final file should look like this:

class profile::openssh {
  class { 'ssh::server::config':
    authenticationmethods  => 'publickey',
    ciphers                => 'chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr',
    kexalgorithms          => 'curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256',
    log_level              => 'VERBOSE',
    macs                   => 'hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com',
    passwordauthentication => 'no',
    permitrootlogin        => 'without-password',
    useprivilegeseparation => 'sandbox',
  }

  class { 'ssh::server':
  }
}

You need to create roles. For the sake of this tutorial, we will create two: role::desktop and role::laptop, assuming your laptops and desktop computers have slightly different configurations. Let's create the role::laptop role in role/manifests/laptop.pp:

class role::laptop {
  include profile::openssh
}

Also create a role::desktop role (for now identical to the role::laptop role), and update your manifests/site.pp file to include a single role for each machine:

node 'lappy' {
  include role::laptop
}

node /^B21-\d+$/ {
  include role::desktop
}

Similarly, move the rest of your code into profiles, and include these profiles in roles. You may still think that some roles have a lot of duplicate code… Feel free to add a new role and use inheritance as described below:

# role/manifests/base.pp
class role::base {
  include profile::openssh
}

# role/manifests/laptop.pp
class role::laptop inherits role::base {
  include profile::wireless_tools
}

# role/manifests/desktop.pp
class role::desktop inherits role::base {
  include profile::something_that_needs_a_lot_of_ram
}

Inheritance is generally avoided in the Puppet world, but role inheritance is an exception that helps avoid code duplication.

If you want to take a 20-minute break, watch this video about Roles & Profiles. The end of the video is about the topic of the next section.

Current status

Once again, you are exactly at the same point as before, but your code should be cleaner, with little or no duplication.

Hiera

All nodes that use a single profile may not *always* need to be configured the exact same way. Let's say that SSH connections to all machines use public-key authentication only, but on a few nodes, public-key authentication as the root user should not be possible. You could create a copy of the profile::openssh profile, adjust it and assign a different role to these nodes, but that does not sound so good since most of the code would be duplicated. In fact, the Puppet language has conditional evaluation that helps you adjust your profiles under certain circumstances. In our case, let's open profile/manifests/openssh.pp and change its content so that the class accepts a Boolean saying whether or not logging in as root is allowed:

class profile::openssh (
  Boolean $allow_root = false,
) {
  if $allow_root {
    $permitrootlogin = 'without-password'
  } else {
    $permitrootlogin = 'no'
  }

  class { 'ssh::server::config':
    authenticationmethods  => 'publickey',
    ciphers                => 'chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr',
    kexalgorithms          => 'curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256',
    log_level              => 'VERBOSE',
    macs                   => 'hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com',
    passwordauthentication => 'no',
    permitrootlogin        => $permitrootlogin,
    useprivilegeseparation => 'sandbox',
  }

  class { 'ssh::server':
  }
}

Now, how can we change this $allow_root variable for a specific node? With Hiera! Just create a new file data/nodes/foo.example.com.yaml (where foo.example.com is the name of your machine) and write:

---
profile::openssh::allow_root: true

With the control-repo template, this should be enough for this machine to adjust its configuration to allow logging in as root.

Hiera is configured in hiera.yaml, which tells it where and how to look up configuration for nodes.
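
A minimal hiera.yaml, close to what the control-repo template provides, looks like this (the exact hierarchy in your copy may differ):

---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Common data"
    path: "common.yaml"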

Current status

Your Puppet code is cleanly organized in roles and profiles. When some specific bit of configuration is node-dependent, this special behavior is managed by Hiera.

Managing Facts

Hiera can use facts to find node-specific configuration. Your manifests can also use facts to adjust some settings. Facts are gathered by facter, but sometimes the facts provided by Puppet are not enough and you want to add a few more.

Adding static facts

You may want to add a fact for the customer who pays for a machine, or the datacenter the machine is located in. This information does not change. Facter has a way to create structured facts for this static data.

Let's create a file /usr/local/etc/facter/facts.d/country.yaml:

---
country: fr

Now, you can use $facts['country'] in your manifests, for example to fill in the MOTD.
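
For example, the /etc/motd resource from earlier could mention the new fact (just a sketch of the interpolation):

file { '/etc/motd':
  ensure  => file,
  content => "This is ${facts['fqdn']}, located in ${facts['country']}\n",
}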

Adding dynamic facts

Static facts are not a solution for things that change, for example humidity as read by a sensor. Facter also supports facts written in Ruby. Let's create a $facts['dwarf'] fact that randomly selects one of the seven dwarfs. Custom facts are best shipped in modules, but role and profile are modules… So let's write our custom fact in the site/profile/lib/facter/dwarf.rb file:

Facter.add(:dwarf) do
  setcode do
    ['Doc', 'Grumpy', 'Happy', 'Sleepy', 'Bashful', 'Sneezy', 'Dopey'].sample
  end
end

PuppetDB

https://puppet.com/docs/puppetdb

https://puppet.com/blog/introducing-puppetdb-put-your-data-to-work

Node collaboration (e.g. each node "announces" the data that must be backed up (export) and the backup system automagically adjusts its configuration to save these directories (collect); see for example puppet-bacula)
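
A minimal sketch of the export/collect mechanism (backup::directory is a hypothetical defined type, and exported resources require a working PuppetDB):

# On every node: export a resource describing what must be backed up
@@backup::directory { "/home on ${facts['fqdn']}":
  path => '/home',
}

# On the backup server: collect every exported backup::directory resource
Backup::Directory <<| |>>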

Don't https://gist.github.com/tnolet/7133083

PuppetDB Visualization / Dashboard

www/py-puppetboard

Orchestration with Choria

Choria is a modern replacement for Marionette Collective (also known as MCollective). MCollective used to be hard to deploy, mostly because of its flexibility: new users had to understand the consequences of their choices before deciding to use one tool or another in their setup. And each tool could be configured in different ways, with consequences for security and scalability. Choria attempts to fix this by proposing a one-size-fits-all configuration which is secure by default.


CategoryPorts CategoryHowTo
