Tune infrastructure nodes

Use these guidelines to configure your Puppet Enterprise (PE) installation to maximize use of available system resources (CPU and RAM).

PE consists of multiple services running on one or more infrastructure hosts. Services running on the same host share that host's resources. You can configure each service's settings to maximize use of system resources and optimize performance.

Each service's default settings are conservative, and your optimal settings depend on the complexity and scale of your infrastructure.

Configure these settings after you install PE, upgrade PE, or make changes to infrastructure hosts (such as changing existing hosts' system resources, adding new hosts, or adding or changing compilers).

Primary server tuning

These are the default and recommended tuning settings for your primary server or disaster recovery replica.

Compiler tuning

These are the default and recommended tuning settings for compilers running the PuppetDB service.

  • 4 cores, 8 GB RAM (Default): Puppet Server: 3 JRuby max active instances, 1536 MB Java heap, 384 MB reserved code cache. PuppetDB: 1 command processing thread, 819 MB Java heap, read maximum pool size 4, write maximum pool size 2.
  • 6 cores, 12 GB RAM (Default): Puppet Server: 4 JRuby max active instances, 2048 MB Java heap, 512 MB reserved code cache. PuppetDB: 1 command processing thread, 1228 MB Java heap, read maximum pool size 6, write maximum pool size 2.
  • 6 cores, 12 GB RAM (Recommended): Puppet Server: 4 JRuby max active instances, 3072 MB Java heap, 512 MB reserved code cache. PuppetDB: 1 command processing thread, 1228 MB Java heap, read maximum pool size 6, write maximum pool size 2.

Legacy compiler tuning

These are the default and recommended tuning settings for legacy compilers without the PuppetDB service.

  • 4 cores, 8 GB RAM (Default): Puppet Server: 3 JRuby max active instances, 2048 MB Java heap, 512 MB reserved code cache.
  • 4 cores, 8 GB RAM (Recommended): Puppet Server: 3 JRuby max active instances, 1536 MB Java heap, 288 MB reserved code cache.
  • 6 cores, 12 GB RAM (Default): Puppet Server: 4 JRuby max active instances, 2048 MB Java heap, 512 MB reserved code cache.
  • 6 cores, 12 GB RAM (Recommended): Puppet Server: 5 JRuby max active instances, 3840 MB Java heap, 480 MB reserved code cache.

The puppet infrastructure tune command

The puppet infrastructure tune command outputs optimized settings for Puppet Enterprise (PE) services based on recommended guidelines.

Running puppet infrastructure tune queries PuppetDB to identify processor and memory facts about your infrastructure hosts. The command outputs settings in YAML format for you to use in Hiera.
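
For example, the output might resemble the following Hiera data. This is a sketch only: the parameter names appear elsewhere on this page, but the values and the exact output format depend on your hardware.

  # Illustrative values only; your recommendations will differ
  puppet_enterprise::puppetserver_ram_per_jruby: 2048
  puppet_enterprise::master::puppetserver::jruby_max_active_instances: 4
  puppet_enterprise::puppetdb::command_processing_threads: 2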

This command is compatible with most standard PE configurations, including those with compilers, a replica, or standalone PostgreSQL.

You must run this command on your primary server as root. Using sudo for elevated privileges is not sufficient. Instead, start a root session by running sudo su -, and then run the puppet infrastructure tune command.

These options are commonly used with the puppet infrastructure tune command:
  • --current outputs existing tuning settings from the PE console and Hiera. This option also identifies duplicate settings declared in both the console and Hiera.
  • --memory_per_jruby <MB> outputs tuning recommendations based on specified memory allocated to each JRuby in Puppet Server. If you implement tuning recommendations using this option, specify the same value for puppetserver_ram_per_jruby.
  • --memory_reserved_for_os <MB> outputs tuning recommendations based on specified RAM reserved for the operating system.
  • --common outputs common settings, which are identical on several nodes, separately from node-specific settings.

For more information about the tune command, run puppet infrastructure tune --help.

Restriction: The puppet infrastructure tune command fails if environmentpath (in your puppet.conf file) is set to multiple environments. Comment out this setting before running this command. For details about this setting, refer to environmentpath in the open source Puppet documentation.

Tuning parameters

Configure tuning parameters to customize your PE service settings for optimum performance and hardware resource utilization.

Specify tuning parameters in Hiera for the best scalability and consistency. To learn more, refer to About Hiera in the Puppet documentation.
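
For example, a minimal sketch of declaring tuning parameters in Hiera. The file location is an assumption; put the keys wherever your hierarchy expects common data:

  # data/common.yaml (example location)
  puppet_enterprise::puppetserver_ram_per_jruby: 2048
  puppet_enterprise::profile::database::shared_buffers: 4096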

If you must use the PE console, add the parameter to the appropriate infrastructure node group using one of the following methods:
  • Specify puppet_enterprise::profile parameters (including java_args, shared_buffers, and work_mem) as parameters of their class.
  • Specify all other tuning parameters as configuration data.

How to configure PE explains the different ways you can configure PE parameters.

RAM per JRuby

The puppetserver_ram_per_jruby setting determines how much RAM is allocated to each JRuby instance in Puppet Server.

You might need to change this setting if you have complex Hiera code, many environments or modules, or large reports.

Tip: If your PuppetDB service runs on a compiler, this is a good starting point for tuning your infrastructure, because this value is factored into several other parameters, including JRuby max active instances and Java heap allocation on compilers running PuppetDB.
Console node group
PE Master
Parameter
puppet_enterprise::puppetserver_ram_per_jruby
Default value
512 MB
Accepted values
An integer representing a number of MB
How to calculate
You can usually achieve good performance by allocating around 2 GB per JRuby.
If 2 GB per JRuby is inadequate, changing the environment_timeout setting might help.
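
For example, to allocate 2 GB of RAM per JRuby in Hiera (the value follows the guideline above and is illustrative, not a universal recommendation):

  # Allocate 2 GB of RAM to each Puppet Server JRuby instance
  puppet_enterprise::puppetserver_ram_per_jruby: 2048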

JRuby max active instances

The jruby_max_active_instances setting controls the maximum number of JRuby instances allowed on Puppet Server and the number of plans that can run concurrently in the orchestrator. You can set it in multiple places.

Puppet Server jruby_max_active_instances

Console node group
If Puppet Server runs on the primary server: PE Master
If the PuppetDB service runs on compilers: PE Compiler
Parameter
puppet_enterprise::master::puppetserver::jruby_max_active_instances
Tip: This parameter is the same as the max_active_instances parameter in the pe-puppet-server.conf settings and in open source Puppet.
Default value
If Puppet Server runs on the primary server, the default value is the number of CPUs minus 1. The minimum is 1, and the maximum is 4.
If the PuppetDB service runs on compilers, the default value is the number of CPUs multiplied by 0.75. The minimum is 1, and the maximum is 24.
Accepted values
An integer representing a number of JRuby instances
How to calculate
As a conservative estimate, one JRuby process uses approximately 512 MB of RAM. For most installations, four JRuby instances are adequate.
Important: Because increasing the maximum number of JRuby instances also increases the amount of RAM used by Puppet Server, make sure to proportionally scale the Puppet Server Java heap size (java_args). For example, if you set jruby_max_active_instances to 4, set Puppet Server's java_args to at least 2 GB.
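
For example, a Hiera sketch that pairs four JRuby instances with a proportionally scaled heap, following the guidance above (values are illustrative):

  # Four JRubies at ~512 MB each => at least a 2 GB Puppet Server heap
  puppet_enterprise::master::puppetserver::jruby_max_active_instances: 4
  puppet_enterprise::profile::master::java_args:
    Xmx: '2048m'
    Xms: '2048m'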

Orchestrator jruby_max_active_instances

Running a plan consumes one JRuby instance. If a plan calls other plans, the nested plans use the parent plan's JRuby instance. JRuby instances are deallocated once a plan finishes running, and tasks are not affected by JRuby availability.

Console node group
PE Orchestrator
Parameter
puppet_enterprise::profile::orchestrator::jruby_max_active_instances
Default value
The default value is the orchestrator heap size (java_args) divided by 1024. The minimum is 1.
Accepted values
An integer representing a number of JRuby instances
How to calculate
Because the jruby_max_active_instances default value is derived from the orchestrator heap size (java_args), changing the orchestrator heap size automatically changes the number of JRuby instances available to the orchestrator. For example, setting the orchestrator heap size to 5120 MB allows up to five JRuby instances (or plans) to run concurrently.
If you notice poor performance while running plans, increase the orchestrator Java heap size instead of jruby_max_active_instances. However, keep in mind that allowing too many JRuby instances can reduce system performance, especially if your plans use a lot of memory.
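
For example, a Hiera sketch that raises the orchestrator heap to 5120 MB, which in turn allows up to five concurrent plans (5120 / 1024 = 5). The size is illustrative:

  # 5120 MB heap / 1024 => up to 5 orchestrator JRuby instances (plans)
  puppet_enterprise::profile::orchestrator::java_args:
    Xmx: '5120m'
    Xms: '5120m'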

JRuby max requests per instance

The jruby_max_requests_per_instance setting determines the maximum number of HTTP requests a JRuby handles before it's terminated. When a JRuby instance reaches this limit, it's flushed from memory and replaced with a fresh one.

Console node group
PE Master
Parameter
puppet_enterprise::master::puppetserver::jruby_max_requests_per_instance
Tip: This parameter is the same as the max_requests_per_instance parameter in the pe-puppet-server.conf settings and in open source Puppet.
Default value
100000
Accepted values
An integer representing a number of HTTP requests
How to calculate
More frequent JRuby flushing can help address memory leaks, because it prevents any one interpreter from consuming too much RAM. However, performance is reduced slightly each time a new JRuby instance loads. Therefore, set this parameter to get a new interpreter no more than once every few hours.
Requests are balanced across multiple interpreters running concurrently, so the lifespan of each interpreter varies.
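
As a rough worked example (the request volume is hypothetical): if each JRuby instance serves about 50,000 requests per hour, the default limit of 100000 triggers a flush every 2 hours. To stretch that interval to several hours, you might raise the limit in Hiera:

  # At ~50,000 requests per instance per hour, 300000 requests means each
  # JRuby is flushed roughly once every 6 hours
  puppet_enterprise::master::puppetserver::jruby_max_requests_per_instance: 300000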

Java heap

The java_args settings specify heap size, which is the amount of memory that each Java process can request from the operating system. You can specify a heap size for each PE service that uses Java, including Puppet Server, PuppetDB, the console, and the orchestrator.

Heap size is declared as a JSON hash containing a maximum (Xmx) and minimum (Xms) value. Usually, the maximum and minimum are the same so that the heap size is fixed, for example:
{ 'Xmx' => '2048m', 'Xms' => '2048m' }
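
In Hiera YAML, the same hash might be expressed as nested keys. This sketch uses the PuppetDB heap as an example; the size is illustrative:

  puppet_enterprise::profile::puppetdb::java_args:
    Xmx: '2048m'
    Xms: '2048m'
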
Puppet Server Java heap
Console node group: PE Master or PE Compiler
Parameter: puppet_enterprise::profile::master::java_args
Tip: puppet_enterprise::master::java_args and puppet_enterprise::master::puppetserver::java_args are the same, because profile::master filters down to master, which filters down to master::puppetserver.
Default value: 2 GB
PuppetDB Java heap
Console node group: If the PuppetDB service runs on compilers, set this parameter on the PE Compiler node group. Otherwise, set this parameter on the PE PuppetDB node group.
Parameter: puppet_enterprise::profile::puppetdb::java_args
Default value: 256 MB
Console services Java heap
Console node group: PE Console
Parameter: puppet_enterprise::profile::console::java_args
Default value: 256 MB
Orchestrator Java heap
Console node group: PE Orchestrator
Parameter: puppet_enterprise::profile::orchestrator::java_args
Default value: 704 MB

Puppet Server reserved code cache

The reserved_code_cache setting specifies the maximum space available to store the Puppet Server code cache during catalog compilation.

Console node group
If the PuppetDB service runs on compilers, set this parameter on the PE Compiler node group. Otherwise, set this parameter on the PE Master node group.
Parameter
puppet_enterprise::master::puppetserver::reserved_code_cache
Default value
If Puppet Server runs on your primary server: If total RAM is less than 2 GB, then the Java default is used. Otherwise, the default value is 512 MB.
If the PuppetDB service runs on compilers: The default value is the number of JRuby instances multiplied by 128 MB. The minimum is 128 MB, and the maximum is 2048 MB.
Accepted values
An integer representing a number of MB
How to calculate
JRuby requires an estimated 128 MB of cache space for each instance. To determine the minimum amount of space needed, multiply the number of JRuby instances by 128 MB.
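
For example, with 6 JRuby instances, the minimum cache space is 6 x 128 MB = 768 MB. A Hiera sketch with an illustrative value:

  # 6 JRuby instances x 128 MB each => 768 MB minimum code cache
  puppet_enterprise::master::puppetserver::reserved_code_cache: 768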

PuppetDB command processing threads

The command_processing_threads setting specifies how many command processing threads PuppetDB uses to sort incoming data. Each thread can process one command at a time.

Console node group
If the PuppetDB service runs on compilers, set this parameter on the PE Compiler node group. Otherwise, set this parameter on the PE PuppetDB node group.
Parameter
puppet_enterprise::puppetdb::command_processing_threads
Default value
If the PuppetDB service runs on compilers, the default value is the number of CPUs multiplied by 0.25 (with a minimum of 1 and a maximum of 3).
Otherwise, the default value is the number of CPUs multiplied by 0.5 (with a minimum of 1).
Accepted values
An integer representing a number of threads.
How to calculate
If the PuppetDB queue is backing up and you have CPU cores to spare, increasing the number of threads can help process the backlog more rapidly.
Don't allocate all of your CPU cores to command processing, because this can starve other PuppetDB subsystems of resources and decrease throughput.
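
For example, on an 8-core PuppetDB node with a persistent queue backlog, you might allocate half the cores to command processing (the value is illustrative):

  # 4 of 8 cores for command processing; the rest stay free for other
  # PuppetDB subsystems
  puppet_enterprise::puppetdb::command_processing_threads: 4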

PostgreSQL max connections

The max_connections setting determines the maximum number of concurrent connections allowed to the PE-PostgreSQL server. Configure it to accommodate connections from all infrastructure nodes running PuppetDB.

Console node group
PE Database
Parameter
puppet_enterprise::profile::database::max_connections
Default value
400
Accepted values
An integer representing the number of concurrent connections allowed. The minimum is 200.
How to calculate
Set the max_connections parameter to a number greater than the sum of read and write connections across all PuppetDB instances in your PE installation, including compilers and the primary server. The connection count from each instance should equal (command processing threads * 2) + number of JRuby instances. Rule out any underlying performance issues prior to adjusting max_connections.
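
As a worked example (the topology is hypothetical): a primary server with 2 command processing threads and 4 JRuby instances needs (2 x 2) + 4 = 8 connections, and three compilers each needing (1 x 2) + 4 = 6 add 18 more, for a total of 26. The default of 400 covers this comfortably; raise the value in Hiera only if your summed connection count approaches it:

  # Sum of ((command processing threads * 2) + JRuby instances) across all
  # PuppetDB instances, plus headroom; 400 is the default, shown for illustration
  puppet_enterprise::profile::database::max_connections: 400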

PostgreSQL shared buffers

The shared_buffers setting specifies the amount of memory the PE-PostgreSQL server uses for shared memory buffers.

Console node group
PE Database
Parameter
puppet_enterprise::profile::database::shared_buffers
Default value
The available RAM multiplied by 0.25, with a minimum of 32 MB and a maximum of 4096 MB
Accepted values
An integer representing a number of MB
How to calculate
The default value is suitable for most installations, but console performance might improve if you increase shared_buffers up to 40% of available RAM.
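
For example, on a database node with 16 GB of RAM, the default works out to the 4096 MB maximum, and raising shared_buffers toward 40% of RAM means roughly 6554 MB. The figures are illustrative; tune against observed console performance:

  # ~40% of 16 GB RAM (the default is 25%, capped at 4096 MB)
  puppet_enterprise::profile::database::shared_buffers: 6554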

PostgreSQL working memory

The work_mem setting specifies the maximum amount of memory used for queries before writing to temporary files.

Console node group
PE Database
Parameter
puppet_enterprise::profile::database::work_mem
Default value
Based on the following calculation:
(Available RAM / 1024 / 8) + 0.5
The minimum is 4 MB, and the maximum is 16 MB.
Accepted values
An integer representing a number of MB
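
As a worked example, assuming available RAM in the formula is expressed in MB (an assumption about units): a node with 64 GB of RAM gives (65536 / 1024 / 8) + 0.5 = 8.5, so work_mem defaults to roughly 8 MB, within the 4 MB to 16 MB bounds. A Hiera override might look like this (value illustrative):

  # Allow up to 8 MB per query operation before spilling to temporary files
  puppet_enterprise::profile::database::work_mem: 8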

PostgreSQL WAL disk space

The max_slot_wal_keep_size setting specifies the maximum allocated WAL disk space for each replication slot. This prevents the pg_wal directory from growing infinitely.

If you have set up disaster recovery, this setting prevents an unreachable replica from consuming all of your primary server's disk space when the PE-PostgreSQL service on the primary server attempts to retain change logs that the replica hasn't acknowledged.

If your replica is offline long enough to reach the max_slot_wal_keep_size value, replication slots are dropped so that the primary server can continue functioning normally. When the replica comes back online, you can tell that replication slots were dropped if puppet infra status reports that PostgreSQL replication is inactive. To restore PostgreSQL replication, run puppet infra reinitialize replica on your replica.

Console node group
PE Database
Parameter
puppet_enterprise::profile::database::max_slot_wal_keep_size
Default value
12288 MB (twice the size of the max_wal_size parameter)
Important: If you don't have enough disk space for the default setting, you must adjust this value.
Accepted values
An integer representing a number of MB
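
For example, to lower the per-slot WAL cap on a disk-constrained primary server (the value is illustrative; make sure it still leaves room for expected replica outages):

  # Cap each replication slot's WAL at 8192 MB instead of the 12288 MB default
  puppet_enterprise::profile::database::max_slot_wal_keep_size: 8192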