NAME

       crm - Pacemaker command line interface for configuration and management

SYNOPSIS

       crm [OPTIONS] [ARGS...]

DESCRIPTION

       The crm shell is a command-line based cluster configuration and
       management tool. Its goal is to assist as much as possible with the
       configuration and maintenance of Pacemaker-based High Availability
       clusters.

       crm works both as a command-line tool to be called directly from the
       system shell, and as an interactive shell with extensive tab completion
       and help.

       The primary focus of the crm shell is to provide a simplified and
       consistent interface to Pacemaker, but it also provides tools for
       managing the creation and configuration of High Availability clusters
       from scratch. To learn more about this aspect of crm, see the cluster
       section below.

       The Pacemaker configuration is stored in something called a CIB file,
       where CIB stands for Cluster Information Base. The CIB is a set of
       instructions coded in XML which is synchronized across the cluster.

       Editing the CIB is a challenge, not only due to its complexity and wide
       variety of options, but also because XML is more computer than user
       friendly. To help with this task, the crm shell provides a small and
       simple line-oriented configuration language consistent with the other
       commands available in the shell. For more information about this
       language and how to use it, see the configure section below.

       crm provides a consistent and well-documented interface to most of the
       management tools included in Pacemaker, for example crm_resource(8) or
       crm_attribute(8). Instead of having to remember the various flags and
       options available for each tool, crm hides all of the arcane detail.

       crm can also function as a cluster scripting tool, and can be fed
       multi-line sets of commands either directly from standard input or via
       a file. Templates with ready-made configurations may help newbies
       learn about the cluster configuration or facilitate testing procedures.

       The crm shell is line oriented: every command must start and finish on
       the same line. It is possible to use a continuation character (\) to
       write one command in two or more lines. The continuation character is
       commonly used when displaying configurations.
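
       For example, a longer command may be split across lines (a sketch; the
       resource name and parameter are placeholders modeled on the examples
       below):

            # crm configure primitive www_ip IPaddr \
                  params ip=192.168.1.101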

OPTIONS

       -f, --file=FILE
           Load commands from the given file. If a dash - is used in place of
           a file name, crm will read commands from the shell standard input
           (stdin).
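
            For instance, commands may be piped in from another program (a
            minimal sketch; the resource name is a placeholder):

                # echo "resource cleanup www_app" | crm -f -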

       -c, --cib=CIB
           Start the session using the given shadow CIB file. Equivalent to
           cib use <CIB>.

       -D, --display=OUTPUT_TYPE
           Choose one of the output options: plain, color, or uppercase. The
           default is color if the terminal emulation supports colors.
           Otherwise, plain is used.

       -F, --force
           Make crm proceed with applying changes where it would normally ask
           the user to confirm before proceeding. This option is mainly useful
           in scripts, and should be used with care.

       -w, --wait
           Make crm wait for the cluster transition to finish (for the changes
           to take effect) after each processed line.

       -H, --history=DIR|FILE
           The history commands can either work directly on the live cluster
           (default), or on a report generated by the report command. Use this
           option to specify a directory or file containing the previously
           generated report.

       -h, --help
           Print help page.

       --version
            Print crmsh version and build information (Mercurial changeset
            hash).

       -R, --regression-tests
           Run in the regression test mode. Used mainly by the regression
           testing suite.

       -d, --debug
           Print some debug information. Used by developers. [Not yet refined
           enough to print useful information for other users.]

       --scriptdir=DIR
            Extra directory where crm looks for cluster scripts. Can be a
            semicolon-separated list of directories.

INTRODUCTION TO THE USER INTERFACE

       Arguably the most important aspect of crm is the user interface, so we
       begin with an informal introduction to give the reader a general feel
       for the tool. The main purpose of crm is to provide a simple yet
       powerful interface to the cluster stack. It is probably best to jump
       straight into some examples:

       Command line (one-shot) use:

           # crm resource stop www_app

       Interactive use:

           # crm
           crm(live)# resource
           crm(live)resource# unmanage tetris_1
           crm(live)resource# up
           crm(live)# node standby node4

       Cluster configuration:

            # crm configure <<EOF
             #
             # resources
             #
             primitive disk0 iscsi \
               params portal=192.168.2.108:3260 target=iqn.2008-07.com.suse:disk0
             primitive fs0 Filesystem \
               params device=/dev/disk/by-label/disk0 directory=/disk0 fstype=ext3
             primitive internal_ip IPaddr params ip=192.168.1.101
             primitive apache apache \
               params configfile=/disk0/etc/apache2/site0.conf
             primitive apcfence stonith:apcsmart \
               params ttydev=/dev/ttyS0 hostlist="node1 node2" \
               op start timeout=60s
             primitive pingd pingd \
               params name=pingd dampen=5s multiplier=100 host_list="r1 r2"
             #
             # monitor apache and the UPS
             #
             monitor apache 60s:30s
             monitor apcfence 120m:60s
             #
             # cluster layout
             #
             group internal_www \
               disk0 fs0 internal_ip apache
             clone fence apcfence \
               meta globally-unique=false clone-max=2 clone-node-max=1
             clone conn pingd \
               meta globally-unique=false clone-max=2 clone-node-max=1
             location node_pref internal_www \
               rule 50: #uname eq node1 \
               rule pingd: defined pingd
             #
             # cluster properties
             #
             property stonith-enabled=true
             commit
           EOF

       If you have ever done a CRM-style configuration before, the examples
       above should be immediately understandable. crm provides a means to
       efficiently manage a cluster, and to put a configuration together in a
       simple and concise manner.

       The crm interface is hierarchical, with commands organized into
       separate levels by functionality. To list the available levels and
       commands, execute help <level>; at the top level of the shell, simply
       typing help will provide an overview of all available levels and
       commands.

       The (live) string in the crm prompt signifies that the current CIB in
       use is the cluster live configuration. It is also possible to work with
       so-called shadow CIBs. These are separate, inactive configurations
       stored in files that can be applied and thereby replace the live
       configuration at any time.

SHADOW CIB USAGE

       A shadow CIB is a normal cluster configuration stored in a file. It
       may be manipulated in much the same way as the live CIB, with the key
       difference that changes to a shadow CIB have no effect on the actual
       cluster resources. An administrator may choose to apply any of the
       shadow CIBs to the cluster, thus replacing the running configuration
       with the one found in the shadow CIB.

       The crm prompt always contains the name of the configuration which is
       currently in use, or the string live if using the live cluster
       configuration.

       When editing the configuration in the configure level, no changes are
       actually applied until the commit command is executed. It is possible
       to start editing a configuration as usual, but instead of committing
       the changes to the active CIB, save them to a shadow CIB.

       The following example configure session demonstrates how this can be
       done:

           crm(live)configure# cib new test-2
           INFO: test-2 shadow CIB created
           crm(test-2)configure# commit
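
       If the changes are satisfactory, the shadow CIB can then be applied to
       the cluster with the cib commit command (described at the cib level
       below); a sketch continuing the session above:

            crm(test-2)configure# cib commit test-2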

CONFIGURATION TEMPLATES

       Configuration templates are ready-made configurations created by
       cluster experts. They are designed so that users may generate valid
       cluster configurations with minimum effort. If you are new to
       Pacemaker, templates may be the best way to start.

       We will show here how to create a simple yet functional Apache
       configuration:

           # crm configure
           crm(live)configure# template
           crm(live)configure template# list templates
           apache       filesystem   virtual-ip
           crm(live)configure template# new web <TAB><TAB>
           apache       filesystem   virtual-ip
           crm(live)configure template# new web apache
           INFO: pulling in template apache
           INFO: pulling in template virtual-ip
           crm(live)configure template# list
           web2-d       web2     vip2     web3     vip      web

       We enter the template level from configure. Use the list command to
       show templates available on the system. The new command creates a
       configuration from the apache template. You can use tab completion to
       pick templates. Note that the apache template depends on a virtual IP
       address which is automatically pulled along. The list command shows the
       just created web configuration, among other configurations (I hope that
       you, unlike me, will use more sensible and descriptive names).

       The show command, which displays the resulting configuration, may be
       used to get an idea of the minimum required changes. All ERROR messages
       show the line numbers at which the respective parameters are to be
       defined:

           crm(live)configure template# show
           ERROR: 23: required parameter ip not set
           ERROR: 61: required parameter id not set
           ERROR: 65: required parameter configfile not set
           crm(live)configure template# edit

       The edit command invokes the preferred text editor with the web
       configuration. At the top of the file, the user is advised how to make
       changes. A good template should require the user to specify only
       parameters. For example, the web configuration we created above has the
       following required and optional parameters (all parameter lines start
       with %%):

           $ grep -n ^%% ~/.crmconf/web
           23:%% ip
           31:%% netmask
           35:%% lvs_support
           61:%% id
           65:%% configfile
           71:%% options
           76:%% envfiles

       These lines are the only ones that should be modified. Simply append
       the parameter value at the end of the line. For instance, after editing
       this template, the result could look like this (we used tabs instead of
       spaces to make the values stand out):

           $ grep -n ^%% ~/.crmconf/web
           23:%% ip        192.168.1.101
           31:%% netmask
           35:%% lvs_support
           61:%% id        websvc
           65:%% configfile    /etc/apache2/httpd.conf
           71:%% options
           76:%% envfiles

       As you can see, the parameter line format is very simple:

           %% <name> <value>

       After editing the file, use show again to display the configuration:

           crm(live)configure template# show
           primitive virtual-ip ocf:heartbeat:IPaddr \
               params ip="192.168.1.101"
           primitive apache ocf:heartbeat:apache \
               params configfile="/etc/apache2/httpd.conf"
           monitor apache 120s:60s
           group websvc \
               apache virtual-ip

       The target resource of the apache template is a group which we named
       websvc in this sample session.

       This configuration looks exactly like one you could type at the
       configure level. The point of templates is to save you some typing. It
       is important, however, to understand the configuration produced.

       Finally, the configuration may be applied to the current crm
       configuration (note how the configuration changed slightly, though it
       is still equivalent, after being digested at the configure level):

           crm(live)configure template# apply
           crm(live)configure template# cd ..
           crm(live)configure# show
           node xen-b
           node xen-c
           primitive apache ocf:heartbeat:apache \
               params configfile="/etc/apache2/httpd.conf" \
               op monitor interval="120s" timeout="60s"
           primitive virtual-ip ocf:heartbeat:IPaddr \
               params ip="192.168.1.101"
           group websvc apache virtual-ip

       Note that this still does not commit the configuration to the CIB which
       is used in the shell, either the running one (live) or some shadow CIB.
       For that you still need to execute the commit command.

       To complete our example, we should also define the preferred node to
       run the service:

           crm(live)configure# location websvc-pref websvc 100: xen-b

       If you are not happy with some resource names which are provided by
       default, you can rename them now:

           crm(live)configure# rename virtual-ip intranet-ip
           crm(live)configure# show
           node xen-b
           node xen-c
           primitive apache ocf:heartbeat:apache \
               params configfile="/etc/apache2/httpd.conf" \
               op monitor interval="120s" timeout="60s"
           primitive intranet-ip ocf:heartbeat:IPaddr \
               params ip="192.168.1.101"
           group websvc apache intranet-ip
           location websvc-pref websvc 100: xen-b

       To summarize, working with templates typically consists of the
       following steps:

       ·   new: create a new configuration from templates

       ·   edit: define parameters, at least the required ones

       ·   show: see if the configuration is valid

       ·   apply: apply the configuration to the configure level

RESOURCE TESTING

       The amount of detail in a cluster makes all configurations prone to
       errors. By far the largest number of issues in a cluster is due to bad
       resource configuration. The shell can help quickly diagnose such
       problems, and considerably reduce your keyboard wear.

       Let’s say that we entered the following configuration:

           node xen-b
           node xen-c
           node xen-d
           primitive fencer stonith:external/libvirt \
               params hypervisor_uri="qemu+tcp://10.2.13.1/system" \
                   hostlist="xen-b xen-c xen-d" \
               op monitor interval="2h"
           primitive svc ocf:heartbeat:Xinetd \
               params service="systat" \
               op monitor interval="30s"
           primitive intranet-ip ocf:heartbeat:IPaddr2 \
               params ip="10.2.13.100" \
               op monitor interval="30s"
           primitive apache ocf:heartbeat:apache \
               params configfile="/etc/apache2/httpd.conf" \
               op monitor interval="120s" timeout="60s"
           group websvc apache intranet-ip
           location websvc-pref websvc 100: xen-b

       Before typing commit to submit the configuration to the CIB, we can
       make sure that all resources are usable on all nodes:

           crm(live)configure# rsctest websvc svc fencer

       It is important that the resources being tested are not running on any
       node; otherwise, the rsctest command will refuse to do anything. Of
       course, if the current configuration resides in a shadow CIB, then a
       commit is irrelevant; the point is simply that the resources must not
       be running on any node.

       Note on stopping all resources

       As an alternative to not committing a configuration, it is also
       possible to tell Pacemaker not to start any resources:

           crm(live)configure# property stop-all-resources="yes"

       Almost none, that is: resources of class stonith are still started,
       but the shell is not as strict when it comes to stonith resources.

       The order of resources is significant insofar as a resource depends on
       all resources to its left. In most configurations, it’s probably
       practical to test resources in several runs, based on their
       dependencies.

       Apart from groups, crm does not interpret constraints and therefore
       knows nothing about resource dependencies. It also doesn’t know if a
       resource can run on a node at all in case of an asymmetric cluster. It
       is up to the user to specify a list of eligible nodes if a resource is
       not meant to run on every node.
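
       For example, to restrict testing to nodes known to be eligible, list
       the nodes after the resources (a sketch, assuming rsctest accepts node
       names following the resource list):

            crm(live)configure# rsctest websvc svc xen-b xen-c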

TAB COMPLETION

       The crm makes extensive use of tab completion. The completion is both
       static (i.e. for crm commands) and dynamic. The latter takes into
       account the current status of the cluster or information from installed
       resource agents. Sometimes, completion may also be used to get short
       help on resource parameters. Here are a few examples:

           crm(live)# resource
           crm(live)resource# <TAB><TAB>
           bye           failcount     move          restart       unmigrate
           cd            help          param         show          unmove
           cleanup       list          promote       start         up
           demote        manage        quit          status        utilization
           end           meta          refresh       stop
           exit          migrate       reprobe       unmanage
           crm(live)resource# end
           crm(live)# configure
           crm(live)configure# primitive fence-1 <TAB><TAB>
           heartbeat:  lsb:    ocf:    stonith:
           crm(live)configure# primitive fence-1 stonith:<TAB><TAB>
           apcmaster                external/ippower9258     fence_legacy
           apcmastersnmp            external/kdumpcheck      ibmhmc
           apcsmart                 external/libvirt         ipmilan
           baytech                  external/nut             meatware
           bladehpi                 external/rackpdu         null
           cyclades                 external/riloe           nw_rpc100s
           drac3                    external/sbd             rcd_serial
           external/drac5           external/ssh             rps10
           external/dracmc-telnet   external/ssh-bad         ssh
           external/hmchttp         external/ssh-slow        suicide
           external/ibmrsa          external/vmware          wti_mpc
           external/ibmrsa-telnet   external/xen0            wti_nps
           external/ipmi            external/xen0-ha
           crm(live)configure# primitive fence-1 stonith:ipmilan params <TAB><TAB>
           auth=      hostname=  ipaddr=    login=     password=  port=      priv=
           crm(live)configure# primitive fence-1 stonith:ipmilan params auth=<TAB><TAB>
           auth* (string)
               The authorization type of the IPMI session ("none", "straight", "md2", or "md5")
           crm(live)configure# primitive fence-1 stonith:ipmilan params auth=

CONFIGURATION SEMANTIC CHECKS

       Resource definitions may be checked against the meta-data provided with
       the resource agents. The following checks are currently carried out:

       ·   whether all required parameters are set

       ·   whether the defined parameters exist in the meta-data

       ·   whether timeout values for operations are sufficient

       The parameter checks are obvious and need no further explanation.
       Failures in these checks are treated as configuration errors.

       The timeouts for operations should be at least as long as those
       recommended in the meta-data. Too-short timeout values are a common
       mistake in cluster configurations and, even worse, they often slip
       through if cluster testing was not thorough. Though operation timeout
       issues are treated as warnings, make sure that the timeouts are usable
       in your environment. Note also that the values given are just an
       advisory minimum; your resources may require longer timeouts.

       Users may tune the frequency of checks and the treatment of errors via
       the check-frequency and check-mode preferences.

       Note that if the check-frequency is set to always and the check-mode to
       strict, errors are not tolerated and such configuration cannot be
       saved.
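
       For example, the checks can be relaxed at the options level described
       below (a minimal sketch):

            crm(live)# options
            crm(live)options# check-frequency on-verify
            crm(live)options# check-mode relaxed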

ACCESS CONTROL LISTS (ACL)

       By default, the users from the haclient group have full access to the
       cluster (or, more precisely, to the CIB). Access control lists allow
       for finer access control to the cluster.

       Access control lists consist of an ordered set of access rules. Each
       rule allows read or write access or denies access completely. Rules are
       typically combined to produce a specific role. Then, users may be
       assigned a role.

       For instance, this is a role which defines a set of rules allowing
       management of a single resource:

           role bigdb_admin \
               write meta:bigdb:target-role \
               write meta:bigdb:is-managed \
               write location:bigdb \
               read ref:bigdb

       The first two rules allow modifying the target-role and is-managed meta
       attributes which effectively enables users in this role to stop/start
       and manage/unmanage the resource. The constraints write access rule
       allows moving the resource around. Finally, the user is granted read
       access to the resource definition.
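
       A user is then assigned the role at the configure level. The exact
       command depends on the crmsh version; a sketch, assuming the acl_target
       command and a placeholder user name:

            crm(live)configure# acl_target alice bigdb_admin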

       For proper operation of all Pacemaker programs, it is advisable to add
       the following role to all users:

           role read_all \
               read cib

       For finer-grained read access, try the rules listed in the following
       role:

           role basic_read \
               read node attribute:uname \
               read node attribute:type \
               read property \
               read status

       It is however possible that some Pacemaker programs (e.g. ptest) may
       not function correctly if the whole CIB is not readable.

       Some of the ACL rules in the examples above are expanded by the shell
       to XPath specifications. For instance, meta:bigdb:target-role is a
       shortcut for
       //primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role'].
       You can see the expansion by showing XML:

            crm(live)configure# show xml bigdb_admin
           ...
           <acls>
             <acl_role id="bigdb_admin">
                 <write id="bigdb_admin-write"
                 xpath="//primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']"/>

       Many different XPath expressions can have equal meaning. For instance,
       the following two are equal, but only the first one is going to be
       recognized as a shortcut:

           //primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']
           //resources/primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']

       XPath is a powerful language, but you should try to keep your ACL
       XPath expressions simple and use the built-in shortcuts whenever
       possible.

COMMAND REFERENCE

       We define a small and simple language. Most commands consist of just a
       list of simple tokens. The only complex constructs are found at the
       configure level.

       The syntax is described in a somewhat informal manner: <> denotes a
       string, [] means that the construct is optional, the ellipsis (...)
       signifies that the previous construct may be repeated, | means pick one
       of many, and the rest are literals (strings, :, =).

   status
        Show cluster status. The status is displayed by crm_mon. Supply
        additional arguments for more information or a different format. See
        crm_mon(8) for more details.

       Usage:

           status [<option> ...]

           option :: bynode | inactive | ops | timing | failcounts
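
        For instance, to group resources by node and also show inactive
        resources:

            status bynode inactive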

   cluster
       Whole-cluster configuration management with High Availability
       awareness.

        The commands on the cluster level allow configuration and
        modification of the underlying cluster infrastructure, and also supply
        tools for whole-cluster systems management.

        These commands enable easy installation and maintenance of an HA
        cluster by providing support for package installation, configuration
        of the cluster messaging layer, file system setup, and more.

       start
           Starts the cluster-related system services on this node.

           Usage:

               start

        stop
            Stops the cluster-related system services on this node.

           Usage:

               stop

       init
           Installs and configures a basic HA cluster on a set of nodes.

           Usage:

               init node1 node2 node3
               init --dry-run node1 node2 node3

       add
           This command simplifies the process of adding a new node to a
           running cluster. The new node will be installed and configured with
           the packages and configuration files needed to run the cluster
           resources. If a cluster file system is used, the new node will be
           set up to host the file system.

           This command should be executed from a node already in the cluster.

           Usage:

               add <node>

       remove
           This command simplifies the process of removing a node from the
           cluster, moving any resources hosted by that node to other nodes.

           Usage:

               remove <node>

       status
           Reports the status for the cluster messaging layer on the local
           node.

           Usage:

               status

       health
           Runs a larger set of tests and queries on all nodes in the cluster
           to verify the general system health and detect potential problems.

           Usage:

               health

       wait_for_startup
           Mostly useful in scripts or automated workflows, this command will
           attempt to connect to the local cluster node repeatedly. The
           command will keep trying until the cluster node responds, or the
           timeout elapses. The timeout can be changed by supplying a value in
           seconds as an argument.

           Usage:

               wait_for_startup

       run
           This command takes a shell statement as argument, executes that
           statement on all nodes in the cluster, and reports the result.

           Usage:

               run <command>

           Example:

               run "cat /proc/uptime"

    script
        Cluster scripts can perform cluster-wide configuration, validation and
        management. See the list command for an overview of available scripts.

       list
           Lists the available cluster scripts.

           Usage:

               list

       verify
           Mainly useful when creating new scripts, this command verifies that
           the script definition has all necessary fields and that the
           referenced actions exist.

           Usage:

               verify <script>

       describe
           Prints a description and short summary of the cluster script, with
           descriptions of all parameters, both required and optional.

           Usage:

               describe <script>

       steps
           List the names of all steps in the cluster script.

           This command is intended for use by automated tools and the web
           frontend.

           Usage:

               steps <script>

        run
            Runs a cluster script. In addition to the script name, it accepts
            the following optional arguments:

            ·   nodes=<nodes>: List of nodes that the script runs over.

            ·   dry_run=yes|no: If set, the script will not perform any
                modifications.

           Additional arguments may be available depending on the cluster
           script. Use the describe command to see what arguments are
           provided.

           Usage:

               run <script> [args...]

           Example:

               run health dry_run=yes verbose=yes
               run init nodes="node-1 node-2 node-3"

   corosync
       Corosync is the underlying messaging layer for most HA clusters. This
       level provides commands for editing and managing the corosync
       configuration.

       status
           Displays the status of Corosync, including the votequorum state.

           Usage:

               status

        show
            Displays the corosync configuration on the current node.

            Usage:

                show

       edit
           Opens the Corosync configuration file in an editor.

           Usage:

               edit

       log
           Opens the log file specified in the corosync configuration file. If
           no log file is configured, this command returns an error.

           The pager used can be configured either using the PAGER environment
           variable or in crm.conf.

           Usage:

               log

       reload
           Tells all instances of corosync in this cluster to reload
           corosync.conf.

           After pushing a new configuration to all cluster nodes, call this
           command to make corosync use the new configuration.

           Usage:

               reload

       push
           Pushes the corosync configuration file on this node to the list of
           nodes provided. If no target nodes are given, the configuration is
           pushed to all other nodes in the cluster.

           It is recommended to use csync2 to distribute the cluster
           configuration files rather than relying on this command.

           Usage:

               push [node] ...

           Example:

               push node-2 node-3

       pull
           Gets the corosync configuration from another node and copies it to
           this node.

           Usage:

               pull <node>

       diff
           Diffs the corosync configurations on different nodes. If no nodes
           are given as arguments, the corosync configurations on all nodes in
           the cluster are compared.

            diff takes an optional argument --checksum, to force checksum
            mode.

            If the number of nodes to compare is greater than two, diff
            automatically switches to checksum mode.

           Usage:

               diff [--checksum] [node...]

       add-node
           Adds a node to the corosync configuration. This is used with the
           udpu type configuration in corosync.

           A nodeid for the added node is generated automatically.

           Note that this command assumes that only a single ring is used, and
           sets only the address for ring0.

           Usage:

               add-node <addr>

       del-node
           Removes a node from the corosync configuration. The argument given
           is the ring0_addr address set in the configuration file.

           Usage:

               del-node <addr>

       get
           Returns the value configured in corosync.conf, which is not
           necessarily the value used in the running configuration. See reload
           for telling corosync about configuration changes.

           The argument is the complete dot-separated path to the value.

           If there are multiple values configured with the same path, the
           command returns all values for that path. For example, to get all
           configured ring0_addr values, use this command:

           Example:

               get nodelist.node.ring0_addr

       set
           Sets the value identified by the given path. If the value does not
           exist in the configuration file, it will be added. However, if the
           section containing the value does not exist, the command will fail.

           Usage:

               set quorum.expected_votes 2
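
            A typical workflow is to change a value, distribute the updated
            file, and tell corosync to re-read it, using the push and reload
            commands described above (a sketch; totem.token is an example
            key):

                set totem.token 10000
                push
                reload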

   cib (shadow CIBs)
       This level is for management of shadow CIBs. It is available both at
       the top level and the configure level.

       All the commands are implemented using cib_shadow(8) and the CIB_shadow
       environment variable. The user prompt always includes the name of the
       currently active shadow or the live CIB.

       new
            Create a new shadow CIB. The live cluster configuration and status
            are copied to the shadow CIB.

            If the name of the shadow is omitted, we create a temporary CIB
            shadow. This is useful if multiple level sessions are desired
            without affecting the cluster. A temporary CIB shadow is
            short-lived and will be removed either on commit or on program
            exit. Note that if the temporary shadow is not committed, all
            changes made in it are lost.

           Specify withstatus if you want to edit the status section of the
           shadow CIB (see the cibstatus section). Add force to force
           overwriting the existing shadow CIB.

           To start with an empty configuration that is not copied from the
           live CIB, specify the empty keyword. (This also allows a shadow CIB
           to be created in case no cluster is running.)

           Usage:

               new [<cib>] [withstatus] [force] [empty]
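
            For instance, to overwrite an existing shadow, or to start from an
            empty configuration (shadow names are placeholders):

                new test-2 force
                new scratch empty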

       delete
           Delete an existing shadow CIB.

           Usage:

               delete <cib>

       reset
           Copy the current cluster configuration into the shadow CIB.

           Usage:

               reset <cib>

       commit
           Apply a shadow CIB to the cluster. If the shadow name is omitted
           then the current shadow CIB is applied.

           Temporary shadow CIBs are removed automatically on commit.

           Usage:

               commit [<cib>]

       use
           Choose a CIB source. If you want to edit the status from the shadow
           CIB specify withstatus (see cibstatus). Leave out the CIB name to
           switch to the running CIB.

           Usage:

               use [<cib>] [withstatus]
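
            For example, to switch to the shadow named test-2 and then back to
            the running CIB:

                use test-2
                use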

       diff
           Print differences between the current cluster configuration and the
           active shadow CIB.

           Usage:

               diff

       list
           List existing shadow CIBs.

           Usage:

               list

       import
            At times it may be useful to create a shadow file from the
            existing CIB. The CIB may be specified as a file or as a PE input
            file number.
           The shell will look up files in the local directory first and then
           in the PE directory (typically /var/lib/pengine). Once the CIB file
           is found, it is copied to a shadow and this shadow is immediately
           available for use at both configure and cibstatus levels.

           If the shadow name is omitted then the target shadow is named after
           the input CIB file.

            Note that there is often more than one PE input file, so you may
            need to specify the full name.

           Usage:

               import {<file>|<number>} [<shadow>]

           Examples:

               import pe-warn-2222
               import 2289 issue2

       cibstatus
            Enter the CIB status section level, where the status section can
            be edited and managed. See the CIB status management section.

   ra
       This level contains commands which show various information about the
       installed resource agents. It is available both at the top level and at
       the configure level.

       classes
           Print all resource agents' classes and, where appropriate, a list
           of available providers.

           Usage:

               classes

       list
           List available resource agents for the given class. If the class is
           ocf, supply a provider to get agents which are available only from
           that provider.

           Usage:

               list <class> [<provider>]

           Example:

               list ocf pacemaker

       info (meta)
           Show the meta-data of a resource agent type. This is where users
           can find information on how to use a resource agent. It is also
           possible to get information from some programs: pengine, crmd, cib,
           and stonithd. Just specify the program name instead of an RA.

           Usage:

               info [<class>:[<provider>:]]<type>
               info <type> <class> [<provider>] (obsolete)

           Example:

               info apache
               info ocf:pacemaker:Dummy
               info stonith:ipmilan
               info pengine

       providers
           List providers for a resource agent type. The class parameter
           defaults to ocf.

           Usage:

               providers <type> [<class>]

           Example:

               providers apache

   resource
       At this level resources may be managed.

        All (or almost all) commands are implemented with CRM tools such as
        crm_resource(8).

       status (show, list)
            Print resource status. If the resource parameter is left out, the
            status of all resources is printed.

           Usage:

               status [<rsc>]

       start
            Start a resource by setting the target-role attribute. If there
            are multiple sets of meta attributes, the attribute is set in all
            of them. If the resource is a clone, all target-role attributes
            are removed from the child resources.

            For details on group management, see the manage-children option.

           Usage:

               start <rsc>

       stop
            Stop a resource using the target-role attribute. If there are
            multiple sets of meta attributes, the attribute is set in all of
            them. If the resource is a clone, all target-role attributes are
            removed from the child resources.

            For details on group management, see the manage-children option.

           Usage:

               stop <rsc>

       restart
            Restart a resource. This is essentially a shortcut for resource
            stop followed by a start. The shell first waits for the stop to
            finish, that is, for all resources to really stop, and only then
            orders the start action. Because this command entails a whole set
            of operations, informational messages are printed to let the user
            see some progress.

            For details on group management, see the manage-children option.

           Usage:

               restart <rsc>

           Example:

               # crm resource restart g_webserver
               INFO: ordering g_webserver to stop
               waiting for stop to finish .... done
               INFO: ordering g_webserver to start
               #

       promote
           Promote a master-slave resource using the target-role attribute.

           Usage:

               promote <rsc>

       demote
           Demote a master-slave resource using the target-role attribute.

           Usage:

               demote <rsc>

       manage
            Manage a resource using the is-managed attribute. If there are
            multiple sets of meta attributes, the attribute is set in all of
            them. If the resource is a clone, all is-managed attributes are
            removed from the child resources.

            For details on group management, see the manage-children option.

           Usage:

               manage <rsc>

       unmanage
            Unmanage a resource using the is-managed attribute. If there are
            multiple sets of meta attributes, the attribute is set in all of
            them. If the resource is a clone, all is-managed attributes are
            removed from the child resources.

            For details on group management, see the manage-children option.

           Usage:

               unmanage <rsc>

       migrate (move)
           Migrate a resource to a different node. If node is left out, the
           resource is migrated by creating a constraint which prevents it
           from running on the current node. Additionally, you may specify a
           lifetime for the constraint---once it expires, the location
           constraint will no longer be active.

           Usage:

               migrate <rsc> [<node>] [<lifetime>] [force]
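
            For example, to move a resource to another node for a limited time
            (a sketch, assuming the ISO 8601 duration format used by Pacemaker
            for lifetimes):

                migrate websvc xen-c PT10M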

       unmigrate (unmove)
           Remove the constraint generated by the previous migrate command.

           Usage:

               unmigrate <rsc>

       param
           Show/edit/delete a parameter of a resource.

           Usage:

               param <rsc> set <param> <value>
               param <rsc> delete <param>
               param <rsc> show <param>

           Example:

               param ip_0 show ip

       secret
            Sensitive parameters can be kept in local files rather than in the
            CIB in order to prevent accidental data exposure. Use the secret
            command to manage such parameters. stash and unstash move the
            value from the CIB to a local file and back again, respectively.
            The set subcommand sets the parameter to the provided value.
            delete removes the parameter completely. show displays the value
            of the parameter from the local file. Use check to verify whether
            the local file content is valid.

           Usage:

               secret <rsc> set <param> <value>
               secret <rsc> stash <param>
               secret <rsc> unstash <param>
               secret <rsc> delete <param>
               secret <rsc> show <param>
               secret <rsc> check <param>

           Example:

               secret fence_1 show password
               secret fence_1 stash password
               secret fence_1 set password secret_value

       meta
           Show/edit/delete a meta attribute of a resource. Currently, all
           meta attributes of a resource may be managed with other commands
           such as resource stop.

           Usage:

               meta <rsc> set <attr> <value>
               meta <rsc> delete <attr>
               meta <rsc> show <attr>

           Example:

               meta ip_0 set target-role stopped

       utilization
           Show/edit/delete a utilization attribute of a resource. These
           attributes describe hardware requirements. By setting the
           placement-strategy cluster property appropriately, it is possible
           then to distribute resources based on resource requirements and
           node size. See also node utilization attributes.

           Usage:

               utilization <rsc> set <attr> <value>
               utilization <rsc> delete <attr>
               utilization <rsc> show <attr>

           Example:

               utilization xen1 set memory 4096

       failcount
           Show/edit/delete the failcount of a resource.

           Usage:

               failcount <rsc> set <node> <value>
               failcount <rsc> delete <node>
               failcount <rsc> show <node>

           Example:

               failcount fs_0 delete node2

       cleanup
            Clean up resource status. Typically done after the resource has
            temporarily failed. If a node is omitted, cleanup is done on all
            nodes. If there are many nodes, the command may take a while.

           Usage:

               cleanup <rsc> [<node>]

       refresh
           Refresh CIB from the LRM status.

           Usage:

               refresh [<node>]

       reprobe
           Probe for resources not started by the CRM.

           Usage:

               reprobe [<node>]

       trace
           Start tracing RA for the given operation. The trace files are
           stored in $HA_VARLIB/trace_ra. If the operation to be traced is
           monitor, note that the number of trace files can grow very quickly.

           Usage:

               trace <rsc> <op> [<interval>]

           Example:

               trace fs start

       untrace
           Stop tracing RA for the given operation.

           Usage:

               untrace <rsc> <op> [<interval>]

           Example:

               untrace fs start

   scores
       Display the allocation scores for all resources.

       Usage:

           scores

   node
       Node management and status commands.

       status
           Show nodes' status as XML. If the node parameter is omitted then
           all nodes are shown.

           Usage:

               status [<node>]

       show
           Show a node definition. If the node parameter is omitted then all
           nodes are shown.

           Usage:

               show [<node>]

       standby
           Set a node to standby status. The node parameter defaults to the
           node where the command is run. Additionally, you may specify a
           lifetime for the standby---if set to reboot, the node will be back
           online once it reboots. forever will keep the node in standby after
           reboot.

           Usage:

               standby [<node>] [<lifetime>]

               lifetime :: reboot | forever
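
            For example, to put a node in standby only until its next reboot:

                standby node4 reboot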

       online
           Set a node to online status. The node parameter defaults to the
           node where the command is run.

           Usage:

               online [<node>]

       maintenance
            Set the node status to maintenance. This is equivalent to the
            cluster-wide maintenance-mode property, but puts just one node
            into maintenance mode. The node parameter defaults to the node
            where the command is run.

           Usage:

               maintenance [<node>]

       ready
           Set the node’s maintenance status to off. The node should be now
           again fully operational and capable of running resource operations.

           Usage:

               ready [<node>]

       fence
            Make CRM fence a node. This functionality depends on stonith
            resources capable of fencing the specified node. If no such
            stonith resources exist, no fencing will happen.

           Usage:

               fence <node>

        clearstate
            Resets and clears the state of the specified node. The node is
            afterwards assumed clean and offline. This command can be used to
            manually confirm that a node has been fenced (e.g., powered off).

            Be careful! This can cause data corruption if you confirm that a
            node is down when it is, in fact, not cleanly down: the cluster
            will proceed as if the fence had succeeded, possibly starting
            resources multiple times.

           Usage:

               clearstate <node>

       delete
           Delete a node. This command will remove the node from the CIB and,
           in case the cluster stack is running, use the appropriate program
           (crm_node or hb_delnode) to remove the node from the membership.

            If the node is still listed as active and a member of our
            partition, we refuse to remove it. With the global force option
            (-F), we will try to delete the node anyway.

           Usage:

               delete <node>

       attribute
           Edit node attributes. This kind of attribute should refer to
           relatively static properties, such as memory size.

           Usage:

               attribute <node> set <attr> <value>
               attribute <node> delete <attr>
               attribute <node> show <attr>

           Example:

               attribute node_1 set memory_size 4096

       utilization
           Edit node utilization attributes. These attributes describe
           hardware characteristics as integer numbers such as memory size or
           the number of CPUs. By setting the placement-strategy cluster
           property appropriately, it is possible then to distribute resources
           based on resource requirements and node size. See also resource
           utilization attributes.

           Usage:

               utilization <node> set <attr> <value>
               utilization <node> delete <attr>
               utilization <node> show <attr>

           Examples:

               utilization node_1 set memory 16384
               utilization node_1 show cpu

       status-attr
           Edit node attributes which are in the CIB status section, i.e.
            attributes which hold properties of a more volatile nature. One
            typical example is the attribute generated by the pingd utility.

           Usage:

               status-attr <node> set <attr> <value>
               status-attr <node> delete <attr>
               status-attr <node> show <attr>

           Example:

               status-attr node_1 show pingd

   site
       A cluster may consist of two or more subclusters in different and
       distant locations. This set of commands supports such setups.

       ticket
           Tickets are cluster-wide attributes. They can be managed at the
           site where this command is executed.

           It is then possible to constrain resources depending on the ticket
           availability (see the rsc_ticket command for more details).

           Usage:

               ticket {grant|revoke|standby|activate|show|time|delete} <ticket>

           Example:

               ticket grant ticket1

   options
       The user may set various options for the crm shell itself.

       skill-level
           Based on the skill-level setting, the user is allowed to use only a
           subset of commands. There are three levels: operator,
            administrator, and expert. The operator level allows only commands
            at the resource and node levels, but not editing or deleting
            resources. The administrator may do that, and may also configure
            the cluster at the configure level and manage the shadow CIBs. The
            expert may do everything.

           Usage:

               skill-level <level>

                level :: operator | administrator | expert

            Note on security

            The skill-level option is advisory only. There is nothing stopping
            users from changing their skill level (see Access Control Lists
            (ACL) on how to enforce access control).

       user
           Sufficient privileges are necessary in order to manage a cluster:
           programs such as crm_verify or crm_resource and, ultimately,
           cibadmin have to be run either as root or as the CRM owner user
           (typically hacluster). You don’t have to worry about that if you
           run crm as root. A more secure way is to run the program with your
            usual privileges, set this option to the appropriate user (such as
            hacluster), and set up the sudoers file.

           Usage:

               user system-user

           Example:

               user hacluster
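
            A matching sudoers entry might look like the following (a sketch
            only; the file path and the set of allowed programs are
            assumptions to adapt to your system):

                # /etc/sudoers.d/crmsh (sketch)
                %haclient ALL=(hacluster) NOPASSWD: /usr/sbin/cibadmin, /usr/sbin/crm_resource, /usr/sbin/crm_verify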

       editor
           The edit command invokes an editor. Use this to specify your
           preferred editor program. If not set, it will default to either the
            value of the EDITOR environment variable or to one of the standard
            UNIX editors (vi, emacs, nano).

           Usage:

               editor program

           Example:

               editor vim

       pager
           The view command displays text through a pager. Use this to specify
           your preferred pager program. If not set, it will default to either
           the value of the PAGER environment variable or to one of the
            standard UNIX system pagers (less, more, pg).

       sort-elements
            crm by default sorts CIB elements. If you want them to appear in
            the order they were created, set this option to no.

           Usage:

               sort-elements {yes|no}

           Example:

               sort-elements no

       wait
           In normal operation, crm runs a command and gets back immediately
           to process other commands or get input from the user. With this
           option set to yes it will wait for the started transition to
           finish. In interactive mode dots are printed to indicate progress.

           Usage:

               wait {yes|no}

           Example:

               wait yes

       output
           crm can adorn configurations in two ways: in color (similar to for
           instance the ls --color command) and by showing keywords in upper
            case. Possible values are plain, color, and uppercase. It is
            possible to combine the latter two in order to get an uppercase
            Xmas tree: just set this option to color,uppercase.

       colorscheme
            With output set to color, a comma-separated list of colors from
            this option is used to emphasize:

           ·   keywords

           ·   object ids

           ·   attribute names

           ·   attribute values

           ·   scores

           ·   resource references

            crm can show colors only if curses support for Python is installed
            (usually provided by the python-curses package). The colors are
            whatever is available in your terminal. Use normal if you want to
            keep the default foreground color.

           This user preference defaults to
           yellow,normal,cyan,red,green,magenta which is good for terminals
           with dark background. You may want to change the color scheme and
           save it in the preferences file for other color setups.

           Example:

               colorscheme yellow,normal,blue,red,green,magenta

       check-frequency
            Semantic checks of the CIB, or of elements modified or created,
            may be done on every configuration change (always), when verifying
            (on-verify), or never. The default is always. Experts may want to
            change the setting to on-verify.

            The checks require that resource agents are present. If they are
            not installed at configuration time, set this preference to never.

           See Configuration semantic checks for more details.

       check-mode
            Semantic checks of the CIB, or of elements modified or created,
            may be done in strict mode or in relaxed mode. In the former,
            certain problems are treated as configuration errors; in relaxed
            mode, all are treated as warnings. The default is strict.

           See Configuration semantic checks for more details.

       add-quotes
            The shell (as in /bin/sh) parser strips quotes from the command
            line. This may sometimes make it really difficult to type values
            which contain white space. One typical example is the configure
            filter command. The crm shell will supply extra quotes around
            arguments which contain white space. The default is yes.

            Note on quote use

            Automatic quoting of arguments was introduced in version 1.2.2 and
            is technically a regression; being a regression is the only reason
            the add-quotes option exists. If you have custom shell scripts
            which would break, just set the add-quotes option to no.

           For instance, with adding quotes enabled, it is possible to do the
           following:

               # crm configure primitive d1 ocf:heartbeat:Dummy \
                   meta description="some description here"
               # crm configure filter 'sed "s/hostlist=./&node-c /"' fencing

       manage-children
            Some resource management commands, such as resource stop, may not
            always produce the desired result when the target resource is a
            group. Each element, the group and its primitive members, can have
            a meta attribute, and those attributes may end up with conflicting
            values. Consider the following construct:

               crm(live)# configure show svc fs virtual-ip
               primitive fs ocf:heartbeat:Filesystem \
                   params device="/dev/drbd0" directory="/srv/nfs" fstype="ext3" \
                   op monitor interval="10s" \
                   meta target-role="Started"
               primitive virtual-ip ocf:heartbeat:IPaddr2 \
                   params ip="10.2.13.110" iflabel="1" \
                   op monitor interval="10s" \
                   op start interval="0" \
                   meta target-role="Started"
               group svc fs virtual-ip \
                   meta target-role="Stopped"

           Even though the element svc should be stopped, the group is
           actually running because all its members have the target-role set
           to Started:

               crm(live)# resource show svc
               resource svc is running on: xen-f

            Hence, if the user invokes resource stop svc, the intention is
            not clear. This preference gives the user an opportunity to
            better control what happens when attributes of group members
            have values which conflict with the same attribute of the
            group itself.

           Possible values are ask (the default), always, and never. If set to
           always, the crm shell removes all children attributes which have
           values different from the parent. If set to never, all children
           attributes are left intact. Finally, if set to ask, the user will
           be asked for each member what is to be done.
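
            For example, to always remove conflicting children attributes
            without asking (a sketch; the preference is set like the
            other options at this level):

                manage-children always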

       show
           Display all current settings.

            Given an option name as argument, show will display only the
            value of that option.

           Given all as argument, show displays all available user options.

           Usage:

               show [all|<option>]

           Example:

               show
               show skill-level
               show all

       set
           Sets the value of an option. Takes the fully qualified name of the
           option as argument, as displayed by show all.

           The modified option value is stored in the user-local configuration
           file, usually found in ~/.config/crm/crm.conf.

           Usage:

               set <option> <value>

           Example:

               set color.warn "magenta bold"
               set editor nano

       save
           Save current settings to the rc file ($HOME/.config/crm/rc). On
           further crm runs, the rc file is automatically read and parsed.

       reset
           This command resets all user options to the defaults. If used as a
           single-shot command, the rc file ($HOME/.config/crm/rc) is reset to
           the defaults too.

   configure
       This level enables all CIB object definition commands.

        The configuration may be logically divided into four parts: nodes,
        resources, constraints, and (cluster) properties and attributes.
        Each of these commands supports one or more basic CIB objects.

       Nodes and attributes describing nodes are managed using the node
       command.

       Commands for resources are:

       ·   primitive

       ·   monitor

       ·   group

       ·   clone

       ·   ms/master (master-slave)

       In order to streamline large configurations, it is possible to define a
       template which can later be referenced in primitives:

       ·   rsc_template

       In that case the primitive inherits all attributes defined in the
       template.

       There are three types of constraints:

       ·   location

       ·   colocation

       ·   order

       It is possible to define fencing order (stonith resource priorities):

       ·   fencing_topology

       Finally, there are the cluster properties, resource meta attributes
       defaults, and operations defaults. All are just a set of attributes.
       These attributes are managed by the following commands:

       ·   property

       ·   rsc_defaults

       ·   op_defaults

        In addition to the cluster configuration, Access Control Lists
        (ACL) can be set up to allow access to parts of the CIB for users
        other than root and hacluster. The following commands manage ACLs:

       ·   user

       ·   role

       The changes are applied to the current CIB only on ending the
       configuration session or using the commit command.

        Comments are lines starting with #. Comments are tied to the
        element which follows them; if the element moves, its comments
        will follow.

       Resource sets
           Using resource sets can be a bit confusing unless one knows the
           details of the implementation in Pacemaker as well as how to
           interpret the syntax provided by crmsh.

           Three different types of resource sets are provided by crmsh, and
           each one implies different values for the two resource set
           attributes, sequential and require-all.

            sequential
                If false, the resources in the set do not depend on each
                other internally. Setting sequential to true implies a
                strict order of dependency within the set.

            require-all
                If false, only one resource in the set is required to
                fulfil the requirements of the set. The set of A, B and C
                with require-all set to false is read as A OR B OR C when
                its dependencies are resolved.

           The three types of resource sets modify the attributes in the
           following way:

            1. Implicit sets (no brackets).  sequential=true, require-all=true

            2. Parenthesis set (( ... )).  sequential=false, require-all=true

            3. Bracket set ([ ... ]).  sequential=false, require-all=false

           To create a set with the properties sequential=true and
           require-all=false, explicitly set sequential in a bracketed set, [
           A B C sequential=true ].

           To create multiple sets with both sequential and require-all set to
           true, explicitly set sequential in a parenthesis set: A B ( C D
           sequential=true ).
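
            As an illustration, here are the three set types expressed as
            order constraints (a sketch; A, B, and C are placeholder
            resources), in the order listed above:

                order o-implicit Mandatory: A B C
                order o-paren Mandatory: ( A B ) C
                order o-bracket Mandatory: [ A B ] C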

       node
           The node command describes a cluster node. Nodes in the CIB are
           commonly created automatically by the CRM. Hence, you should not
           need to deal with nodes unless you also want to define node
           attributes. Note that it is also possible to manage node attributes
           at the node level.

           Usage:

               node [$id=<id>] <uname>[:<type>]
                 [description=<description>]
                 [attributes <param>=<value> [<param>=<value>...]]
                 [utilization <param>=<value> [<param>=<value>...]]

               type :: normal | member | ping

           Example:

               node node1
               node big_node attributes memory=64

       primitive
           The primitive command describes a resource. It may be referenced
           only once in group, clone, or master-slave objects. If it’s not
           referenced, then it is placed as a single resource in the CIB.

            Operations may be specified in three ways. "Anonymous", as a
            simple list of "op" specifications: use that if you don’t want
            to reference the set of operations elsewhere; it is by far the
            most common way to define operations. If reusing operation
            sets is desired, use the "operations" keyword with $id to give
            the operations set a name, or with $id-ref to reference
            another set of operations.

            Operation attributes which are not recognized are saved as
            instance attributes of that operation. A typical example is
            OCF_CHECK_LEVEL.

           For multistate resources, roles are specified as role=<role>.

           A template may be defined for resources which are of the same type
           and which share most of the configuration. See rsc_template for
           more information.

           Usage:

               primitive <rsc> {[<class>:[<provider>:]]<type>|@<template>}
                 [description=<description>]
                 [params attr_list]
                 [meta attr_list]
                 [utilization attr_list]
                 [operations id_spec]
                   [op op_type [<attribute>=<value>...] ...]

               attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>
               id_spec :: $id=<id> | $id-ref=<id>
               op_type :: start | stop | monitor

           Example:

               primitive apcfence stonith:apcsmart \
                 params ttydev=/dev/ttyS0 hostlist="node1 node2" \
                 op start timeout=60s \
                 op monitor interval=30m timeout=60s

               primitive www8 apache \
                 params configfile=/etc/apache/www8.conf \
                 operations $id-ref=apache_ops

               primitive db0 mysql \
                 params config=/etc/mysql/db0.conf \
                 op monitor interval=60s \
                 op monitor interval=300s OCF_CHECK_LEVEL=10

               primitive r0 ocf:linbit:drbd \
                 params drbd_resource=r0 \
                 op monitor role=Master interval=60s \
                 op monitor role=Slave interval=300s

               primitive xen0 @vm_scheme1 \
                 params xmfile=/etc/xen/vm/xen0

       monitor
            Monitor is by far the most common operation. It is possible to
            add it without editing the whole resource, which also keeps
            long primitive definitions a bit less cluttered. In order to
            make this command as concise as possible, less common
            operation attributes are not available. If you need them, then
            use the op part of the primitive command.

           Usage:

               monitor <rsc>[:<role>] <interval>[:<timeout>]

           Example:

               monitor apcfence 60m:60s

           Note that after executing the command, the monitor operation may be
           shown as part of the primitive definition.
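
            For a multi-state resource, the role may be given as well; a
            sketch reusing the r0 resource from the primitive examples
            above:

                monitor r0:Master 45s:30s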

       group
            The group command creates a group of resources. This can be
            useful when resources depend on other resources and require
            that those resources start in order on the same node. A common
            use of resource groups is to ensure that a server and a
            virtual IP are located together, and that the virtual IP is
            started before the server.

           Grouped resources are started in the order they appear in the
           group, and stopped in the reverse order. If a resource in the group
           cannot run anywhere, resources following it in the group will not
           start.

            group can be passed the "container" meta attribute, to
            indicate that it is to be used to group VM resources monitored
            using Nagios. The resource referred to by the container
            attribute must be of type ocf:heartbeat:Xen,
            ocf:heartbeat:VirtualDomain or ocf:heartbeat:lxc.

           Usage:

               group <name> <rsc> [<rsc>...]
                 [description=<description>]
                 [meta attr_list]
                 [params attr_list]

               attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>

           Example:

               group internal_www disk0 fs0 internal_ip apache \
                 meta target_role=stopped

               group vm-and-services vm vm-sshd meta container="vm"

       clone
           The clone command creates a resource clone. It may contain a single
           primitive resource or one group of resources.

           Usage:

               clone <name> <rsc>
                 [description=<description>]
                 [meta attr_list]
                 [params attr_list]

               attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>

           Example:

               clone cl_fence apc_1 \
                 meta clone-node-max=1 globally-unique=false

       ms (master)
           The ms command creates a master/slave resource type. It may contain
           a single primitive resource or one group of resources.

           Usage:

               ms <name> <rsc>
                 [description=<description>]
                 [meta attr_list]
                 [params attr_list]

               attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>

           Example:

               ms disk1 drbd1 \
                 meta notify=true globally-unique=false

            Note on id-ref usage

            Instance or meta attributes (params and meta) may contain a
            reference to another set of attributes. In that case, no other
            attributes are allowed. Since attribute set ids, though they
            do exist, are not shown by crm, it is also possible to
            reference an object instead of an attribute set. crm will
            automatically replace such a reference with the right id:

               crm(live)configure# primitive a2 www-2 meta $id-ref=a1
               crm(live)configure# show a2
               primitive a2 ocf:heartbeat:apache \
                   meta $id-ref="a1-meta_attributes"
                   [...]

           It is advisable to give meaningful names to attribute sets which
           are going to be referenced.

       rsc_template
           The rsc_template command creates a resource template. It may be
           referenced in primitives. It is used to reduce large configurations
           with many similar resources.

           Usage:

               rsc_template <name> [<class>:[<provider>:]]<type>
                 [description=<description>]
                 [params attr_list]
                 [meta attr_list]
                 [utilization attr_list]
                 [operations id_spec]
                   [op op_type [<attribute>=<value>...] ...]

               attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>
               id_spec :: $id=<id> | $id-ref=<id>
               op_type :: start | stop | monitor

           Example:

               rsc_template public_vm ocf:heartbeat:Xen \
                 op start timeout=300s \
                 op stop timeout=300s \
                 op monitor interval=30s timeout=60s \
                 op migrate_from timeout=600s \
                 op migrate_to timeout=600s
               primitive xen0 @public_vm \
                 params xmfile=/etc/xen/xen0
               primitive xen1 @public_vm \
                 params xmfile=/etc/xen/xen1

       location
           location defines the preference of nodes for the given resource.
           The location constraints consist of one or more rules which specify
           a score to be awarded if the rule matches.

           Usage:

               location <id> <rsc> {node_pref|rules}

               node_pref :: <score>: <node> [role=<role>]

               rules ::
                 rule [id_spec] [$role=<role>] <score>: <expression>
                 [rule [id_spec] [$role=<role>] <score>: <expression> ...]

               id_spec :: $id=<id> | $id-ref=<id>
               score :: <number> | <attribute> | [-]inf
               expression :: <simple_exp> [bool_op <simple_exp> ...]
               bool_op :: or | and
               simple_exp :: <attribute> [type:]<binary_op> <value>
                         | <unary_op> <attribute>
                         | date <date_expr>
               type :: string | version | number
               binary_op :: lt | gt | lte | gte | eq | ne
               unary_op :: defined | not_defined

               date_expr :: lt <end>
                        | gt <start>
                        | in_range start=<start> end=<end>
                        | in_range start=<start> <duration>
                        | date_spec <date_spec>
               duration|date_spec ::
                        hours=<value>
                        | monthdays=<value>
                        | weekdays=<value>
                         | yeardays=<value>
                        | months=<value>
                        | weeks=<value>
                        | years=<value>
                        | weekyears=<value>
                        | moon=<value>

           Examples:

               location conn_1 internal_www 100: node1

               location conn_1 internal_www \
                 rule 50: #uname eq node1 \
                 rule pingd: defined pingd

               location conn_2 dummy_float \
                 rule -inf: not_defined pingd or pingd number:lte 0

       colocation (collocation)
           This constraint expresses the placement relation between two or
           more resources. If there are more than two resources, then the
           constraint is called a resource set.

            The score is used to indicate the priority of the constraint.
            A positive score indicates that the resources should run on
            the same node; a negative score indicates that they should not
            run on the same node. Values of positive or negative infinity
            indicate a mandatory constraint.

           In the two resource form, the cluster will place <with-rsc> first,
           and then decide where to put the <rsc> resource.

           Collocation resource sets have an extra attribute (sequential) to
           allow for sets of resources which don’t depend on each other in
           terms of state. The shell syntax for such sets is to put resources
           in parentheses.

           Sets cannot be nested.

            The optional node-attribute references an attribute in the
            nodes’ instance attributes.

           Usage:

               colocation <id> <score>: <rsc>[:<role>] <with-rsc>[:<role>]
                 [node-attribute=<node_attr>]

               colocation <id> <score>: <rsc>[:<role>] <rsc>[:<role>] ...
                 [node-attribute=<node_attr>]

           Example:

               colocation never_put_apache_with_dummy -inf: apache dummy
               colocation c1 inf: A ( B C )
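
            The node-attribute form is sketched below; site is a
            hypothetical node instance attribute, and the constraint
            places the resources on nodes sharing the same value of that
            attribute rather than on the same node:

                colocation c_same_site inf: A B node-attribute=site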

       order
            This constraint expresses the order of actions on two or more
            resources. If there are more than two resources, then the
            constraint is called a resource set.

           Ordered resource sets have an extra attribute to allow for sets of
           resources whose actions may run in parallel. The shell syntax for
           such sets is to put resources in parentheses.

            If the subsequent resource can start or promote after any one
            of the resources in a set has done so, enclose the set in
            brackets ([ and ]).

           Sets cannot be nested.

            Three strings are reserved to specify a kind of order
            constraint: Mandatory, Optional, and Serialize. It is
            preferred to use one of these settings instead of a score.
            Previous versions mapped scores 0 and inf to the keywords
            advisory and mandatory; that is still valid but deprecated.

            Note on resource sets' XML attributes

            The XML attribute require-all controls whether all resources
            in a set are, well, required. Bracketed sets actually have
            both this attribute and sequential set to false. If you need a
            different combination, for whatever reason, just set one of
            the attributes within the set. Something like this:

               crm(live)configure# order o1 Mandatory: [ A B sequential=true ] C

           It is up to you to find out whether such a combination makes sense.

           Usage:

               order <id> {kind|<score>}: <rsc>[:<action>] <rsc>[:<action>] ...
                 [symmetrical=<bool>]

               kind :: Mandatory | Optional | Serialize

           Example:

               order c_apache_1 Mandatory: apache:start ip_1
               order o1 Serialize: A ( B C )
               order order_2 Mandatory: [ A B ] C

       rsc_ticket
           This constraint expresses dependency of resources on cluster-wide
           attributes, also known as tickets. Tickets are mainly used in
           geo-clusters, which consist of multiple sites. A ticket may be
           granted to a site, thus allowing resources to run there.

           The loss-policy attribute specifies what happens to the resource
           (or resources) if the ticket is revoked. The default is either stop
           or demote depending on whether a resource is multi-state.

           See also the site set of commands.

           Usage:

               rsc_ticket <id> <ticket_id>: <rsc>[:<role>] [<rsc>[:<role>] ...]
                 [loss-policy=<loss_policy_action>]

               loss_policy_action :: stop | demote | fence | freeze

           Example:

               rsc_ticket ticket-A_public-ip ticket-A: public-ip
               rsc_ticket ticket-A_bigdb ticket-A: bigdb loss-policy=fence
               rsc_ticket ticket-B_storage ticket-B: drbd-a:Master drbd-b:Master

       property
           Set the cluster (crm_config) options.

           Usage:

               property [$id=<set_id>] <option>=<value> [<option>=<value> ...]

           Example:

               property stonith-enabled=true

       rsc_defaults
           Set defaults for the resource meta attributes.

           Usage:

               rsc_defaults [$id=<set_id>] <option>=<value> [<option>=<value> ...]

           Example:

               rsc_defaults failure-timeout=3m

       fencing_topology
            If multiple fencing (stonith) devices capable of fencing a
            node are available, their order may be specified by
            fencing_topology. The order is specified per node.

            Stonith resources can be separated by a comma, in which case
            all of them need to succeed. If they fail, the next stonith
            resource (or set of resources) is used. In other words, use a
            comma to separate resources which all need to succeed, and
            whitespace for serial order. Whitespace around the comma is
            not allowed.

           If the node is left out, the order is used for all nodes. That
           should reduce the configuration size in some stonith setups.

           Usage:

               fencing_topology stonith_resources [stonith_resources ...]
               fencing_topology fencing_order [fencing_order ...]

               fencing_order :: <node>: stonith_resources [stonith_resources ...]

               stonith_resources :: <rsc>[,<rsc>...]

           Example:

               fencing_topology poison-pill power
                fencing_topology \
                    node-a: poison-pill power \
                    node-b: ipmi serial

       role
           An ACL role is a set of rules which describe access rights to CIB.
           Rules consist of an access right read, write, or deny and a
           specification denoting part of the configuration to which the
           access right applies. The specification can be an XPath or a
           combination of tag and id references. If an attribute is appended,
           then the specification applies only to that attribute of the
           matching element.

            There are a number of shortcuts for XPath specifications. The
            meta, params, and utilization shortcuts reference resource
            meta attributes, parameters, and utilization respectively. The
            location shortcut may be used to specify location constraints,
            most of the time to allow the resource move and unmove
            commands. The property shortcut references cluster properties.
            The node shortcut allows reading node attributes. nodeattr and
            nodeutil reference node attributes and node capacity
            (utilization). The status shortcut references the whole status
            section of the CIB. Read access to status is necessary for
            various monitoring tools such as crm_mon(8) (aka crm status).

           Usage:

               role <role-id> rule [rule ...]

               rule :: acl-right cib-spec [attribute:<attribute>]

               acl-right :: read | write | deny

               cib-spec :: xpath-spec | tag-ref-spec
               xpath-spec :: xpath:<xpath> | shortcut
               tag-ref-spec :: tag:<tag> | ref:<id> | tag:<tag> ref:<id>

               shortcut :: meta:<rsc>[:<attr>]
                       params:<rsc>[:<attr>]
                       utilization:<rsc>
                       location:<rsc>
                       property[:<attr>]
                       node[:<node>]
                       nodeattr[:<attr>]
                       nodeutil[:<node>]
                       status

           Example:

               role app1_admin \
                   write meta:app1:target-role \
                   write meta:app1:is-managed \
                   write location:app1 \
                   read ref:app1

       user
           Users which normally cannot view or manage cluster configuration
           can be allowed access to parts of the CIB. The access is defined by
           a set of read, write, and deny rules as in role definitions or by
           referencing roles. The latter is considered best practice.

           Usage:

               user <uid> {roles|rules}

               roles :: role:<role-ref> [role:<role-ref> ...]
               rules :: rule [rule ...]

           Example:

               user joe \
                   role:app1_admin \
                   role:read_all

       op_defaults
           Set defaults for the operations meta attributes.

           Usage:

               op_defaults [$id=<set_id>] <option>=<value> [<option>=<value> ...]

           Example:

               op_defaults record-pending=true

       schema
            The CIB’s content is validated by an RNG schema. Pacemaker
            supports several, depending on the version. Currently
            supported schemas are pacemaker-1.0, pacemaker-1.1, and
            pacemaker-1.2.

           Use this command to display or switch to another RNG schema.

           Usage:

               schema [<schema>]

           Example:

               schema pacemaker-1.1

       show
           The show command displays objects. It may display all objects or a
           set of objects. The user may also choose to see only objects which
           were changed.

           Optionally, the XML code may be displayed instead of the CLI
           representation by passing xml as the first argument.

           To show all objects of a certain type, use the type: prefix.

           Usage:

               show [xml] [<id> ...]
               show [xml] changed

           Example:

               show webapp
               show type:primitive
               show xml type:node

       edit
           This command invokes the editor with the object description. As
           with the show command, the user may choose to edit all objects or a
           set of objects.

            If the user insists, he or she may edit the XML version of the
            object. If you do that, don’t modify any id attributes.

           Usage:

               edit [xml] [<id> ...]
               edit [xml] changed

            Note on renaming element ids

            The edit command sometimes cannot properly handle modifying
            element ids, in particular for elements which belong to group
            or ms resources. Group and ms resources themselves also cannot
            be renamed this way. Please use the rename command instead.

       filter
           This command filters the given CIB elements through an external
           program. The program should accept input on stdin and send output
           to stdout (the standard UNIX filter conventions). As with the show
           command, the user may choose to filter all or just a subset of
           elements.

            It is possible to filter the XML representation of objects,
            but that is probably not as useful as filtering the
            configuration language. The presentation is somewhat different
            from what would be displayed by the show command---each
            element is shown on a single line, i.e. there are no
            backslashes and no other embellishments.

           Don’t forget to put quotes around the filter if it contains spaces.

           Usage:

               filter <prog> [xml] [<id> ...]
               filter <prog> [xml] changed

           Examples:

               filter "sed '/^primitive/s/target-role=[^ ]*//'"
               # crm configure filter "sed '/^primitive/s/target-role=[^ ]*//'"
               crm configure <<END
                 filter "sed '/threshold=\"1\"/s/=\"1\"/=\"0\"/g'"
               END

            Note on quotation marks

           Filter commands which feature a blend of quotation marks can be
           difficult to get right, especially when used directly from bash,
           since bash does its own quotation parsing. In these cases, it can
           be easier to supply the filter command as standard input. See the
           last example above.

       delete
            Delete one or more objects. If an object to be deleted belongs
            to a container object, such as a group, and it is the only
            resource in that container, then the container is deleted as
            well. Any related constraints are also removed.

           Usage:

               delete <id> [<id>...]

       default-timeouts
           This command takes the timeouts from the actions section of the
           resource agent meta-data and sets them for the operations of the
           primitive.

           Usage:

               default-timeouts <id> [<id>...]
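
            For example, reusing the apcfence primitive defined earlier (a
            sketch):

                default-timeouts apcfence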

            Note on default-timeouts

           You may be happy using this, but your applications may not. And it
           will tell you so at the worst possible moment. You have been
           warned.

       rename
            Rename an object. It is recommended to use this command to
            rename a resource, because it will take care of updating all
            related constraints and the parent resource. Changing ids with
            the edit command won’t have the same effect.

           If you want to rename a resource, it must be in the stopped state.

           Usage:

               rename <old_id> <new_id>

       modgroup
           Add or remove primitives in a group. The add subcommand appends the
           new group member by default. Should it go elsewhere, there are
           after and before clauses.

           Usage:

               modgroup <id> add <id> [after <id>|before <id>]
               modgroup <id> remove <id>

           Examples:

               modgroup share1 add storage2 before share1-fs

       refresh
           Refresh the internal structures from the CIB. All changes made
           during this session are lost.

           Usage:

               refresh

       erase
            The erase command clears all of the configuration apart from
            nodes. To remove nodes as well, you have to specify the
            additional keyword nodes.

           Note that removing nodes from the live cluster may have some
           strange/interesting/unwelcome effects.

           Usage:

               erase [nodes]

       ptest (simulate)
           Show PE (Policy Engine) motions using ptest(8) or crm_simulate(8).

           A CIB is constructed using the current user edited configuration
           and the status from the running CIB. The resulting CIB is run
           through ptest (or crm_simulate) to show changes which would happen
           if the configuration is committed.

            The status section may be loaded from another source and
            modified using the cibstatus level commands. In that case, the
            ptest command will issue a message informing the user that the
            Policy Engine graph is not calculated based on the current
            status section and therefore won’t show what would happen to
            the running cluster, but to some imaginary one.

            If you have graphviz installed and an X11 session, dotty(1) is
            run to display the changes graphically.

            Add a string of v characters to increase verbosity. Specify
            scores to also show allocation scores. utilization turns on
            information about the remaining capacity of nodes. With the
            actions option, ptest will print all resource actions.

            The ptest program has been replaced by crm_simulate in newer
            Pacemaker versions; in some installations both may be
            installed. Use simulate to enforce using crm_simulate.

           Usage:

               ptest [nograph] [v...] [scores] [actions] [utilization]

           Examples:

               ptest scores
               ptest vvvvv
               simulate actions

       rsctest
           Test resources with current resource configuration. If no nodes are
           specified, tests are run on all known nodes.

           The order of resources is significant: it is assumed that later
           resources depend on earlier ones.

           If a resource is multi-state, it is assumed that the role on which
           later resources depend is master.

           Tests are run sequentially to prevent running the same resource on
           two or more nodes. Tests are carried out only if none of the
           specified nodes currently run any of the specified resources.
           However, it won’t verify whether resources run on the other nodes.

            Superuser privileges are obviously required: either run this
            as root or set up the sudoers file appropriately.

           Note that resource testing may take some time.

           Usage:

               rsctest <rsc_id> [<rsc_id> ...] [<node_id> ...]

           Examples:

               rsctest my_ip websvc
               rsctest websvc nodeB

   cib (shadow CIBs)
       This level is for management of shadow CIBs. It is available at the
       configure level to enable saving intermediate changes to a shadow CIB
       instead of to the live cluster. This short excerpt shows how:

           crm(live)configure# cib new test-2
           INFO: test-2 shadow CIB created
           crm(test-2)configure# commit

       Note how the current CIB in the prompt changed from live to test-2
       after issuing the cib new command. See also the CIB shadow management
       for more information.

       cibstatus
            Enter the level for editing and managing the CIB status
            section. See the CIB status management section.

       template
           The specified template is loaded into the editor. It’s up to the
           user to make a good CRM configuration out of it. See also the
           template section.

           Usage:

               template [xml] url

           Example:

               template two-apaches.txt

       commit
           Commit the current configuration to the CIB in use. As noted
           elsewhere, commands in a configure session don’t have immediate
           effect on the CIB. All changes are applied at one point in time,
            either using commit or when the user leaves the configure
            level. In case the CIB in use has changed in the meantime,
            presumably modified by somebody else, the crm shell will
            refuse to apply the changes. If you know that it’s fine to
            still apply them, add force.

           Usage:

               commit [force]
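
            For example, to apply the changes even though the CIB in use
            has changed in the meantime (use with care):

                commit force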

       verify
           Verify the contents of the CIB which would be committed.

           Usage:

               verify

       upgrade
            If you get the CIB not supported error, which typically means
            that the current CIB version comes from an older release, you
            may try to upgrade it to the latest revision. The command to
            perform the upgrade is:

                # cibadmin --upgrade --force

            If crm does not recognize the current CIB as an old one, but
            you’re sure that it is, you may force the command.

           Usage:

               upgrade [force]

       save
            Save the current configuration to a file, optionally as XML.
            Use - instead of a file name to write the output to stdout.

           Usage:

               save [xml] <file>

           Example:

               save myfirstcib.txt

       load
            Load a part of the configuration (or all of it) from a local
            file or a network URL. The replace method replaces the current
            configuration with the one from the source. The update method
            tries to import the contents into the current configuration.
            The file may be a CLI file or an XML file.

           Usage:

               load [xml] <method> URL

               method :: replace | update

           Example:

               load xml update myfirstcib.xml
               load xml replace http://storage.big.com/cibs/bigcib.xml

       graph
           Create a graphviz graphical layout from the current cluster
           configuration.

           Currently, only dot (directed graph) is supported. It is
           essentially a visualization of resource ordering.

           The graph may be saved to a file which can be used as source for
           various graphviz tools (by default it is displayed in the user’s
           X11 session). Optionally, by specifying the format, one can also
           produce an image instead.

            For more or different graphviz attributes, it is possible to
            save the default set of attributes to an ini file. If this
            file exists, it will always override the built-in settings.
            The exportsettings subcommand also prints the location of the
            ini file.

           Usage:

               graph [<gtype> [<file> [<img_format>]]]
               graph exportsettings

               gtype :: dot
               img_format :: `dot` output format (see the `-T` option)

           Example:

               graph dot
               graph dot clu1.conf.dot
               graph dot clu1.conf.svg svg

       xml
            Even though we promised no xml, it may happen, though
            hopefully very seldom, that an element from the CIB cannot be
            rendered in the configuration language. In that case, the
            element will be shown as raw xml, prefixed by this command.
            That element can then be edited like any other. If, after the
            change, the shell can digest the element, it will be converted
            back into the normal configuration language. Otherwise, there
            is no need to use xml for configuration.

           Usage:

               xml <xml>
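
            For instance, an order constraint kept as raw XML could look
            like the following sketch (the element and its attributes use
            the CIB syntax rather than the configuration language):

                xml <rsc_order id="o-xml" first="A" then="B" kind="Serialize"/>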

   template
        The user may be assisted in the cluster configuration by templates
        prepared in advance. Templates consist of a typical ready-made
        configuration which may be edited to suit particular user needs.

       This command enters a template level where additional commands for
       configuration/template management are available.

       new
            Create a new configuration from one or more templates. Note
            that configurations and templates are kept in different
            places, so a configuration may have the same name as a
            template.

           If you already know which parameters are required, you can set them
           directly on the command line.

           The parameter name id is set by default to the name of the
           configuration.

           Usage:

               new <config> <template> [<template> ...] [params name=value ...]

           Example:

               new vip virtual-ip
               new bigfs ocfs2 params device=/dev/sdx8 directory=/bigfs

       load
           Load an existing configuration. Further edit, show, and apply
           commands will refer to this configuration.

           Usage:

               load <config>

       edit
            Edit the current or given configuration using your favourite
            editor.

           Usage:

               edit [<config>]

       delete
           Remove a configuration. The loaded (active) configuration may be
           removed by force.

           Usage:

               delete <config> [force]

       list
           List existing configurations or templates.

           Usage:

               list [templates]

       apply
           Copy the current or given configuration to the current CIB. By
           default, the CIB is replaced, unless the method is set to "update".

           Usage:

               apply [<method>] [<config>]

               method :: replace | update

       show
           Process the current or given configuration and display the result.

           Usage:

               show [<config>]

   cibstatus
       The status section of the CIB keeps the current status of nodes and
       resources. It is modified only on events, i.e. when some resource
       operation is run or node status changes. For obvious reasons, the CRM
       has no user interface with which it is possible to affect the status
       section. From the user’s point of view, the status section is
       essentially a read-only part of the CIB. The current status is never
       even written to disk, though it is available in the PE (Policy Engine)
       input files which represent the history of cluster motions. The current
       status may be read using the cibadmin -Q command.

        It may sometimes be of interest to see how status changes would
        affect the Policy Engine. The set of cibstatus level commands
        allows the user to load status sections from various sources and
        then insert or modify resource operations or change nodes’ state.

       The effect of those changes may then be observed by running the ptest
       command at the configure level or simulate and run commands at this
       level. The ptest runs with the user edited CIB whereas the latter two
       commands run with the CIB which was loaded along with the status
       section.

       The simulate and run commands as well as all status modification
       commands are implemented using crm_simulate(8).
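
        A typical session might look like the following sketch (node and
        shadow CIB names are placeholders): load a status section, mark a
        node as unclean, and simulate the resulting transition.

            load shadow:test1
            node node1 unclean
            simulate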

       load
           Load a status section from a file, a shadow CIB, or the running
           cluster. By default, the current (live) status section is modified.
           Note that if the live status section is modified it is not going to
           be updated if the cluster status changes, because that would
           overwrite the user changes. To make crm drop changes and resume use
           of the running cluster status, run load live.

           All CIB shadow configurations contain the status section which is a
           snapshot of the status section taken at the time the shadow was
           created. Obviously, this status section doesn’t have much to do
           with the running cluster status, unless the shadow CIB has just
           been created. Therefore, the ptest command by default uses the
           running cluster status section.

           Usage:

               load {<file>|shadow:<cib>|live}

           Example:

               load bug-12299.xml
               load shadow:test1

       save
           The current internal status section with whatever modifications
           were performed can be saved to a file or shadow CIB.

           If the file exists and contains a complete CIB, only the status
           section is going to be replaced and the rest of the CIB will remain
           intact. Otherwise, the current user edited configuration is saved
           along with the status section.

           Note that all modifications are saved in the source file as soon as
           they are run.

           Usage:

               save [<file>|shadow:<cib>]

           Example:

               save bug-12299.xml

       origin
           Show the origin of the status section currently in use. This
           essentially shows the latest load argument.

           Usage:

               origin

       show
            Show the current status section in the XML format. Brace
            yourself for some unreadable output. Add the changed option to
            get a human readable output of all changes.

           Usage:

               show [changed]

       node
           Change the node status. It is possible to throw a node out of the
           cluster, make it a member, or set its state to unclean.

            online
                Set the node_state crmd attribute to online and the
                expected and join attributes to member. The effect is that
                the node becomes a cluster member.

            offline
                Set the node_state crmd attribute to offline and the
                expected attribute to empty. The node is then cleanly
                removed from the cluster.

            unclean
                Set the node_state crmd attribute to offline and the
                expected attribute to member. In this case the node has
                unexpectedly disappeared.

           Usage:

               node <node> {online|offline|unclean}

           Example:

               node xen-b unclean

       op
            Edit the outcome of a resource operation. This way you can
            tell CRM that it ran an operation and that the resource agent
            returned a certain exit code. It is also possible to change
            the operation’s status. In case the operation status is set to
            something other than done, the exit code is effectively
            ignored.

           Usage:

               op <operation> <resource> <exit_code> [<op_status>] [<node>]

               operation :: probe | monitor[:<n>] | start | stop |
                  promote | demote | notify | migrate_to | migrate_from
               exit_code :: <rc> | success | generic | args |
                  unimplemented | perm | installed | configured | not_running |
                  master | failed_master
               op_status :: pending | done | cancelled | timeout | notsupported | error

               n :: the monitor interval in seconds; if omitted, the first
                  recurring operation is referenced
               rc :: numeric exit code in range 0..9

           Example:

               op start d1 xen-b generic
               op start d1 xen-b 1
               op monitor d1 xen-b not_running
               op stop d1 xen-b 0 timeout

       quorum
           Set the quorum value.

           Usage:

               quorum <bool>

           Example:

               quorum false

       ticket
            Modify the ticket status. Tickets can be granted and revoked.
            Granted tickets can be activated or put in standby.

           Usage:

               ticket <ticket> {grant|revoke|activate|standby}

           Example:

               ticket ticketA grant

       run
           Run the policy engine with the edited status section.

           Add a string of v characters to increase verbosity. Specify scores
           to see allocation scores also. utilization turns on information
           about the remaining capacity of nodes.

            If you have graphviz installed and an X11 session, dotty(1) is
            run to display the changes graphically.

           Usage:

               run [nograph] [v...] [scores] [utilization]

           Example:

               run

       simulate
           Run the policy engine with the edited status section and simulate
           the transition.

           Add a string of v characters to increase verbosity. Specify scores
           to see allocation scores also. utilization turns on information
           about the remaining capacity of nodes.

            If you have graphviz installed and an X11 session, dotty(1) is
            run to display the changes graphically.

           Usage:

               simulate [nograph] [v...] [scores] [utilization]

           Example:

               simulate

   assist
       The assist sublevel is a collection of helper commands that create or
       modify resources and constraints, to simplify the creation of certain
       configurations.

       For more information on individual commands, see the help text for
       those commands.

       weak-bond
           A colocation between a group of resources says that the resources
           should be located together, but it also means that those resources
           are dependent on each other. If one of the resources fails, the
           others will be restarted.

            If this is not desired, it is possible to circumvent it: by
            placing the resources in a non-sequential set and colocating
            the set with a dummy resource which is not monitored, the
            resources will be placed together but will have no further
            dependency on each other.

           This command creates both the constraint and the dummy resource
           needed for such a colocation.

           Usage:

               weak-bond resource-1 resource-2
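
            Conceptually, the result is equivalent to a configuration like
            the following sketch (the actual name of the dummy resource is
            chosen by the command):

                primitive weak-bond-dummy ocf:heartbeat:Dummy
                colocation weak-bond inf: ( resource-1 resource-2 ) weak-bond-dummy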

   history
        Examining Pacemaker’s history is a particularly involved task. The
        number of subsystems to be considered, the complexity of the
        configuration, and the set of various information sources, most of
        which are not exactly human readable, make analyzing resource or
        node problems accessible only to the most knowledgeable. Or,
        depending on the point of view, to the most persistent. The
        following set of commands has been devised in the hope of making
        cluster history more accessible.

        Of course, looking at all history could be time consuming
        regardless of how good the tools at hand are. Therefore, one
        should first say which period he or she wants to analyze. If not
        otherwise specified, the last hour is considered. Logs and other
        relevant information are collected using hb_report. Since this
        process takes some time and we always need fresh logs, information
        is refreshed in a much faster way using pssh(1). If python-pssh is
        not found on the system, examining the live cluster is still
        possible, though not as comfortable.

        Apart from examining the live cluster, events may be retrieved
        from a report generated by hb_report (see also the -H option). In
        that case we assume that the period spanning the whole report
        needs to be investigated. Of course, it is still possible to
        further reduce the time range.

       If you think you may have found a bug or just need clarification from
       developers or your support, the session pack command can help create a
       report.

       Example:

           crm(live)history# timeframe "Jul 18 12:00" "Jul 18 12:30"
           crm(live)history# session save strange_restart
           crm(live)history# session pack
           Report saved in .../strange_restart.tar.bz2
           crm(live)history#

        In order to reduce the report size and allow developers to
        concentrate on the issue, you should limit the time frame
        beforehand. Giving a meaningful session name helps too.

       info
           The info command provides a summary of the information source,
           which can be either a live cluster snapshot or a previously
           generated report.

           Usage:

               info

           Example:

               info

       latest
           The latest command shows a bit of recent history, more precisely
           whatever happened since the last cluster change (the latest
           transition). If the transition is running, the shell will first
           wait until it finishes.

           Usage:

               latest

           Example:

               latest

       limit (timeframe)
            All history commands look at events within a certain period.
            It defaults to the last hour for the live cluster source;
            there is no limit for the hb_report source. Use this command
            to set the timeframe.

            The time period is parsed by the dateutil python module. It
            covers a wide range of date formats. For instance:

           ·   3:00 (today at 3am)

           ·   15:00 (today at 3pm)

           ·   2010/9/1 2pm (September 1st 2010 at 2pm)

            We won’t bother to give a definition of the time specification
            in the usage below. Either use common sense or read the
            dateutil documentation.

            If dateutil is not available, then the time is parsed using
            strptime and only the format as printed by date(1) is allowed:

           ·   Tue Sep 15 20:46:27 CEST 2010

           Usage:

               limit [<from_time> [<to_time>]]

           Examples:

               limit 10:15
               limit 15h22m 16h
               limit "Sun 5 20:46" "Sun 5 22:00"

       source
            Events to be examined can come from the current cluster or
            from an hb_report report. This command sets the source. source
            live sets the source to the running cluster and its system
            logs. If no source is specified, the current source
            information is printed.

            In case a report source is specified as a file reference, the
            file is unpacked in the directory where it resides. This
            directory is not removed on exit.

           Usage:

               source [<dir>|<file>|live]

           Examples:

               source live
               source /tmp/customer_case_22.tar.bz2
               source /tmp/customer_case_22
               source

       refresh
            This command makes sense only for the live source and makes
            crm collect the latest logs and other relevant information. If
            you want to make a completely new report, specify force.

           Usage:

               refresh [force]

       detail
           How much detail to show from the logs.

           Usage:

               detail <detail_level>

               detail_level :: small integer (defaults to 0)

           Example:

               detail 1

       setnodes
           In case the host this program runs on is not part of the cluster,
           it is necessary to set the list of nodes.

           Usage:

               setnodes node <node> [<node> ...]

           Example:

               setnodes node_a node_b

       resource
            Show actions and any failures that happened on all specified
            resources on all nodes. Normally, one gives resource names as
            arguments, but it is also possible to use extended regular
            expressions. Note that group, clone, and master/slave names
            are never logged. The resource command expands all of these
            appropriately, so that clone instances or resources which are
            part of a group are shown.

           Usage:

               resource <rsc> [<rsc> ...]

           Example:

               resource bigdb public_ip
               resource my_.*_db2
               resource ping_clone

       node
           Show important events that happened on a node. Important events are
           node lost and join, standby and online, and fence. Use either node
           names or extended regular expressions.

           Usage:

               node <node> [<node> ...]

           Example:

               node node1

       log
            Show messages logged on one or more nodes. Leaving out a node
            name produces combined logs of all nodes. Messages are sorted
            by time and, if the terminal emulation supports it, displayed
            in different colours depending on the node to allow for easier
            reading.

            The sorting key is the timestamp as written by syslog, which
            normally has a maximum resolution of one second. Obviously,
            messages generated by events which share the same timestamp
            may not be sorted in the same order as they happened. Such
            close events may actually happen fairly often.

           Usage:

               log [<node> [<node> ...] ]

           Example:

               log node-a

       exclude
            If a log is infested with irrelevant messages, those messages
            may be excluded by specifying a regular expression. The regular
            expressions used are Python extended regular expressions. This
            command is additive. To drop all regular expressions, use
            exclude clear. Run exclude without arguments to see the current
            list of regular expressions. Excludes are saved along with the
            history sessions.

           Usage:

               exclude [<regex>|clear]

           Example:

               exclude kernel.*ocfs2
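
            To review the current list of exclusions and then drop them all,
            as described above:

                exclude
                exclude clear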

       peinputs
            Every event in the cluster results in generating one or more
            Policy Engine (PE) files. These files describe the planned
            resource transitions. The files are listed as full paths in the
            current report directory. Add v to also see the creation
            timestamps.

           Usage:

               peinputs [{<range>|<number>} ...] [v]

               range :: <n1>:<n2>

           Example:

               peinputs
               peinputs 440:444 446
               peinputs v

       transition
            This command prints the actions planned by the PE and runs
            graphviz (dotty) to display a graphical representation of the
            transition. Of course, the latter requires an X11 session. This
            command invokes ptest(8) in the background.

            The showdot subcommand runs graphviz (dotty) to display a
            graphical representation of the .dot file which has been
            included in the report. Essentially, it shows the calculation
            produced by the pengine installed on the node where the report
            was produced. In the optimal case this output should not differ
            from the one produced by the locally installed pengine.

           The log subcommand shows the full log for the duration of the
           transition.

           A transition can also be saved to a CIB shadow for further analysis
           or use with cib or configure commands (use the save subcommand).
           The shadow file name defaults to the name of the PE input file.

            If the PE input file number is not provided, it defaults to the
            last one, i.e. the last transition. The last transition can also
            be referenced with the number 0. If the number is negative, the
            transition that many steps before the last one is chosen.

            If there are warning and error PE input files, or if different
            nodes were the DC in the observed timeframe, PE input file
            numbers may collide. In that case, provide some unique part of
            the path to the file.

           After the ptest output, logs about events that happened during the
           transition are printed.

           Usage:

               transition [<number>|<index>|<file>] [nograph] [v...] [scores] [actions] [utilization]
               transition showdot [<number>|<index>|<file>]
               transition log [<number>|<index>|<file>]
               transition save [<number>|<index>|<file> [name]]

           Examples:

               transition
               transition 444
               transition -1
               transition pe-error-3.bz2
               transition node-a/pengine/pe-input-2.bz2
               transition showdot 444
               transition log
               transition save 0 enigma-22
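
            A saved transition can then be examined further via the shadow
            CIB. A minimal sketch, reusing the shadow name from the example
            above:

                transition save 0 enigma-22
                cib use enigma-22
                configure show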

       show
            Every transition is saved as a PE file. Use this command to
            render that PE file either as configuration or as status. The
            configuration output is the same as that of crm configure show.

           Usage:

               show <pe> [status]

               pe :: <number>|<index>|<file>|live

           Examples:

               show 2066
               show pe-input-2080.bz2 status

       graph
            Create a graphviz graphical layout from the PE file (the
            transition). Every transition contains the cluster configuration
            which was active at the time. See also the configure graph
            command, which generates a directed graph from the
            configuration.

           Usage:

               graph <pe> [<gtype> [<file> [<img_format>]]]

                gtype :: dot
                img_format :: dot output format (see the -T option of dot)

           Example:

               graph -1
               graph 322 dot clu1.conf.dot
               graph 322 dot clu1.conf.svg svg

       diff
           A transition represents a change in cluster configuration or state.
           Use diff to see what has changed between two transitions.

           If you want to specify the current cluster configuration and
           status, use the string live.

            Normally, the first transition specified should be the older
            one, but this is not enforced.

           Note that a single configuration update may result in more than one
           transition.

           Usage:

               diff <pe> <pe> [status] [html]

               pe :: <number>|<index>|<file>|live

           Examples:

               diff 2066 2067
               diff pe-input-2080.bz2 live status

       wdiff
           A transition represents a change in cluster configuration or state.
           Use wdiff to see what has changed between two transitions as word
           differences on a line-by-line basis.

           If you want to specify the current cluster configuration and
           status, use the string live.

            Normally, the first transition specified should be the older
            one, but this is not enforced.

           Note that a single configuration update may result in more than one
           transition.

           Usage:

               wdiff <pe> <pe> [status]

               pe :: <number>|<index>|<file>|live

           Examples:

               wdiff 2066 2067
               wdiff pe-input-2080.bz2 live status

       session
           Sometimes you may want to get back to examining a particular
           history period or bug report. In order to make that easier, the
           current settings can be saved and later retrieved.

            If the history currently being examined comes from a live
            cluster, the logs, PE inputs, and other files are saved too,
            because they may later disappear from the nodes. For existing
            reports coming from hb_report, only the directory location is
            saved (so as not to waste space).

           A history session may also be packed into a tarball which can then
           be sent to support.

            Leave out the subcommand to see the current session.

           Usage:

               session [{save|load|delete} <name> | pack [<name>] | update | list]

           Examples:

               session save bnc966622
               session load rsclost-2
               session list
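
            To pack the current session into a tarball for support, as
            mentioned above (the session name is illustrative only):

                session pack bnc966622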

   report
       Interface to a tool for creating a cluster report. A report is an
       archive containing log files, configuration files, system information
       and other relevant data for a given time period. This is a useful tool
       for collecting data to attach to bug reports, or for detecting the root
       cause of errors resulting in resource failover, for example.

        See crmsh_hb_report(8) for more details on arguments, or call crm
        report -h.

       Usage:

           report -f {time|"cts:"testnum} [-t time] [-u user] [-l file]
                  [-n nodes] [-E files] [-p patt] [-L patt] [-e prog]
                  [-MSDZAVsvhd] [dest]

       Examples:

           report -f 2pm report_1
           report -f "2007/9/5 12:30" -t "2007/9/5 14:00" report_2
           report -f 1:00 -t 3:00 -l /var/log/cluster/ha-debug report_3
           report -f "09sep07 2:00" -u hbadmin report_4
           report -f 18:00 -p "usern.*" -p "admin.*" report_5
           report -f cts:133 ctstest_133

   end (cd, up)
        The end command exits the current level and moves the user up to the
        parent level. This command is available everywhere.

       Usage:

           end

   help
       The help command prints help for the current level or for the specified
       topic (command). This command is available everywhere.

       Usage:

           help [<topic>]
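
        For instance, to read the description of the end command shown
        above:

            help end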

   quit (exit, bye)
       Leave the program.

BUGS

       Even though all sensible configurations (and most of those that are
       not) are supported by the crm shell, certain XML constructs may still
       confuse the tool. When that happens, please file a bug report.

       The crm shell will not try to update the objects it does not
       understand. Of course, it is always possible to edit such objects in
       the XML format.

AUTHORS

       Dejan Muhamedagic <dejan@suse.de>, Kristoffer Gronlund
       <kgronlund@suse.com>, and many others.

SEE ALSO

       crm_resource(8), crm_attribute(8), crm_mon(8), cib_shadow(8), ptest(8),
       dotty(1), crm_simulate(8), cibadmin(8)

COPYING

       Copyright (C) 2008-2013 Dejan Muhamedagic. Copyright (C) 2013
       Kristoffer Gronlund.

       Free use of this software is granted under the terms of the GNU General
       Public License (GPL).


