CGROUPS(7)                (2020-08-13)                 CGROUPS(7)

          cgroups - Linux control groups

          Control groups, usually referred to as cgroups, are a Linux
          kernel feature which allow processes to be organized into
          hierarchical groups whose usage of various types of
          resources can then be limited and monitored.  The kernel's
          cgroup interface is provided through a pseudo-filesystem
          called cgroupfs.  Grouping is implemented in the core cgroup
          kernel code, while resource tracking and limits are
          implemented in a set of per-resource-type subsystems
          (memory, CPU, and so on).

          A cgroup is a collection of processes that are bound to a
          set of limits or parameters defined via the cgroup filesystem.

          A subsystem is a kernel component that modifies the behavior
          of the processes in a cgroup.  Various subsystems have been
          implemented, making it possible to do things such as limit-
          ing the amount of CPU time and memory available to a cgroup,
          accounting for the CPU time used by a cgroup, and freezing
          and resuming execution of the processes in a cgroup.  Sub-
          systems are sometimes also known as resource controllers (or
          simply, controllers).

          The cgroups for a controller are arranged in a hierarchy.
          This hierarchy is defined by creating, removing, and renam-
          ing subdirectories within the cgroup filesystem.  At each
          level of the hierarchy, attributes (e.g., limits) can be
          defined.  The limits, control, and accounting provided by
          cgroups generally have effect throughout the subhierarchy
          underneath the cgroup where the attributes are defined.
          Thus, for example, the limits placed on a cgroup at a higher
          level in the hierarchy cannot be exceeded by descendant cgroups.

        Cgroups version 1 and version 2
          The initial release of the cgroups implementation was in
          Linux 2.6.24.  Over time, various cgroup controllers have
          been added to allow the management of various types of
          resources.  However, the development of these controllers
          was largely uncoordinated, with the result that many incon-
          sistencies arose between controllers and management of the
          cgroup hierarchies became rather complex.  A longer descrip-
          tion of these problems can be found in the kernel source
          file Documentation/admin-guide/cgroup-v2.rst (or
          Documentation/cgroup-v2.txt in Linux 4.17 and earlier).

     Page 1                        Linux             (printed 5/22/22)

          Because of the problems with the initial cgroups implementa-
          tion (cgroups version 1), starting in Linux 3.10, work began
          on a new, orthogonal implementation to remedy these prob-
          lems.  Initially marked experimental, and hidden behind the
          -o __DEVEL__sane_behavior mount option, the new version
          (cgroups version 2) was eventually made official with the
          release of Linux 4.5.  Differences between the two versions
          are described in the text below.  The file
          cgroup.sane_behavior, present in cgroups v1, is a relic of
          this mount option. The file always reports "0" and is only
          retained for backward compatibility.

          Although cgroups v2 is intended as a replacement for cgroups
          v1, the older system continues to exist (and for compatibil-
          ity reasons is unlikely to be removed).  Currently, cgroups
          v2 implements only a subset of the controllers available in
          cgroups v1.  The two systems are implemented so that both v1
          controllers and v2 controllers can be mounted on the same
          system.  Thus, for example, it is possible to use those con-
          trollers that are supported under version 2, while also
          using version 1 controllers where version 2 does not yet
          support those controllers.  The only restriction here is
          that a controller can't be simultaneously employed in both a
          cgroups v1 hierarchy and in the cgroups v2 hierarchy.

          Under cgroups v1, each controller may be mounted against a
          separate cgroup filesystem that provides its own hierarchi-
          cal organization of the processes on the system.  It is also
          possible to comount multiple (or even all) cgroups v1 con-
          trollers against the same cgroup filesystem, meaning that
          the comounted controllers manage the same hierarchical orga-
          nization of processes.

          For each mounted hierarchy, the directory tree mirrors the
          control group hierarchy.  Each control group is represented
          by a directory, with each of its child control cgroups rep-
          resented as a child directory.  For instance,
          /user/joe/1.session represents control group 1.session,
          which is a child of cgroup joe, which is a child of /user.
          Under each cgroup directory is a set of files which can be
          read or written to, reflecting resource limits and a few
          general cgroup properties.

        Tasks (threads) versus processes
          In cgroups v1, a distinction is drawn between processes and
          tasks. In this view, a process can consist of multiple tasks
          (more commonly called threads, from a user-space perspec-
          tive, and called such in the remainder of this man page).
          In cgroups v1, it is possible to independently manipulate
          the cgroup memberships of the threads in a process.

          The cgroups v1 ability to split threads across different
          cgroups caused problems in some cases.  For example, it made
          no sense for the memory controller, since all of the threads
          of a process share a single address space.  Because of these
          problems, the ability to independently manipulate the cgroup
          memberships of the threads in a process was removed in the
          initial cgroups v2 implementation, and subsequently restored
          in a more limited form (see the discussion of "thread mode" below).

        Mounting v1 controllers
          The use of cgroups requires a kernel built with the
          CONFIG_CGROUPS option.  In addition, each of the v1 con-
          trollers has an associated configuration option that must be
          set in order to employ that controller.

          In order to use a v1 controller, it must be mounted against
          a cgroup filesystem.  The usual place for such mounts is
          under a tmpfs(5) filesystem mounted at /sys/fs/cgroup. Thus,
          one might mount the cpu controller as follows:

              mount -t cgroup -o cpu none /sys/fs/cgroup/cpu

          It is possible to comount multiple controllers against the
          same hierarchy.  For example, here the cpu and cpuacct con-
          trollers are comounted against a single hierarchy:

              mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct

          Comounting controllers has the effect that a process is in
          the same cgroup for all of the comounted controllers.  Sepa-
          rately mounting controllers allows a process to be in cgroup
          /foo1 for one controller while being in /foo2/foo3 for another.

          It is possible to comount all v1 controllers against the
          same hierarchy:

              mount -t cgroup -o all cgroup /sys/fs/cgroup

          (One can achieve the same result by omitting -o all, since
          it is the default if no controllers are explicitly specified.)

          It is not possible to mount the same controller against mul-
          tiple cgroup hierarchies.  For example, it is not possible
          to mount both the cpu and cpuacct controllers against one
          hierarchy, and to mount the cpu controller alone against
          another hierarchy.  It is possible to create multiple mount
          points with exactly the same set of comounted controllers.
          However, in this case all that results is multiple mount
          points providing a view of the same hierarchy.

          Note that on many systems, the v1 controllers are automati-
          cally mounted under /sys/fs/cgroup; in particular,
          systemd(1) automatically creates such mount points.

        Unmounting v1 controllers
          A mounted cgroup filesystem can be unmounted using the
          umount(8) command, as in the following example:

              umount /sys/fs/cgroup/pids

          But note well: a cgroup filesystem is unmounted only if it
          is not busy, that is, it has no child cgroups.  If this is
          not the case, then the only effect of the umount(8) is to
          make the mount invisible.  Thus, to ensure that the mount
          point is really removed, one must first remove all child
          cgroups, which in turn can be done only after all member
          processes have been moved from those cgroups to the root cgroup.

        Cgroups version 1 controllers
          Each of the cgroups version 1 controllers is governed by a
          kernel configuration option (listed below).  Additionally,
          the availability of the cgroups feature is governed by the
          CONFIG_CGROUPS kernel configuration option.

             cpu (since Linux 2.6.24; CONFIG_CGROUP_SCHED)
                Cgroups can be guaranteed a minimum number of "CPU
               shares" when a system is busy.  This does not limit a
               cgroup's CPU usage if the CPUs are not busy.  For fur-
               ther information, see
               Documentation/scheduler/sched-design-CFS.rst (or
               Documentation/scheduler/sched-design-CFS.txt in Linux
               5.2 and earlier).
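
                As a sketch (run as root; the cgroup name cg1 and the
                share values are illustrative, not taken from this
                page), shares are assigned by writing to the cpu.shares
                file:

```shell
# Give cg1 twice the default weight (1024) when the CPU is
# contended; this is not a hard limit when CPUs are idle.
mkdir /sys/fs/cgroup/cpu/cg1
echo 2048 > /sys/fs/cgroup/cpu/cg1/cpu.shares
```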

               In Linux 3.2, this controller was extended to provide
               CPU "bandwidth" control.  If the kernel is configured
               with CONFIG_CFS_BANDWIDTH, then within each scheduling
               period (defined via a file in the cgroup directory), it
               is possible to define an upper limit on the CPU time
               allocated to the processes in a cgroup.  This upper
               limit applies even if there is no other competition for
               the CPU.  Further information can be found in the ker-
               nel source file Documentation/scheduler/sched-bwc.rst
               (or Documentation/scheduler/sched-bwc.txt in Linux 5.2
               and earlier).
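
                For example, assuming a kernel built with
                CONFIG_CFS_BANDWIDTH and an illustrative cgroup cg1,
                the following caps the group at half of one CPU:

```shell
# Allow 50 ms of CPU time per 100 ms scheduling period,
# i.e., half of one CPU, even if the CPU is otherwise idle.
echo 100000 > /sys/fs/cgroup/cpu/cg1/cpu.cfs_period_us
echo 50000 > /sys/fs/cgroup/cpu/cg1/cpu.cfs_quota_us
```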

             cpuacct (since Linux 2.6.24; CONFIG_CGROUP_CPUACCT)
                This provides accounting for CPU usage by groups of
                processes.

               Further information can be found in the kernel source
                file Documentation/admin-guide/cgroup-v1/cpuacct.rst
                (or Documentation/cgroup-v1/cpuacct.txt in Linux 5.2
               and earlier).

             cpuset (since Linux 2.6.24; CONFIG_CPUSETS)
                This controller can be used to bind the processes in a
                cgroup to a specified set of CPUs and NUMA nodes.

               Further information can be found in the kernel source
               file Documentation/admin-guide/cgroup-v1/cpusets.rst
               (or Documentation/cgroup-v1/cpusets.txt in Linux 5.2
               and earlier).
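
                A minimal sketch (the cgroup name cg1 and the CPU and
                node numbers are illustrative):

```shell
# Bind the processes in cg1 to CPUs 0-1 and NUMA node 0;
# both files must be assigned before processes can join cg1.
echo 0-1 > /sys/fs/cgroup/cpuset/cg1/cpuset.cpus
echo 0 > /sys/fs/cgroup/cpuset/cg1/cpuset.mems
```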

             memory (since Linux 2.6.25; CONFIG_MEMCG)
                The memory controller supports reporting and limiting
                of process memory, kernel memory, and swap used by
                cgroups.

               Further information can be found in the kernel source
               file Documentation/admin-guide/cgroup-v1/memory.rst (or
                Documentation/cgroup-v1/memory.txt in Linux 5.2 and
                earlier).

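                As an illustration (the cgroup name cg1 and the limit
                are hypothetical values, not defaults):

```shell
# Limit cg1 to 512 MiB of memory and inspect current usage.
echo $((512*1024*1024)) > /sys/fs/cgroup/memory/cg1/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/cg1/memory.usage_in_bytes
```
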
             devices (since Linux 2.6.26; CONFIG_CGROUP_DEVICE)
                This supports controlling which processes may create
               (mknod) devices as well as open them for reading or
               writing.  The policies may be specified as allow-lists
               and deny-lists.  Hierarchy is enforced, so new rules
               must not violate existing rules for the target or
               ancestor cgroups.

               Further information can be found in the kernel source
               file Documentation/admin-guide/cgroup-v1/devices.rst
               (or Documentation/cgroup-v1/devices.txt in Linux 5.2
               and earlier).
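
                A sketch of a deny-by-default policy (the cgroup name
                cg1 is illustrative):

```shell
# Remove all device permissions from cg1, then allow
# read/write/mknod access to /dev/null (char device 1:3).
echo a > /sys/fs/cgroup/devices/cg1/devices.deny
echo 'c 1:3 rwm' > /sys/fs/cgroup/devices/cg1/devices.allow
```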

             freezer (since Linux 2.6.28; CONFIG_CGROUP_FREEZER)
                The freezer cgroup can suspend and restore (resume) all
               processes in a cgroup.  Freezing a cgroup /A also
               causes its children, for example, processes in /A/B, to
               be frozen.

                Further information can be found in the kernel source
                file Documentation/admin-guide/cgroup-v1/freezer-subsystem.rst
                (or Documentation/cgroup-v1/freezer-subsystem.txt
                in Linux 5.2 and earlier).
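
                For example (the cgroup name cg1 is illustrative):

```shell
# Freeze every process in cg1 (and in its child cgroups),
# check the state, then resume them.
echo FROZEN > /sys/fs/cgroup/freezer/cg1/freezer.state
cat /sys/fs/cgroup/freezer/cg1/freezer.state
echo THAWED > /sys/fs/cgroup/freezer/cg1/freezer.state
```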

             net_cls (since Linux 2.6.29; CONFIG_CGROUP_NET_CLASSID)
                This places a classid, specified for the cgroup, on
               network packets created by a cgroup.  These classids
               can then be used in firewall rules, as well as used to
               shape traffic using tc(8).  This applies only to

               packets leaving the cgroup, not to traffic arriving at
               the cgroup.

               Further information can be found in the kernel source
               file Documentation/admin-guide/cgroup-v1/net_cls.rst
               (or Documentation/cgroup-v1/net_cls.txt in Linux 5.2
               and earlier).
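
                As a sketch (the cgroup name cg1 and the classid value
                are illustrative; the iptables cgroup match is provided
                by the xt_cgroup module):

```shell
# Tag packets created by processes in cg1 with classid
# 10:1 (written as 0x00100001: major 0x0010, minor 0x0001).
echo 0x00100001 > /sys/fs/cgroup/net_cls/cg1/net_cls.classid
# Match the classid in a firewall rule.
iptables -A OUTPUT -m cgroup --cgroup 0x00100001 -j ACCEPT
```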

             blkio (since Linux 2.6.33; CONFIG_BLK_CGROUP)
                The blkio cgroup controls and limits access to speci-
               fied block devices by applying IO control in the form
               of throttling and upper limits against leaf nodes and
               intermediate nodes in the storage hierarchy.

               Two policies are available.  The first is a
               proportional-weight time-based division of disk imple-
               mented with CFQ.  This is in effect for leaf nodes
               using CFQ.  The second is a throttling policy which
               specifies upper I/O rate limits on a device.

               Further information can be found in the kernel source
                file Documentation/admin-guide/cgroup-v1/blkio-controller.rst
                (or Documentation/cgroup-v1/blkio-controller.txt in
               Linux 5.2 and earlier).
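
                A sketch of the throttling policy (the cgroup name cg1
                and the device numbers are illustrative):

```shell
# Throttle reads from the device with major:minor 8:0
# (typically /dev/sda) to 1 MiB/s for processes in cg1.
echo '8:0 1048576' > /sys/fs/cgroup/blkio/cg1/blkio.throttle.read_bps_device
```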

             perf_event (since Linux 2.6.39; CONFIG_CGROUP_PERF)
                This controller allows perf monitoring of the set of
               processes grouped in a cgroup.

                Further information can be found in the kernel source
                file tools/perf/Documentation/perf-record.txt.

             net_prio (since Linux 3.3; CONFIG_CGROUP_NET_PRIO)
                This allows priorities to be specified, per network
               interface, for cgroups.

               Further information can be found in the kernel source
               file Documentation/admin-guide/cgroup-v1/net_prio.rst
                (or Documentation/cgroup-v1/net_prio.txt in Linux
               5.2 and earlier).

             hugetlb (since Linux 3.5; CONFIG_CGROUP_HUGETLB)
                This supports limiting the use of huge pages by
                cgroups.

               Further information can be found in the kernel source
               file Documentation/admin-guide/cgroup-v1/hugetlb.rst
               (or Documentation/cgroup-v1/hugetlb.txt in Linux 5.2
                and earlier).

             pids (since Linux 4.3; CONFIG_CGROUP_PIDS)
                This controller permits limiting the number of processes
               that may be created in a cgroup (and its descendants).

               Further information can be found in the kernel source
               file Documentation/admin-guide/cgroup-v1/pids.rst (or
                Documentation/cgroup-v1/pids.txt in Linux 5.2 and
                earlier).

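                For example (the cgroup name cg1 and the limit are
                illustrative):

```shell
# Cap cg1 at 20 processes; once pids.current reaches
# pids.max, further fork(2)/clone(2) calls fail (EAGAIN).
echo 20 > /sys/fs/cgroup/pids/cg1/pids.max
cat /sys/fs/cgroup/pids/cg1/pids.current
```
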
             rdma (since Linux 4.11; CONFIG_CGROUP_RDMA)
                The RDMA controller permits limiting the use of
               RDMA/IB-specific resources per cgroup.

               Further information can be found in the kernel source
               file Documentation/admin-guide/cgroup-v1/rdma.rst (or
                Documentation/cgroup-v1/rdma.txt in Linux 5.2 and
                earlier).

        Creating cgroups and moving processes
          A cgroup filesystem initially contains a single root cgroup,
          '/', which all processes belong to.  A new cgroup is created
          by creating a directory in the cgroup filesystem:

              mkdir /sys/fs/cgroup/cpu/cg1

          This creates a new empty cgroup.

          A process may be moved to this cgroup by writing its PID
          into the cgroup's cgroup.procs file:

              echo $$ > /sys/fs/cgroup/cpu/cg1/cgroup.procs

          Only one PID at a time should be written to this file.

          Writing the value 0 to a cgroup.procs file causes the writ-
          ing process to be moved to the corresponding cgroup.

          When writing a PID into the cgroup.procs file, all threads in the
          process are moved into the new cgroup at once.

          Within a hierarchy, a process can be a member of exactly one
          cgroup.  Writing a process's PID to a cgroup.procs file
          automatically removes it from the cgroup of which it was
          previously a member.

          The cgroup.procs file can be read to obtain a list of the
          processes that are members of a cgroup.  The returned list
          of PIDs is not guaranteed to be in order.  Nor is it guaran-
          teed to be free of duplicates.  (For example, a PID may be
          recycled while reading from the list.)

          In cgroups v1, an individual thread can be moved to another
          cgroup by writing its thread ID (i.e., the kernel thread ID
          returned by clone(2) and gettid(2)) to the tasks file in a
          cgroup directory.  This file can be read to discover the set
          of threads that are members of the cgroup.
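
          As a sketch (the cgroup name cg1 is illustrative, and $TID
          is a placeholder for a kernel thread ID obtained, for
          example, via gettid(2) in the target program):

```shell
# Move a single thread into cg1, then list the threads
# that are currently members of the cgroup.
echo $TID > /sys/fs/cgroup/cpu/cg1/tasks
cat /sys/fs/cgroup/cpu/cg1/tasks
```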

        Removing cgroups
          To remove a cgroup, it must first have no child cgroups and
          contain no (nonzombie) processes.  So long as that is the
          case, one can simply remove the corresponding directory
          pathname.  Note that files in a cgroup directory cannot and
          need not be removed.
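
          The steps above can be sketched as follows (the cgroup name
          cg1 is illustrative, and cg1 is assumed to have no child
          cgroups):

```shell
# Evacuate cg1 by moving each member process to the root
# cgroup, then remove the now-empty cgroup directory.
while read pid; do
    echo "$pid" > /sys/fs/cgroup/cpu/cgroup.procs
done < /sys/fs/cgroup/cpu/cg1/cgroup.procs
rmdir /sys/fs/cgroup/cpu/cg1
```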

        Cgroups v1 release notification
          Two files can be used to determine whether the kernel pro-
          vides notifications when a cgroup becomes empty.  A cgroup
          is considered to be empty when it contains no child cgroups
          and no member processes.

          A special file in the root directory of each cgroup hierar-
          chy, release_agent, can be used to register the pathname of
          a program that may be invoked when a cgroup in the hierarchy
          becomes empty.  The pathname of the newly empty cgroup (rel-
          ative to the cgroup mount point) is provided as the sole
          command-line argument when the release_agent program is
          invoked.  The release_agent program might remove the cgroup
          directory, or perhaps repopulate it with a process.

          The default value of the release_agent file is empty, mean-
          ing that no release agent is invoked.

          The content of the release_agent file can also be specified
          via a mount option when the cgroup filesystem is mounted:

              mount -o release_agent=pathname ...

          Whether or not the release_agent program is invoked when a
          particular cgroup becomes empty is determined by the value
          in the notify_on_release file in the corresponding cgroup
          directory.  If this file contains the value 0, then the
          release_agent program is not invoked.  If it contains the
          value 1, the release_agent program is invoked.  The default
          value for this file in the root cgroup is 0.  At the time
          when a new cgroup is created, the value in this file is
          inherited from the corresponding file in the parent cgroup.
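
          Putting these pieces together (the agent pathname and the
          cgroup name cg1 are hypothetical):

```shell
# Register an agent for the hierarchy and enable
# notification for cg1; when cg1 becomes empty, the agent
# is invoked with the cgroup's path as its sole argument.
echo /usr/local/sbin/cgroup-empty.sh > /sys/fs/cgroup/cpu/release_agent
echo 1 > /sys/fs/cgroup/cpu/cg1/notify_on_release
```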

        Cgroup v1 named hierarchies
          In cgroups v1, it is possible to mount a cgroup hierarchy
          that has no attached controllers:

              mount -t cgroup -o none,name=somename none /some/mount/point

          Multiple instances of such hierarchies can be mounted; each
          hierarchy must have a unique name.  The only purpose of such
          hierarchies is to track processes.  (See the discussion of
          release notification below.)  An example of this is the
          name=systemd cgroup hierarchy that is used by systemd(1) to
          track services and user sessions.

          Since Linux 5.0, the cgroup_no_v1 kernel boot option
          (described below) can be used to disable cgroup v1 named
          hierarchies, by specifying cgroup_no_v1=named.

        Cgroups version 2
          In cgroups v2, all mounted controllers reside in a single
          unified hierarchy.  While (different) controllers may be
          simultaneously mounted under the v1 and v2 hierarchies, it
          is not possible to mount the same controller simultaneously
          under both the v1 and the v2 hierarchies.

          The new behaviors in cgroups v2 are summarized here, and in
          some cases elaborated in the following subsections.

          1. Cgroups v2 provides a unified hierarchy against which all
             controllers are mounted.

          2. "Internal" processes are not permitted.  With the excep-
             tion of the root cgroup, processes may reside only in
             leaf nodes (cgroups that do not themselves contain child
             cgroups).  The details are somewhat more subtle than
             this, and are described below.

          3. Active controllers must be specified via the files
             cgroup.controllers and cgroup.subtree_control.

          4. The tasks file has been removed.  In addition, the
             cgroup.clone_children file that is employed by the cpuset
             controller has been removed.

          5. An improved mechanism for notification of empty cgroups
             is provided by the cgroup.events file.

          For more changes, see the
          Documentation/admin-guide/cgroup-v2.rst file in the kernel
          source (or Documentation/cgroup-v2.txt in Linux 4.17 and
          earlier).

          Some of the new behaviors listed above saw subsequent modi-
          fication with the addition in Linux 4.14 of "thread mode"
          (described below).

        Cgroups v2 unified hierarchy
          In cgroups v1, the ability to mount different controllers
          against different hierarchies was intended to allow great
          flexibility for application design.  In practice, though,
          the flexibility turned out to be less useful than expected,
          and in many cases added complexity.  Therefore, in cgroups
          v2, all available controllers are mounted against a single
          hierarchy.  The available controllers are automatically
          mounted, meaning that it is not necessary (or possible) to
          specify the controllers when mounting the cgroup v2 filesys-
          tem using a command such as the following:

              mount -t cgroup2 none /mnt/cgroup2

          A cgroup v2 controller is available only if it is not cur-
          rently in use via a mount against a cgroup v1 hierarchy.
          Or, to put things another way, it is not possible to employ
          the same controller against both a v1 hierarchy and the uni-
          fied v2 hierarchy.  This means that it may be necessary
          first to unmount a v1 controller (as described above) before
          that controller is available in v2.  Since systemd(1) makes
          heavy use of some v1 controllers by default, it can in some
          cases be simpler to boot the system with selected v1 con-
          trollers disabled.  To do this, specify the
          cgroup_no_v1=list option on the kernel boot command line;
          list is a comma-separated list of the names of the con-
          trollers to disable, or the word all to disable all v1 con-
          trollers.  (This situation is correctly handled by
          systemd(1), which falls back to operating without the speci-
          fied controllers.)

          Note that on many modern systems, systemd(1) automatically
          mounts the cgroup2 filesystem at /sys/fs/cgroup/unified dur-
          ing the boot process.

        Cgroups v2 mount options
          The following options (mount -o) can be specified when
          mounting the cgroup v2 filesystem:

             nsdelegate (since Linux 4.15)
                Treat cgroup namespaces as delegation boundaries.  For
               details, see below.

             memory_localevents (since Linux 5.2)
                The memory.events file should show statistics only for
                the cgroup itself, and not for any descendant cgroups.
                This was the behavior before Linux 5.2.  Starting in
                Linux 5.2, the default behavior is to include statis-
                tics for descendant cgroups in memory.events, and this
                mount option can be used to revert to the legacy
                behavior.  This option is system wide and can be set
                on mount or modified through remount only from the
                initial mount namespace; it is silently ignored in
                noninitial mount namespaces.

        Cgroups v2 controllers
          The following controllers, documented in the kernel source
          file Documentation/admin-guide/cgroup-v2.rst (or
          Documentation/cgroup-v2.txt in Linux 4.17 and earlier), are
          supported in cgroups version 2:

             cpu (since Linux 4.15)
                This is the successor to the version 1 cpu and cpuacct
                controllers.

             cpuset (since Linux 5.0)
                This is the successor of the version 1 cpuset
                controller.

             freezer (since Linux 5.2)
                This is the successor of the version 1 freezer
                controller.

             hugetlb (since Linux 5.6)
                This is the successor of the version 1 hugetlb
                controller.

             io (since Linux 4.5)
                This is the successor of the version 1 blkio
                controller.

             memory (since Linux 4.5)
                This is the successor of the version 1 memory
                controller.

             perf_event (since Linux 4.11)
                This is the same as the version 1 perf_event
                controller.

             pids (since Linux 4.5)
                This is the same as the version 1 pids controller.

             rdma (since Linux 4.11)
                This is the same as the version 1 rdma controller.

          There is no direct equivalent of the net_cls and net_prio
          controllers from cgroups version 1.  Instead, support has
          been added to iptables(8) to allow eBPF filters that hook on
          cgroup v2 pathnames to make decisions about network traffic
          on a per-cgroup basis.

          The v2 devices controller provides no interface files;
          instead, device control is gated by attaching an eBPF
          (BPF_CGROUP_DEVICE) program to a v2 cgroup.

        Cgroups v2 subtree control
          Each cgroup in the v2 hierarchy contains the following two
          files:

             cgroup.controllers
                This read-only file exposes a list of the controllers
               that are available in this cgroup.  The contents of
               this file match the contents of the
               cgroup.subtree_control file in the parent cgroup.

             cgroup.subtree_control
                This is a list of controllers that are active (enabled)
               in the cgroup.  The set of controllers in this file is
               a subset of the set in the cgroup.controllers of this
               cgroup.  The set of active controllers is modified by
               writing strings to this file containing space-delimited
               controller names, each preceded by '+' (to enable a
               controller) or '-' (to disable a controller), as in the
               following example:

                    echo '+pids -memory' > x/y/cgroup.subtree_control

               An attempt to enable a controller that is not present
               in cgroup.controllers leads to an ENOENT error when
               writing to the cgroup.subtree_control file.

          Because the list of controllers in cgroup.subtree_control is
          a subset of those in cgroup.controllers, a controller that has
          been disabled in one cgroup in the hierarchy can never be
          re-enabled in the subtree below that cgroup.

          A cgroup's cgroup.subtree_control file determines the set of
          controllers that are exercised in the child cgroups.  When a
          controller (e.g., pids) is present in the
          cgroup.subtree_control file of a parent cgroup, then the
          corresponding controller-interface files (e.g., pids.max)
          are automatically created in the children of that cgroup and
          can be used to exert resource control in the child cgroups.
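
          As a sketch (the cgroup names grp and child are illustrative,
          and the cgroup2 filesystem is assumed to be mounted at
          /mnt/cgroup2 as in the earlier mount example):

```shell
# Enabling pids for grp's children requires pids to appear
# in grp's cgroup.controllers, which in turn requires
# enabling it in the root's cgroup.subtree_control.
echo '+pids' > /mnt/cgroup2/cgroup.subtree_control
mkdir /mnt/cgroup2/grp
echo '+pids' > /mnt/cgroup2/grp/cgroup.subtree_control
mkdir /mnt/cgroup2/grp/child
ls /mnt/cgroup2/grp/child    # contains pids.max, pids.current, ...
```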

        Cgroups v2 "no internal processes" rule
          Cgroups v2 enforces a so-called "no internal processes"
          rule.  Roughly speaking, this rule means that, with the
          exception of the root cgroup, processes may reside only in
          leaf nodes (cgroups that do not themselves contain child
          cgroups).  This avoids the need to decide how to partition
          resources between processes which are members of cgroup A
          and processes in child cgroups of A.

          For instance, if cgroup /cg1/cg2 exists, then a process may
          reside in /cg1/cg2, but not in /cg1. This is to avoid an
          ambiguity in cgroups v1 with respect to the delegation of
          resources between processes in /cg1 and its child cgroups.
          The recommended approach in cgroups v2 is to create a subdi-
          rectory called leaf for any nonleaf cgroup which should con-
          tain processes, but no child cgroups.  Thus, processes which
          previously would have gone into /cg1 would now go into
          /cg1/leaf. This has the advantage of making explicit the
          relationship between processes in /cg1/leaf and /cg1's other
          child cgroups.

          The "no internal processes" rule is in fact more subtle than
          stated above.  More precisely, the rule is that a (nonroot)
          cgroup can't both (1) have member processes, and (2) dis-
          tribute resources into child cgroups-that is, have a
          nonempty cgroup.subtree_control file.  Thus, it is possible
          for a cgroup to have both member processes and child
          cgroups, but before controllers can be enabled for that
          cgroup, the member processes must be moved out of the cgroup
          (e.g., perhaps into the child cgroups).
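
          A sketch of this evacuation (the cgroup name grp and the
          mount point /mnt/cgroup2 are illustrative):

```shell
# grp has both member processes and children; controllers
# cannot be enabled until the members are moved out, e.g.,
# into a leaf child cgroup.
mkdir /mnt/cgroup2/grp/leaf
while read pid; do
    echo "$pid" > /mnt/cgroup2/grp/leaf/cgroup.procs
done < /mnt/cgroup2/grp/cgroup.procs
echo '+memory' > /mnt/cgroup2/grp/cgroup.subtree_control
```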

          With the Linux 4.14 addition of "thread mode" (described
          below), the "no internal processes" rule has been relaxed in
          some cases.

        Cgroups v2 cgroup.events file
          Each nonroot cgroup in the v2 hierarchy contains a read-only
          file, cgroup.events, whose contents are key-value pairs
          (delimited by newline characters, with the key and value
          separated by spaces) providing state information about the
          cgroup:

              $ cat mygrp/cgroup.events
              populated 1
              frozen 0

          The following keys may appear in this file:

             populated
                The value of this key is either 1, if this cgroup or
               any of its descendants has member processes, or other-
               wise 0.

               The value of this key is 1 if this cgroup is currently
               frozen, or 0 if it is not.

          The cgroup.events file can be monitored, in order to receive
          notification when the value of one of its keys changes.
          Such monitoring can be done using inotify(7), which notifies
          changes as IN_MODIFY events, or poll(2), which notifies
          changes by returning the POLLPRI and POLLERR bits in the
          revents field.
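          The key-value format shown above is easy to consume with
          standard tools.  A sketch, with sample cgroup.events con-
          tents inlined so that it does not require a live cgroup2
          mount:

```shell
# Sample contents of a cgroup.events file.
events='populated 1
frozen 0'
# Extract the value of the "populated" key: print field 2 of the
# line whose first field is "populated".
populated=$(printf '%s\n' "$events" | awk '$1 == "populated" { print $2 }')
echo "$populated"
```

          For the sample contents above this prints 1.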

        Cgroup v2 release notification
          Cgroups v2 provides a new mechanism for obtaining notifica-
          tion when a cgroup becomes empty.  The cgroups v1
          release_agent and notify_on_release files are removed, and
          replaced by the populated key in the cgroup.events file.
          This key either has the value 0, meaning that the cgroup
          (and its descendants) contain no (nonzombie) member pro-
          cesses, or 1, meaning that the cgroup (or one of its descen-
          dants) contains member processes.

          The cgroups v2 release-notification mechanism offers the
          following advantages over the cgroups v1 release_agent
          mechanism:

          *  It allows for cheaper notification, since a single pro-
             cess can monitor multiple cgroup.events files (using the
             techniques described earlier).  By contrast, the cgroups
             v1 mechanism requires the expense of creating a process
             for each notification.

          *  Notification for different cgroup subhierarchies can be
             delegated to different processes.  By contrast, the
             cgroups v1 mechanism allows only one release agent for an
             entire hierarchy.

        Cgroups v2 cgroup.stat file
          Each cgroup in the v2 hierarchy contains a read-only
          cgroup.stat file (first introduced in Linux 4.14) that con-
          sists of lines containing key-value pairs.  The following
          keys currently appear in this file:

          nr_descendants
               This is the total number of visible (i.e., living)
               descendant cgroups underneath this cgroup.

          nr_dying_descendants
               This is the total number of dying descendant cgroups
               underneath this cgroup.  A cgroup enters the dying
               state after being deleted.  It remains in that state
               for an undefined period (which will depend on system
               load) while resources are freed before the cgroup is
               destroyed.  Note that the presence of some cgroups in
               the dying state is normal, and is not indicative of
               any problem.

               A process can't be made a member of a dying cgroup,
               and a dying cgroup can't be brought back to life.

        Limiting the number of descendant cgroups
          Each cgroup in the v2 hierarchy contains the following
          files, which can be used to view and set limits on the num-
          ber of descendant cgroups under that cgroup:

          cgroup.max.depth (since Linux 4.14)
               This file defines a limit on the depth of nesting of
               descendant cgroups.  A value of 0 in this file means
               that no descendant cgroups can be created.  An attempt
               to create a descendant whose nesting level exceeds the
               limit fails (mkdir(2) fails with the error EAGAIN).

               Writing the string "max" to this file means that no
               limit is imposed.  The default value in this file is
               "max".

          cgroup.max.descendants (since Linux 4.14)
               This file defines a limit on the number of live
               descendant cgroups that this cgroup may have.  An
               attempt to create more descendants than allowed by the
               limit fails (mkdir(2) fails with the error EAGAIN).

               Writing the string "max" to this file means that no
               limit is imposed.  The default value in this file is
               "max".

        Delegation: delegating a hierarchy to a less privileged user
          In the context of cgroups, delegation means passing manage-
          ment of some subtree of the cgroup hierarchy to a nonprivi-
          leged user.  Cgroups v1 provides support for delegation
          based on file permissions in the cgroup hierarchy but with
          less strict containment rules than v2 (as noted below).
          Cgroups v2 supports delegation with containment by explicit
          design.  The focus of the discussion in this section is on
          delegation in cgroups v2, with some differences for cgroups
          v1 noted along the way.

          Some terminology is required in order to describe delega-
          tion.  A delegater is a privileged user (i.e., root) who
          owns a parent cgroup.  A delegatee is a nonprivileged user
          who will be granted the permissions needed to manage some
          subhierarchy under that parent cgroup, known as the
          delegated subtree.

          To perform delegation, the delegater makes certain directo-
          ries and files writable by the delegatee, typically by
          changing the ownership of the objects to be the user ID of
          the delegatee.  Assuming that we want to delegate the hier-
          archy rooted at (say) /dlgt_grp and that there are not yet
          any child cgroups under that cgroup, the ownership of the
          following is changed to the user ID of the delegatee:

          /dlgt_grp
               Changing the ownership of the root of the subtree
               means that any new cgroups created under the subtree
               (and the files they contain) will also be owned by the
               delegatee.

          /dlgt_grp/cgroup.procs
               Changing the ownership of this file means that the
               delegatee can move processes into the root of the
               delegated subtree.
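          The ownership changes above can be sketched from the shell.
          Here a scratch directory stands in for the cgroup2 mount,
          and the current user stands in for the delegatee, so that
          the sketch runs unprivileged (on a real system the dele-
          gater, root, would chown to the delegatee's user ID):

```shell
CG=$(mktemp -d)                   # stand-in for the cgroup2 mount point
mkdir -p "$CG/dlgt_grp"
: > "$CG/dlgt_grp/cgroup.procs"   # stand-in for the kernel-provided file
# The delegater would change these to the delegatee's user ID;
# the current user stands in for the delegatee here.
chown "$(id -un)" "$CG/dlgt_grp" "$CG/dlgt_grp/cgroup.procs"
```

          After this, the delegatee owns the subtree root and can
          move processes into it by writing PIDs to cgroup.procs.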








        Cgroups v2 thread mode
          Among the restrictions imposed by cgroups v2 is the
          requirement that all of the threads of a multithreaded
          process must be in the same cgroup.  This gave rise to two
          problems:

          *  No thread-granularity control: the threads of a process
             could not be spread across cgroups in order to control
             their resource usage separately.

          *  No internal processes: a (nonroot) cgroup could not both
             have member processes and distribute resources into
             child cgroups.

          To accommodate such use cases, Linux 4.14 added thread
          mode.  Each nonroot cgroup in the v2 hierarchy contains a
          file, cgroup.type, that exposes, and in some circumstances
          can be used to change, the "type" of a cgroup.  The possi-
          ble types are:

          domain
               This is a normal v2 cgroup providing process-granu-
               larity control.  This is the default cgroup type.

          domain threaded
               This is a domain cgroup that serves as the root of a
               threaded subtree.

          domain invalid
               This is a cgroup inside a threaded subtree that is in
               an "invalid" state.  Processes can't be added to the
               cgroup, and controllers can't be enabled for it.  The
               only thing that can usefully be done with such a
               cgroup is to convert it to the threaded type.

          threaded
               This cgroup is a member of a threaded subtree.  Pro-
               cesses can be added to the cgroup, and threaded con-
               trollers can be enabled for it.