[譯] Control Group v2(cgroupv2 權威指南)(KernelDoc, 2021)

譯者序

本文翻譯自 2021 年 Linux 5.10 核心文件 Control Group v2 ,它是描述 cgroupv2 使用者空間側 的設計、介面和規範的 權威文件 。

原文非常全面詳細,本文只翻譯了目前感興趣的部分,其他部分保留原文。 另外,由於技術規範的描述比較抽象,因此翻譯時加了一些系統測試輸出、核心程式碼片段和 連結,便於更好理解。

由於譯者水平有限,本文不免存在遺漏或錯誤之處。如有疑問,請查閱原文。

以下是譯文。

      • 1.2.1 cgroup 組成部分
      • 1.2.2 程序/執行緒與 cgroup 關係
      • 2.1.1 控制器與 v1/v2 繫結關係
      • 2.1.2 示例:ubuntu 20.04 同時掛載 cgroupv1/cgroupv2(譯註)
      • 2.1.3 控制器在 v1 和 v2 之間切換
      • 2.1.4 cgroupv2 mount 選項
    • 2.2 組織(organizing)程序和執行緒
      • 2.2.1 程序:建立/刪除/移動/檢視 cgroup
        • 將 cgroup 改成 threaded 模式(單向/不可逆操作)
    • 2.3 [Un]populated Notification(程序退出通知)
    • 2.3 管理控制器(controlling controllers)
      • 2.3.2 自頂向下啟用(top-down constraint)
      • 2.3.3 將資源分給 children 時,parent cgroup 內不能有程序(no internal process)
    • 2.4 Delegation(委派)
      • 2.4.1 Model of Delegation
      • 2.4.2 Delegation Containment
      • 2.5.1 避免頻繁在 cgroup 之間遷移程序(Organize once and control)
      • 2.5.2 避免檔名衝突(Avoid Name Collisions)
  • 3 資源分配模型(Resource distribution models)
    • 3.1 Weights(資源量權重)
    • 3.2 Limits(資源量上限,可超分)
    • 3.3 Protections(資源量保護,可超分)
    • 3.4 Allocations(獨佔資源量,不可超分)
  • 4 介面檔案(Interface Files)
    • 4.2 一些慣例(conventions)
    • 4.3 核心介面檔案(core interface files)
  • 5 Controllers(控制器)
      • 5.4.1 PID 介面檔案: pids.current/pids.max
      • 5.4.2 繞開 cgroup PID 限制,實現 pids.current > pids.max
      • Cpuset Interface Files
    • 5.6 Device controller
      • 5.6.1 控制方式:基於 cgroup BPF 而非介面檔案
      • 5.6.2 cgroup BPF 程式上下文和返回值
      • 5.6.3 cgroup BPF 程式示例
      • HugeTLB Interface Files
    • 5.10 規範外(non-normative)的一些資訊
      • CPU controller root cgroup 處理行為
      • IO controller root cgroup 處理行為
  • 6 cgroup 名稱空間(cgroupns)
      • 6.1.1 功能:對 /proc/PID/cgroup 和 cgroup mount 進行虛擬化
      • 6.1.2 新建 cgroup namespace
      • 6.1.3 多執行緒程序:執行緒 unshare 後的行為
      • 6.1.4 cgroupns 生命週期
    • 6.2 進一步解釋 cgroupns root 和檢視
    • 6.3 在 cgroupns 之間遷移程序
    • 6.4 與其他 cgroupns 互動
    • 檔案系統對 writeback 的支援
  • 9 v1 存在的問題及 v2 的設計考慮(rationales)
    • 9.1 v1 多 hierarchy 帶來的問題
    • 9.2 執行緒粒度(thread granularity)
    • 9.3 內部節點(inner nodes)與執行緒之間競爭
    • 9.4 其他 cgroup 介面相關的問題
    • 9.5 一些 controller 相關的問題及解決方式

本文(指 英文原文 ) 是描述 cgroup v2 設計、介面和規範的 權威文件 。 未來所有改動/變化都需反映到本文件中。v1 的文件見 cgroup-v1 。

本文描述 cgroup v2 所有 使用者空間可見的部分 ,包括 cgroup core 和各 controller。

1 引言

1.1 術語

“cgroup” 是 “control group” 的縮寫,並且 首字母永遠不大寫 (never capitalized)。

  • 單數形式(cgroup)指這個特性,或用於 “cgroup controllers” 等術語中的修飾詞。
  • 複數形式(cgroups)顯式地指多個 cgroup。

1.2 cgroup 是什麼?

cgroup 是一種 以 hierarchical(樹形層級)方式組織程序的機制 (a mechanism to organize processes hierarchically),以及在層級中 以受控和 可配置的方式 (controlled and configurable manner) 分發系統資源 (distribute system resources)。

1.2.1 cgroup 組成部分

cgroup 主要由兩部分組成:

  1. 核心(core) :主要負責 層級化地組織程序
  2. 控制器(controllers) :大部分控制器負責 cgroup 層級中 特定型別的系統資源的分配 ,少部分 utility 控制器用於其他目的。

1.2.2 程序/執行緒與 cgroup 關係

所有 cgroup 組成一個 樹形結構 (tree structure),

  • 系統中的 每個程序都屬於且只屬於 某一個 cgroup;
  • 一個 程序的所有執行緒 屬於同一個 cgroup;
  • 建立子程序時,繼承其父程序的 cgroup;
  • 一個程序可以被 遷移 到其他 cgroup;
  • 遷移一個程序時, 子程序(後代程序)不會自動 跟著一起遷移;

1.2.3 控制器

  • 遵循特定的結構規範(structural constraints),可以選擇性地 針對一個 cgroup 啟用或禁用某些控制器
  • 控制器的所有行為都是 hierarchical 的。

    • 如果一個 cgroup 啟用了某個控制器,那這個 cgroup 的 sub-hierarchy 中所有程序都會受控制。
    • 如果在更接近 root 的節點上設定了資源限制(restrictions set closer to the root),那在下面的 sub-hierarchy 是無法覆蓋的。

2 基礎操作

2.1 掛載(mounting)

與 v1 不同,cgroup v2 只有單個層級樹 (single hierarchy)。 用如下命令掛載 v2 hierarchy:

# mount -t <fstype> <device> <dir>
$ mount -t cgroup2 none $MOUNT_POINT

cgroupv2 檔案系統 的 magic number 是 0x63677270 (“cgrp”)。
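譯註:這個 magic number 按位元組解釋就是 ASCII 字串 "cgrp",可以用 printf 驗證:

```shell
# 0x63 0x67 0x72 0x70 四個位元組分別對應 ASCII 的 'c' 'g' 'r' 'p'
printf '\x63\x67\x72\x70\n'   # 輸出:cgrp
```

在已掛載 cgroup2 的系統上,也可以用 stat -fc %t <掛載點>(例如 /sys/fs/cgroup/unified )直接檢視檔案系統的 type magic,輸出應為 63677270。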

2.1.1 控制器與 v1/v2 繫結關係

  • 所有 支援 v2 且未繫結到 v1 的控制器,會被自動繫結到 v2 hierarchy,出現在 root 層級中。
  • v2 中未在使用的控制器 (not in active use),可以繫結到其他 hierarchies。

這說明我們能以完全後向相容的方式, 混用 v2 和 v1 hierarchy

下面通過實際例子理解以上是什麼意思。

2.1.2 示例:ubuntu 20.04 同時掛載 cgroupv1/cgroupv2(譯註)

檢視 ubuntu 20.04 (5.11 核心)cgroup 相關的掛載點:

$ mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755,inode64)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)

可以看到,系統 同時掛載了 cgroup 和 cgroup2

  1. cgroup v2 是單一層級樹,因此只有一個掛載點(第二行) /sys/fs/cgroup/unified ,這就是上一小節所說的 root 層級
  2. cgroup v1 根據控制器型別( cpuset/cpu,cpuacct/hugetlb/... ),掛載到不同位置。

接下來看 哪些控制器繫結到了 cgroup v2 :

$ ls -ahlp /sys/fs/cgroup/unified/
total 0
-r--r--r--   1 root root   0 cgroup.controllers
-rw-r--r--   1 root root   0 cgroup.max.depth
-rw-r--r--   1 root root   0 cgroup.max.descendants
-rw-r--r--   1 root root   0 cgroup.procs
-r--r--r--   1 root root   0 cgroup.stat
-rw-r--r--   1 root root   0 cgroup.subtree_control
-rw-r--r--   1 root root   0 cgroup.threads
-rw-r--r--   1 root root   0 cpu.pressure
-r--r--r--   1 root root   0 cpu.stat
drwxr-xr-x   2 root root   0 init.scope/
-rw-r--r--   1 root root   0 io.pressure
-rw-r--r--   1 root root   0 memory.pressure
drwxr-xr-x 121 root root   0 system.slice/
drwxr-xr-x   3 root root   0 user.slice/

可以看到,只有 cpu/io/memory 等 少量控制器 繫結到了 v2(大部分控制器還在 cgroup v1 中,系統預設使用 v1)。

最後看幾個控制器檔案的內容,加深一點直觀印象,後面章節會詳細解釋這些分別表示什麼意思:

$ cd /sys/fs/cgroup/unified
$ cat cpu.pressure
some avg10=0.00 avg60=0.00 avg300=0.00 total=2501067303

$ cat cpu.stat
usage_usec 44110960000
user_usec 29991256000
system_usec 14119704000

$ cat io.pressure
some avg10=0.00 avg60=0.00 avg300=0.00 total=299044042
full avg10=0.00 avg60=0.00 avg300=0.00 total=271257559

$ cat memory.pressure
some avg10=0.00 avg60=0.00 avg300=0.00 total=298215
full avg10=0.00 avg60=0.00 avg300=0.00 total=229843
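譯註:cpu.stat 是 flat-keyed 格式(後面第 4 章會介紹),理論上 usage_usec ≈ user_usec + system_usec(單位微秒,取整可能帶來微小誤差);在這份樣例輸出中二者恰好相等,可以用 awk 驗證(heredoc 中是上面的樣例資料):

```shell
# 解析 flat-keyed 的 cpu.stat,檢查 usage 是否等於 user + system
awk '
    $1 == "usage_usec"  { usage = $2 }
    $1 == "user_usec"   { user  = $2 }
    $1 == "system_usec" { sys   = $2 }
    END {
        if (usage == user + sys) print "consistent"
        else                     print "inconsistent"
    }
' <<'EOF'
usage_usec 44110960000
user_usec 29991256000
system_usec 14119704000
EOF
```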

2.1.3 控制器在 v1 和 v2 之間切換

  1. 控制器在當前 hierarchy 中已經 不再被引用 (no longer referenced), 才能移動到其他 hierarchy。
  2. 由於 per-cgroup 控制器狀態 是非同步銷燬的,從 v1 umount 之後可能還存在 lingering references,因此控制器可能不會立即出現在 v2 hierarchy 中;
  3. 類似地,一個控制器只有被完全禁用之後,才能被移出 v2 hierarchy,且可能 過一段時間才能在 v1 hierarchy 中可用;
  4. 此外,由於控制器間的依賴,其他控制器也可能需要被禁用。

在 v2 和 v1 之間動態移動控制器對開發和手動配置很有用,但 強烈建議不要在生產環境這麼做 。建議在系統啟動、控制器開始使用之後, 就不要再修改 hierarchy 和控制器的關聯關係了。

另外,遷移到 v2 時, 系統管理軟體可能仍然會自動 mount v1 cgroup 檔案系統 , 因此需要在 系統啟動過程中 劫持所有的控制器,因為啟動之後就晚了。 為方便測試,核心提供了 cgroup_no_v1= 啟動引數(例如 cgroup_no_v1=all ), 可完全禁用 v1 控制器(強制使用 v2)。
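譯註:以 GRUB 為例,可以這樣新增該核心啟動引數(檔案路徑、變數名因發行版而異,下面只是示意;修改後需執行 update-grub 並重啟):

```shell
# /etc/default/grub 片段(示意)
GRUB_CMDLINE_LINUX="cgroup_no_v1=all"
```

另外,較新版本的 systemd 還支援 systemd.unified_cgroup_hierarchy=1 啟動引數,讓 systemd 本身也只使用 v2 hierarchy(具體以 systemd 文件為準)。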

2.1.4 cgroupv2 mount 選項

前面 mount 命令沒指定任何特殊引數。目前支援如下 mount 選項:

  • nsdelegate :將 cgroup namespaces (cgroupns)作為 delegation 邊界

    系統層選項,只能在 init namespace 通過 mount/unmount 來修改這個配置。在 non-init namespace 中,這個選項會被忽略。詳見下面的 Delegation 小節。

  • memory_localevents :只為當前 cgroup populate memory.events ,不統計任何 subtree 。

    這是 legacy 行為,如果沒配置這個引數,預設行為會統計所有的 subtree。

    系統層選項,只能在 init namespace 通過 mount/unmount 來修改這個配置。在 non-init namespace 中,這個選項會被忽略。

  • memory_recursiveprot

    Recursively apply memory.min and memory.low protection to entire subtrees, without requiring explicit downward propagation into leaf cgroups. This allows protecting entire subtrees from one another, while retaining free competition within those subtrees. This should have been the default behavior but is a mount-option to avoid regressing setups relying on the original semantics (e.g. specifying bogusly high ‘bypass’ protection values at higher tree levels).

2.2 組織(organizing)程序和執行緒

2.2.1 程序:建立/刪除/移動/檢視 cgroup

初始狀態下,只有 root cgroup ,所有程序都屬於這個 cgroup。

  1. 建立 sub-cgroup :只需建立一個子目錄,

    $ mkdir $CGROUP_NAME
    • 一個 cgroup 可以有多個子 cgroup,形成一個樹形結構;
    • 每個 cgroup 都有一個 可讀寫的介面檔案 cgroup.procs

      • 讀該檔案會列出這個 cgroup 內的所有 PID,每行一個;
      • PID 並未排序;
      • 同一 PID 可能出現多次:程序先移出再移入該 cgroup,或讀檔案期間 PID 被重用了,都可能發生這種情況。
  2. 將程序移動到指定 cgroup :將 PID 寫到相應 cgroup 的 cgroup.procs 檔案即可。

    • 每次 write(2) 只能遷移 一個程序
    • 如果程序有 多個執行緒 ,那將任意執行緒的 PID 寫到檔案,都會將該程序的所有執行緒遷移到相應 cgroup。
    • 如果程序 fork 出一個子程序,那子程序屬於執行 fork 操作時父程序所屬的 cgroup。
    • 程序退出(exit)後, 仍然留在退出時它所屬的 cgroup ,直到這個程序被收割(reap);
    • 殭屍程序不會出現在 cgroup.procs 中 ,因此 無法對殭屍程序執行 cgroup 遷移操作
  3. 刪除 cgroup/sub-cgroup

    • 如果一個 cgroup 已經沒有任何 children 或活程序,那直接 刪除對應的資料夾 就刪除該 cgroup 了。
    • 如果一個 cgroup 已經沒有 children,雖然其中還有程序但 全是殭屍程序 (zombie processes),那 認為這個 cgroup 是空的 ,也可以直接刪除。
  4. 檢視程序的 cgroup 資訊: cat /proc/$PID/cgroup 會列出該程序的 cgroup membership

    如果系統啟用了 v1,這個檔案可能會包含多行, 每個 hierarchy 一行 ;v2 對應的行永遠是 0::$PATH 格式 。

    $ cat /proc/$$/cgroup # ubuntu 20.04 上的輸出,$$ 是當前 shell 的程序 ID
     12:devices:/user.slice
     11:freezer:/
     10:memory:/user.slice/user-1000.slice/session-1.scope
     9:hugetlb:/
     8:cpuset:/
     7:perf_event:/
     6:rdma:/
     5:pids:/user.slice/user-1000.slice/session-1.scope
     4:cpu,cpuacct:/user.slice
     3:blkio:/user.slice
     2:net_cls,net_prio:/
     1:name=systemd:/user.slice/user-1000.slice/session-1.scope
     0::/user.slice/user-1000.slice/session-1.scope

    如果一個程序變成 殭屍程序 (zombie),並且與它關聯的 cgroup 隨後被刪掉了 ,那行尾會出現 (deleted) 字樣:

    $ cat /proc/842/cgroup
     ...
     0::/test-cgroup/test-cgroup-nested (deleted)
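譯註:指令碼中經常需要從 /proc/PID/cgroup 裡取出 v2 對應的 cgroup 路徑,也就是 0:: 開頭那一行的第三個欄位。下面用上面樣例輸出中的幾行演示(實際使用時把 heredoc 換成 /proc/$$/cgroup 即可):

```shell
# 每行格式為 <hierarchy-ID>:<controller-list>:<path>;
# v2 那一行 hierarchy-ID 固定為 0,controller-list 為空
awk -F: '$1 == "0" { print $3 }' <<'EOF'
12:devices:/user.slice
1:name=systemd:/user.slice/user-1000.slice/session-1.scope
0::/user.slice/user-1000.slice/session-1.scope
EOF
```

輸出 /user.slice/user-1000.slice/session-1.scope 。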

2.2.2 執行緒

  • cgroup v2 的 一部分控制器 支援執行緒粒度的資源控制, 這種控制器稱為 threaded controllers

    • 預設情況下,一個程序的所有執行緒屬於同一個 cgroup,
    • 執行緒模型使我們能將不同執行緒放到 subtree 的不同位置,而同時還能保持二者在同一 資源域(resource domain)內。
  • 不支援執行緒模式的控制器稱為 domain controllers

將一個 cgroup 標記為 threaded 後,它將作為 threaded cgroup 加入 parent 的資源域 。而 parent 可能也是一個 threaded cgroup,它所屬的資源域在 hierarchy 中更靠上的位置。一個 threaded subtree 的 root,即第一個非 threaded 的祖先,稱為 threaded domain 或 threaded root,作為整個 subtree 的資源域。

Inside a threaded subtree, threads of a process can be put in different cgroups and are not subject to the no internal process constraint - threaded controllers can be enabled on non-leaf cgroups whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource consumptions of the subtree, it is considered to have internal resource consumptions whether there are processes in it or not and can’t have populated child cgroups which aren’t threaded. Because the root cgroup is not subject to no internal process constraint, it can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the “cgroup.type” file which indicates whether the cgroup is a normal domain, a domain which is serving as the domain of a threaded subtree, or a threaded cgroup.

將 cgroup 改成 threaded 模式(單向/不可逆操作)

cgroup 建立之後都是 domain cgroup,可以通過下面的命令將其 改成 threaded 模式

$ echo threaded > cgroup.type

但注意: 這個操作是單向的 ,一旦設定成 threaded 模式之後,就無法再切回 domain 模式了。

開啟 thread 模型必須先滿足如下條件:

  1. As the cgroup will join the parent’s resource domain. The parent must either be a valid (threaded) domain or a threaded cgroup.
  2. When the parent is an unthreaded domain, it must not have any domain controllers enabled or populated domain children. The root is exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider the following topology:

A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn’t connected to a parent which can host child domains. C can’t be used until it is turned into a threaded cgroup. “cgroup.type” file will report “domain (invalid)” in these cases. Operations which fail due to invalid topology use EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child cgroup becomes threaded or threaded controllers are enabled in the “cgroup.subtree_control” file while there are processes in the cgroup. A threaded domain reverts to a normal domain when the conditions clear.

When read, “cgroup.threads” contains the list of the thread IDs of all threads in the cgroup. Except that the operations are per-thread instead of per-process, “cgroup.threads” has the same format and behaves the same way as “cgroup.procs”. While “cgroup.threads” can be written to in any cgroup, as it can only move threads inside the same threaded domain, its operations are confined inside each threaded subtree.

The threaded domain cgroup serves as the resource domain for the whole subtree, and, while the threads can be scattered across the subtree, all the processes are considered to be in the threaded domain cgroup. “cgroup.procs” in a threaded domain cgroup contains the PIDs of all processes in the subtree and is not readable in the subtree proper. However, “cgroup.procs” can be written to from anywhere in the subtree to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When a threaded controller is enabled inside a threaded subtree, it only accounts for and controls resource consumptions associated with the threads in the cgroup and its descendants. All consumptions which aren’t tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from no internal process constraint, a threaded controller must be able to handle competition between threads in a non-leaf cgroup and its child cgroups. Each threaded controller defines how such competitions are handled.

2.3 [Un]populated Notification(程序退出通知)

每個 non-root cgroup 都有一個 cgroup.events 檔案 , 其中包含了 populated 欄位,描述這個 cgroup 的 sub-hierarchy 中 是否存在活程序 (live processes)。

  • 如果值是 0,表示 cgroup 及其 sub-cgroup 中沒有活程序;如果值是 1,表示還有活程序。
  • 這個值發生變化時(0 變 1 或 1 變 0),會觸發 poll 和 [id]notify 事件。

這可以用來,例如,在一個 sub-hierarchy 內的 所有程序退出之後觸發執行清理操作

populated 狀態的更新和通知是遞迴的。以下圖為例,括號中的數字表示該 cgroup 中的程序數量:

A(4) - B(0) - C(1)
              \ D(0)
  • A、B 和 C 的 populated 欄位都應該是 1 ,而 D 的是 0
  • 當 C 中唯一的程序退出之後,B 和 C 的 populated 欄位將變成 0 ,將 在這兩個 cgroup 內觸發一次 cgroup.events 檔案的檔案修改事件
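譯註:上面說的「所有程序退出後觸發清理」,本質上就是在收到檔案修改事件後讀一下 cgroup.events 中的 populated 值。下面的小函式(函式名是譯者假設的)從給定檔案中解析 populated 欄位,並用臨時檔案模擬一份 cgroup.events 內容;實際使用時傳入具體 cgroup 目錄下的 cgroup.events,並配合 poll/inotify 等待事件:

```shell
# 從 flat-keyed 的 cgroup.events 內容中取出 populated 欄位
populated() {
    awk '$1 == "populated" { print $2 }' "$1"
}

# 用臨時檔案模擬一個已無活程序的 cgroup 的 cgroup.events
events_file=$(mktemp)
printf 'populated 0\nfrozen 0\n' > "$events_file"

if [ "$(populated "$events_file")" = "0" ]; then
    echo "no live processes, run cleanup"
fi
rm -f "$events_file"
```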

2.3 管理控制器(controlling controllers)

2.3.1 啟用和禁用

每個 cgroup 都有一個 cgroup.controllers 檔案, 其中列出了這個 cgroup 可用的所有控制器:

$ cat cgroup.controllers
cpu io memory

預設沒有啟用任何控制器 。啟用或禁用是通過寫 cgroup.subtree_control 檔案完成的:

$ echo "+cpu +memory -io" > cgroup.subtree_control

只有 出現在 cgroup.controllers 中 的控制器 才能被啟用

  • 如果像上面的命令一樣,一次指定多個操作,那它們要麼全部成功,要麼全部失敗;
  • 如果對同一個控制器指定了多個操作,最後一個是有效的。
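譯註:下面用一小段 bash 模擬上述寫入語義(同一控制器出現多次時,最後一次有效;函式名是譯者假設的,這只是幫助理解的示意,並非核心實現,需要 bash 4+):

```shell
#!/usr/bin/env bash
# 模擬核心對 "+<controller>"/"-<controller>" 操作序列的解析
resolve_ops() {
    declare -A state=()          # 記錄每個控制器最後一次出現的操作
    local tok ctrl
    for tok in $1; do
        ctrl=${tok:1}
        case ${tok:0:1} in
            +) state[$ctrl]=enable  ;;
            -) state[$ctrl]=disable ;;
        esac
    done
    # 輸出最終處於啟用狀態的控制器(排序以保證輸出穩定)
    for ctrl in "${!state[@]}"; do
        [ "${state[$ctrl]}" = enable ] && printf '%s\n' "$ctrl"
    done | sort | xargs
}

resolve_ops "+cpu +memory -io"   # 輸出:cpu memory
resolve_ops "+io -io +io"        # 輸出:io(最後一次是 +io)
```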

啟用 cgroup 的某個控制器,意味著控制它在子節點之間分配目標資源(target resource)的行為。 考慮下面的 sub-hierarchy,括號中是已經啟用的控制器:

A(cpu,memory) - B(memory) - C()
                            \ D()
  • A 啟用了 cpumemory ,因此會控制它的 child(即 B)的 CPU 和 memory 使用;
  • B 只啟用了 memory ,因此 C 和 D 的 memory 使用量會受 B 控制,但 CPU 可以隨意競爭 (compete freely)。

控制器限制 children 的資源使用方式,是 建立或寫入 children cgroup 的介面檔案 。 還是以上面的拓撲為例:

  • 在 B 上啟用 cpu 將會在 C 和 D 的 cgroup 目錄中建立 cpu. 開頭的介面檔案;
  • 同理,禁用 memory 時會刪除對應的 memory. 開頭的檔案。

這也意味著 cgroup 目錄中所有不以 cgroup. 開頭的 控制器介面檔案 —— 在管理上 都屬於 parent cgroup 而非當前 cgroup 自己

2.3.2 自頂向下啟用(top-down constraint)

資源是自頂向下(top-down)分配的,只有當一個 cgroup 從 parent 獲得了某種資源,它 才可以繼續向下分發。這意味著

  • 只有父節點啟用了某個控制器,子節點才能啟用;
  • 對應到實現上, 所有非根節點 (non-root)的 cgroup.subtree_control 檔案中, 只能包含它的父節點的 cgroup.subtree_control 中有的控制器;
  • 另一方面,只要有子節點還在使用某個控制器,父節點就無法禁用之。

2.3.3 將資源分給 children 時,parent cgroup 內不能有程序(no internal process)

只有當一個 non-root cgroup 中 沒有任何程序時 ,才能將其 domain resource 分配給它的 children。換句話說,只有那些沒有任何程序的 domain cgroup, 才能 將它們的 domain controllers 寫到其 children 的 cgroup.subtree_control 檔案中

這種方式保證了在給定的 domain controller 範圍內, 所有程序都位於葉子節點上 , 因而 避免了 child cgroup 內的程序與 parent 內的程序競爭 的情況,便於 domain controller 掃描 hierarchy。

但 root cgroup 不受此限制。

  • 對大部分型別的控制器來說,root 中包含了一些 沒有與任何 cgroup 相關聯的程序和匿名資源佔用 (anonymous resource consumption),需要特殊對待。
  • root cgroup 的資源佔用是如何管理的, 因控制器而異 (更多資訊可參考 Controllers 小節)。

注意,在 parent 的 cgroup.subtree_control 啟用控制器之前,這些限制不會生效。 這一點非常重要,否則就無法給 populated cgroup 建立 children 了。 要控制一個 populated cgroup 的資源分配 ,這個 cgroup 需要先建立 children、把自己的所有程序全部遷移到 children 中,然後再在自己的 cgroup.subtree_control 中啟用控制器。

2.4 Delegation(委派)

2.4.1 Model of Delegation

cgroup 能以兩種方式 delegate。

  1. 通過授予該目錄以及目錄中的 cgroup.procs 、 cgroup.threads 和 cgroup.subtree_control 檔案的寫許可權, 將 cgroup delegate 給一個 less privileged 使用者;
  2. 如果配置了 nsdelegate 掛載選項,會在建立 cgroup 時自動 delegate。

對於一個給定的目錄,由於其中的 resource control 介面檔案控制著 parent 的資源的分配, 因此 delegatee 不應該被授予寫許可權。

  1. For the first method, this is achieved by not granting access to these files.
  2. 對第二種方式,核心會拒絕在該 namespace 內對 cgroup.procs 和 cgroup.subtree_control 之外的其他檔案的寫操作。

The end results are equivalent for both delegation types. Once delegated, the user can build sub-hierarchy under the directory, organize processes inside it as it sees fit and further distribute the resources it received from the parent. The limits and other settings of all resource controllers are hierarchical and regardless of what happens in the delegated sub-hierarchy, nothing can escape the resource restrictions imposed by the parent.

目前,cgroup 並未對 delegated sub-hierarchy 的 cgroup 數量或巢狀深度施加限制;但未來可能會施加顯式限制。

2.4.2 Delegation Containment

A delegated sub-hierarchy is contained in the sense that processes can’t be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by requiring the following conditions for a process with a non-root euid to migrate a target process into a cgroup by writing its PID to the “cgroup.procs” file.

  • The writer must have write access to the “cgroup.procs” file.
  • The writer must have write access to the “cgroup.procs” file of the common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate processes around freely in the delegated sub-hierarchy it can’t pull in from or push out to outside the sub-hierarchy.

For an example, let’s assume cgroups C0 and C1 have been delegated to user U0 who created C00, C01 under C0 and C10 under C1 as follows and all processes under C0 and C1 belong to U0:

~~~~~~~~~~~~~ - C0 - C00
~ cgroup    ~      \ C01
~ hierarchy ~
~~~~~~~~~~~~~ - C1 - C10

Let’s also say U0 wants to write the PID of a process which is currently in C10 into “C00/cgroup.procs”. U0 has write access to the file; however, the common ancestor of the source cgroup C10 and the destination cgroup C00 is above the points of delegation and U0 would not have write access to its “cgroup.procs” files and thus the write will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring that both the source and destination cgroups are reachable from the namespace of the process which is attempting the migration. If either is not reachable, the migration is rejected with -ENOENT.

2.5 指導原則

2.5.1 避免頻繁在 cgroup 之間遷移程序(Organize once and control)

原則:建立程序前,先想好應該放在哪個 cgroup;程序啟動後,通過 controller 介面檔案進行控制。

在 cgroup 之間遷移程序是一個 開銷相對較高 的操作,而且 有狀態資源(例如 memory)不會隨著程序一起遷移 。 這種行為是有意設計的,因為 there often exist inherent trade-offs between migration and various hot paths in terms of synchronization cost.

因此, 不建議為了達到某種資源限制目的而頻繁地在 cgroup 之間遷移程序 。 一個程序啟動時,就應該根據系統的邏輯和資源結構分配到合適的 cgroup。 動態調整資源分配可以通過修改介面檔案來調整 controller 配置。

2.5.2 避免檔名衝突(Avoid Name Collisions)

cgroup 自己的介面檔案和它的 children cgroup 的介面檔案 位於同一目錄中 , 因此建立 children cgroup 時有可能與 cgroup 自己的介面檔案衝突。

  • 所有 cgroup 核心介面檔案 都是以 cgroup. 開頭,並且不會以常用的 job/service/slice/unit/workload 等作為開頭或結尾。
  • 每個控制器的介面檔案都以 <controller name>. 開頭,其中 <controller name> 由小寫字母和下劃線組成,但不會以 _ 開頭。

因此為避免衝突,可以用 _ 作為字首。

cgroup 沒有任何檔名衝突檢測機制 ,因此避免檔案衝突是使用者自己的責任。

3 資源分配模型(Resource distribution models)

根據資源型別(resource type)與使用場景的不同,cgroup 控制器實現了機制各不相同的資源分發方式。本節介紹主要的幾種機制及其行為。

3.1 Weights(資源量權重)

這種模型的一個例子是 cpu.weight ,負責在 active children 之間 按比例分配 CPU cycle 資源。

這種模型中,parent 會 根據所有 active children 的權重來計算它們各自的佔比 (ratio)。

  • 由於只有那些能使用這些資源的 children 會參與到資源分配,因此這種模型 能實現資源的充分利用 (work-conserving)。
  • 這種分配模型 本質上是動態的 (the dynamic nature), 因此常用於 無狀態資源
  • 權重值範圍是 [1, 10000] ,預設 100 。這使得能以 足夠細的粒度增大或縮小權重(以 100 為中心, 100/100 = 1100*100 = 10000 )。
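譯註:以三個 active children 的 weight 分別為 100、200、700 為例(假設的數值),各自分到的比例按「自身 weight / 所有 active children 的 weight 之和」計算:

```shell
# 輸入每行:<cgroup 名> <weight>,輸出各自分到的佔比
awk '
    { name[NR] = $1; w[NR] = $2; sum += $2 }
    END {
        for (i = 1; i <= NR; i++)
            printf "%s %.0f%%\n", name[i], w[i] * 100 / sum
    }
' <<'EOF'
A 100
B 200
C 700
EOF
```

輸出 A 10%、B 20%、C 70%;如果 C 一直空閒(非 active),則只在 A、B 之間按 100:200 分配,這就是 work-conserving 的含義。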

3.2 Limits(資源量上限,可超分)

這種模型的一個例子是 io.max ,負責在 IO device 上限制 cgroup 的最大 BPS 或 IOPS。

  • 這種模型給 child 配置的資源使用量上限(limit)。
  • 資源是可以超分的 (over-committed),即所有 children 的份額加起來可以大於 parent 的總可用量。
  • Limits 值範圍是 [0, max] ,預設 max ,也就是沒做限制。
  • 由於 limits 是可以超分的,因此所有配置組合都是合法的。

3.3 Protections(資源量保護,可超分)

這種模型的一個例子是 memory.low ,實現了 best-effort 記憶體保護 。

  • 在這種模型中,只要一個 cgroup 的所有祖先都處於各自的 protected level 以下,那 這個 cgroup 拿到的資源量就能達到配置值(有保障)。這裡的保障可以是

    • hard guarantees
    • best effort soft boundaries
  • Protection 可以超分,在這種情況下,only upto the amount available to the parent is protected among children.
  • Protection 值範圍是 [0, max] ,預設是 0 ,也就是沒有特別限制。
  • 由於 protections 是可以超分的,因此所有配置組合都是合法的。
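譯註:protection 超分時子節點實際受保護量的縮減,可以用後面 Memory 小節中 memory.min 描述的規則來理解:parent 實際可提供的保護量,按各 child「低於自身 memory.min 的實際使用量」的比例分配。下面是一個算術示意(數值是假設的,實際核心計算還受其他上下界約束):

```shell
# parent 實際可提供的保護量為 500;
# 兩個 child 的 memory.min 都是 400(合計 800,超分),
# 它們低於各自 memory.min 的實際使用量分別是 300 和 100
awk '
    { usage[NR] = $1; total += $1 }
    END {
        parent = 500
        for (i = 1; i <= NR; i++)
            printf "child%d: %d\n", i, parent * usage[i] / total
    }
' <<'EOF'
300
100
EOF
```

即 child1 實際受保護 375、child2 實際受保護 125,合計恰好等於 parent 可提供的 500。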

3.4 Allocations(獨佔資源量,不可超分)

這種模型的一個例子是 cpu.rt.max ,它 hard-allocates realtime slices。

  • 這種模型中,cgroup 會 排他性地分配 (exclusively allocated)資源量。
  • Allocation 不可超分 ,即所有 children 的 allocations 之和不能超過 parent 的可用資源量。
  • Allocation 值範圍是 [0, max] ,預設是 0 ,也就是不會排他性地分配資源。
  • 由於 allocation 不可超分,因此某些配置可能不合法,會被拒絕;如果強制遷移程序,可能會因配置不合法(資源達到上限)而失敗。

4 介面檔案(Interface Files)

4.1 檔案格式

所有介面檔案都應該屬於以下某種型別:

  1. 換行符分隔的值 (每次 write 操作只允許寫入一個值)

    VAL0\n
     VAL1\n
     ...
  2. 空格分隔的值(只讀場景,或一次可寫入多個值的場景)

    VAL0 VAL1 ...\n
  3. 扁平 key 型別 (flat keyed,每行一個 key value 對)

    KEY0 VAL0\n
     KEY1 VAL1\n
     ...
  4. 巢狀 key 型別(nested keyed,每行一個 Key value 對,其中 value 中又包含 subkey/subvalue)

    KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
     KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
     ...

對於可寫檔案(writable file),通常來說寫的格式應與讀的格式保持一致; 但對於大部分常用場景,控制器可能會允許省略後面的欄位(later fields),或實現了受限的快捷方式(restricted shortcuts)。

對於 flat 和 nested key 檔案來說,每次只能寫一個 key(及對應的 values)。 對於 nested keyed files,sub key pairs 的順序可以隨意,也不必每次都指定所有 pairs。
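譯註:以前面 2.1.2 小節 cpu.pressure 的輸出為例,解析 nested keyed 格式時,可以先按 key(行首欄位)選行,再逐個拆開 SUB_KEY=VAL 對:

```shell
# 取 "some" 這一行的 avg10 子欄位
awk '$1 == "some" {
    for (i = 2; i <= NF; i++) {
        split($i, kv, "=")
        if (kv[1] == "avg10") print kv[2]
    }
}' <<'EOF'
some avg10=0.00 avg60=0.00 avg300=0.00 total=2501067303
full avg10=0.00 avg60=0.00 avg300=0.00 total=271257559
EOF
```

輸出 0.00。實際使用時把 heredoc 換成具體 cgroup 目錄下的介面檔案(如 cpu.pressure、io.max 等)即可。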

4.2 一些慣例(conventions)

  1. 每個特性的配置應該放到單獨檔案。
  2. root cgroup 不受資源控制的限制 ,因此不應該有資源控制介面檔案(即非 cgroup. 開頭的控制器介面檔案)。
  3. 預設的時間單位是微秒 us (microseconds)。如果改用其他時間單位,必須顯式加上一個單位字尾。
  4. 表示各部分佔比時,應該用十進位制百分比表示,且小數點後保留至少兩位,例如 13.40
  5. 如果一個控制器實現了 weight 模型,那介面檔案應命名為 weight ,值範圍 [1, 10000] ,預設 100。
  6. 如果一個控制器實現了絕對 resource guarantee and/or limit,則介面檔案應命名為 minmax 。如果實現了 best effort resource guarantee and/or limit,應命名為 lowhigh 。對於這四種控制檔案, "max" 是一個 特殊的合法值 (special token), 表示 讀和寫無上限 (upward infinity)。
  7. 如果一個配置項的預設值可配置,且有 keyed specific overrides,那預設 default entry 的 key 應該是 "default" ,並出現在這個檔案的第一行。

    更新預設值:將 default $VAL 或 $VAL 寫入檔案。清除某個 key 的覆蓋配置(恢復為預設值):將 $KEY default 寫入檔案。

    例如,下面的配置項以 major:minor 裝置號為 key,整數為 value:

    # cat cgroup-example-interface-file
     default 150
     8:0 300

    可用如下方式 更新預設值

    # 方式一
     $ echo 125 > cgroup-example-interface-file
     # 方式二
     $ echo "default 125" > cgroup-example-interface-file

    用自定義值覆蓋預設值:

    $ echo "8:16 170" > cgroup-example-interface-file

    清除配置:

    $ echo "8:0 default" > cgroup-example-interface-file
     $ cat cgroup-example-interface-file
     default 125
     8:16 170
  8. 對於不是太頻繁的 events,應該建立一個介面檔案 "events" ,讀取這個檔案能 list event key value pairs。當發生任何 notifiable event 時,這個檔案上都應該生成一個 file modified event。

4.3 核心介面檔案(core interface files)

所有的 cgroup 核心檔案都以 cgroup. 開頭。

  1. cgroup.type

    可讀寫檔案,只能 位於 non-root cgroup 中 。型別可以是:

    1. “domain”:正常的 domain cgroup。
    2. “domain threaded”:threaded domain cgroup,作為 threaded subtree 的 root。
    3. “domain invalid”:該 cgroup 當前處於 invalid 狀態。這種狀態下既無法被 populate,也無法啟用控制器。可以(在允許的情況下)將其改成 threaded cgroup。
    4. “threaded” : 表示當前 cgroup 是某個 threaded subtree 的一個 member。

    可以將一個 cgroup 設定成 threaded cgroup,只需將字串 "threaded" 寫入這個檔案。

  2. cgroup.procs

    可讀寫檔案,每行一個 PID,可用於所有 cgroups。

    讀時,返回這個 cgroup 內的所有程序 ID,每行一個。PID 列表沒有排序,同一個 PID 可能會出現多次:如果該程序先移出再移入該 cgroup,或讀取期間 PID 被重用了,都可能出現這種情況。

    要將一個程序移動到該 cgroup,只需將 PID 寫入這個檔案。寫入時必須滿足:

    1. 必須對該 cgroup 的 cgroup.procs 檔案有寫許可權。
    2. 必須對 source and destination cgroups 的 共同祖先 的 cgroup.procs 檔案有寫許可權。

    When delegating a sub-hierarchy, write access to this file should be granted along with the containing directory.

    In a threaded cgroup, reading this file fails with EOPNOTSUPP as all the processes belong to the thread root. Writing is supported and moves every thread of the process to the cgroup.

  3. cgroup.threads

    A read-write new-line separated values file which exists on all cgroups.

    When read, it lists the TIDs of all threads which belong to the cgroup one-per-line. The TIDs are not ordered and the same TID may show up more than once if the thread got moved to another cgroup and then back or the TID got recycled while reading.

    A TID can be written to migrate the thread associated with the TID to the cgroup. The writer should match all of the following conditions.

    • It must have write access to the “cgroup.threads” file.
    • The cgroup that the thread is currently in must be in the same resource domain as the destination cgroup.
    • It must have write access to the “cgroup.procs” file of the common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file should be granted along with the containing directory.

  4. cgroup.controllers

    只讀 (read-only)檔案,內容是空格隔開的值,可用於所有 cgroups。

    讀取這個檔案,得到的是該 cgroup 的所有可用控制器,空格隔開。控制器列表未排序。

  5. cgroup.subtree_control

    可讀寫,空格隔開的值,可用於所有 cgroup,初始時為空。

    讀取時,返回這個 cgroup 已經啟用的控制器,對其 children 做資源控制。

    可通過 +<controller>-<controller> 來啟用或禁用控制器。如果一個控制器在檔案中出現多次,最後一次有效。 如果一次操作中指定了啟用或禁用多個動作,那要麼全部成功,要麼全部失敗。

  6. cgroup.events

    只讀,flat-keyed file,只可用於 non-root cgroups。

    定義了下面兩個配置項:

    • populated:1 if the cgroup or its descendants contains any live processes; otherwise, 0.
    • frozen:1 if the cgroup is frozen; otherwise, 0.

    除非另有說明,否則本檔案中值的變化會觸發一次 file modified event。

  7. cgroup.max.descendants

    可讀寫 single value files,預設值 "max"

    允許的最大 descendant cgroups 數量。如果實際的 descendants 數量等於或大於該值,在 hierarchy 中再建立新 cgroup 時會失敗。

  8. cgroup.max.depth

    可讀寫 single value files,預設值 "max"

    當前 cgroup 下允許的最大巢狀深度(descent depth)。如果實際 depth 等於或大於該值,再建立新 child cgroup 時會失敗。

  9. cgroup.stat

    只讀 flat-keyed file,定義了下列 entries:

    • nr_descendants:可見的 descendant cgroups 總數。

    • nr_dying_descendants

      Total number of dying descendant cgroups. A cgroup becomes dying after being deleted by a user. The cgroup will remain in the dying state for some undefined time (which can depend on system load) before being completely destroyed.

      A process can’t enter a dying cgroup under any circumstances, and a dying cgroup can’t revive.

      A dying cgroup can consume system resources not exceeding limits, which were active at the moment of cgroup deletion.

  10. cgroup.freeze

    可讀寫 single value file,只能用於 non-root cgroups。 Allowed values are “0” and “1”. The default is “0”.

    Writing “1” to the file causes freezing of the cgroup and all descendant cgroups. This means that all belonging processes will be stopped and will not run until the cgroup will be explicitly unfrozen. Freezing of the cgroup may take some time; when this action is completed, the “frozen” value in the cgroup.events control file will be updated to “1” and the corresponding notification will be issued.

    A cgroup can be frozen either by its own settings, or by settings of any ancestor cgroups. If any of ancestor cgroups is frozen, the cgroup will remain frozen.

    Processes in the frozen cgroup can be killed by a fatal signal. They also can enter and leave a frozen cgroup: either by an explicit move by a user, or if freezing of the cgroup races with fork(). If a process is moved to a frozen cgroup, it stops. If a process is moved out of a frozen cgroup, it becomes running.

    Frozen status of a cgroup doesn’t affect any cgroup tree operations: it’s possible to delete a frozen (and empty) cgroup, as well as create new sub-cgroups.

5 Controllers(控制器)

5.1 CPU

The “cpu” controller 控制著 CPU cycle 的分配。這個控制器實現了

  • 常規排程 策略:weight and absolute bandwidth limit 模型
  • 實時排程 策略:absolute bandwidth allocation 模型

在所有以上模型中,cycles distribution 只定義在 temporal base 上,it does not account for the frequency at which tasks are executed. The (optional) utilization clamping support allows to hint the schedutil cpufreq governor about the minimum desired frequency which should always be provided by a CPU, as well as the maximum desired frequency, which should not be exceeded by a CPU.

警告:cgroupv2 還 不支援對實時程序的控制 ,並且只有當所有實時程序 都位於 root cgroup 時 , cpu 控制器才能啟用。需要注意:一些系統管理軟體可能已經在系統啟動期間,將實時程序放到了 non-root cgroup 中, 因此在啟用 CPU 控制器之前,需要將這些程序移動到 root cgroup。

CPU Interface Files

所有時間單位都是 microseconds。

  1. cpu.stat

    A read-only flat-keyed file. This file exists whether the controller is enabled or not.

    It always reports the following three stats:

    • usage_usec
    • user_usec
    • system_usec

    and the following three when the controller is enabled:

    • nr_periods
    • nr_throttled
    • throttled_usec
  2. cpu.weight

    A read-write single value file which exists on non-root cgroups. The default is “100”.

    The weight in the range [1, 10000].

  3. cpu.weight.nice

    A read-write single value file which exists on non-root cgroups. The default is “0”.

    The nice value is in the range [-20, 19].

    This interface file is an alternative interface for “cpu.weight” and allows reading and setting weight using the same values used by nice(2). Because the range is smaller and granularity is coarser for the nice values, the read value is the closest approximation of the current weight.

  4. cpu.max

    A read-write two value file which exists on non-root cgroups. The default is “max 100000”.

    The maximum bandwidth limit. It’s in the following format:

    $MAX $PERIOD

    which indicates that the group may consume upto $MAX in each $PERIOD duration. “max” for $MAX indicates no limit. If only one number is written, $MAX is updated.

  5. cpu.pressure

    A read-only nested-key file which exists on non-root cgroups.

    Shows pressure stall information for CPU. See Documentation/accounting/psi.rst <psi> for details.

  6. cpu.uclamp.min

    A read-write single value file which exists on non-root cgroups. The default is “0”, i.e. no utilization boosting.

    The requested minimum utilization (protection) as a percentage rational number, e.g. 12.34 for 12.34%.

    This interface allows reading and setting minimum utilization clamp values similar to the sched_setattr(2). This minimum utilization value is used to clamp the task specific minimum utilization clamp.

    The requested minimum utilization (protection) is always capped by the current value for the maximum utilization (limit), i.e. cpu.uclamp.max .

  7. cpu.uclamp.max

    A read-write single value file which exists on non-root cgroups. The default is “max”. i.e. no utilization capping

    The requested maximum utilization (limit) as a percentage rational number, e.g. 98.76 for 98.76%.

    This interface allows reading and setting maximum utilization clamp values similar to the sched_setattr(2). This maximum utilization value is used to clamp the task specific maximum utilization clamp.
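譯註:上面第 4 項 cpu.max 的兩個值可以換算成「最多能用多少個 CPU」:$MAX / $PERIOD。下面的小函式演示這個換算(函式名和配置數值都是譯者假設的,僅為示意):

```shell
# 引數:$MAX $PERIOD;"max" 表示無限制
cpu_max_to_cpus() {
    if [ "$1" = "max" ]; then
        echo "unlimited"
    else
        awk -v max="$1" -v period="$2" 'BEGIN { printf "%.2f\n", max / period }'
    fi
}

cpu_max_to_cpus 50000 100000    # 輸出:0.50,即最多半個 CPU
cpu_max_to_cpus max 100000      # 輸出:unlimited
```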

5.2 Memory

The “memory” controller regulates distribution of memory. 記憶體是 有狀態的 ,實現了 limit 和 protection 兩種模型。 Due to the intertwining between memory usage and reclaim pressure and the stateful nature of memory, the distribution model is relatively complex.

While not completely water-tight, 給定 cgroup 的所有主要 memory usages 都會跟蹤,因此總記憶體佔用可以控制在一個合理的範圍內。目前 下列型別的記憶體 使用會被跟蹤:

  1. Userland memory - page cache and anonymous memory.
  2. Kernel data structures such as dentries and inodes.
  3. TCP socket buffers .

The above list may expand in the future for better coverage.

Memory Interface Files

All memory amounts are in bytes. If a value which is not aligned to PAGE_SIZE is written, the value may be rounded up to the closest PAGE_SIZE multiple when read back.

  1. memory.current

    A read-only single value file which exists on non-root cgroups.

    The total amount of memory currently being used by the cgroup and its descendants.

  2. memory.min

    A read-write single value file which exists on non-root cgroups. The default is “0”.

    Hard memory protection. If the memory usage of a cgroup is within its effective min boundary, the cgroup’s memory won’t be reclaimed under any conditions. If there is no unprotected reclaimable memory available, OOM killer is invoked. Above the effective min boundary (or effective low boundary if it is higher), pages are reclaimed proportionally to the overage, reducing reclaim pressure for smaller overages.

    Effective min boundary is limited by memory.min values of all ancestor cgroups. If there is memory.min overcommitment (child cgroup or cgroups are requiring more protected memory than parent will allow), then each child cgroup will get the part of parent’s protection proportional to its actual memory usage below memory.min.

    Putting more memory than generally available under this protection is discouraged and may lead to constant OOMs.

    If a memory cgroup is not populated with processes, its memory.min is ignored.
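(譯註)上面 memory.min 超分(overcommitment)時按比例分配保護量的規則,可以用下面的 Python 草圖演示;示例數值是假設的,僅用於說明計算方式:

```python
MiB = 1024 * 1024

def distribute_min_protection(parent_min: int, children: dict) -> dict:
    """memory.min 超分規則的簡化示意:當 children 請求的保護量超過
    parent 允許的量時,每個 child 按其低於自身 memory.min 的實際
    使用量,按比例分得 parent 的保護量。
    children: name -> (memory.min, usage)"""
    protected = {n: min(m, u) for n, (m, u) in children.items()}
    total = sum(protected.values())
    if total <= parent_min:
        return protected  # 未超分:每個 child 都得到完整保護
    return {n: parent_min * p // total for n, p in protected.items()}

# parent 的 memory.min 為 100 MiB,children 受保護的使用量合計 150 MiB:
shares = distribute_min_protection(100 * MiB, {
    "a": (100 * MiB, 100 * MiB),  # 請求並使用 100 MiB
    "b": (50 * MiB, 80 * MiB),    # 請求 50 MiB,使用 80 MiB
})
# a 分得 parent 保護量的 100/150,b 分得 50/150
```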

  3. memory.low

    A read-write single value file which exists on non-root cgroups. The default is “0”.

    Best-effort memory protection. If the memory usage of a cgroup is within its effective low boundary, the cgroup’s memory won’t be reclaimed unless there is no reclaimable memory available in unprotected cgroups. Above the effective low boundary (or effective min boundary if it is higher), pages are reclaimed proportionally to the overage, reducing reclaim pressure for smaller overages.

    Effective low boundary is limited by memory.low values of all ancestor cgroups. If there is memory.low overcommitment (child cgroup or cgroups are requiring more protected memory than parent will allow), then each child cgroup will get the part of parent’s protection proportional to its actual memory usage below memory.low.

    Putting more memory than generally available under this protection is discouraged.

  4. memory.high

    A read-write single value file which exists on non-root cgroups. The default is “max”.

    Memory usage throttle limit. This is the main mechanism to control memory usage of a cgroup. If a cgroup’s usage goes over the high boundary, the processes of the cgroup are throttled and put under heavy reclaim pressure.

    Going over the high limit never invokes the OOM killer and under extreme conditions the limit may be breached.

  5. memory.max

    A read-write single value file which exists on non-root cgroups. The default is “max”.

    Memory usage hard limit. This is the final protection mechanism. If a cgroup’s memory usage reaches this limit and can’t be reduced, the OOM killer is invoked in the cgroup. Under certain circumstances, the usage may go over the limit temporarily.

    In default configuration regular 0-order allocations always succeed unless OOM killer chooses current task as a victim.

    Some kinds of allocations don’t invoke the OOM killer. Caller could retry them differently, return into userspace as -ENOMEM or silently ignore in cases like disk readahead.

    This is the ultimate protection mechanism. As long as the high limit is used and monitored properly, this limit’s utility is limited to providing the final safety net.
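(譯註)按上面的建議,memory.high 作為主要控制手段、memory.max 作為安全網配合使用,寫法可以用下面的草圖示意。其中 /sys/fs/cgroup/mygroup 是假設路徑,函式名也是假設的,寫入需要相應許可權:

```python
import os
from typing import Optional

def set_memory_limits(cgroup_dir: str, high: int, max_: Optional[int] = None) -> None:
    """為一個 cgroup 設定 memory.high(主要控制手段)和可選的
    memory.max(最終安全網)。cgroup_dir 形如 /sys/fs/cgroup/mygroup
    (假設路徑)。"""
    with open(os.path.join(cgroup_dir, "memory.high"), "w") as f:
        f.write(str(high))
    if max_ is not None:
        with open(os.path.join(cgroup_dir, "memory.max"), "w") as f:
            f.write(str(max_))

# 例如:在 1 GiB 處開始 throttle,1.5 GiB 處觸發 OOM killer:
# set_memory_limits("/sys/fs/cgroup/mygroup", 1 << 30, 3 << 29)
```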

  6. memory.oom.group

    A read-write single value file which exists on non-root cgroups. The default value is “0”.

    Determines whether the cgroup should be treated as an indivisible workload by the OOM killer. If set, all tasks belonging to the cgroup or to its descendants (if the memory cgroup is not a leaf cgroup) are killed together or not at all. This can be used to avoid partial kills to guarantee workload integrity.

    Tasks with the OOM protection (oom_score_adj set to -1000) are treated as an exception and are never killed.

    If the OOM killer is invoked in a cgroup, it’s not going to kill any tasks outside of this cgroup, regardless of the memory.oom.group values of ancestor cgroups.

  7. memory.events

    A read-only flat-keyed file which exists on non-root cgroups. The following entries are defined. Unless specified otherwise, a value change in this file generates a file modified event.

    Note that all fields in this file are hierarchical and the file modified event can be generated due to an event down the hierarchy. For the local events at the cgroup level, see memory.events.local.

    low
         The number of times the cgroup is reclaimed due to
         high memory pressure even though its usage is under
         the low boundary.  This usually indicates that the low
         boundary is over-committed.

    high
         The number of times processes of the cgroup are
         throttled and routed to perform direct memory reclaim
         because the high memory boundary was exceeded.  For a
         cgroup whose memory usage is capped by the high limit
         rather than global memory pressure, this event's
         occurrences are expected.

    max
         The number of times the cgroup's memory usage was
         about to go over the max boundary.  If direct reclaim
         fails to bring it down, the cgroup goes to OOM state.

    oom
         The number of times the cgroup's memory usage reached
         the limit and allocation was about to fail.  This event
         is not raised if the OOM killer is not considered as an
         option, e.g. for failed high-order allocations or if the
         caller asked not to retry attempts.

    oom_kill
         The number of processes belonging to this cgroup
         killed by any kind of OOM killer.
  8. memory.events.local

    Similar to memory.events but the fields in the file are local to the cgroup i.e. not hierarchical. The file modified event generated on this file reflects only the local events.

  9. memory.stat

    A read-only flat-keyed file which exists on non-root cgroups.

    This breaks down the cgroup’s memory footprint into different types of memory, type-specific details, and other information on the state and past events of the memory management system.

    All memory amounts are in bytes.

    The entries are ordered to be human readable, and new entries can show up in the middle. Don’t rely on items remaining in a fixed position; use the keys to look up specific values!

    If an entry has no per-node counter (i.e. does not show up in memory.numa_stat), it is tagged with ‘npn’ (non-per-node) to indicate that it will not show up in memory.numa_stat.

    anon
         Amount of memory used in anonymous mappings such as
         brk(), sbrk(), and mmap(MAP_ANONYMOUS)

    file
         Amount of memory used to cache filesystem data,
         including tmpfs and shared memory.

    kernel_stack
         Amount of memory allocated to kernel stacks.

    percpu(npn)
         Amount of memory used for storing per-cpu kernel
         data structures.

    sock(npn)
         Amount of memory used in network transmission buffers

    shmem
         Amount of cached filesystem data that is swap-backed,
         such as tmpfs, shm segments, shared anonymous mmap()s

    file_mapped
         Amount of cached filesystem data mapped with mmap()

    file_dirty
         Amount of cached filesystem data that was modified but
         not yet written back to disk

    file_writeback
         Amount of cached filesystem data that was modified and
         is currently being written back to disk

    anon_thp
         Amount of memory used in anonymous mappings backed by
         transparent hugepages

    inactive_anon, active_anon, inactive_file, active_file, unevictable
         Amount of memory, swap-backed and filesystem-backed,
         on the internal memory management lists used by the
         page reclaim algorithm.

         As these represent internal list state (e.g. shmem pages are on anon
         memory management lists), inactive_foo + active_foo may not be equal to
         the value for the foo counter, since the foo counter is type-based, not
         list-based.

    slab_reclaimable
         Part of "slab" that might be reclaimed, such as
         dentries and inodes.

    slab_unreclaimable
         Part of "slab" that cannot be reclaimed on memory
         pressure.

    slab(npn)
         Amount of memory used for storing in-kernel data
         structures.

    workingset_refault_anon
         Number of refaults of previously evicted anonymous pages.

    workingset_refault_file
         Number of refaults of previously evicted file pages.

    workingset_activate_anon
         Number of refaulted anonymous pages that were immediately
         activated.

    workingset_activate_file
         Number of refaulted file pages that were immediately activated.

    workingset_restore_anon
         Number of restored anonymous pages which have been detected as
         an active workingset before they got reclaimed.

    workingset_restore_file
         Number of restored file pages which have been detected as an
         active workingset before they got reclaimed.

    workingset_nodereclaim
         Number of times a shadow node has been reclaimed

    pgfault(npn)
         Total number of page faults incurred

    pgmajfault(npn)
         Number of major page faults incurred

    pgrefill(npn)
         Amount of scanned pages (in an active LRU list)

    pgscan(npn)
         Amount of scanned pages (in an inactive LRU list)

    pgsteal(npn)
         Amount of reclaimed pages

    pgactivate(npn)
         Amount of pages moved to the active LRU list

    pgdeactivate(npn)
         Amount of pages moved to the inactive LRU list

    pglazyfree(npn)
         Amount of pages postponed to be freed under memory pressure

    pglazyfreed(npn)
         Amount of reclaimed lazyfree pages

    thp_fault_alloc(npn)
         Number of transparent hugepages which were allocated to satisfy
         a page fault. This counter is not present when
         CONFIG_TRANSPARENT_HUGEPAGE is not set.

    thp_collapse_alloc(npn)
         Number of transparent hugepages which were allocated to allow
         collapsing an existing range of pages. This counter is not
         present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
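(譯註)由於新欄位可能出現在檔案中間,讀取 memory.stat 時應按 key 查詢而不是按行號。下面是 flat-keyed 檔案(也適用於 memory.events 等)的一個最小解析器草圖,示例輸入為虛構資料:

```python
def parse_flat_keyed(text: str) -> dict:
    """解析 cgroupv2 flat-keyed 檔案(每行 "KEY VALUE"),
    如 memory.stat、memory.events,返回 {key: int}。"""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if key and value:
            stats[key] = int(value)
    return stats

sample = "anon 8462336\nfile 216678400\nkernel_stack 576512\nsock 0\n"
print(parse_flat_keyed(sample)["file"])  # 216678400
```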
  10. memory.numa_stat

    A read-only nested-keyed file which exists on non-root cgroups.

    This breaks down the cgroup’s memory footprint into different types of memory, type-specific details, and other information per node on the state of the memory management system.

    This is useful for providing visibility into the NUMA locality information within a memcg since the pages are allowed to be allocated from any physical node. One use case is evaluating application performance by combining this information with the application’s CPU allocation.

    All memory amounts are in bytes.

    The output format of memory.numa_stat is::

    type N0=<bytes in node 0> N1=<bytes in node 1> ...

    The entries are ordered to be human readable, and new entries can show up in the middle. Don’t rely on items remaining in a fixed position; use the keys to look up specific values!

    The entries can refer to the memory.stat.
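(譯註)"type N0=&lt;bytes&gt; N1=&lt;bytes&gt; ..." 這種行格式的解析可以這樣寫,示例資料為虛構:

```python
def parse_numa_stat(text: str) -> dict:
    """解析 memory.numa_stat 的 "type N0=<bytes> N1=<bytes> ..." 行,
    返回 {type: {node: bytes}}。"""
    result = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        key, *pairs = line.split()
        result[key] = {k: int(v) for k, v in (p.split("=") for p in pairs)}
    return result

sample = "anon N0=6291456 N1=2170880\nfile N0=157286400 N1=59392000\n"
per_node = parse_numa_stat(sample)
print(sum(per_node["anon"].values()))  # 8462336,即所有 node 上 anon 的總量
```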

  11. memory.swap.current

    A read-only single value file which exists on non-root cgroups.

    The total amount of swap currently being used by the cgroup and its descendants.

  12. memory.swap.high

    A read-write single value file which exists on non-root cgroups. The default is “max”.

    Swap usage throttle limit. If a cgroup’s swap usage exceeds this limit, all its further allocations will be throttled to allow userspace to implement custom out-of-memory procedures.

    This limit marks a point of no return for the cgroup. It is NOT designed to manage the amount of swapping a workload does during regular operation. Compare to memory.swap.max, which prohibits swapping past a set amount, but lets the cgroup continue unimpeded as long as other memory can be reclaimed.

    Healthy workloads are not expected to reach this limit.

  13. memory.swap.max

    A read-write single value file which exists on non-root cgroups. The default is “max”.

    Swap usage hard limit. If a cgroup’s swap usage reaches this limit, anonymous memory of the cgroup will not be swapped out.

  14. memory.swap.events

    A read-only flat-keyed file which exists on non-root cgroups. The following entries are defined. Unless specified otherwise, a value change in this file generates a file modified event.

    high
         The number of times the cgroup's swap usage was over
         the high threshold.

    max
         The number of times the cgroup's swap usage was about
         to go over the max boundary and swap allocation
         failed.

    fail
         The number of times swap allocation failed either
         because of running out of swap system-wide or max
         limit.

    When reduced under the current usage, the existing swap entries are reclaimed gradually and the swap usage may stay higher than the limit for an extended period of time. This reduces the impact on the workload and memory management.

  15. memory.pressure

    A read-only nested-keyed file which exists on non-root cgroups.

    Shows pressure stall information for memory. See Documentation/accounting/psi.rst <psi> for details.

使用建議

memory.high 控制記憶體使用量的主要機制 。重要策略:

  1. high limit 超分(high limits 總和大於可用記憶體)
  2. 讓全域性記憶體壓力(global memory pressure)根據使用量分配記憶體

由於超過 high limit 之後 只會 throttle 該 cgroup 而不會觸發 OOM killer , 因此 management agent 有足夠的機會來監控這種情況及採取合適措施, 例如增加記憶體配額,或者幹掉該 workload。

判斷一個 cgroup 記憶體是否夠 並不是一件簡單的事情,因為記憶體使用量 並不能反映出增加記憶體之後,workload 效能是否能有改善。例如,從網路接收資料然後寫 入本地檔案的 workload,能充分利用所有可用記憶體;但另一方面,即使只給它很小一部分 記憶體,這種 workload 的效能也同樣是高效的。 記憶體壓力(memory pressure)的測量 —— 即由於記憶體不足導致 workload 受了多少影響 —— 對判斷一個 workload 是否需要更多記憶體來說至關重要;但不 幸的是, 核心還未實現記憶體壓力監控機制

Memory Ownership

A memory area is charged to the cgroup which instantiated it and stays charged to the cgroup until the area is released. Migrating a process to a different cgroup doesn’t move the memory usages that it instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups. To which cgroup the area will be charged is indeterminate; however, over time, the memory area is likely to end up in a cgroup which has enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected to be accessed repeatedly by other cgroups, it may make sense to use POSIX_FADV_DONTNEED to relinquish the ownership of memory areas belonging to the affected files to ensure correct memory ownership.

5.3 IO

The “io” controller regulates the distribution of IO resources. This controller implements both weight based and absolute bandwidth or IOPS limit distribution; however, weight based distribution is available only if cfq-iosched is in use and neither scheme is available for blk-mq devices.

IO Interface Files

  1. io.stat A read-only nested-keyed file.

    Lines are keyed by $MAJ:$MIN device numbers and not ordered. The following nested keys are defined.

    ======    =====================
    rbytes    Bytes read
    wbytes    Bytes written
    rios      Number of read IOs
    wios      Number of write IOs
    dbytes    Bytes discarded
    dios      Number of discard IOs
    ======    =====================

    An example read output follows::

    8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
    8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
  2. io.cost.qos

    A read-write nested-keyed file which exists only on the root cgroup.

    This file configures the Quality of Service of the IO cost model based controller (CONFIG_BLK_CGROUP_IOCOST) which currently implements “io.weight” proportional control. Lines are keyed by $MAJ:$MIN device numbers and not ordered. The line for a given device is populated on the first write for the device on “io.cost.qos” or “io.cost.model”. The following nested keys are defined.

    ======    =====================================
     enable    Weight-based control enable
     ctrl      "auto" or "user"
     rpct      Read latency percentile    [0, 100]
     rlat      Read latency threshold
     wpct      Write latency percentile   [0, 100]
     wlat      Write latency threshold
     min       Minimum scaling percentage [1, 10000]
     max       Maximum scaling percentage [1, 10000]
     ======    =====================================

    The controller is disabled by default and can be enabled by setting “enable” to 1. “rpct” and “wpct” parameters default to zero and the controller uses internal device saturation state to adjust the overall IO rate between “min” and “max”.

    When a better control quality is needed, latency QoS parameters can be configured. For example::

    8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00

    shows that on sdb, the controller is enabled, will consider the device saturated if the 95th percentile of read completion latencies is above 75ms or write 150ms, and adjust the overall IO issue rate between 50% and 150% accordingly.

    The lower the saturation point, the better the latency QoS at the cost of aggregate bandwidth. The narrower the allowed adjustment range between “min” and “max”, the more conformant to the cost model the IO behavior. Note that the IO issue base rate may be far off from 100% and setting “min” and “max” blindly can lead to a significant loss of device capacity or control quality. “min” and “max” are useful for regulating devices which show wide temporary behavior changes - e.g. a ssd which accepts writes at the line speed for a while and then completely stalls for multiple seconds.

    When “ctrl” is “auto”, the parameters are controlled by the kernel and may change automatically. Setting “ctrl” to “user” or setting any of the percentile and latency parameters puts it into “user” mode and disables the automatic changes. The automatic mode can be restored by setting “ctrl” to “auto”.

  3. io.cost.model

    A read-write nested-keyed file which exists only on the root cgroup.

    This file configures the cost model of the IO cost model based controller (CONFIG_BLK_CGROUP_IOCOST) which currently implements “io.weight” proportional control. Lines are keyed by $MAJ:$MIN device numbers and not ordered. The line for a given device is populated on the first write for the device on “io.cost.qos” or “io.cost.model”. The following nested keys are defined.

    =====    ================================
    ctrl     "auto" or "user"
    model    The cost model in use - "linear"
    =====    ================================

    When “ctrl” is “auto”, the kernel may change all parameters dynamically. When “ctrl” is set to “user” or any other parameters are written to, “ctrl” become “user” and the automatic changes are disabled.

    When “model” is “linear”, the following model parameters are defined.

    =============    ========================================
    [r|w]bps         The maximum sequential IO throughput
    [r|w]seqiops     The maximum 4k sequential IOs per second
    [r|w]randiops    The maximum 4k random IOs per second
    =============    ========================================

    From the above, the builtin linear model determines the base costs of a sequential and random IO and the cost coefficient for the IO size. While simple, this model can cover most common device classes acceptably.

    The IO cost model isn’t expected to be accurate in absolute sense and is scaled to the device behavior dynamically.

    If needed, tools/cgroup/iocost_coef_gen.py can be used to generate device-specific coefficients.

  4. io.weight

    A read-write flat-keyed file which exists on non-root cgroups. The default is “default 100”.

    The first line is the default weight applied to devices without specific override. The rest are overrides keyed by $MAJ:$MIN device numbers and not ordered. The weights are in the range [1, 10000] and specify the relative amount of IO time the cgroup can use in relation to its siblings.

    The default weight can be updated by writing either “default $WEIGHT” or simply “$WEIGHT”. Overrides can be set by writing “$MAJ:$MIN $WEIGHT” and unset by writing “$MAJ:$MIN default”.

    An example read output follows::

    default 100
    8:16 200
    8:0 50
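(譯註)io.weight 內容(default 行加 per-device override)的解析和權重查詢邏輯可以這樣寫;函式名是假設的:

```python
def parse_io_weight(text: str):
    """解析 io.weight:第一行是 "default $WEIGHT",
    其餘各行是 "$MAJ:$MIN $WEIGHT" 形式的 per-device override。"""
    lines = text.strip().splitlines()
    default = int(lines[0].split()[1])
    overrides = {dev: int(w) for dev, w in (l.split() for l in lines[1:])}
    return default, overrides

def weight_for(dev: str, default: int, overrides: dict) -> int:
    """某裝置實際生效的權重:有 override 用 override,否則用 default。"""
    return overrides.get(dev, default)

default, overrides = parse_io_weight("default 100\n8:16 200\n8:0 50\n")
print(weight_for("8:16", default, overrides))  # 200
print(weight_for("8:32", default, overrides))  # 100,該裝置無 override
```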
  5. io.max A read-write nested-keyed file which exists on non-root cgroups.

    BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN device numbers and not ordered. The following nested keys are defined.

    =====        ==================================
       rbps        Max read bytes per second
       wbps        Max write bytes per second
       riops        Max read IO operations per second
       wiops        Max write IO operations per second
       =====        ==================================

    When writing, any number of nested key-value pairs can be specified in any order. “max” can be specified as the value to remove a specific limit. If the same key is specified multiple times, the outcome is undefined.

    BPS and IOPS are measured in each IO direction and IOs are delayed if limit is reached. Temporary bursts are allowed.

    Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

        echo "8:16 rbps=2097152 wiops=120" > io.max

    Reading returns the following::

        8:16 rbps=2097152 wbps=max riops=max wiops=120

    Write IOPS limit can be removed by writing the following::

        echo "8:16 wiops=max" > io.max

    Reading now returns the following::

        8:16 rbps=2097152 wbps=max riops=max wiops=max
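(譯註)上面 echo 的寫入字串可以這樣組裝,注意 2M BPS = 2 * 1024 * 1024 = 2097152 位元組/秒;io_max_line 是假設的輔助函式名:

```python
def io_max_line(dev: str, **limits) -> str:
    """組裝寫入 io.max 的一行:dev 為 $MAJ:$MIN 裝置號,
    limits 的 key 為巢狀 key(rbps/wbps/riops/wiops),
    值為整數或 "max"(表示移除該限制)。"""
    return dev + " " + " ".join(f"{k}={v}" for k, v in limits.items())

print(io_max_line("8:16", rbps=2 * 1024 * 1024, wiops=120))
# 8:16 rbps=2097152 wiops=120
print(io_max_line("8:16", wiops="max"))  # 移除 write IOPS 限制
```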
  6. io.pressure A read-only nested-keyed file which exists on non-root cgroups.

    Shows pressure stall information for IO. See Documentation/accounting/psi.rst <psi> for details.

Writeback

Page cache is dirtied through buffered writes and shared mmaps and written asynchronously to the backing filesystem by the writeback mechanism. Writeback sits between the memory and IO domains and regulates the proportion of dirty memory by balancing dirtying and write IOs.

The io controller, in conjunction with the memory controller, implements control of page cache writeback IOs. The memory controller defines the memory domain that dirty memory ratio is calculated and maintained for and the io controller defines the io domain which writes out dirty pages for the memory domain. Both system-wide and per-cgroup dirty memory states are examined and the more restrictive of the two is enforced.

cgroup writeback requires explicit support from the underlying filesystem. Currently, cgroup writeback is implemented on ext2, ext4, btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are attributed to the root cgroup.

There are inherent differences in memory and writeback management which affects how cgroup ownership is tracked. Memory is tracked per page while writeback per inode. For the purpose of writeback, an inode is assigned to a cgroup and all IO requests to write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages which are associated with different cgroups than the one the inode is associated with. These are called foreign pages. The writeback constantly keeps track of foreign pages and, if a particular foreign cgroup becomes the majority over a certain period of time, switches the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is mostly dirtied by a single cgroup even when the main writing cgroup changes over time, use cases where multiple cgroups write to a single inode simultaneously are not supported well. In such circumstances, a significant portion of IOs are likely to be attributed incorrectly. As memory controller assigns page ownership on the first use and doesn’t update it until the page is released, even if writeback strictly follows page ownership, multiple cgroups dirtying overlapping areas wouldn’t work as expected. It’s recommended to avoid such usage patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup writeback as follows.

vm.dirty_background_ratio, vm.dirty_ratio
    These ratios apply the same to cgroup writeback with the
    amount of available memory capped by limits imposed by the
    memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
    For cgroup writeback, this is calculated into ratio against
    total available memory and applied the same way as
    vm.dirty[_background]_ratio.

IO Latency

This is a cgroup v2 controller for IO workload protection. You provide a group with a latency target, and if the average latency exceeds that target the controller will throttle any peers that have a lower latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy. This means that in the diagram below, only groups A, B, and C will influence each other, and groups D and F will influence each other. Group G will influence nobody::

              [root]
           /    |    \
          A     B     C
         / \    |
        D   F   G
So the ideal way to configure this is to set io.latency in groups A, B, and C. Generally you do not want to set a value lower than the latency your device supports. Experiment to find the value that works best for your workload. Start at higher than the expected latency for your device and watch the avg_lat value in io.stat for your workload group to get an idea of the latency you see during normal operation. Use the avg_lat value as a basis for your real setting, setting at 10-15% higher than the value in io.stat.

How IO Latency Throttling Works

io.latency is work conserving; so as long as everybody is meeting their latency target the controller doesn’t do anything. Once a group starts missing its target it begins throttling any peer group that has a higher target than itself. This throttling takes 2 forms:

  • Queue depth throttling. This is the number of outstanding IOs a group is allowed to have. We will clamp down relatively quickly, starting at no limit and going all the way down to 1 IO at a time.

  • Artificial delay induction. There are certain types of IO that cannot be throttled without possibly adversely affecting higher priority groups. This includes swapping and metadata IO. These types of IO are allowed to occur normally, however they are “charged” to the originating group. If the originating group is being throttled you will see the use_delay and delay fields in io.stat increase. The delay value is how many microseconds that are being added to any process that runs in this group. Because this number can grow quite large if there is a lot of swapping or metadata IO occurring we limit the individual delay events to 1 second at a time.

Once the victimized group starts meeting its latency target again it will start unthrottling any peer groups that were throttled previously. If the victimized group simply stops doing IO the global counter will unthrottle appropriately.

IO Latency Interface Files

  1. io.latency

    This takes a similar format as the other controllers.

    "MAJOR:MINOR target=<target time in microseconds>"
  2. io.stat

    If the controller is enabled you will see extra stats in io.stat in addition to the normal ones.

    depth
         This is the current queue depth for the group.

    avg_lat
         This is an exponential moving average with a decay rate of 1/exp
         bound by the sampling interval.  The decay rate interval can be
         calculated by multiplying the win value in io.stat by the
         corresponding number of samples based on the win value.

    win
         The sampling window size in milliseconds.  This is the minimum
         duration of time between evaluation events.  Windows only elapse
         with IO activity.  Idle periods extend the most recent window.

5.4 PID

PID 控制器用於在 程序數量超過設定的 limit 之後,禁止通過 fork() 或 clone() 建立新程序

  • 依靠其他控制器是無法避免 cgroup 中的程序暴增問題的,例如,fork 炸彈能在觸發記憶體 限制之前耗盡 PID 空間,因此引入了 PID 控制器。
  • 注意,這裡所說的 PID 指的是核心在使用的 TID 和程序 ID。

5.4.1 PID 介面檔案: pids.current/pids.max

  1. pids.max

    A read-write single value file which exists on non-root cgroups. The default is “max”.

    Hard limit of number of processes.

  2. pids.current

    A read-only single value file which exists on all cgroups.

    cgroup 及其 descendants 中的 當前程序數

5.4.2 繞開 cgroup PID 限制,實現 pids.current > pids.max

上面提到,PID 控制器是 限制通過 fork/clone 來建立新程序 (超過限制之後返回 -EAGAIN )。 因此 只要不用這兩個系統呼叫 ,我們還是能將 cgroup 內的程序數量搞成 current > max 的。例如:

  1. 設定 pids.max 小於 pids.current (即先有足夠多的程序,再降低 max 配置),或者
  2. 將足夠多的程序從其他 cgroup 移動到當前 cgroup(遷移現有程序不需要 fork/clone)。
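(譯註)第一種方式可以用下面的草圖演示:向 pids.max 寫入一個小於 pids.current 的值,寫入本身是成功的,只有之後的 fork()/clone() 才會被拒絕(-EAGAIN)。其中 cgroup 路徑是假設的,測試時可用任意目錄加偽造的介面檔案模擬:

```python
import os

def read_int(path: str) -> int:
    with open(path) as f:
        return int(f.read())

def lower_pids_max(cgroup_dir: str, new_max: int) -> bool:
    """把 pids.max 設到低於當前程序數。寫入會成功,受影響的
    只是之後的 fork()/clone()。返回寫入後是否 current > max。
    cgroup_dir 形如 /sys/fs/cgroup/mygroup(假設路徑)。"""
    with open(os.path.join(cgroup_dir, "pids.max"), "w") as f:
        f.write(str(new_max))
    return read_int(os.path.join(cgroup_dir, "pids.current")) > new_max
```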

5.5 Cpuset

The “cpuset” controller provides a mechanism for constraining the CPU and memory node placement of tasks to only the resources specified in the cpuset interface files in a task’s current cgroup. This is especially valuable on large NUMA systems where placing jobs on properly sized subsets of the systems with careful processor and memory placement to reduce cross-node memory access and contention can improve overall system performance.

The “cpuset” controller is hierarchical. That means the controller cannot use CPUs or memory nodes not allowed in its parent.

Cpuset Interface Files

  1. cpuset.cpus A read-write multiple values file which exists on non-root cpuset-enabled cgroups.

    It lists the requested CPUs to be used by tasks within this cgroup. The actual list of CPUs to be granted, however, is subjected to constraints imposed by its parent and can differ from the requested CPUs.

    The CPU numbers are comma-separated numbers or ranges. For example::

    $ cat cpuset.cpus
     0-4,6,8-10

    An empty value indicates that the cgroup is using the same setting as the nearest cgroup ancestor with a non-empty “cpuset.cpus” or all the available CPUs if none is found.

    The value of “cpuset.cpus” stays constant until the next update and won’t be affected by any CPU hotplug events.
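(譯註)"0-4,6,8-10" 這類 CPU 列表的展開邏輯如下,解析器是示意性的:

```python
def parse_cpu_list(s: str) -> set:
    """把 cpuset 風格的 CPU 列表(如 "0-4,6,8-10")展開成 CPU 編號集合。
    空值表示繼承最近的非空 ancestor 設定,這裡用空集合表示。"""
    cpus = set()
    s = s.strip()
    if not s:
        return cpus
    for part in s.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.update(range(lo, hi + 1))
        else:
            cpus.add(int(part))
    return cpus

print(sorted(parse_cpu_list("0-4,6,8-10")))  # [0, 1, 2, 3, 4, 6, 8, 9, 10]
```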

  2. cpuset.cpus.effective

    A read-only multiple values file which exists on all cpuset-enabled cgroups.

    It lists the onlined CPUs that are actually granted to this cgroup by its parent. These CPUs are allowed to be used by tasks within the current cgroup.

    If “cpuset.cpus” is empty, the “cpuset.cpus.effective” file shows all the CPUs from the parent cgroup that can be available to be used by this cgroup. Otherwise, it should be a subset of “cpuset.cpus” unless none of the CPUs listed in “cpuset.cpus” can be granted. In this case, it will be treated just like an empty “cpuset.cpus”.

    Its value will be affected by CPU hotplug events.

  3. cpuset.mems

    A read-write multiple values file which exists on non-root cpuset-enabled cgroups.

    It lists the requested memory nodes to be used by tasks within this cgroup. The actual list of memory nodes granted, however, is subjected to constraints imposed by its parent and can differ from the requested memory nodes.

    The memory node numbers are comma-separated numbers or ranges. For example::

    $ cat cpuset.mems
     0-1,3

    An empty value indicates that the cgroup is using the same setting as the nearest cgroup ancestor with a non-empty “cpuset.mems” or all the available memory nodes if none is found.

    The value of “cpuset.mems” stays constant until the next update and won’t be affected by any memory nodes hotplug events.

  4. cpuset.mems.effective

    A read-only multiple values file which exists on all cpuset-enabled cgroups.

    It lists the onlined memory nodes that are actually granted to this cgroup by its parent. These memory nodes are allowed to be used by tasks within the current cgroup.

    If “cpuset.mems” is empty, it shows all the memory nodes from the parent cgroup that will be available to be used by this cgroup. Otherwise, it should be a subset of “cpuset.mems” unless none of the memory nodes listed in “cpuset.mems” can be granted. In this case, it will be treated just like an empty “cpuset.mems”.

    Its value will be affected by memory nodes hotplug events.

  5. cpuset.cpus.partition

    A read-write single value file which exists on non-root cpuset-enabled cgroups. This flag is owned by the parent cgroup and is not delegatable.

    It accepts only the following input values when written to.

    "root"   - a partition root
    "member" - a non-root member of a partition

    When set to be a partition root, the current cgroup is the root of a new partition or scheduling domain that comprises itself and all its descendants except those that are separate partition roots themselves and their descendants. The root cgroup is always a partition root.

    There are constraints on where a partition root can be set. It can only be set in a cgroup if all the following conditions are true.

    1) The "cpuset.cpus" is not empty and the list of CPUs are
       exclusive, i.e. they are not shared by any of its siblings.
    2) The parent cgroup is a partition root.
    3) The "cpuset.cpus" is also a proper subset of the parent's
       "cpuset.cpus.effective".
    4) There are no child cgroups with cpuset enabled.  This
       eliminates corner cases that would have to be handled if such
       a condition were allowed.

    Setting it to partition root will take the CPUs away from the effective CPUs of the parent cgroup. Once it is set, this file cannot be reverted back to “member” if there are any child cgroups with cpuset enabled.

    A parent partition cannot distribute all its CPUs to its child partitions. There must be at least one cpu left in the parent partition.

    Once it becomes a partition root, changes to “cpuset.cpus” are generally allowed as long as the first condition above still holds, the change does not take away all the CPUs from the parent partition, and the new “cpuset.cpus” value is a superset of its children’s “cpuset.cpus” values.

    Sometimes, external factors like changes to ancestors’ “cpuset.cpus” or CPU hotplug can cause the state of the partition root to change. On read, the “cpuset.cpus.partition” file can show the following values.

    "member"       Non-root member of a partition
    "root"         Partition root
    "root invalid" Invalid partition root

    It is a partition root if the first 2 partition root conditions above are true and at least one CPU from “cpuset.cpus” is granted by the parent cgroup.

    A partition root can become invalid if none of the CPUs requested in “cpuset.cpus” can be granted by the parent cgroup, or if the parent cgroup is no longer a partition root itself. In this case, it is not a real partition even though the restriction of the first partition root condition above still applies. The CPU affinity of all the tasks in the cgroup will then be associated with CPUs in the nearest ancestor partition.

    An invalid partition root can be transitioned back to a real partition root if at least one of the requested CPUs can now be granted by its parent. In this case, the cpu affinity of all the tasks in the formerly invalid partition will be associated to the CPUs of the newly formed partition. Changing the partition state of an invalid partition root to “member” is always allowed even if child cpusets are present.
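The four conditions for turning a cgroup into a partition root can be summarized in a small checker; a sketch under the simplifying assumption that each input set has already been computed (all names are illustrative):

```python
def may_become_partition_root(cpus, siblings_cpus, parent_is_partition_root,
                              parent_effective, has_cpuset_children):
    """Check the four conditions for writing "root" to cpuset.cpus.partition."""
    # 1) "cpuset.cpus" is non-empty and exclusive (not shared with siblings)
    if not cpus or any(cpus & s for s in siblings_cpus):
        return False
    # 2) the parent cgroup is itself a partition root
    if not parent_is_partition_root:
        return False
    # 3) proper subset of the parent's effective CPUs,
    #    so the parent partition keeps at least one CPU
    if not cpus < parent_effective:
        return False
    # 4) no child cgroups with cpuset enabled
    if has_cpuset_children:
        return False
    return True

assert may_become_partition_root({0, 1}, [{2, 3}], True, {0, 1, 2, 3}, False)
```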

5.6 Device controller

The device controller regulates access to device files: both creating new ones (with mknod) and accessing existing ones.

5.6.1 Control mechanism: cgroup BPF rather than interface files

The cgroupv2 device controller has no interface files; it is implemented on top of cgroup BPF. To control access to device files, a user needs to:

  1. write a BPF program of type BPF_PROG_TYPE_CGROUP_DEVICE;
  2. attach it to the target cgroup, specifying attach type BPF_CGROUP_DEVICE.

On an attempt to access a device file, the corresponding BPF program is executed, and its return value decides whether the access is allowed.

5.6.2 cgroup BPF program context and return value

Such a BPF program takes a struct bpf_cgroup_dev_ctx * pointer:

// https://github.com/torvalds/linux/blob/v5.10/include/uapi/linux/bpf.h#L4833

struct bpf_cgroup_dev_ctx {
    __u32 access_type; /* encoded as (BPF_DEVCG_ACC_* << 16) | BPF_DEVCG_DEV_* */
    __u32 major;
    __u32 minor;
};

Field meanings:

  • access_type: type of the access operation, e.g. mknod/read/write;
  • major and minor: the major and minor device numbers.

BPF program return values:

  1. 0: access denied (-EPERM);
  2. any other value: access allowed.
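The access_type encoding can be illustrated in a few lines of Python; the BPF_DEVCG_* constants below are the values defined in include/uapi/linux/bpf.h:

```python
# Constants from include/uapi/linux/bpf.h
BPF_DEVCG_ACC_MKNOD, BPF_DEVCG_ACC_READ, BPF_DEVCG_ACC_WRITE = 1, 2, 4
BPF_DEVCG_DEV_BLOCK, BPF_DEVCG_DEV_CHAR = 1, 2

def decode_access_type(access_type: int):
    """Split the encoded field back into (access mask, device type mask),
    mirroring the (BPF_DEVCG_ACC_* << 16) | BPF_DEVCG_DEV_* encoding."""
    return access_type >> 16, access_type & 0xFFFF

# e.g. a write access to a character device:
acc, dev = decode_access_type((BPF_DEVCG_ACC_WRITE << 16) | BPF_DEVCG_DEV_CHAR)
assert acc == BPF_DEVCG_ACC_WRITE and dev == BPF_DEVCG_DEV_CHAR
```

A real BPF program would perform the same bit tests in C on ctx->access_type before deciding the return value.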

5.6.3 cgroup BPF program examples

Kernel selftests:

  1. tools/testing/selftests/bpf/progs/dev_cgroup.c
  2. tools/testing/selftests/bpf/test_dev_cgroup.c

5.7 RDMA

The “rdma” controller regulates the distribution and accounting of RDMA resources.

RDMA Interface Files

  1. rdma.max

    A read-write nested-keyed file that exists for all the cgroups except root that describes the current configured resource limit for an RDMA/IB device.

    Lines are keyed by device name and are not ordered. Each line contains space separated resource name and its configured limit that can be distributed.

    The following nested keys are defined.

    ==========    =============================
    hca_handle    Maximum number of HCA Handles
    hca_object    Maximum number of HCA Objects
    ==========    =============================

    An example for mlx4 and ocrdma device follows::

    mlx4_0 hca_handle=2 hca_object=2000
    ocrdma1 hca_handle=3 hca_object=max

  2. rdma.current

    A read-only file that describes current resource usage. It exists for all the cgroups except root.

    An example for mlx4 and ocrdma device follows::

    mlx4_0 hca_handle=1 hca_object=20
    ocrdma1 hca_handle=1 hca_object=23
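Both rdma.max and rdma.current follow the nested-keyed convention ("<device> key=value ..."); a minimal parser sketch (helper name is illustrative):

```python
def parse_nested_keyed(text: str) -> dict:
    """Parse a nested-keyed cgroup file such as rdma.max / rdma.current.
    Each line is "<device> key=value key=value ..."; the special value
    "max" (unlimited) is mapped to None here."""
    out = {}
    for line in text.strip().splitlines():
        dev, *pairs = line.split()
        out[dev] = {k: (None if v == "max" else int(v))
                    for k, v in (p.split("=") for p in pairs)}
    return out

sample = """\
mlx4_0 hca_handle=2 hca_object=2000
ocrdma1 hca_handle=3 hca_object=max"""
assert parse_nested_keyed(sample)["ocrdma1"] == {"hca_handle": 3, "hca_object": None}
```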

5.8 HugeTLB

The HugeTLB controller allows limiting HugeTLB usage per control group and enforces the limit during page fault.

HugeTLB Interface Files

  1. hugetlb.<hugepagesize>.current

    Show current usage for “hugepagesize” hugetlb. It exists for all the cgroups except root.

  2. hugetlb.<hugepagesize>.max

    Set/show the hard limit of “hugepagesize” hugetlb usage. The default value is “max”. It exists for all the cgroups except root.

  3. hugetlb.<hugepagesize>.events

    A read-only flat-keyed file which exists on non-root cgroups.

    max
        The number of allocation failures due to the HugeTLB limit

  4. hugetlb.<hugepagesize>.events.local

    Similar to hugetlb.<hugepagesize>.events but the fields in the file are local to the cgroup, i.e. not hierarchical. The file-modified event generated on this file reflects only the local events.

5.9 Misc

perf_event

perf_event controller, if not mounted on a legacy hierarchy, is automatically enabled on the v2 hierarchy so that perf events can always be filtered by cgroup v2 path. The controller can still be moved to a legacy hierarchy after v2 hierarchy is populated.

5.10 Non-normative information

The information in this section is not part of the stable kernel API and is subject to change at any time.

How the CPU controller handles the root cgroup

When distributing CPU cycles in the root cgroup each thread in this cgroup is treated as if it was hosted in a separate child cgroup of the root cgroup. This child cgroup weight is dependent on its thread nice level.

For details of this mapping see sched_prio_to_weight array in kernel/sched/core.c file (values from this array should be scaled appropriately so the neutral - nice 0 - value is 100 instead of 1024).
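A sketch of that scaling, using the sched_prio_to_weight values from kernel/sched/core.c (the helper name is illustrative):

```python
# sched_prio_to_weight from kernel/sched/core.c, indexed by nice -20 .. 19;
# nice 0 maps to 1024.
SCHED_PRIO_TO_WEIGHT = [
    88761, 71755, 56483, 46273, 36291, 29154, 23254, 18705,
    14949, 11916,  9548,  7620,  6100,  4904,  3906,  3121,
     2501,  1991,  1586,  1277,  1024,   820,   655,   526,
      423,   335,   272,   215,   172,   137,   110,    87,
       70,    56,    45,    36,    29,    23,    18,    15,
]

def nice_to_cgroup_weight(nice: int) -> int:
    """Scale the kernel weight so the neutral (nice 0) value is 100, not 1024."""
    return round(SCHED_PRIO_TO_WEIGHT[nice + 20] * 100 / 1024)

assert nice_to_cgroup_weight(0) == 100  # neutral
```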

How the IO controller handles the root cgroup

Root cgroup processes are hosted in an implicit leaf child node. When distributing IO resources this implicit child node is taken into account as if it was a normal child cgroup of the root cgroup with a weight value of 200.
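As a hypothetical illustration of the implicit leaf node, the IO share of each direct child of the root can be computed like this (names and weights are made up):

```python
def io_shares(root_child_weights: dict) -> dict:
    """Fraction of IO each direct child of the root receives, with root
    processes hosted in an implicit leaf node of weight 200."""
    weights = {**root_child_weights, "<root processes>": 200}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# two children with the default weight 100 each, plus the implicit node:
shares = io_shares({"a": 100, "b": 100})
assert shares["<root processes>"] == 0.5  # 200 / (100 + 100 + 200)
```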

6 cgroup namespaces (cgroupns)

Container environments use cgroups together with other namespaces to isolate processes, but the /proc/$PID/cgroup file may leak system-level information to those processes. For example:

$ cat /proc/self/cgroup
0::/batchjobs/container_id1 # <-- absolute cgroup path: system-level information we don't want to expose to isolated processes

The cgroup namespace was introduced for this reason; below it is abbreviated as cgroupns (just as network namespace is abbreviated netns).

6.1 Basics

6.1.1 What it does: virtualize /proc/PID/cgroup and cgroup mounts

cgroupns virtualizes the /proc/$PID/cgroup file and the cgroup mount view.

  • Without a cgroupns, cat /proc/$PID/cgroup shows the absolute path of the cgroup the process belongs to.
  • With a cgroupns, it shows the path confined within the cgroupns root.

Let's look at this in detail.

6.1.2 Creating a new cgroup namespace

A new cgroupns can be created with clone(2)/unshare(2) and the CLONE_NEWCGROUP flag.

  • The cgroup in which unshare/clone is executed at creation time becomes the cgroupns root.
  • When reading /proc/$PID/cgroup, processes inside this cgroupns only see cgroup file paths confined within their cgroupns root.

In other words, a cgroupns restricts the visibility of cgroup file paths. For example, the view without a new cgroup namespace:

$ ls -l /proc/self/ns/cgroup
lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]

$ cat /proc/self/cgroup
0::/batchjobs/container_id1  # <-- absolute path

And the view after creating a new cgroupns with unshare:

$ ls -l /proc/self/ns/cgroup
lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]

$ cat /proc/self/cgroup
0::/                         # <-- path confined within the cgroupns root

6.1.3 Multi-threaded processes: behavior after a thread unshares

For a multi-threaded process, when any one thread creates a new cgroupns via unshare, the whole process (all threads) enters the new cgroupns. This is natural for the v2 hierarchy, but for v1 it may be undesired behavior.

6.1.4 cgroupns lifetime

A cgroupns stays alive as long as either of the following holds:

  1. there are live processes inside it;
  2. there are objects in mounted filesystems pinned to it.

When the last process using the cgroupns exits or the filesystem is unmounted, the cgroupns is destroyed. The cgroupns root and the real cgroups, however, continue to exist.

6.2 cgroupns root and views, explained further

As mentioned above, the cgroupns root is the cgroup the process was in when it called unshare(2) to create the cgroupns. For example, if a process in the /batchjobs/container_id1 cgroup calls unshare, /batchjobs/container_id1 becomes the cgroupns root. For the init_cgroup_ns, this is the real root (‘/’) cgroup.

The cgroupns root does not change even if the creating process later moves to a different cgroup:

$ ~/unshare -c # create a cgroupns via unshare in the current cgroup

# the following commands are all executed inside the newly created cgroupns
$ cat /proc/self/cgroup
0::/

$ mkdir sub_cgrp_1                  # create a sub-cgroup
$ echo 0 > sub_cgrp_1/cgroup.procs  # move the current shell process into the new cgroup sub_cgrp_1
$ cat /proc/self/cgroup             # view the cgroup info of the current shell process
0::/sub_cgrp_1                      # note the relative path

Each process gets its own namespace-specific view of /proc/$PID/cgroup.

Processes running inside a cgroupns only see, in /proc/self/cgroup, cgroup paths confined within their root cgroup. For example, still inside the cgroupns created by unshare above:

# continuing in the same window, still inside the created cgroupns
$ sleep 100000 &                      # start a process in the background
[1] 7353

$ echo 7353 > sub_cgrp_1/cgroup.procs # move the process into the sub-cgroup created above
$ cat /proc/7353/cgroup               # view the process's cgroup info: a relative path
0::/sub_cgrp_1

From the default (initial) cgroupns, the real cgroup path is still visible:

$ cat /proc/7353/cgroup
0::/batchjobs/container_id1/sub_cgrp_1 # absolute path: we are not inside the new cgroupns

From a sibling cgroupns (a namespace rooted at a different cgroup), the relative cgroup path (relative to its own cgroupns root) is shown. For instance, if PID 7353’s cgroupns root is /batchjobs/container_id2, it will see:

$ cat /proc/7353/cgroup
0::/../container_id2/sub_cgrp_1

Note: the relative path always starts with /, to remind the user that it is relative to the caller's cgroupns root.
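The three views above (own namespace, init namespace, sibling namespace) differ only in which cgroup the path is expressed relative to; a sketch of the path computation (function name and paths are illustrative):

```python
import posixpath

def cgroup_path_in_ns(abs_path: str, ns_root: str) -> str:
    """What /proc/PID/cgroup shows for a process whose cgroup is abs_path,
    when read from a cgroupns rooted at ns_root."""
    rel = posixpath.relpath(abs_path, ns_root)
    return "/" if rel == "." else "/" + rel

# inside the process's own namespace root:
assert cgroup_path_in_ns("/batchjobs/container_id1/sub_cgrp_1",
                         "/batchjobs/container_id1") == "/sub_cgrp_1"
# from a sibling namespace rooted at a different cgroup:
assert cgroup_path_in_ns("/batchjobs/container_id1/sub_cgrp_1",
                         "/batchjobs/container_id2") == "/../container_id1/sub_cgrp_1"
```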

6.3 Migrating processes between cgroupns

A process inside a cgroupns can move into and out of the cgroupns root, provided it has proper access to external cgroups. For example, inside a cgroupns whose root is /batchjobs/container_id1, and assuming the global hierarchy is still accessible, a process can be migrated like this:

$ cat /proc/7353/cgroup
0::/sub_cgrp_1

$ echo 7353 > batchjobs/container_id2/cgroup.procs
$ cat /proc/7353/cgroup
0::/../container_id2

Note that this kind of migration is not encouraged. A process inside a cgroupns should only be exposed to its own cgroupns hierarchy.

A process can also be moved to another cgroupns with setns(2), provided it has:

  1. CAP_SYS_ADMIN against its current user namespace, and
  2. CAP_SYS_ADMIN against the target cgroupns's userns.

No implicit cgroup changes happen when attaching to another cgroupns. It is expected that someone moves the attaching process under the target cgroupns root.

6.4 Interaction with other cgroupns

The namespace-specific cgroup hierarchy can be mounted inside a non-init cgroupns as follows:

# mount -t <fstype> <device> <dir>
$ mount -t cgroup2 none $MOUNT_POINT

This mounts the default unified cgroup hierarchy with the cgroupns root as the filesystem root. The operation requires CAP_SYS_ADMIN.

The virtualization of /proc/self/cgroup, together with restricting the visible cgroup hierarchy via a namespace-private cgroupfs mount, provides containers with a properly isolated cgroup view.

7 Kernel programming information

Some kernel programming information relevant to interacting with cgroups.

Filesystem support for writeback

A filesystem can support cgroup writeback by updating address_space_operations->writepage[s]() to annotate bio’s using the following two functions.

wbc_init_bio(@wbc, @bio)
    Should be called for each bio carrying writeback data and
    associates the bio with the inode's owner cgroup and the
    corresponding request queue.  This must be called after
    a queue (device) has been associated with the bio and
    before submission.

wbc_account_cgroup_owner(@wbc, @page, @bytes)
    Should be called for each data segment being written out.
    While this function doesn't care exactly when it's called
    during the writeback session, it's the easiest and most
    natural to call it as data segments are added to a bio.

With writeback bio’s annotated, cgroup support can be enabled per super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for selective disabling of cgroup writeback support which is helpful when certain filesystem features, e.g. journaled data mode, are incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on the configuration, the bio may be executed at a lower priority and if the writeback session is holding shared resources, e.g. a journal entry, may lead to priority inversion. There is no one easy solution for the problem. Filesystems can try to work around specific problem cases by skipping wbc_init_bio() and using bio_associate_blkg() directly.

8 Deprecated v1 core features

  1. Multiple hierarchies including named ones are not supported.
  2. All v1 mount options are not supported.
  3. The “tasks” file is removed and “cgroup.procs” is not sorted.
  4. “cgroup.clone_children” is removed.
  5. /proc/cgroups is meaningless for v2. Use “cgroup.controllers” file at the root instead.

9 Issues with v1 and rationales for v2

9.1 Problems with multiple hierarchies in v1

v1 allowed an arbitrary number of hierarchies, and each hierarchy could host any number of controllers. While this seemed to provide a high level of flexibility, it wasn't very useful in practice. For example:

  1. Utility-type controllers (such as freezer) could in principle be useful in multiple hierarchies, but since v1 had only one instance of each controller, the utility of such controllers was greatly reduced; the fact that a controller couldn't be moved to another hierarchy once the hierarchy was populated made this worse.
  2. Another problem was that all controllers bound to a hierarchy were forced to share exactly the same view of the hierarchy; it wasn't possible to vary the granularity per controller.

In practice, these issues heavily limited which controllers could be enabled on each hierarchy, and the result was that most hierarchies ended up with all controllers enabled. Yet only closely related controllers — such as cpu and cpuacct — really make sense on the same hierarchy. So the end result was:

  1. userland ended up managing multiple very similar hierarchies;
  2. the same operations had to be repeated on each hierarchy whenever a hierarchy-management operation was performed.

Furthermore, supporting multiple hierarchies came at a steep cost. It complicated the cgroup core implementation and, more importantly, limited how cgroups could be used and what controllers could do:

  1. Since the number of hierarchies was unrestricted, a thread's cgroup membership couldn't be described in finite length.

    The cgroup file could contain an arbitrary number of entries (lines) whose length was unbounded, which made it highly awkward to manage and eventually led to the addition of controllers whose sole purpose was identifying membership — which in turn exacerbated the original problem of proliferating hierarchies.

  2. Because a controller couldn't make any assumptions about the hierarchy topologies that other controllers were on, each controller had to assume that all controllers were attached to completely orthogonal hierarchies. This made cooperation between controllers impossible — or at least very difficult.

    In most use cases, putting controllers on completely orthogonal hierarchies is unnecessary. What is usually wanted is differing levels of granularity for different controllers. In other words, viewed from a specific controller, the hierarchy should be able to collapse from leaf towards root. For example, a given configuration may not care whether memory has exceeded its limit, while still caring that CPU cycles are distributed as configured.

9.2 Thread granularity

cgroup v1 allowed threads of a process to belong to different cgroups. This didn’t make sense for some controllers and those controllers ended up implementing different ways to ignore such situations but much more importantly it blurred the line between API exposed to individual applications and system management interface.

Generally, in-process knowledge is available only to the process itself; thus, unlike service-level organization of processes, categorizing threads of a process requires active participation from the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused in combination with thread granularity. cgroups were delegated to individual applications so that they can create and manage their own sub-hierarchies and control resource distributions along them. This effectively raised cgroup to the status of a syscall-like API exposed to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be exposed this way. For a process to access its own knobs, it has to extract the path on the target hierarchy from /proc/self/cgroup, construct the path by appending the name of the knob to the path, open and then read and/or write to it. This is not only extremely clunky and unusual but also inherently racy. There is no conventional way to define transaction across the required steps and nothing can guarantee that the process would actually be operating on its own sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be accepted as public APIs because they were just adding control knobs to system-management pseudo filesystem. cgroup ended up with interface knobs which were not properly abstracted or refined and directly revealed kernel internal details. These knobs got exposed to individual applications through the ill-defined delegation mechanism effectively abusing cgroup as a shortcut to implementing public APIs without going through the required scrutiny.

This was painful for both userland and kernel. Userland ended up with misbehaving and poorly abstracted interfaces and kernel exposing and locked into constructs inadvertently.

9.3 Competition between inner nodes and threads

cgroup v1 allowed threads to be in any cgroup, which created an interesting problem: threads belonging to a parent cgroup and its children cgroups competed for resources. This was nasty as two different types of entities competed and there was no obvious way to settle it. Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and mapped nice levels to cgroup weights. This worked for some cases but fell flat when children wanted to be allocated specific ratios of CPU cycles and the number of internal threads fluctuated - the ratios constantly changed as the number of competing entities fluctuated. There also were other issues. The mapping from nice level to weight wasn’t obvious or universal, and there were various other knobs which simply weren’t available for threads.

The io controller implicitly created a hidden leaf node for each cgroup to host the threads. The hidden leaf had its own copies of all the knobs with leaf_ prefixed. While this allowed equivalent control over internal threads, it was with serious drawbacks. It always added an extra layer of nesting which wouldn’t be necessary otherwise, made the interface messy and significantly complicated the implementation.

The memory controller didn’t have a way to control what happened between internal tasks and child cgroups and the behavior was not clearly defined. There were attempts to add ad-hoc behaviors and knobs to tailor the behavior to specific workloads which would have led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with different ways to deal with it; unfortunately, all the approaches were severely flawed and, furthermore, the widely different behaviors made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core in a uniform way.

9.4 Other cgroup interface issues

The v1 design had no foresight, and a large number of odd features and inconsistencies were introduced later.

9.4.1 Core interface

Problems in the cgroup core, for example:

  • How an empty cgroup was notified: v1's implementation was crude — a userland helper binary was forked and executed for each event.
  • Event delivery was not recursive or delegatable either. The event-delivery filtering mechanism in the kernel also made the cgroup interface more complex.

9.4.2 Controller interfaces

Controller interfaces were problematic too.

  • An extreme example: some controllers completely ignored hierarchical organization and treated all cgroups as if they were directly under the root cgroup.
  • Some controllers exposed a large amount of inconsistent implementation details to userland.

9.4.3 Controller behavior

There were also inconsistencies in controller behavior.

When a new cgroup was created, some controllers by default imposed no restrictions, while others outright disabled the resource and required explicit user configuration to re-enable it. Configuration knobs for the same type of control used widely differing naming schemes and formats. Statistics and information knobs were named arbitrarily and used different formats and units even in the same controller.

v2 establishes common conventions and updates controller designs so that they expose only a minimal and consistent set of interfaces.

9.5 Controller-specific issues and how v2 addresses them

Memory

The original lower boundary, the soft limit, is defined as a limit that is per default unset. As a result, the set of cgroups that global reclaim prefers is opt-in, rather than opt-out. The costs for optimizing these mostly negative lookups are so high that the implementation, despite its enormous size, does not even provide the basic desirable behavior. First off, the soft limit has no hierarchical meaning. All configured groups are organized in a global rbtree and treated like equal peers, regardless where they are located in the hierarchy. This makes subtree delegation impossible. Second, the soft limit reclaim pass is so aggressive that it not just introduces high allocation latencies into the system, but also impacts system performance due to overreclaim, to the point where the feature becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated reserve. A cgroup enjoys reclaim protection when it’s within its effective low, which makes delegation of subtrees possible. It also enjoys having reclaim pressure proportional to its overage when above its effective low.

The original high boundary, the hard limit, is defined as a strict limit that can not budge, even if the OOM killer has to be called. But this generally goes against the goal of making the most out of the available memory. The memory consumption of workloads varies during runtime, and that requires users to overcommit. But doing that with a strict upper limit requires either a fairly accurate prediction of the working set size or adding slack to the limit. Since working set size estimation is hard and error prone, and getting it wrong results in OOM kills, most users tend to err on the side of a looser limit and end up wasting precious resources.

The memory.high boundary on the other hand can be set much more conservatively. When hit, it throttles allocations by forcing them into direct reclaim to work off the excess, but it never invokes the OOM killer. As a result, a high boundary that is chosen too aggressively will not terminate the processes, but instead it will lead to gradual performance degradation. The user can monitor this and make corrections until the minimal memory footprint that still gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete breakdown of reclaim progress within the group, the high boundary can be exceeded. But even then it’s mostly better to satisfy the allocation from the slack available in other groups or the rest of the system than killing the group. Otherwise, memory.max is there to limit this type of spillover and ultimately contain buggy or even malicious applications.

Setting the original memory.limit_in_bytes below the current usage was subject to a race condition, where concurrent charges could cause the limit setting to fail. memory.max on the other hand will first set the limit to prevent new charges, and then reclaim and OOM kill until the new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real control over swap space.

The main argument for a combined memory+swap facility in the original cgroup design was that global or parental pressure would always be able to swap all anonymous memory of a child group, regardless of the child’s own (possibly untrusted) configuration. However, untrusted groups can sabotage swapping by other means - such as referencing its anonymous memory in a tight loop - and an admin can not assume full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an intuitive userspace interface, and it flies in the face of the idea that cgroup controllers should account and limit specific physical resources. Swap space is a resource like all others in the system, and that’s why unified hierarchy allows distributing it separately.
