Developing Kubernetes Operators in Go: The Basic Structure

Permanent link to this post – https://tonybai.com/2022/08/15/developing-kubernetes-operators-in-go-part1

Note: the lead image is adapted from "Kubernetes Operators Explained".

A few years ago I was still calling Kubernetes the de facto standard for service orchestration and container scheduling; today K8s is the undisputed ruler of that domain. And although Kubernetes has grown very complex along the way, its original data model, application patterns, and extension mechanisms remain effective, and application and extension patterns like the Operator are increasingly popular with both developers and operations engineers.

Our platform runs stateful backend services, and deploying and operating stateful services is exactly what Kubernetes operators excel at, so it is time to dig into operators.

I. The Advantages of Operators

The concept of the Kubernetes operator originated at CoreOS, a container technology company later acquired by Red Hat.

Along with the concept, CoreOS also shipped the first batch of reference Operator implementations: the etcd operator and the prometheus operator.

Note: etcd was open-sourced by CoreOS in 2013; prometheus, the first time-series storage and monitoring system built for cloud-native services, was open-sourced by SoundCloud in 2012.

Here is how CoreOS described the Operator concept: An Operator represents human operational knowledge in software, to reliably manage an application.

Figure: CoreOS's description of the operator (screenshot from the CoreOS blog archive)

Operators were originally conceived to relieve operations staff, and today they are increasingly favored by cloud-native DevOps engineers.

So where exactly do the benefits of operators lie? The diagram below contrasts running a stateful application with and without an operator:

Even without knowing much about operators, this picture should give you a rough feel for their advantages.

With an operator, scaling a stateful application (scaling is just the example here; it could equally be a version upgrade or any other operation that is "complex" for a stateful application) takes the operations engineer a single simple command, with no need to understand how Kubernetes performs the scaling internally.

Without an operator, the operations engineer must deeply understand the scaling procedure of the stateful application, execute a sequence of commands one by one in order, check each response, and retry on failure until the scaling succeeds.

An operator, then, is like a seasoned operations engineer built into Kubernetes: it constantly monitors the state of its target objects, keeps the complexity to itself, presents a concise interface to operations staff, and reduces the chance of operational mistakes caused by human error.

Operators are great, but the bar for developing them is not low. The barriers show up in at least the following areas:

  • Understanding operators presupposes understanding Kubernetes, which has grown ever more complex since it was open-sourced in 2014 and takes real time to learn;
  • Writing an operator from scratch is very verbose and almost nobody does it; most developers learn a development framework or tool instead, such as kubebuilder or the operator framework sdk;
  • Operators also differ in capability: the operator framework defines a five-level operator CAPABILITY MODEL, shown below. Building high-capability-level operators in Go requires a deep understanding of the APIs in client-go, the official Kubernetes Go client library.

Figure: the operator capability model (screenshot from the operator framework website)

Among these barriers, understanding the operator concept is both the foundation and the prerequisite, and that in turn requires a solid grasp of several Kubernetes concepts, especially resource, resource type, API, controller, and the relationships between them. Let's walk through these concepts quickly.

II. Kubernetes resources, resource types, APIs, and controllers

As Kubernetes has evolved to today, its essence has become clear:

  • Kubernetes is a "database" (the data is actually persisted in etcd);
  • its API is the "SQL";
  • the API is RESTful and resource-based; resource types are the API endpoints;
  • each class of resource (i.e. each Resource Type) is a "table", whose spec corresponds to the table's schema;
  • each row in a "table" is a resource, i.e. an instance of that table's Resource Type;
  • the Kubernetes "database" ships with many built-in "tables", such as Pod, Deployment, DaemonSet, and ReplicaSet.

Here is a diagram of the relationship between the Kubernetes API and resources:

As the diagram shows, resource types come in two kinds. The first is namespace-scoped; we operate on instances of such resource types through APIs of the following form:

VERB /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE - operate on the collection of resource instances of a resource type in a given namespace
VERB /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME - operate on one specific resource instance of a resource type in a given namespace

The second kind is namespace-independent, i.e. cluster-scoped; we operate on instances of such resource types through APIs of this form:

VERB /apis/GROUP/VERSION/RESOURCETYPE - operate on the collection of resource instances of a resource type
VERB /apis/GROUP/VERSION/RESOURCETYPE/NAME - operate on one specific resource instance of a resource type
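These two URL shapes are mechanical enough to sketch in a few lines of Go. This is only an illustration, not part of the operator project below, and it ignores the legacy core group, whose paths use the /api/v1 prefix instead:

```go
package main

import (
	"fmt"
	"path"
)

// apiPath builds a Kubernetes REST API path for a (non-core-group) resource type.
// An empty namespace means the resource type is cluster-scoped;
// an empty name addresses the whole collection instead of one instance.
func apiPath(group, version, namespace, resourceType, name string) string {
	parts := []string{"/apis", group, version}
	if namespace != "" {
		parts = append(parts, "namespaces", namespace)
	}
	parts = append(parts, resourceType)
	if name != "" {
		parts = append(parts, name)
	}
	return path.Join(parts...)
}

func main() {
	// namespace-scoped: all deployments in namespace "default"
	fmt.Println(apiPath("apps", "v1", "default", "deployments", ""))
	// cluster-scoped: one specific clusterrole
	fmt.Println(apiPath("rbac.authorization.k8s.io", "v1", "", "clusterroles", "admin"))
}
```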

Of course Kubernetes is not really just a "database"; it is the platform standard for service orchestration and container scheduling, and its basic scheduling unit is the Pod (itself a resource type), i.e. a group of containers. So how are Pods created, updated, and deleted? That is where controllers come in. Every resource type has a corresponding controller. For the pod resource type, for example, the controller is a ReplicaSet instance.

A controller's logic works as shown below:

Figure: controller logic (from the article "Kubernetes Operators Explained")

Once started, a controller tries to obtain the resource's current state and compares it with the desired state (the spec) stored in Kubernetes. If they differ, the controller calls the relevant APIs to adjust, doing its best to bring the current state in line with the desired state. This convergence process is called reconciliation, and its pseudocode looks like this:

for {
    desired := getDesiredState()
    current := getCurrentState()
    makeChanges(desired, current)
}
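To make the loop concrete, here is a self-contained toy version in Go, where the "state" is just a replica count per application name. The state type and reconcile function are invented for this illustration; a real controller would of course query and mutate the cluster instead of a map:

```go
package main

import "fmt"

// state is a toy world model: replica count per application name.
type state map[string]int

// reconcile drives current one step toward desired, mirroring the
// controller loop: observe, compare, act. It only visits apps that
// appear in desired; the returned slice describes the actions taken.
func reconcile(desired, current state) []string {
	var actions []string
	for app, want := range desired {
		got := current[app]
		switch {
		case got < want:
			current[app] = got + 1
			actions = append(actions, fmt.Sprintf("scale up %s -> %d", app, got+1))
		case got > want:
			current[app] = got - 1
			actions = append(actions, fmt.Sprintf("scale down %s -> %d", app, got-1))
		}
	}
	return actions
}

func main() {
	desired := state{"webserver": 3}
	current := state{"webserver": 1}
	// keep reconciling until current matches desired
	for {
		actions := reconcile(desired, current)
		if len(actions) == 0 {
			break
		}
		for _, a := range actions {
			fmt.Println(a)
		}
	}
	fmt.Println("reconciled:", current["webserver"], "replicas")
}
```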

Note: Kubernetes also has the notion of an object. So what is an object? It is akin to Java's Object base class or Ruby's Object superclass. Not only is a resource (an instance of a resource type) an object; a resource type itself is also an object, an instance of a Kubernetes concept.

With this initial understanding of Kubernetes concepts in place, let's see what an Operator really is!

III. The Operator pattern = an object to operate on (CRD) + control logic (controller)

If operations staff had to face the built-in resource types (deployment, pod, and so on) directly, which is the second scenario in the earlier "with operator vs. without operator" comparison, their job would be complex and error-prone.

So if we are not going to face the built-in resource types directly, how do we define our own? Kubernetes provides the Custom Resource Definition, or CRD (when CoreOS first proposed the operator concept, the CRD's predecessor was the Third Party Resource, TPR), for defining custom resource types.

In terms of our earlier picture of resource types, defining a CRD is like creating a new "table" (resource type). Once the CRD is created, Kubernetes automatically generates the corresponding API endpoints for it, and we can operate on this "table" via yaml or the API. We can "insert" rows into it, i.e. create Custom Resources (CRs) based on the CRD, just as creating a Deployment instance inserts a row into the Deployment "table".

As with the native built-in resource types, a CR that merely stores object state is not enough. Native resource types have controllers that reconcile instance creation, scaling, and deletion; CRs need such a "reconciler" too. We must define a controller that watches CR state, manages CR creation, scaling, and deletion, and keeps the desired state (spec) consistent with the current state. This controller no longer targets instances of native resource types; it is a controller for CRs, the instances of our CRD.

A custom object type (the CRD) plus a controller for instances of that type: packaged together, this is the "Operator pattern". The controller in the operator pattern is also called the operator; it is the agent that maintains CRs in the cluster.

IV. Developing a webserver operator with kubebuilder

Assumption: your local development environment is already fully configured to access an experimental Kubernetes cluster, and you can operate it freely with kubectl.

No amount of accessible conceptual explanation helps understanding as much as one hands-on exercise, so let's develop a simple Operator.

As noted earlier, operator development is very verbose, so the community provides tools and frameworks to simplify it. The mainstream ones today are the operator framework sdk and kubebuilder. The former is a toolset open-sourced and maintained by Red Hat that supports operator development in Go, Ansible, or Helm (only Go can reach capability level 5; the other two cannot). kubebuilder is an operator development tool maintained by an official Kubernetes SIG (special interest group). When developing operators in Go with the operator framework sdk, the sdk now uses kubebuilder under the hood anyway, so here we will use kubebuilder directly.

On the operator capability model, our operator sits at roughly level 2. We will define a Webserver resource type representing an nginx-based webserver cluster; our operator will support creating webserver instances (an nginx cluster), scaling the cluster, and upgrading the nginx version in the cluster.

Let's build this operator with kubebuilder!

1. Install kubebuilder

We will install it by building from source, as follows:

$git clone [email protected]:kubernetes-sigs/kubebuilder.git
$cd kubebuilder
$make
$cd bin
$./kubebuilder version
Version: main.version{KubeBuilderVersion:"v3.5.0-101-g5c949c2e",
KubernetesVendor:"unknown",
GitCommit:"5c949c2e50ca8eec80d64878b88e1b2ee30bf0bc",
BuildDate:"2022-08-06T09:12:50Z", GoOs:"linux", GoArch:"amd64"}

Then copy bin/kubebuilder to some directory on your PATH.

2. Create the webserver-operator project

Next we use kubebuilder to create the webserver-operator project:

$mkdir webserver-operator
$cd webserver-operator
$kubebuilder init  --repo github.com/bigwhite/webserver-operator --project-name webserver-operator

Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/[email protected]
go: downloading k8s.io/client-go v0.24.2
go: downloading k8s.io/component-base v0.24.2
Update dependencies:
$ go mod tidy
Next: define a resource with:
kubebuilder create api

Note: --repo specifies the module root path in go.mod; you can define your own module root path.

3. Create the API and generate the initial CRD

An Operator consists of a CRD and a controller. Here we create our own CRD, i.e. the custom resource type, which is also the API endpoint, using the kubebuilder create command:

$kubebuilder create api --version v1 --kind WebServer
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
api/v1/webserver_types.go
controllers/webserver_controller.go
Update dependencies:
$ go mod tidy
Running make:
$ make generate
mkdir -p /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin
test -s /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen || GOBIN=/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin go install sigs.k8s.io/controller-tools/cmd/[email protected]
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
$ make manifests

After that, we run make manifests to generate the final yaml file for the CRD:

$make manifests
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases

At this point, the project's directory layout looks like this:

$tree -F .
.
├── api/
│   └── v1/
│       ├── groupversion_info.go
│       ├── webserver_types.go
│       └── zz_generated.deepcopy.go
├── bin/
│   └── controller-gen*
├── config/
│   ├── crd/
│   │   ├── bases/
│   │   │   └── my.domain_webservers.yaml
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches/
│   │       ├── cainjection_in_webservers.yaml
│   │       └── webhook_in_webservers.yaml
│   ├── default/
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager/
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus/
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac/
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── role.yaml
│   │   ├── service_account.yaml
│   │   ├── webserver_editor_role.yaml
│   │   └── webserver_viewer_role.yaml
│   └── samples/
│       └── _v1_webserver.yaml
├── controllers/
│   ├── suite_test.go
│   └── webserver_controller.go
├── Dockerfile
├── go.mod
├── go.sum
├── hack/
│   └── boilerplate.go.txt
├── main.go
├── Makefile
├── PROJECT
└── README.md

14 directories, 40 files

4. The basic structure of webserver-operator

Ignoring the parts we don't care about this time (leader election, auth_proxy, and so on), I have collected the main parts of this operator example into the following diagram:

The pieces in the diagram are the basic structure of an operator generated with kubebuilder.

The webserver operator consists mainly of the CRD and the controller:

  • CRD

The box at the lower left of the diagram is the CRD yaml file generated above: config/crd/bases/my.domain_webservers.yaml. The CRD is closely tied to api/v1/webserver_types.go: we define the CRD's spec fields in api/v1/webserver_types.go, and a subsequent make manifests picks up changes in webserver_types.go and updates the CRD's yaml file.

  • controller

As the right side of the diagram shows, the controller itself runs in the Kubernetes cluster as a deployment. It watches the running state of the CRs (instances of the CRD), and in its Reconcile method checks whether the desired state matches the current state, taking action when they differ.

  • Others

The upper left of the diagram concerns the controller's permissions. The controller accesses the Kubernetes API server through a serviceaccount, and role.yaml and role_binding.yaml set the controller's role and permissions.

5. Add fields to the CRD spec

To meet the webserver operator's goals, we need to add some state fields to the CRD spec. As mentioned, the CRD stays in sync with webserver_types.go under api, so we only need to edit webserver_types.go. We add two fields, Replicas and Image, to the WebServerSpec struct, representing the webserver instance's replica count and its container image respectively:

// api/v1/webserver_types.go

// WebServerSpec defines the desired state of WebServer
type WebServerSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file

    // The number of replicas that the webserver should have
    Replicas int `json:"replicas,omitempty"`

    // The container image of the webserver
    Image string `json:"image,omitempty"`

    // Foo is an example field of WebServer. Edit webserver_types.go to remove/update
    Foo string `json:"foo,omitempty"`
}
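Since these spec fields are plain Go struct fields with json tags, we can see how a CR's spec document maps onto them using nothing but the standard library. This is a standalone sketch: the WebServerSpec copy and the decodeSpec helper below are illustrative only; in the real project, decoding is done by the API server and client machinery against the generated schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WebServerSpec mirrors the two fields we added in api/v1/webserver_types.go.
type WebServerSpec struct {
	Replicas int    `json:"replicas,omitempty"`
	Image    string `json:"image,omitempty"`
}

// decodeSpec unmarshals the JSON form of a CR's spec into the struct,
// showing how the json tags map the lower-case spec keys onto the fields.
func decodeSpec(doc []byte) (WebServerSpec, error) {
	var s WebServerSpec
	err := json.Unmarshal(doc, &s)
	return s, err
}

func main() {
	doc := []byte(`{"replicas": 3, "image": "nginx:1.23.1"}`)
	s, err := decodeSpec(doc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("replicas=%d image=%s\n", s.Replicas, s.Image)
}
```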

After saving the changes, run make manifests to regenerate config/crd/bases/my.domain_webservers.yaml:

$cat my.domain_webservers.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.9.2
  creationTimestamp: null
  name: webservers.my.domain
spec:
  group: my.domain
  names:
    kind: WebServer
    listKind: WebServerList
    plural: webservers
    singular: webserver
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: WebServer is the Schema for the webservers API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: WebServerSpec defines the desired state of WebServer
            properties:
              foo:
                description: Foo is an example field of WebServer. Edit webserver_types.go
                  to remove/update
                type: string
              image:
                description: The container image of the webserver
                type: string
              replicas:
                description: The number of replicas that the webserver should have
                type: integer
            type: object
          status:
            description: WebServerStatus defines the observed state of WebServer
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}

Once the CRD is defined, we can install it into Kubernetes:

$make install
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
test -s /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize || { curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash -s -- 3.8.7 /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin; }
{Version:kustomize/v3.8.7 GitCommit:ad092cc7a91c07fdf63a2e4b7f13fa588a39af4f BuildDate:2020-11-11T23:14:14Z GoOs:linux GoArch:amd64}
kustomize installed to /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/webservers.my.domain created

Verify the installation:

$kubectl get crd|grep webservers
webservers.my.domain                                             2022-08-06T21:55:45Z

6. Modify role.yaml

Before starting on controller development, let's "pave the way" for the controller by setting up the permissions it will need.

In the controller we will create a corresponding deployment and service for each CRD instance, so the controller needs permission to operate on deployments and services. That means editing role.yaml to grant the service account controller-manager permission to operate on deployments and services:

# config/rbac/role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: manager-role
rules:
- apiGroups:
  - my.domain
  resources:
  - webservers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - my.domain
  resources:
  - webservers/finalizers
  verbs:
  - update
- apiGroups:
  - my.domain
  resources:
  - webservers/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - apps
  - ""
  resources:
  - services
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch

We'll leave the modified role.yaml here for now; it will be deployed to Kubernetes together with the controller later.

7. Implement the controller's Reconcile logic

kubebuilder has scaffolded the controller code for us; all we need to do is implement the Reconcile method of WebServerReconciler in controllers/webserver_controller.go. Below is a simplified flowchart of Reconcile; with it in hand, the code is much easier to follow:

Here is the corresponding code for the Reconcile method:

// controllers/webserver_controller.go

func (r *WebServerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues("Webserver", req.NamespacedName)

    instance := &mydomainv1.WebServer{}
    err := r.Get(ctx, req.NamespacedName, instance)
    if err != nil {
        if errors.IsNotFound(err) {
            // Request object not found, could have been deleted after reconcile request.
            // Return and don't requeue
            log.Info("Webserver resource not found. Ignoring since object must be deleted")
            return ctrl.Result{}, nil
        }

        // Error reading the object - requeue the request.
        log.Error(err, "Failed to get Webserver")
        return ctrl.Result{RequeueAfter: time.Second * 5}, err
    }

    // Check if the webserver deployment already exists, if not, create a new one
    found := &appsv1.Deployment{}
    err = r.Get(ctx, types.NamespacedName{Name: instance.Name, Namespace: instance.Namespace}, found)
    if err != nil && errors.IsNotFound(err) {
        // Define a new deployment
        dep := r.deploymentForWebserver(instance)
        log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
        err = r.Create(ctx, dep)
        if err != nil {
            log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
            return ctrl.Result{RequeueAfter: time.Second * 5}, err
        }
        // Deployment created successfully - return and requeue
        return ctrl.Result{Requeue: true}, nil
    } else if err != nil {
        log.Error(err, "Failed to get Deployment")
        return ctrl.Result{RequeueAfter: time.Second * 5}, err
    }

    // Ensure the deployment replicas and image are the same as the spec
    var replicas int32 = int32(instance.Spec.Replicas)
    image := instance.Spec.Image

    var needUpd bool
    if *found.Spec.Replicas != replicas {
        log.Info("Deployment spec.replicas change", "from", *found.Spec.Replicas, "to", replicas)
        found.Spec.Replicas = &replicas
        needUpd = true
    }

    if (*found).Spec.Template.Spec.Containers[0].Image != image {
        log.Info("Deployment spec.template.spec.container[0].image change", "from", (*found).Spec.Template.Spec.Containers[0].Image, "to", image)
        found.Spec.Template.Spec.Containers[0].Image = image
        needUpd = true
    }

    if needUpd {
        err = r.Update(ctx, found)
        if err != nil {
            log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
            return ctrl.Result{RequeueAfter: time.Second * 5}, err
        }
        // Spec updated - return and requeue
        return ctrl.Result{Requeue: true}, nil
    }

    // Check if the webserver service already exists, if not, create a new one
    foundService := &corev1.Service{}
    err = r.Get(ctx, types.NamespacedName{Name: instance.Name + "-service", Namespace: instance.Namespace}, foundService)
    if err != nil && errors.IsNotFound(err) {
        // Define a new service
        srv := r.serviceForWebserver(instance)
        log.Info("Creating a new Service", "Service.Namespace", srv.Namespace, "Service.Name", srv.Name)
        err = r.Create(ctx, srv)
        if err != nil {
            log.Error(err, "Failed to create new Service", "Service.Namespace", srv.Namespace, "Service.Name", srv.Name)
            return ctrl.Result{RequeueAfter: time.Second * 5}, err
        }
        // Service created successfully - return and requeue
        return ctrl.Result{Requeue: true}, nil
    } else if err != nil {
        log.Error(err, "Failed to get Service")
        return ctrl.Result{RequeueAfter: time.Second * 5}, err
    }

    // Tbd: Ensure the service state is the same as the spec, your homework

    // reconcile webserver operator in again 10 seconds
    return ctrl.Result{RequeueAfter: time.Second * 10}, nil
}
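The heart of this Reconcile is the "compare desired vs. observed, update only when they differ" step. Isolated as a pure function it is easy to reason about and test; the types and the diff helper below are invented for this sketch and are not part of the generated project:

```go
package main

import "fmt"

// desiredSpec captures the two fields our CR controls.
type desiredSpec struct {
	Replicas int32
	Image    string
}

// observedDeployment is the subset of deployment state Reconcile compares against.
type observedDeployment struct {
	Replicas int32
	Image    string
}

// diff reports whether an update is needed plus human-readable changes,
// mirroring the needUpd logic inside Reconcile.
func diff(want desiredSpec, got observedDeployment) (bool, []string) {
	var changes []string
	if got.Replicas != want.Replicas {
		changes = append(changes, fmt.Sprintf("replicas %d -> %d", got.Replicas, want.Replicas))
	}
	if got.Image != want.Image {
		changes = append(changes, fmt.Sprintf("image %s -> %s", got.Image, want.Image))
	}
	return len(changes) > 0, changes
}

func main() {
	need, changes := diff(
		desiredSpec{Replicas: 4, Image: "nginx:1.23.1"},
		observedDeployment{Replicas: 3, Image: "nginx:1.23.1"},
	)
	fmt.Println(need, changes)
}
```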

You may have noticed: the CRD's controller ultimately translates the CR into native Kubernetes resources such as services and deployments. Changes to CR state (replicas, image, and so on here) are ultimately turned into update operations on native resources like deployments. This is the essence of the operator! Once you see this, operators stop being an unapproachable concept.

Some readers may also notice that the flowchart above does not handle deleting the deployment and service when the CR instance is deleted. True. But for a service running 24/7 in the background, what we care about most is changes, scaling, and upgrades; deletion is the lowest-priority need.

8. Build the controller image

With the controller code written, let's build the controller's image. As we saw earlier, this controller is just a pod under a deployment running in Kubernetes. We need to build its image and deploy it to Kubernetes via a deployment.

The project kubebuilder created includes a Makefile, and make docker-build builds the controller image. docker-build compiles the controller source in a golang builder image, but without a small change to the Dockerfile the build is unlikely to succeed, because the default GOPROXY is unreachable from mainland China. The simplest fix is to build from vendor; here is the modified Dockerfile:

# Build the manager binary
FROM golang:1.18 as builder

ENV GOPROXY https://goproxy.cn
WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod
COPY go.sum go.sum
COPY vendor/ vendor/
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
#RUN go mod download

# Copy the go source
COPY main.go main.go
COPY api/ api/
COPY controllers/ controllers/

# Build
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -mod=vendor -a -o manager main.go

# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
#FROM gcr.io/distroless/static:nonroot
FROM katanomi/distroless-static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER 65532:65532

ENTRYPOINT ["/manager"]

The build steps are as follows:

$go mod vendor
$make docker-build

test -s /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen || GOBIN=/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin go install sigs.k8s.io/controller-tools/cmd/[email protected]
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
KUBEBUILDER_ASSETS="/home/tonybai/.local/share/kubebuilder-envtest/k8s/1.24.2-linux-amd64" go test ./... -coverprofile cover.out
?       github.com/bigwhite/webserver-operator    [no test files]
?       github.com/bigwhite/webserver-operator/api/v1    [no test files]
ok      github.com/bigwhite/webserver-operator/controllers    4.530s    coverage: 0.0% of statements
docker build -t bigwhite/webserver-controller:latest .
Sending build context to Docker daemon  47.51MB
Step 1/15 : FROM golang:1.18 as builder
 ---> 2d952adaec1e
Step 2/15 : ENV GOPROXY https://goproxy.cn
 ---> Using cache
 ---> db2b06a078e3
Step 3/15 : WORKDIR /workspace
 ---> Using cache
 ---> cc3c613c19c6
Step 4/15 : COPY go.mod go.mod
 ---> Using cache
 ---> 5fa5c0d89350
Step 5/15 : COPY go.sum go.sum
 ---> Using cache
 ---> 71669cd0fe8e
Step 6/15 : COPY vendor/ vendor/
 ---> Using cache
 ---> 502b280a0e67
Step 7/15 : COPY main.go main.go
 ---> Using cache
 ---> 0c59a69091bb
Step 8/15 : COPY api/ api/
 ---> Using cache
 ---> 2b81131c681f
Step 9/15 : COPY controllers/ controllers/
 ---> Using cache
 ---> e3fd48c88ccb
Step 10/15 : RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -mod=vendor -a -o manager main.go
 ---> Using cache
 ---> 548ac10321a2
Step 11/15 : FROM katanomi/distroless-static:nonroot
 ---> 421f180b71d8
Step 12/15 : WORKDIR /
 ---> Running in ea7cb03027c0
Removing intermediate container ea7cb03027c0
 ---> 9d3c0ea19c3b
Step 13/15 : COPY --from=builder /workspace/manager .
 ---> a4387fe33ab7
Step 14/15 : USER 65532:65532
 ---> Running in 739a32d251b6
Removing intermediate container 739a32d251b6
 ---> 52ae8742f9c5
Step 15/15 : ENTRYPOINT ["/manager"]
 ---> Running in 897893b0c9df
Removing intermediate container 897893b0c9df
 ---> e375cc2adb08
Successfully built e375cc2adb08
Successfully tagged bigwhite/webserver-controller:latest

Note: before running make, change the initial value of the IMG variable in the Makefile to IMG ?= bigwhite/webserver-controller:latest.

After a successful build, run make docker-push to push the image to an image registry (here, the public registry provided by Docker).

9. Deploy the controller

We already installed the CRD into Kubernetes with make install. Once we deploy the controller as well, our operator is fully deployed. A make deploy does it:

$make deploy
test -s /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen || GOBIN=/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin go install sigs.k8s.io/controller-tools/cmd/[email protected]
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
test -s /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize || { curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash -s -- 3.8.7 /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin; }
cd config/manager && /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize edit set image controller=bigwhite/webserver-controller:latest
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize build config/default | kubectl apply -f -
namespace/webserver-operator-system created
customresourcedefinition.apiextensions.k8s.io/webservers.my.domain unchanged
serviceaccount/webserver-operator-controller-manager created
role.rbac.authorization.k8s.io/webserver-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/webserver-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/webserver-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/webserver-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/webserver-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/webserver-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/webserver-operator-proxy-rolebinding created
configmap/webserver-operator-manager-config created
service/webserver-operator-controller-manager-metrics-service created
deployment.apps/webserver-operator-controller-manager created

We see that deploy installs not only the controller, serviceaccount, role, and rolebinding; it also creates the namespace and applies the CRD again. In other words, deploy is a complete operator installation command.

Note: make undeploy removes all operator-related resources completely.

Let's check the controller's logs with kubectl logs:

$kubectl logs -f deployment.apps/webserver-operator-controller-manager -n webserver-operator-system
1.6600280818476188e+09    INFO    controller-runtime.metrics    Metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
1.6600280818478029e+09    INFO    setup    starting manager
1.6600280818480284e+09    INFO    Starting server    {"path": "/metrics", "kind": "metrics", "addr": "127.0.0.1:8080"}
1.660028081848097e+09    INFO    Starting server    {"kind": "health probe", "addr": "[::]:8081"}
I0809 06:54:41.848093       1 leaderelection.go:248] attempting to acquire leader lease webserver-operator-system/63e5a746.my.domain...
I0809 06:54:57.072336       1 leaderelection.go:258] successfully acquired lease webserver-operator-system/63e5a746.my.domain
1.6600280970724037e+09    DEBUG    events    Normal    {"object": {"kind":"Lease","namespace":"webserver-operator-system","name":"63e5a746.my.domain","uid":"e05aaeb5-4a3a-4272-b036-80d61f0b6788","apiVersion":"coordination.k8s.io/v1","resourceVersion":"5238800"}, "reason": "LeaderElection", "message": "webserver-operator-controller-manager-6f45bc88f7-ptxlc_0e960015-9fbe-466d-a6b1-ff31af63a797 became leader"}
1.6600280970724993e+09    INFO    Starting EventSource    {"controller": "webserver", "controllerGroup": "my.domain", "controllerKind": "WebServer", "source": "kind source: *v1.WebServer"}
1.6600280970725305e+09    INFO    Starting Controller    {"controller": "webserver", "controllerGroup": "my.domain", "controllerKind": "WebServer"}
1.660028097173026e+09    INFO    Starting workers    {"controller": "webserver", "controllerGroup": "my.domain", "controllerKind": "WebServer", "worker count": 1}

The controller has started successfully and is waiting for events on a WebServer CR (such as its creation)! So let's create a WebServer CR!

10. Create a WebServer CR

The webserver-operator project includes a CR sample under config/samples. We adapt it, adding the fields we put into the spec:

# config/samples/_v1_webserver.yaml

apiVersion: my.domain/v1
kind: WebServer
metadata:
  name: webserver-sample
spec:
  # TODO(user): Add fields here
  image: nginx:1.23.1
  replicas: 3

We create the WebServer CR with kubectl:

$cd config/samples
$kubectl apply -f _v1_webserver.yaml
webserver.my.domain/webserver-sample created

Watch the controller's logs:

1.6602084232243123e+09  INFO    controllers.WebServer   Creating a new Deployment   {"Webserver": "default/webserver-sample", "Deployment.Namespace": "default", "Deployment.Name": "webserver-sample"}
1.6602084233446114e+09  INFO    controllers.WebServer   Creating a new Service  {"Webserver": "default/webserver-sample", "Service.Namespace": "default", "Service.Name": "webserver-sample-service"}

When the CR was created, the controller picked up the event and created the corresponding Deployment and service. Let's look at the Deployment, the three Pods, and the service created for the CR:

$kubectl get service
NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes                 ClusterIP   172.26.0.1     <none>        443/TCP        22d
webserver-sample-service   NodePort    172.26.173.0   <none>        80:30010/TCP   2m58s

$kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
webserver-sample   3/3     3            3           4m44s

$kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
webserver-sample-bc698b9fb-8gq2h   1/1     Running   0          4m52s
webserver-sample-bc698b9fb-vk6gw   1/1     Running   0          4m52s
webserver-sample-bc698b9fb-xgrgb   1/1     Running   0          4m52s

Let's hit the service:

$curl http://192.168.10.182:30010
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The service responds as expected!

11. Scaling, version changes, and Service self-healing

Now let's perform some common operational tasks on the CR.

  • Scale replicas from 3 to 4

We change the CR's replicas from 3 to 4, a scale-out of the container instances:

# config/samples/_v1_webserver.yaml

apiVersion: my.domain/v1
kind: WebServer
metadata:
  name: webserver-sample
spec:
  # TODO(user): Add fields here
  image: nginx:1.23.1
  replicas: 4

Then apply it with kubectl:

$kubectl apply -f _v1_webserver.yaml
webserver.my.domain/webserver-sample configured

After the command runs, we see the following in the operator's controller log:

1.660208962767797e+09   INFO    controllers.WebServer   Deployment spec.replicas change {"Webserver": "default/webserver-sample", "from": 3, "to": 4}

A moment later, check the pod count:

$kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
webserver-sample-bc698b9fb-8gq2h   1/1     Running   0          9m41s
webserver-sample-bc698b9fb-v9gvg   1/1     Running   0          42s
webserver-sample-bc698b9fb-vk6gw   1/1     Running   0          9m41s
webserver-sample-bc698b9fb-xgrgb   1/1     Running   0          9m41s

The webserver pod replica count was successfully scaled from 3 to 4.

  • Change the webserver image version

We change the CR's image from nginx:1.23.1 to nginx:1.23.0, then run kubectl apply to make it take effect.

The controller's log shows the response:

1.6602090494113188e+09  INFO    controllers.WebServer   Deployment spec.template.spec.container[0].image change {"Webserver": "default/webserver-sample", "from": "nginx:1.23.1", "to": "nginx:1.23.0"}

The controller updates the deployment, triggering a rolling upgrade of its pods:

$kubectl get pods
NAME                               READY   STATUS              RESTARTS   AGE
webserver-sample-bc698b9fb-8gq2h   1/1     Running             0          10m
webserver-sample-bc698b9fb-vk6gw   1/1     Running             0          10m
webserver-sample-bc698b9fb-xgrgb   1/1     Running             0          10m
webserver-sample-ffcf549ff-g6whk   0/1     ContainerCreating   0          12s
webserver-sample-ffcf549ff-ngjz6   0/1     ContainerCreating   0          12s

Wait patiently for a little while; the final pod list is:

$kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
webserver-sample-ffcf549ff-g6whk   1/1     Running   0          6m22s
webserver-sample-ffcf549ff-m6z24   1/1     Running   0          3m12s
webserver-sample-ffcf549ff-ngjz6   1/1     Running   0          6m22s
webserver-sample-ffcf549ff-t7gvc   1/1     Running   0          4m16s

  • Service self-healing: restore an accidentally deleted Service

Let's make a deliberate "slip" and delete webserver-sample-service, to see whether the controller can help the service heal itself:

$kubectl delete service/webserver-sample-service
service "webserver-sample-service" deleted

Check the controller's log:

1.6602096994710526e+09  INFO    controllers.WebServer   Creating a new Service  {"Webserver": "default/webserver-sample", "Service.Namespace": "default", "Service.Name": "webserver-sample-service"}

We can see that the controller detected that the Service had been deleted and created a new one in its place!
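This self-healing is simply the "get-or-create" branch of the reconcile loop: Get the Service; if the result is not-found, build a fresh Service object and Create it. The sketch below is self-contained, with a map standing in for the apiserver; in the real controller these are `r.Get`/`r.Create` calls against the controller-runtime `client.Client`, and not-found is detected with `apierrors.IsNotFound`:

```go
package main

import "fmt"

// Service is a stand-in for corev1.Service.
type Service struct{ Name string }

// fakeAPI replaces the apiserver for this sketch.
type fakeAPI map[string]Service

func (a fakeAPI) Get(name string) (Service, bool) { s, ok := a[name]; return s, ok }
func (a fakeAPI) Create(s Service)                { a[s.Name] = s }

// ensureService is the reconcile step: recreate the Service if it is gone.
func ensureService(api fakeAPI, name string) (created bool) {
	if _, ok := api.Get(name); ok {
		return false // already exists, nothing to do
	}
	api.Create(Service{Name: name}) // log: "Creating a new Service"
	return true
}

func main() {
	api := fakeAPI{}
	fmt.Println(ensureService(api, "webserver-sample-service")) // true: recreated
	fmt.Println(ensureService(api, "webserver-sample-service")) // false: already present
}
```

Because the reconcile loop reruns whenever the watched objects change, deleting the Service triggers a new reconciliation, the Get misses, and the create branch fires, which is exactly the "Creating a new Service" log entry shown above.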

Accessing the newly created service:

$curl http://192.168.10.182:30010
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

As you can see, the Service healed itself with the controller's help!

5. Summary

This article gave an initial introduction to the concept and benefits of Kubernetes Operators, and used the kubebuilder tool to build an operator with level-2 capability. This operator is still far from complete; its main purpose is to help you understand the operator concept and the typical implementation pattern.

After reading this article, you should have a fairly clear picture of operators, especially their basic structure, and be able to develop a simple operator yourself!

The source code involved in this article can be downloaded here – https://github.com/bigwhite/experiments/tree/master/webserver-operator.

6. References

  • Kubernetes Operators 101, Part 1: Overview and key features – https://developers.redhat.com/articles/2021/06/11/kubernetes-operators-101-part-1-overview-and-key-features
  • Kubernetes Operators 101, Part 2: How operators work – https://developers.redhat.com/articles/2021/06/22/kubernetes-operators-101-part-2-how-operators-work
  • Operator SDK: Build Kubernetes Operators – https://developers.redhat.com/blog/2020/04/28/operator-sdk-build-kubernetes-operators-and-deploy-them-on-openshift
  • kubernetes doc: Custom Resources – https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
  • kubernetes doc: Operator pattern – https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
  • kubernetes doc: API concepts – https://kubernetes.io/docs/reference/using-api/api-concepts/
  • Introducing Operators: Putting Operational Knowledge into Software (the first article on operators, by CoreOS) – https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html
  • CNCF Operator White Paper v1.0 – https://github.com/cncf/tag-app-delivery/blob/main/operator-whitepaper/v1/Operator-WhitePaper_v1-0.md
  • Best practices for building Kubernetes Operators and stateful apps – https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps
  • A deep dive into Kubernetes controllers – https://docs.bitnami.com/tutorials/a-deep-dive-into-kubernetes-controllers
  • Kubernetes Operators Explained – https://blog.container-solutions.com/kubernetes-operators-explained
  • Book: Kubernetes Operator – https://book.douban.com/subject/34796009/
  • Book: Programming Kubernetes – https://book.douban.com/subject/35498478/
  • Operator SDK Reaches v1.0 – https://cloud.redhat.com/blog/operator-sdk-reaches-v1.0
  • What is the difference between kubebuilder and operator-sdk – https://github.com/operator-framework/operator-sdk/issues/1758
  • Kubernetes Operators in Depth – https://www.infoq.com/articles/kubernetes-operators-in-depth/
  • Get started using Kubernetes Operators – https://developer.ibm.com/learningpaths/kubernetes-operators/
  • Use Kubernetes operators to extend Kubernetes’ functionality – https://developer.ibm.com/learningpaths/kubernetes-operators/operators-extend-kubernetes/
  • memcached operator – https://github.com/operator-framework/operator-sdk-samples/tree/master/go/memcached-operator


© 2022, bigwhite. All rights reserved.