The pain point of microservice backend development: have you been there?


Writing this down while it's fresh, for my future self.

Starting from the pain point

I suspect many backend developers working on microservices run into the same pain point in their daily work:

After building a new feature or fixing a bug locally, you need to debug it. Because the k8s cluster lives in the cloud and the local service cannot join it directly, all you can do is start the service locally and construct test requests with an API tool such as Postman.

Sometimes, though, that is not enough. A request may involve logic too complex to exercise with a single hand-crafted call, or the service under development may depend on other microservices reached through internal calls such as Feign, which a locally started service cannot make on its own. In those cases the only option is to push the code to the dev environment, wait out a long CI/CD run, trigger the request from the frontend page, and debug by reading backend logs. If the result is not what you expected, you cannot step through the code in that environment either; you go back to the local code, guess where the problem might be, add logging, push to the dev environment again, and repeat.

This pain point badly hurts both the development experience and development efficiency. So this article tackles it head-on, to make local development feel smooth again.

Solving the pain point

Our goal: after the service is started locally (e.g. from IDEA), the cloud k8s cluster recognizes and manages it, so a developer can fire requests straight from the frontend page, have the traffic land on the local environment, and step through the code in the local IDE.

Technology selection

A bit of research shows there are fairly mature solutions for this scenario, such as Telepresence and Alibaba's kt-connect.

How Telepresence works

How kt-connect works

Both approaches work. However, some Telepresence features now require registering an account on its commercial cloud platform and doing the configuration there. That breaks the closed loop between the local dev environment and the cloud k8s environment and creates a risk of code leakage, and Telepresence is clearly trending commercial, which makes it a poor fit for long-term, stable team use.

As for Alibaba's open-source projects... well, you know how that usually goes. Still, kt-connect at least meets our current needs and is simple to adopt, so it is the choice here (starred).

Installation

Using Linux x86_64 as the example (see the project's releases page for other platforms):

```
$ curl -OL https://github.com/alibaba/kt-connect/releases/download/v0.3.6/ktctl_0.3.6_Linux_x86_64.tar.gz
$ tar zxf ktctl_0.3.6_Linux_x86_64.tar.gz
$ mv ktctl /usr/local/bin/ktctl
$ ktctl --version
```

Usage

  1. First, connect the local environment to the cloud k8s cluster

Use the ktctl connect command to establish a network tunnel from the local machine to the cluster. Note that this command requires administrator privileges.

```
$ sudo ktctl connect
2:07PM INF Using cluster context {k8s_cluster_context} ({k8s_cluster_context})
2:07PM INF kt-connect 0.3.6 start at 3527 (linux amd64)
2:07PM INF Fetching cluster time ...
2:07PM INF Using tun2socks mode
2:07PM INF Successful create config map kt-connect-shadow-ibufe
2:07PM INF Deploying shadow pod kt-connect-shadow-ibufe in namespace default
2:07PM INF Waiting for pod kt-connect-shadow-ibufe ...
2:07PM INF Pod kt-connect-shadow-ibufe is ready
2:07PM INF Port forward local:22129 -> pod kt-connect-shadow-ibufe:22 established
2:07PM INF Socks proxy established
2:07PM INF Tun device kt0 is ready
2:07PM INF Adding route to 192.168.0.0/16
2:07PM INF Adding route to 10.20.0.0/16
2:07PM INF Adding route to 172.31.0.0/16
2:07PM INF Route to tun device completed
2:07PM INF Setting up dns in local mode
2:07PM INF Port forward local:36922 -> pod kt-connect-shadow-ibufe:53 established
2:07PM INF Setup local DNS with upstream [tcp:127.0.0.1:36922 udp:10.1.7.5:53]
2:07PM INF Creating udp dns on port 10053
2:07PM INF ---------------------------------------------------------------
2:07PM INF All looks good, now you can access to resources in the kubernetes cluster
2:07PM INF ---------------------------------------------------------------
```

Seeing All looks good, now you can access to resources in the kubernetes cluster means the connection succeeded.
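At this point the local machine can resolve in-cluster DNS names and reach cluster IPs directly. A quick sanity check, as a sketch only: the service name, namespace, port, and /health path below are placeholders, assuming your service exposes some HTTP endpoint.

```
# Call an in-cluster service from the local shell via its k8s DNS name.
# {your_service_in_k8s}, {your_namespace_in_k8s}, 8080 and /health are placeholders.
$ curl http://{your_service_in_k8s}.{your_namespace_in_k8s}.svc.cluster.local:8080/health
```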

  2. Redirect traffic from the k8s cluster to the local service

kt-connect offers two commands for redirecting cluster traffic to a local service, with slightly different use cases.

  • Exchange: redirects all traffic of a given cluster service to a specified local port; suited to one person developing a service alone
  • Mesh: redirects part of a given service's traffic (matched by header or label rules) to local; suited to several people developing the same service at once

The biggest difference between ktctl exchange and ktctl mesh is that the former replaces all traffic to the original application instances with the local service, whereas the latter forwards only requests carrying the specified header to local, leaving the normal path through the test environment available the whole time.

Exchange

Diagram (Exchange): all cluster traffic to the service is redirected to the local process

Run

```
$ ktctl exchange -n {your_namespace_in_k8s} {your_service_in_k8s} --expose {your_local_service_port}:{your_service_target_port_in_k8s}
2:43PM INF Using cluster context {k8s_cluster_context} ({k8s_cluster_context})
2:43PM INF kt-connect 0.3.6 start at 5848 (linux amd64)
2:43PM INF Fetching cluster time ...
2:43PM INF Using selector mode
2:43PM INF Service {your_service_in_k8s} locked
2:43PM INF Successful create config map {your_service_in_k8s}-kt-exchange-rnpwz
2:43PM INF Deploying shadow pod {your_service_in_k8s}-kt-exchange-rnpwz in namespace {your_namespace_in_k8s}
2:43PM INF Waiting for pod {your_service_in_k8s}-kt-exchange-rnpwz ...
2:43PM INF Pod {your_service_in_k8s}-kt-exchange-rnpwz is ready
2:43PM INF Forwarding pod {your_service_in_k8s}-kt-exchange-rnpwz to local via port {your_service_target_port_in_k8s}
2:43PM INF Port forward local:63755 -> pod {your_service_in_k8s}-kt-exchange-rnpwz:22 established
2:43PM INF Reverse tunnel 0.0.0.0:{your_service_target_port_in_k8s} -> 127.0.0.1:{your_local_service_port} established
2:43PM INF Service {your_service_in_k8s} unlocked
2:43PM INF ---------------------------------------------------------------
2:43PM INF Now all request to service '{your_service_in_k8s}' will be redirected to local
2:43PM INF ---------------------------------------------------------------
```

Seeing Now all request to service '{your_service_in_k8s}' will be redirected to local means the exchange succeeded.
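Since exchanging rewires the cluster's Service, it should be undone when you finish. Pressing Ctrl+C normally restores the Service automatically; if ktctl exits abnormally and leaves things redirected, kt-connect 0.3.x ships recovery subcommands. A hedged sketch; check ktctl --help for the exact syntax of your version:

```
# Restore the Service's original selector if it was left pointing at the shadow pod
# (assumes the `recover` subcommand available in kt-connect 0.3.x).
$ ktctl recover -n {your_namespace_in_k8s} {your_service_in_k8s}
# Remove expired shadow pods, services and config maps left behind in the cluster.
$ ktctl clean
```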

Mesh

Diagram (Mesh): only requests carrying the designated header are routed to the local process

Run

```
$ ktctl mesh -n {your_namespace_in_k8s} {your_service_in_k8s} --expose {your_local_service_port}:{your_service_target_port_in_k8s}
2:56PM DBG Background task log to /tmp/kt-2463223336
2:56PM INF Using cluster context {k8s_cluster_context} ({k8s_cluster_context})
2:56PM INF kt-connect 0.3.6 start at 6886 (linux amd64)
2:56PM DBG Rectify pod kt-rectifier-iprip created
2:56PM INF Fetching cluster time ...
2:56PM DBG Execute command [date +%s] in kt-rectifier-iprip:standalone
2:56PM DBG No time difference
2:56PM DBG Service target ports: [{your_service_target_port_in_k8s}]
2:56PM INF Using auto mode
2:56PM INF Service {your_service_in_k8s} locked
2:56PM INF Service {your_service_in_k8s}-kt-stuntman created
2:56PM INF Service {your_service_in_k8s}-kt-mesh-khgvl created
2:56PM INF Router pod {your_service_in_k8s}-kt-router created
2:56PM INF Waiting for pod {your_service_in_k8s}-kt-router ...
2:56PM INF Pod {your_service_in_k8s}-kt-router is ready
2:56PM INF Router pod is ready
2:56PM DBG Execute command [/usr/sbin/router setup {your_service_in_k8s} 80:{your_local_service_port} version:khgvl] in {your_service_in_k8s}-kt-router:standalone
2:56PM DBG Stdout:
2:56PM DBG Stderr: 6:56AM INF Route setup completed.
2:56PM INF Router pod configuration done
2:56PM DBG Private Key generated
2:56PM DBG Public key generated
2:56PM INF Successful create config map {your_service_in_k8s}-kt-mesh-khgvl
2:56PM INF Deploying shadow pod {your_service_in_k8s}-kt-mesh-khgvl in namespace {your_namespace_in_k8s}
2:56PM INF Waiting for pod {your_service_in_k8s}-kt-mesh-khgvl ...
2:56PM INF Pod {your_service_in_k8s}-kt-mesh-khgvl is ready
2:56PM INF Forwarding pod {your_service_in_k8s}-kt-mesh-khgvl to local via port {your_local_service_port}
2:56PM DBG Using port 26879
2:56PM DBG Request port forward pod:22 -> local:26879 via https://{k8s_api_server}:6443
2:56PM INF Port forward local:26879 -> pod {your_service_in_k8s}-kt-mesh-khgvl:22 established
2:56PM DBG Forwarding 127.0.0.1:26879 to local endpoint 0.0.0.0:{your_local_service_port} via 127.0.0.1:{your_local_service_port}
2:56PM INF Reverse tunnel 0.0.0.0:{your_local_service_port} -> 127.0.0.1:{your_local_service_port} established
2:56PM INF ---------------------------------------------------------------
2:56PM INF Now you can access your service by header 'VERSION: khgvl'
2:56PM INF ---------------------------------------------------------------
2:56PM DBG Service {your_service_in_k8s} modified
2:56PM INF Service {your_service_in_k8s} unlocked
```

Seeing Now you can access your service by header 'VERSION: khgvl' means the mesh succeeded. From now on, adding the header VERSION: khgvl to a browser request routes it to the local service, while every request without the VERSION: khgvl header still goes to the cloud service.

How do you modify browser request headers? Use ModHeader.
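The same routing rule is easy to verify from the command line; {your_gateway_host} and {your_api_path} below are placeholders for whatever endpoint your frontend actually calls:

```
# With the mesh header, the request is routed to the local service ...
$ curl -H 'VERSION: khgvl' http://{your_gateway_host}/{your_api_path}
# ... without it, the request still goes to the version running in the cluster.
$ curl http://{your_gateway_host}/{your_api_path}
```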

Pitfalls

If your microservices' load balancing goes through a third-party component, such as Nacos service discovery, you need to switch that discovery off and let the k8s Service do the load balancing instead. The main change is the spring.cloud.gateway.routes.uri field in the gateway's route configuration: change lb://{your_service} to http://{your_service}, which moves load balancing from Nacos to the k8s Service, as in the sketch below.
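A minimal sketch of that change in the gateway's application.yml; the route id and path predicate are hypothetical placeholders, and only the uri line is the actual change:

```
spring:
  cloud:
    gateway:
      routes:
        - id: your-service-route            # hypothetical route id
          # before: uri: lb://{your_service}   (client-side load balancing via Nacos)
          uri: http://{your_service}        # resolved via the k8s Service; kube-proxy balances across pods
          predicates:
            - Path=/api/your-service/**     # hypothetical path predicate
```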

This configuration is needed because kt-connect works, in essence, by "hijacking" the backend microservice's Service in k8s: it modifies the Service so that requests are forwarded to the corresponding pod. Traffic that bypasses the Service, such as Nacos client-side load balancing going straight to pod IPs, never passes through that hijack point, so the redirection cannot take effect.
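You can watch this hijack happen; a quick check, assuming kubectl access to the namespace:

```
# While ktctl exchange is running, the Service's selector no longer points at the
# original deployment's pods but at the kt shadow pod:
$ kubectl -n {your_namespace_in_k8s} get svc {your_service_in_k8s} -o jsonpath='{.spec.selector}'
```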

That's all.