Pain points in microservice backend development: have you been there?

Jotting this down while it's fresh, for my future self.

Starting from the pain point

I believe many backend developers working on microservices run into this pain point in their daily work:

After building a new feature or fixing a bug locally, you need to debug it. But since the k8s cluster runs in the cloud and a local service cannot join it directly, all you can do is start the service locally and hand-craft test requests with an API tool such as Postman.

Worse, some requests carry logic too complex to exercise with a single hand-crafted call. Or the service you are working on depends on other microservices, with inter-service calls going through Feign or the like, so a service started alone on your machine simply cannot reach them. In those cases the only option is to push the code to the dev environment, sit through a long CI/CD run, trigger the flow from the frontend page, and debug by reading the backend logs. And if the result is not what you expected, you cannot single-step through the code in that environment either; you can only guess at the suspicious spots locally, sprinkle in more logging, push to the dev environment again, and go around the loop once more.

This pain point takes a real toll on both developer experience and efficiency. So this article goes straight at it, to make local development feel silky smooth again.

Solving the pain point

Our goal: after a service is started locally (e.g. from IDEA), the cloud k8s cluster can discover and manage it; a developer can then send requests straight from the frontend page, have the traffic land on the local environment, and single-step through the code in the local IDE.

Technology selection

After some research, we found fairly mature solutions for this scenario, such as Telepresence and Alibaba's kt-connect.

How Telepresence works

How kt-connect works

Both approaches work, but some of Telepresence's features now require registering an account on its commercial cloud platform and doing the configuration there. That breaks the closed loop between the local dev environment and the cloud k8s environment and carries a risk of code leakage; Telepresence is also trending commercial, which makes it a poor fit for long-term, stable team use.

Say what you will about Alibaba's open-source projects (if you know, you know), kt-connect at least covers our current needs and is trivial to adopt, so kt-connect it is (starred).

Installation

Taking a linux_x86_64 system as the example here (installation for other systems is similar):

```
$ curl -OL https://github.com/alibaba/kt-connect/releases/download/v0.3.6/ktctl_0.3.6_Linux_x86_64.tar.gz
$ tar zxf ktctl_0.3.6_Linux_x86_64.tar.gz
$ mv ktctl /usr/local/bin/ktctl
$ ktctl --version
```

Usage

  1. First, connect the local environment to the cloud k8s cluster

Use the ktctl connect command to establish a network tunnel from the local machine to the cluster. Note that this command requires administrator privileges.

```
$ sudo ktctl connect
2:07PM INF Using cluster context {k8s_cluster_context} ({k8s_cluster_context})
2:07PM INF kt-connect 0.3.6 start at 3527 (linux amd64)
2:07PM INF Fetching cluster time ...
2:07PM INF Using tun2socks mode
2:07PM INF Successful create config map kt-connect-shadow-ibufe
2:07PM INF Deploying shadow pod kt-connect-shadow-ibufe in namespace default
2:07PM INF Waiting for pod kt-connect-shadow-ibufe ...
2:07PM INF Pod kt-connect-shadow-ibufe is ready
2:07PM INF Port forward local:22129 -> pod kt-connect-shadow-ibufe:22 established
2:07PM INF Socks proxy established
2:07PM INF Tun device kt0 is ready
2:07PM INF Adding route to 192.168.0.0/16
2:07PM INF Adding route to 10.20.0.0/16
2:07PM INF Adding route to 172.31.0.0/16
2:07PM INF Route to tun device completed
2:07PM INF Setting up dns in local mode
2:07PM INF Port forward local:36922 -> pod kt-connect-shadow-ibufe:53 established
2:07PM INF Setup local DNS with upstream [tcp:127.0.0.1:36922 udp:10.1.7.5:53]
2:07PM INF Creating udp dns on port 10053
2:07PM INF ---------------------------------------------------------------
2:07PM INF All looks good, now you can access to resources in the kubernetes cluster
2:07PM INF ---------------------------------------------------------------
```

Seeing All looks good, now you can access to resources in the kubernetes cluster means the connection succeeded.
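A quick sanity check: with the tunnel up, in-cluster Service DNS names and pod IPs on the routed CIDRs become reachable straight from your machine. In the sketch below, tomcat, default, and 8080 are stand-ins for whatever HTTP service actually runs in your cluster:

```
# In-cluster DNS names of the form {service}.{namespace}:{port} should now
# resolve and respond locally, without any manual port-forward.
$ curl http://tomcat.default:8080
```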

  2. Forward the k8s cluster's traffic to the local machine

kt-connect provides two commands that redirect cluster traffic to a local service; their use cases differ slightly.

  • Exchange: redirects all traffic of a given cluster service to a specified local port; suited to one person developing a service on their own
  • Mesh: redirects part of a given cluster service's traffic (matched by header or label rules) to the local machine; suited to several people developing the same service at once

The biggest difference between ktctl exchange and ktctl mesh is that the former replaces the original application instances entirely, so all traffic is received by the local service, while the latter diverts only requests carrying the designated header, keeping the normal test-environment path available throughout.

Exchange

Diagram

(image omitted: Exchange redirects all of the service's cluster traffic to the local port)

Run

```
$ ktctl exchange -n {your_namespace_in_k8s} {your_service_in_k8s} --expose {your_local_service_port}:{your_service_target_port_in_k8s}
2:43PM INF Using cluster context {k8s_cluster_context} ({k8s_cluster_context})
2:43PM INF kt-connect 0.3.6 start at 5848 (linux amd64)
2:43PM INF Fetching cluster time ...
2:43PM INF Using selector mode
2:43PM INF Service {your_service_in_k8s} locked
2:43PM INF Successful create config map {your_service_in_k8s}-kt-exchange-rnpwz
2:43PM INF Deploying shadow pod {your_service_in_k8s}-kt-exchange-rnpwz in namespace {your_namespace_in_k8s}
2:43PM INF Waiting for pod {your_service_in_k8s}-kt-exchange-rnpwz ...
2:43PM INF Pod {your_service_in_k8s}-kt-exchange-rnpwz is ready
2:43PM INF Forwarding pod {your_service_in_k8s}-kt-exchange-rnpwz to local via port {your_service_target_port_in_k8s}
2:43PM INF Port forward local:63755 -> pod {your_service_in_k8s}-kt-exchange-rnpwz:22 established
2:43PM INF Reverse tunnel 0.0.0.0:{your_service_target_port_in_k8s} -> 127.0.0.1:{your_local_service_port} established
2:43PM INF Service {your_service_in_k8s} unlocked
2:43PM INF ---------------------------------------------------------------
2:43PM INF Now all request to service '{your_service_in_k8s}' will be redirected to local
2:43PM INF ---------------------------------------------------------------
```

When you see Now all request to service '{your_service_in_k8s}' will be redirected to local, the exchange succeeded.
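One way to verify, while ktctl connect is still running: call the cluster Service by its DNS name and confirm the response comes from the code running in your IDE. Here {your_k8s_service_port} is the port the k8s Service itself exposes (often 80), and /health is a made-up endpoint; substitute any API your service actually serves:

```
# The request enters the cluster Service, which kt-connect now points at its
# shadow pod, and is tunneled back to the locally started service.
$ curl http://{your_service_in_k8s}.{your_namespace_in_k8s}:{your_k8s_service_port}/health
```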

Mesh

Diagram

(image omitted: Mesh routes only requests matching the header rule to the local service)

Run

```
$ ktctl mesh -n {your_namespace_in_k8s} {your_service_in_k8s} --expose {your_local_service_port}:{your_service_target_port_in_k8s}
2:56PM DBG Background task log to /tmp/kt-2463223336
2:56PM INF Using cluster context {k8s_cluster_context} ({k8s_cluster_context})
2:56PM INF kt-connect 0.3.6 start at 6886 (linux amd64)
2:56PM DBG Rectify pod kt-rectifier-iprip created
2:56PM INF Fetching cluster time ...
2:56PM DBG Execute command [date +%s] in kt-rectifier-iprip:standalone
2:56PM DBG No time difference
2:56PM DBG Service target ports: [{your_service_target_port_in_k8s}]
2:56PM INF Using auto mode
2:56PM INF Service {your_service_in_k8s} locked
2:56PM INF Service {your_service_in_k8s}-kt-stuntman created
2:56PM INF Service {your_service_in_k8s}-kt-mesh-khgvl created
2:56PM INF Router pod {your_service_in_k8s}-kt-router created
2:56PM INF Waiting for pod {your_service_in_k8s}-kt-router ...
2:56PM INF Pod {your_service_in_k8s}-kt-router is ready
2:56PM INF Router pod is ready
2:56PM DBG Execute command [/usr/sbin/router setup {your_service_in_k8s} 80:{your_local_service_port} version:khgvl] in {your_service_in_k8s}-kt-router:standalone
2:56PM DBG Stdout:
2:56PM DBG Stderr: 6:56AM INF Route setup completed.
2:56PM INF Router pod configuration done
2:56PM DBG Private Key generated
2:56PM DBG Public key generated
2:56PM INF Successful create config map {your_service_in_k8s}-kt-mesh-khgvl
2:56PM INF Deploying shadow pod {your_service_in_k8s}-kt-mesh-khgvl in namespace {your_namespace_in_k8s}
2:56PM INF Waiting for pod {your_service_in_k8s}-kt-mesh-khgvl ...
2:56PM INF Pod {your_service_in_k8s}-kt-mesh-khgvl is ready
2:56PM INF Forwarding pod {your_service_in_k8s}-kt-mesh-khgvl to local via port {your_local_service_port}
2:56PM DBG Using port 26879
2:56PM DBG Request port forward pod:22 -> local:26879 via https://{k8s_api_server}:6443
2:56PM INF Port forward local:26879 -> pod {your_service_in_k8s}-kt-mesh-khgvl:22 established
2:56PM DBG Forwarding 127.0.0.1:26879 to local endpoint 0.0.0.0:{your_local_service_port} via 127.0.0.1:{your_local_service_port}
2:56PM INF Reverse tunnel 0.0.0.0:{your_local_service_port} -> 127.0.0.1:{your_local_service_port} established
2:56PM INF ---------------------------------------------------------------
2:56PM INF Now you can access your service by header 'VERSION: khgvl'
2:56PM INF ---------------------------------------------------------------
2:56PM DBG Service {your_service_in_k8s} modified
2:56PM INF Service {your_service_in_k8s} unlocked
```

When you see Now you can access your service by header 'VERSION: khgvl', the mesh succeeded. From now on, adding VERSION: khgvl to a browser request's headers forwards that request to the local service (all requests without the VERSION: khgvl header still go to the cloud service).

How do you modify browser request headers? Use ModHeader.
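If you prefer the command line to a browser extension, the same header can be set with curl (again, {your_k8s_service_port} and /health are placeholders):

```
# With the 'VERSION: khgvl' header (the value printed by 'ktctl mesh' above),
# the request is routed to the local service; without it, traffic keeps going
# to the original pods in the cluster.
$ curl -H 'VERSION: khgvl' http://{your_service_in_k8s}.{your_namespace_in_k8s}:{your_k8s_service_port}/health
```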

Pitfalls

If load balancing between your microservices is handled by a third-party component, such as Nacos service discovery, you need to turn that off and let k8s's built-in Service do the load balancing. The main change is the gateway route's spring.cloud.gateway.routes.uri field: switch lb://{your_service} to http://{your_service}, and load balancing moves from Nacos to the k8s Service.
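As a minimal sketch of that change, assuming a gateway route defined in application.yml (the route id, service name, and path predicate here are made-up examples):

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: order-route                # hypothetical route id
          # before: uri: lb://order-service   (load-balanced via Nacos discovery)
          uri: http://order-service      # after: resolved via the k8s Service DNS name
          predicates:
            - Path=/api/order/**
```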

This configuration is needed because kt-connect works precisely by "hijacking" the backend microservice's Service object in k8s: it modifies the Service so that requests get forwarded to the corresponding pod.
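You can watch this happen yourself. Assuming standard kubectl access and the same placeholders as above, the Service's pod selector should flip to kt-connect's shadow-pod labels while an exchange is active (a sketch, not verified against every kt-connect mode):

```
# Print the Service's pod selector; run it before, during, and after
# 'ktctl exchange' to see kt-connect repoint it at the shadow pod.
$ kubectl -n {your_namespace_in_k8s} get service {your_service_in_k8s} -o jsonpath='{.spec.selector}'
```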

That's all.