[Cloud Native] Hive on k8s Environment Deployment


1. Overview

Hive is a data warehouse (DW) built on top of Hadoop: it maps structured data files onto database tables and provides SQL-like querying. It is a system for storing, analyzing, and reporting on data. This post covers only the deployment; for the underlying concepts, see my earlier article: Big Data Hadoop — Data Warehouse Hive.

Hive architecture (diagram omitted)

Hive client architecture (diagram omitted)

2. Deployment

Because Hive depends on Hadoop, Hive is folded into the Hadoop HA on k8s orchestration here. For background, see: [Cloud Native] Hadoop HA on k8s Environment Deployment.

1) Build the image

Dockerfile:

```bash
FROM myharbor.com/bigdata/centos:7.9.2009

RUN rm -f /etc/localtime && ln -sv /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone

ENV LANG=zh_CN.UTF-8

# Create the user and group; keep in sync with spec.template.spec.containers.securityContext.runAsUser: 9999 in the YAML orchestration
RUN groupadd --system --gid=9999 admin && useradd --system --home-dir /home/admin --uid=9999 --gid=admin admin

# Install sudo
RUN yum -y install sudo ; chmod 640 /etc/sudoers

# Grant admin passwordless sudo
RUN echo "admin ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

RUN yum -y install net-tools telnet wget

RUN mkdir /opt/apache/

ADD jdk-8u212-linux-x64.tar.gz /opt/apache/
ENV JAVA_HOME=/opt/apache/jdk1.8.0_212
ENV PATH=$JAVA_HOME/bin:$PATH

ENV HADOOP_VERSION 3.3.2
ENV HADOOP_HOME=/opt/apache/hadoop

ENV HADOOP_COMMON_HOME=${HADOOP_HOME} \
    HADOOP_HDFS_HOME=${HADOOP_HOME} \
    HADOOP_MAPRED_HOME=${HADOOP_HOME} \
    HADOOP_YARN_HOME=${HADOOP_HOME} \
    HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop \
    PATH=${PATH}:${HADOOP_HOME}/bin

# Alternatively, download Hadoop from a mirror instead of ADDing a local tarball:
# RUN curl --silent --output /tmp/hadoop.tgz https://ftp-stud.hs-esslingen.de/pub/Mirrors/ftp.apache.org/dist/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz && tar --directory /opt/apache -xzf /tmp/hadoop.tgz && rm /tmp/hadoop.tgz

ADD hadoop-${HADOOP_VERSION}.tar.gz /opt/apache
RUN ln -s /opt/apache/hadoop-${HADOOP_VERSION} ${HADOOP_HOME}

ENV HIVE_VERSION 3.1.2
ADD hive-${HIVE_VERSION}.tar.gz /opt/apache/
ENV HIVE_HOME=/opt/apache/hive
ENV PATH=$HIVE_HOME/bin:$PATH
RUN ln -s /opt/apache/hive-${HIVE_VERSION} ${HIVE_HOME}

RUN chown -R admin:admin /opt/apache

WORKDIR /opt/apache

# Hdfs ports
EXPOSE 50010 50020 50070 50075 50090 8020 9000

# Mapred ports
EXPOSE 19888

# Yarn ports
EXPOSE 8030 8031 8032 8033 8040 8042 8088

# Other ports
EXPOSE 49707 2122
```

Build the image:

```bash
docker build -t myharbor.com/bigdata/hadoop-hive:v3.3.2-3.1.2 . --no-cache

# Flag reference:
#   -t: name (and tag) of the image
#   . : build context; the Dockerfile is in the current directory
#   -f: path to the Dockerfile (only needed when it is not ./Dockerfile)
#   --no-cache: build without using the cache

docker push myharbor.com/bigdata/hadoop-hive:v3.3.2-3.1.2
```
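Before wiring the image into the chart, it is worth a quick smoke test. A minimal sketch, assuming the image is available locally (or pullable from myharbor.com):

```bash
# Both binaries are on PATH per the ENV lines in the Dockerfile above
docker run --rm myharbor.com/bigdata/hadoop-hive:v3.3.2-3.1.2 hadoop version
docker run --rm myharbor.com/bigdata/hadoop-hive:v3.3.2-3.1.2 hive --version
```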

2) Add the Metastore service orchestration

1. Configuration

hadoop/templates/hive/hive-configmap.yaml

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "hadoop.fullname" . }}-hive
  labels:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    helm.sh/chart: {{ include "hadoop.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
data:
  hive-site.xml: |
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>

      <!-- Warehouse directory on HDFS -->
      <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive_remote/warehouse</value>
      </property>

      <property>
        <name>hive.metastore.local</name>
        <value>false</value>
      </property>

      <!-- JDBC URL of the backing MySQL database; hive_metastore is the database name,
           created automatically thanks to createDatabaseIfNotExist, so name it as you like -->
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://mysql-primary-headless.mysql:3306/hive_metastore?createDatabaseIfNotExist=true&amp;useSSL=false&amp;serverTimezone=Asia/Shanghai</value>
      </property>

      <!-- MySQL driver -->
      <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.cj.jdbc.Driver</value>
        <!--<value>com.mysql.jdbc.Driver</value>-->
      </property>

      <!-- MySQL user -->
      <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
      </property>

      <!-- MySQL password -->
      <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>WyfORdvwVm</value>
      </property>

      <!-- Whether to verify the metastore schema version -->
      <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
      </property>

      <property>
        <name>system:user.name</name>
        <value>root</value>
        <description>user name</description>
      </property>

      <property>
        <name>hive.metastore.uris</name>
        <value>thrift://{{ include "hadoop.fullname" . }}-hive-metastore.{{ .Release.Namespace }}.svc.cluster.local:9083</value>
      </property>

      <!-- host -->
      <property>
        <name>hive.server2.thrift.bind.host</name>
        <value>0.0.0.0</value>
        <description>Bind host on which to run the HiveServer2 Thrift service.</description>
      </property>

      <!-- HiveServer2 thrift port; 10000 is the default -->
      <property>
        <name>hive.server2.thrift.port</name>
        <value>10000</value>
      </property>

      <property>
        <name>hive.server2.active.passive.ha.enable</name>
        <value>true</value>
      </property>

    </configuration>
```
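A quick way to check that the template renders as intended (assuming the chart lives in ./hadoop, as in the deployment step below):

```bash
# Render only the Hive ConfigMap and inspect the generated hive-site.xml
helm template hadoop-ha ./hadoop -n hadoop-ha -s templates/hive/hive-configmap.yaml
```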

2. Controller

hadoop/templates/hive/metastore-statefulset.yaml

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "hadoop.fullname" . }}-hive-metastore
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/hadoop-configmap.yaml") . | sha256sum }}
  labels:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    helm.sh/chart: {{ include "hadoop.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-metastore
spec:
  serviceName: {{ include "hadoop.fullname" . }}-hive-metastore
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "hadoop.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
      app.kubernetes.io/component: hive-metastore
  replicas: {{ .Values.hive.metastore.replicas }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "hadoop.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/component: hive-metastore
    spec:
      affinity:
        podAntiAffinity:
        {{- if eq .Values.antiAffinity "hard" }}
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: {{ include "hadoop.name" . }}
                app.kubernetes.io/instance: {{ .Release.Name }}
                app.kubernetes.io/component: hive-metastore
        {{- else if eq .Values.antiAffinity "soft" }}
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 5
            podAffinityTerm:
              topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: {{ include "hadoop.name" . }}
                  app.kubernetes.io/instance: {{ .Release.Name }}
                  app.kubernetes.io/component: hive-metastore
        {{- end }}
      terminationGracePeriodSeconds: 0
      initContainers:
      - name: wait-nn
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        command: ['sh', '-c', "until curl -m 3 -sI http://{{ include "hadoop.fullname" . }}-hdfs-nn-{{ sub .Values.hdfs.nameNode.replicas 1 }}.{{ include "hadoop.fullname" . }}-hdfs-nn.{{ .Release.Namespace }}.svc.cluster.local:9870 | egrep --silent 'HTTP/1.1 200 OK|HTTP/1.1 302 Found'; do echo waiting for nn; sleep 1; done"]
      containers:
      - name: hive-metastore
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        command:
        - "/bin/bash"
        - "/opt/apache/tmp/hadoop-config/bootstrap.sh"
        - "-d"
        resources:
{{ toYaml .Values.hive.metastore.resources | indent 10 }}
        readinessProbe:
          tcpSocket:
            port: 9083
          initialDelaySeconds: 10
          timeoutSeconds: 2
        livenessProbe:
          tcpSocket:
            port: 9083
          initialDelaySeconds: 10
          timeoutSeconds: 2
        volumeMounts:
        - name: hadoop-config
          mountPath: /opt/apache/tmp/hadoop-config
        - name: hive-config
          mountPath: /opt/apache/hive/conf
        securityContext:
          runAsUser: {{ .Values.securityContext.runAsUser }}
          privileged: {{ .Values.securityContext.privileged }}
      volumes:
      - name: hadoop-config
        configMap:
          name: {{ include "hadoop.fullname" . }}
      - name: hive-config
        configMap:
          name: {{ include "hadoop.fullname" . }}-hive
```
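One operational note: the metastore can only serve requests once its schema exists in MySQL. If the bootstrap.sh in your image does not initialize it for you, a one-off init along these lines works; a sketch, assuming a single replica and the pod name the chart produces for release hadoop-ha:

```bash
# schematool ships with Hive; it creates the metastore tables in the MySQL
# database configured in hive-site.xml
kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-hive-metastore-0 -- \
  /opt/apache/hive/bin/schematool -dbType mysql -initSchema
```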

3. Service

hadoop/templates/hive/metastore-svc.yaml

```yaml
# A headless service to create DNS records
apiVersion: v1
kind: Service
metadata:
  name: {{ include "hadoop.fullname" . }}-hive-metastore
  labels:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    helm.sh/chart: {{ include "hadoop.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-metastore
spec:
  ports:
  - name: metastore
    port: {{ .Values.service.hive.metastore.port }}
    nodePort: {{ .Values.service.hive.metastore.nodePort }}
  type: {{ .Values.service.hive.metastore.type }}
  selector:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-metastore
```
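Once the metastore pod is Ready, the service DNS name used in hive.metastore.uris should answer on 9083. A quick in-cluster probe, reusing the same telnet idiom as the init container above (pod and service names assume release hadoop-ha in namespace hadoop-ha):

```bash
kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-hdfs-nn-0 -- \
  sh -c "(echo 'q') | telnet -e 'q' hadoop-ha-hadoop-hive-metastore.hadoop-ha.svc.cluster.local 9083"
```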

3) Add the HiveServer2 service orchestration

1. Controller

hadoop/templates/hive/hiveserver2-statefulset.yaml

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "hadoop.fullname" . }}-hive-hiveserver2
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/hadoop-configmap.yaml") . | sha256sum }}
  labels:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    helm.sh/chart: {{ include "hadoop.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-hiveserver2
spec:
  serviceName: {{ include "hadoop.fullname" . }}-hive-hiveserver2
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "hadoop.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
      app.kubernetes.io/component: hive-hiveserver2
  replicas: {{ .Values.hive.hiveserver2.replicas }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "hadoop.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/component: hive-hiveserver2
    spec:
      affinity:
        podAntiAffinity:
        {{- if eq .Values.antiAffinity "hard" }}
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: {{ include "hadoop.name" . }}
                app.kubernetes.io/instance: {{ .Release.Name }}
                app.kubernetes.io/component: hive-hiveserver2
        {{- else if eq .Values.antiAffinity "soft" }}
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 5
            podAffinityTerm:
              topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: {{ include "hadoop.name" . }}
                  app.kubernetes.io/instance: {{ .Release.Name }}
                  app.kubernetes.io/component: hive-hiveserver2
        {{- end }}
      terminationGracePeriodSeconds: 0
      initContainers:
      - name: wait-hive-metastore
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        command: ['sh', '-c', "until (echo 'q')|telnet -e 'q' {{ include "hadoop.fullname" . }}-hive-metastore.{{ .Release.Namespace }}.svc.cluster.local {{ .Values.service.hive.metastore.port }} >/dev/null 2>&1; do echo waiting for hive metastore; sleep 1; done"]
      containers:
      - name: hive-hiveserver2
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        command:
        - "/bin/bash"
        - "/opt/apache/tmp/hadoop-config/bootstrap.sh"
        - "-d"
        resources:
{{ toYaml .Values.hive.hiveserver2.resources | indent 10 }}
        readinessProbe:
          tcpSocket:
            port: 10000
          initialDelaySeconds: 10
          timeoutSeconds: 2
        livenessProbe:
          tcpSocket:
            port: 10000
          initialDelaySeconds: 10
          timeoutSeconds: 2
        volumeMounts:
        - name: hadoop-config
          mountPath: /opt/apache/tmp/hadoop-config
        - name: hive-config
          mountPath: /opt/apache/hive/conf
        securityContext:
          runAsUser: {{ .Values.securityContext.runAsUser }}
          privileged: {{ .Values.securityContext.privileged }}
      volumes:
      - name: hadoop-config
        configMap:
          name: {{ include "hadoop.fullname" . }}
      - name: hive-config
        configMap:
          name: {{ include "hadoop.fullname" . }}-hive
```
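After the chart is (re)deployed, you can watch HiveServer2 come up; pod and StatefulSet names again assume release hadoop-ha:

```bash
kubectl rollout status statefulset/hadoop-ha-hadoop-hive-hiveserver2 -n hadoop-ha
kubectl logs -f -n hadoop-ha hadoop-ha-hadoop-hive-hiveserver2-0
```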

2. Service

hadoop/templates/hive/hiveserver2-svc.yaml

```yaml
# A headless service to create DNS records
apiVersion: v1
kind: Service
metadata:
  name: {{ include "hadoop.fullname" . }}-hive-hiveserver2
  labels:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    helm.sh/chart: {{ include "hadoop.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-hiveserver2
spec:
  ports:
  - name: hiveserver2
    port: {{ .Values.service.hive.hiveserver2.port }}
    nodePort: {{ .Values.service.hive.hiveserver2.nodePort }}
  type: {{ .Values.service.hive.hiveserver2.type }}
  selector:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-hiveserver2
```
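With the NodePort type configured below (30000 in values.yaml), HiveServer2 is also reachable from outside the cluster; a sketch, substituting any node's address for local-168-182-110:

```bash
beeline -u jdbc:hive2://local-168-182-110:30000 -n admin
```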

4) Modify values.yaml

hadoop/values.yaml

```yaml
image:
  repository: myharbor.com/bigdata/hadoop-hive
  tag: v3.3.2-3.1.2
  pullPolicy: IfNotPresent

# The version of the hadoop libraries being used in the image.
hadoopVersion: 3.3.2
logLevel: INFO

# Select antiAffinity as either hard or soft, default is soft
antiAffinity: "soft"

hdfs:
  nameNode:
    replicas: 2
    pdbMinAvailable: 1

    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "2048Mi"
        cpu: "1000m"

  dataNode:
    # Will be used as dfs.datanode.hostname
    # You still need to set up services + ingress for every DN
    # Datanodes will expect to
    externalHostname: example.com
    externalDataPortRangeStart: 9866
    externalHTTPPortRangeStart: 9864

    replicas: 3

    pdbMinAvailable: 1

    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "2048Mi"
        cpu: "1000m"

  webhdfs:
    enabled: true

  jounralNode:
    replicas: 3
    pdbMinAvailable: 1

    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "2048Mi"
        cpu: "1000m"

hive:
  metastore:
    replicas: 1
    pdbMinAvailable: 1

    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "2048Mi"
        cpu: "1000m"

  hiveserver2:
    replicas: 1
    pdbMinAvailable: 1

    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "1024Mi"
        cpu: "500m"

yarn:
  resourceManager:
    pdbMinAvailable: 1
    replicas: 2

    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "2048Mi"
        cpu: "2000m"

  nodeManager:
    pdbMinAvailable: 1

    # The number of YARN NodeManager instances.
    replicas: 1

    # Create statefulsets in parallel (K8S 1.7+)
    parallelCreate: false

    # CPU and memory resources allocated to each node manager pod.
    # This should be tuned to fit your workload.
    resources:
      requests:
        memory: "256Mi"
        cpu: "500m"
      limits:
        memory: "2048Mi"
        cpu: "1000m"

persistence:
  nameNode:
    enabled: true
    storageClass: "hadoop-ha-nn-local-storage"
    accessMode: ReadWriteOnce
    size: 1Gi
    local:
    - name: hadoop-ha-nn-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/hadoop-ha/nn/data/data1"
    - name: hadoop-ha-nn-1
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/hadoop-ha/nn/data/data1"

  dataNode:
    enabled: true
    enabledStorageClass: false
    storageClass: "hadoop-ha-dn-local-storage"
    accessMode: ReadWriteOnce
    size: 1Gi
    local:
    - name: hadoop-ha-dn-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data1"
    - name: hadoop-ha-dn-1
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data2"
    - name: hadoop-ha-dn-2
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data3"
    - name: hadoop-ha-dn-3
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data1"
    - name: hadoop-ha-dn-4
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data2"
    - name: hadoop-ha-dn-5
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data3"
    - name: hadoop-ha-dn-6
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data1"
    - name: hadoop-ha-dn-7
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data2"
    - name: hadoop-ha-dn-8
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data3"
    volumes:
    - name: dfs1
      mountPath: /opt/apache/hdfs/datanode1
      hostPath: /opt/bigdata/servers/hadoop-ha/dn/data/data1
    - name: dfs2
      mountPath: /opt/apache/hdfs/datanode2
      hostPath: /opt/bigdata/servers/hadoop-ha/dn/data/data2
    - name: dfs3
      mountPath: /opt/apache/hdfs/datanode3
      hostPath: /opt/bigdata/servers/hadoop-ha/dn/data/data3

  journalNode:
    enabled: true
    storageClass: "hadoop-ha-jn-local-storage"
    accessMode: ReadWriteOnce
    size: 1Gi
    local:
    - name: hadoop-ha-jn-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/hadoop-ha/jn/data/data1"
    - name: hadoop-ha-jn-1
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/hadoop-ha/jn/data/data1"
    - name: hadoop-ha-jn-2
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/hadoop-ha/jn/data/data1"
    volumes:
    - name: jn
      mountPath: /opt/apache/hdfs/journalnode

service:
  nameNode:
    type: NodePort
    ports:
      dfs: 9000
      webhdfs: 9870
    nodePorts:
      dfs: 30900
      webhdfs: 30870
  nameNode1:
    type: NodePort
    ports:
      webhdfs: 9870
    nodePorts:
      webhdfs: 31870
  nameNode2:
    type: NodePort
    ports:
      webhdfs: 9870
    nodePorts:
      webhdfs: 31871
  dataNode:
    type: NodePort
    ports:
      webhdfs: 9864
    nodePorts:
      webhdfs: 30864
  resourceManager:
    type: NodePort
    ports:
      web: 8088
    nodePorts:
      web: 30088
  resourceManager1:
    type: NodePort
    ports:
      web: 8088
    nodePorts:
      web: 31088
  resourceManager2:
    type: NodePort
    ports:
      web: 8088
    nodePorts:
      web: 31089
  journalNode:
    type: ClusterIP
    ports:
      jn: 8485
    nodePorts:
      jn: ""
  hive:
    metastore:
      type: NodePort
      port: 9083
      nodePort: 31183
    hiveserver2:
      type: NodePort
      port: 10000
      nodePort: 30000

securityContext:
  runAsUser: 9999
  privileged: true

```
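Before deploying, it is cheap to validate that the whole chart still renders (chart directory assumed to be ./hadoop):

```bash
helm lint ./hadoop
helm template hadoop-ha ./hadoop -n hadoop-ha > /tmp/hadoop-ha-rendered.yaml
```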

5) Deploy

```bash
# Fresh install
helm install hadoop-ha ./hadoop -n hadoop-ha --create-namespace

# Update an existing release
helm upgrade hadoop-ha ./hadoop -n hadoop-ha
```

NOTES:

```bash
NAME: hadoop-ha
LAST DEPLOYED: Thu Sep 29 23:42:02 2022
NAMESPACE: hadoop-ha
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. You can check the status of HDFS by running this command:
   kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-hdfs-nn-0 -- /opt/hadoop/bin/hdfs dfsadmin -report

2. You can list the yarn nodes by running this command:
   kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-yarn-rm-0 -- /opt/hadoop/bin/yarn node -list

3. Create a port-forward to the yarn resource manager UI:
   kubectl port-forward -n hadoop-ha hadoop-ha-hadoop-yarn-rm-0 8088:8088

   Then open the ui in your browser:

   open http://localhost:8088

4. You can run included hadoop tests like this:
   kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-yarn-nm-0 -- /opt/hadoop/bin/hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.2-tests.jar TestDFSIO -write -nrFiles 5 -fileSize 128MB -resFile /tmp/TestDFSIOwrite.txt

5. You can list the mapreduce jobs like this:
   kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-yarn-rm-0 -- /opt/hadoop/bin/mapred job -list

6. This chart can also be used with the zeppelin chart:
   helm install --namespace hadoop-ha --set hadoop.useConfigMap=true,hadoop.configMapName=hadoop-ha-hadoop stable/zeppelin

7. You can scale the number of yarn nodes like this:
   helm upgrade hadoop-ha --set yarn.nodeManager.replicas=4 stable/hadoop

   Make sure to update the values.yaml if you want to make this permanent.
```

6) Test and verify

Check:

```bash
kubectl get pods,svc -n hadoop-ha -owide
```

Test:

```bash
beeline -u jdbc:hive2://localhost:10000 -n admin

create database test;

CREATE TABLE IF NOT EXISTS test.person_1 (
  id INT COMMENT 'ID',
  name STRING COMMENT 'name',
  age INT COMMENT 'age',
  likes ARRAY<STRING> COMMENT 'hobbies',
  address MAP<STRING,STRING> COMMENT 'address'
) ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  COLLECTION ITEMS TERMINATED BY '-'
  MAP KEYS TERMINATED BY ':'
  LINES TERMINATED BY '\n';
```
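To see the declared delimiters in action, you can load a small sample file whose fields use ',', '-' and ':' accordingly; a minimal sketch (the file contents and the hiveserver2 pod name are illustrative):

```bash
# Two rows: array items separated by '-', map entries by '-', map keys by ':'
cat > /tmp/person.txt <<'EOF'
1,zhangsan,18,reading-coding,home:beijing-work:shanghai
2,lisi,20,music,home:guangzhou
EOF

# Copy the file into the HiveServer2 pod so LOAD DATA LOCAL can see it
kubectl cp /tmp/person.txt hadoop-ha/hadoop-ha-hadoop-hive-hiveserver2-0:/tmp/person.txt
kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-hive-hiveserver2-0 -- \
  beeline -u jdbc:hive2://localhost:10000 -n admin \
  -e "LOAD DATA LOCAL INPATH '/tmp/person.txt' INTO TABLE test.person_1" \
  -e "SELECT * FROM test.person_1"
```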

7) Uninstall

```bash
helm uninstall hadoop-ha -n hadoop-ha

kubectl delete pod -n hadoop-ha $(kubectl get pod -n hadoop-ha | awk 'NR>1{print $1}') --force
kubectl patch ns hadoop-ha -p '{"metadata":{"finalizers":null}}'
kubectl delete ns hadoop-ha --force

rm -fr /opt/bigdata/servers/hadoop-ha/{nn,dn,jn}/data/data{1..3}/*
```

Git download: https://gitee.com/hadoop-bigdata/hadoop-ha-on-k8s

Only the Hive-related parts of the orchestration are shown here; if anything is unclear, feel free to leave me a message. The corresponding changes have also been pushed to the Git repo above, so grab them if you need them. That wraps up the Hive orchestration and deployment; I will keep sharing cloud native + big data tutorials, so stay tuned~