Installing ECK on K8S with YAML
Kubernetes is currently the most popular container orchestration technology, and more and more applications are being migrated onto it. Kubernetes' built-in resource objects such as ReplicaSet, Deployment, and Service already cover the basic needs of stateless applications, including autoscaling and load balancing. Stateful, distributed applications, however, usually come with their own modeling conventions — Prometheus, Etcd, Zookeeper, and Elasticsearch, for example. Deploying them often requires domain-specific knowledge, and scaling or upgrading them raises questions such as how to keep the service available throughout. The Kubernetes Operator pattern emerged to simplify the deployment of such stateful, distributed applications.
A Kubernetes Operator is an application-specific controller that extends the Kubernetes API through CRDs (Custom Resource Definitions). It can create, configure, and manage a particular stateful application without you having to work directly with the raw Kubernetes resource objects such as Pod, Deployment, and Service.
Elastic Cloud on Kubernetes (ECK) is one such Kubernetes Operator. It manages the components of the Elastic Stack — Elasticsearch, Kibana, APM, Beats, and so on. For example, you only need to define a custom resource of kind Elasticsearch and ECK will stand up an Elasticsearch cluster for you.
Install the Elastic custom resource definitions with kubectl create
[root@k8s-192-168-1-140 ~]# kubectl create -f https://download.elastic.co/downloads/eck/3.2.0/crds.yaml
customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearchautoscalers.autoscaling.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/stackconfigpolicies.stackconfigpolicy.k8s.elastic.co created
[root@k8s-192-168-1-140 ~]#
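Before moving on, it is worth confirming that the API server actually registered the new types. A quick sanity check using only stock kubectl:

# List the Elastic CRDs that were just created; all of them end in k8s.elastic.co
kubectl get crds | grep k8s.elastic.co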
Install the operator and its RBAC rules with kubectl apply
[root@k8s-192-168-1-140 ~]# kubectl apply -f https://download.elastic.co/downloads/eck/3.2.0/operator.yaml
namespace/elastic-system created
serviceaccount/elastic-operator created
secret/elastic-webhook-server-cert created
configmap/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator-view created
clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
service/elastic-webhook-server created
statefulset.apps/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created
[root@k8s-192-168-1-140 ~]#
Verify that the operator has started
[root@k8s-192-168-1-140 ~]# kubectl get -n elastic-system pods
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 0 8m38s
[root@k8s-192-168-1-140 ~]#
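If the pod does not become ready, the operator's own logs are the first place to look. Elastic's documentation suggests tailing them through the StatefulSet:

# Follow the operator logs while it reconciles resources
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator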
Deploy an Elasticsearch cluster
The operator automatically creates and manages the Kubernetes resources needed to reach the desired state of the Elasticsearch cluster. It may take a few minutes until all the resources are created and the cluster is ready for use.
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF
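While the operator works, you can watch the cluster object itself; the HEALTH and PHASE columns converge to green/Ready once the node has joined. A simple observation loop, again with stock kubectl:

# HEALTH/PHASE report the cluster state as ECK reconciles it
kubectl get elasticsearch quickstart -w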
Storage
By default a 1Gi volume is claimed at creation time; you can declare the desired capacity in the spec instead:
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: nfs-storage
    config:
      node.store.allow_mmap: false
EOF
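To confirm the template took effect, inspect the claims generated from it. The selector below assumes the elasticsearch.k8s.elastic.co/cluster-name label that ECK stamps on the resources it creates:

# Each node in the nodeSet gets one claim rendered from the template
kubectl get pvc -l elasticsearch.k8s.elastic.co/cluster-name=quickstart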
Check the deployment status
[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d
default quickstart-es-default-0 0/1 Pending 0 2m1s
elastic-system elastic-operator-0 1/1 Running 0 12m
kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d
kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d
kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d
kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d
kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d
[root@k8s-192-168-1-140 ~]#
Check the logs
[root@k8s-192-168-1-140 ~]# kubectl logs -f quickstart-es-default-0
Defaulted container "elasticsearch" out of: elasticsearch, elastic-internal-init-filesystem (init), elastic-internal-suspend (init)
[root@k8s-192-168-1-140 ~]#
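At this point quickstart-es-default-0 is Pending because its PersistentVolumeClaim cannot be bound: the manifest asks for the nfs-storage class, which does not exist yet. kubectl describe makes that visible in the Events section (standard kubectl; the exact event wording varies by version):

# Events typically report unbound PersistentVolumeClaims as the scheduling blocker
kubectl describe pod quickstart-es-default-0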
Set up NFS for dynamic provisioning
[root@k8s-192-168-1-140 ~]# yum install nfs-utils -y
[root@k8s-192-168-1-140 ~]# mkdir /nfs
[root@k8s-192-168-1-140 ~]# vim /etc/exports
/nfs *(rw,sync,no_root_squash,no_subtree_check)
[root@k8s-192-168-1-140 ~]# systemctl restart rpcbind
[root@k8s-192-168-1-140 ~]# systemctl restart nfs-server
[root@k8s-192-168-1-140 ~]# systemctl enable rpcbind
[root@k8s-192-168-1-140 ~]# systemctl enable nfs-server
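Before wiring the export into Kubernetes, a quick check that the server is actually exporting /nfs. showmount ships with nfs-utils; the IP is this walkthrough's server address:

# Query the export list from the NFS server
showmount -e 192.168.1.140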
Write the NFS provisioner manifest for K8S
[root@k8s-192-168-1-140 ~]# vim nfs-storage.yaml
[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]# cat nfs-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true" ## whether to archive the contents of a PV when it is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/chenby/nfs-subdir-external-provisioner:v4.0.2
        # resources:
        #   limits:
        #     cpu: 10m
        #   requests:
        #     cpu: 10m
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 192.168.1.140 ## set to your own NFS server address
        - name: NFS_PATH
          value: /nfs/ ## directory exported by the NFS server
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.140
          path: /nfs/
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Apply the manifest
[root@k8s-192-168-1-140 ~]# kubectl apply -f nfs-storage.yaml
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-192-168-1-140 ~]#
Check the storage
[root@k8s-192-168-1-140 ~]# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 6h7m
[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
elasticsearch-data-quickstart-es-default-0 Bound pvc-2df832aa-1c54-4af6-8384-5e5c5f167445 5Gi RWO nfs-storage <unset> 39s
[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-2df832aa-1c54-4af6-8384-5e5c5f167445 5Gi RWO Delete Bound default/elasticsearch-data-quickstart-es-default-0 nfs-storage <unset> 43s
[root@k8s-192-168-1-140 ~]#
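Because the subdir provisioner carves one directory per volume out of the export, the data is also visible directly on the NFS server. Directory names follow the provisioner's ${namespace}-${pvcName}-${pvName} convention:

# One subdirectory under /nfs per dynamically provisioned PV
ls /nfs/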
Check the ES services
[root@k8s-192-168-1-140 ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 81d
nginx NodePort 10.68.148.53 <none> 80:30330/TCP 81d
quickstart-es-default ClusterIP None <none> 9200/TCP 74s
quickstart-es-http ClusterIP 10.68.66.232 <none> 9200/TCP 75s
quickstart-es-internal-http ClusterIP 10.68.121.73 <none> 9200/TCP 75s
quickstart-es-transport ClusterIP None <none> 9300/TCP 75s
[root@k8s-192-168-1-140 ~]#
Retrieve the ES password
[root@k8s-192-168-1-140 ~]# PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]# kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
V3VPqwQMURTSg6zFYvVIsH13[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]# curl -u "elastic:$PASSWORD" -k "https://10.68.66.232:9200"
{
"name" : "quickstart-es-default-0",
"cluster_name" : "quickstart",
"cluster_uuid" : "JNqGubnmSeao_LO-JmypHg",
"version" : {
"number" : "9.2.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "ed771e6976fac1a085affabd45433234a4babeaf",
"build_date" : "2025-11-27T08:06:51.614397514Z",
"build_snapshot" : false,
"lucene_version" : "10.3.2",
"minimum_wire_compatibility_version" : "8.19.0",
"minimum_index_compatibility_version" : "8.0.0"
},
"tagline" : "You Know, for Search"
}
[root@k8s-192-168-1-140 ~]#
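The curl above targets the ClusterIP directly, which only works from inside the cluster network. From elsewhere, the same request can go through a port-forward, as Elastic's own quickstart does (run the curl in a second shell):

# Forward the ES HTTP service to localhost, then query it
kubectl port-forward service/quickstart-es-http 9200
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"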
Install the Kibana service
cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 9.2.2
  count: 1
  elasticsearchRef:
    name: quickstart
EOF
Check the service status and password
[root@k8s-192-168-1-140 ~]# kubectl get kibana
NAME HEALTH NODES VERSION AGE
quickstart red 9.2.2 16s
[root@k8s-192-168-1-140 ~]#
# Retrieve the password
[root@k8s-192-168-1-140 ~]# kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
V3VPqwQMURTSg6zFYvVIsH13
[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 81d
nginx NodePort 10.68.148.53 <none> 80:30330/TCP 81d
quickstart-es-default ClusterIP None <none> 9200/TCP 2m47s
quickstart-es-http ClusterIP 10.68.66.232 <none> 9200/TCP 2m48s
quickstart-es-internal-http ClusterIP 10.68.121.73 <none> 9200/TCP 2m48s
quickstart-es-transport ClusterIP None <none> 9300/TCP 2m48s
quickstart-kb-http ClusterIP 10.68.103.103 <none> 5601/TCP 24s
[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]# kubectl get service quickstart-kb-http
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
quickstart-kb-http ClusterIP 10.68.103.103 <none> 5601/TCP 2m39s
[root@k8s-192-168-1-140 ~]#
Enable access
[root@k8s-192-168-1-140 ~]# kubectl port-forward --address 0.0.0.0 service/quickstart-kb-http 5601
Forwarding from 0.0.0.0:5601 -> 5601
# Login URL
https://192.168.1.140:5601/login
Username:
elastic
Password:
V3VPqwQMURTSg6zFYvVIsH13
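port-forward is fine for a quick look but dies with the terminal. For longer-lived access, ECK lets you override the generated Service in the Kibana spec; a sketch using the http.service field, with NodePort chosen here purely for illustration:

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 9.2.2
  count: 1
  elasticsearchRef:
    name: quickstart
  http:
    service:
      spec:
        type: NodePort  # expose quickstart-kb-http on a node port instead of ClusterIP only
EOF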
Change the ES node count
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
EOF
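Once the new pods join, the change can be verified both from Kubernetes and from Elasticsearch itself. The curl reuses the ClusterIP and the $PASSWORD variable from earlier; adjust both to your environment:

# The NODES column should read 3 after the rollout completes
kubectl get elasticsearch quickstart
# Ask the cluster itself which nodes have joined
curl -u "elastic:$PASSWORD" -k "https://10.68.66.232:9200/_cat/nodes?v"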
Check the status
[root@k8s-192-168-1-140 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nfs-client-provisioner-58d465c998-w8mfq 1/1 Running 0 5h38m
default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d
default quickstart-es-default-0 1/1 Running 0 5h51m
default quickstart-kb-57cc78f6b8-b8fpf 1/1 Running 0 5h24m
elastic-system elastic-operator-0 1/1 Running 0 6h2m
kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d
kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d
kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d
kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d
kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d
[root@k8s-192-168-1-140 ~]# kubectl get pod -A -w
NAMESPACE NAME READY STATUS RESTARTS AGE
default nfs-client-provisioner-58d465c998-w8mfq 1/1 Running 0 5h38m
default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d
default quickstart-es-default-0 1/1 Running 0 5h51m
default quickstart-kb-57cc78f6b8-b8fpf 1/1 Running 0 5h24m
elastic-system elastic-operator-0 1/1 Running 0 6h2m
kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d
kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d
kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d
kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d
kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d
default quickstart-es-default-1 0/1 Pending 0 0s
default quickstart-es-default-1 0/1 Pending 0 0s
default quickstart-es-default-1 0/1 Pending 0 0s
default quickstart-es-default-1 0/1 Init:0/2 0 0s
default quickstart-es-default-1 0/1 Init:0/2 0 1s
default quickstart-es-default-0 1/1 Running 0 5h51m
default quickstart-es-default-1 0/1 Init:0/2 0 1s
default quickstart-es-default-0 1/1 Running 0 5h51m
default quickstart-es-default-1 0/1 Init:0/2 0 2s
default quickstart-es-default-1 0/1 Init:1/2 0 3s
default quickstart-es-default-1 0/1 PodInitializing 0 4s
default quickstart-es-default-1 0/1 Running 0 5s
default quickstart-es-default-1 1/1 Running 0 26s
default quickstart-es-default-2 0/1 Pending 0 0s
default quickstart-es-default-2 0/1 Pending 0 0s
default quickstart-es-default-2 0/1 Pending 0 0s
default quickstart-es-default-2 0/1 Init:0/2 0 0s
default quickstart-es-default-2 0/1 Init:0/2 0 1s
default quickstart-es-default-0 1/1 Running 0 5h51m
default quickstart-es-default-1 1/1 Running 0 30s
default quickstart-es-default-2 0/1 Init:0/2 0 2s
default quickstart-es-default-1 1/1 Running 0 30s
default quickstart-es-default-2 0/1 Init:0/2 0 2s
default quickstart-es-default-0 1/1 Running 0 5h51m
default quickstart-es-default-2 0/1 Init:1/2 0 3s
default quickstart-es-default-2 0/1 PodInitializing 0 4s
default quickstart-es-default-2 0/1 Running 0 5s
default quickstart-es-default-2 1/1 Running 0 33s
^C[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]#
[root@k8s-192-168-1-140 ~]# kubectl get pod -A -w
NAMESPACE NAME READY STATUS RESTARTS AGE
default nfs-client-provisioner-58d465c998-w8mfq 1/1 Running 0 5h40m
default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d
default quickstart-es-default-0 1/1 Running 0 5h53m
default quickstart-es-default-1 1/1 Running 0 2m19s
default quickstart-es-default-2 1/1 Running 0 111s
default quickstart-kb-57cc78f6b8-b8fpf 1/1 Running 0 5h26m
elastic-system elastic-operator-0 1/1 Running 0 6h4m
kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d
kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d
kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d
kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d
kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d
Uninstall and clean up
# Delete all Elastic resources in all namespaces
kubectl get namespaces --no-headers -o custom-columns=:metadata.name \
| xargs -n1 kubectl delete elastic --all -n
# Delete the operator and the CRDs
kubectl delete -f https://download.elastic.co/downloads/eck/3.2.0/operator.yaml
kubectl delete -f https://download.elastic.co/downloads/eck/3.2.0/crds.yaml
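Deleting the Elastic resources removes the PVCs, but remember that the StorageClass above was created with archiveOnDelete: "true": the NFS provisioner renames the backing directories (an archived- prefix, per its convention) instead of deleting them, so reclaim the space on the server manually:

# Archived copies of deleted volumes remain under the export
ls /nfs/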
About
https://www.oiox.cn/
https://www.oiox.cn/index.php/start-page.html
CSDN, GitHub, 知乎, 開源中國, 思否, 掘金, 簡書, 華為雲, 阿里雲, 騰訊雲, 嗶哩嗶哩, 今日頭條, 新浪微博, and my personal blog
Search for 《小陳運維》 on any of them
Articles are published mainly on the WeChat official account 《Linux運維交流社區》