Kubernetes Autoscaling: Automatic Pod Scale-Out/Scale-In with HPA

Pod Auto Scaling: Introducing HPA

Horizontal Pod Autoscaler (HPA) automatically adjusts the number of Pod replicas in a Deployment based on resource utilization or custom metrics, increasing the concurrency an application can handle. HPA does not apply to objects that cannot be scaled, such as DaemonSets.


Pod Auto Scaling: How HPA Works

The Metrics Server in Kubernetes continuously collects metrics from all Pod replicas. The HPA controller fetches this data through the Metrics Server's aggregated API, computes a target replica count from the user-defined scaling rules, and, whenever the target differs from the current replica count, issues a scale operation to the Pod's Deployment controller to adjust the replica count, completing the scale-out or scale-in.
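The target replica count comes from the standard HPA formula documented by Kubernetes; here is a quick worked example using the numbers that appear in the load test later in this article:

# Standard HPA scaling formula:
desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]

# Example: 2 replicas averaging 191% CPU utilization against an 80% target:
#   ceil(2 * 191 / 80) = ceil(4.775) = 5 replicas
# The result is then clamped to the [minReplicas, maxReplicas] range.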


Pod Auto Scaling: Prerequisites for Using HPA

To use HPA, make sure the following conditions are met:

• The Kubernetes API aggregation layer is enabled.

• The corresponding APIs are registered:

• For resource metrics (e.g. CPU, memory), the metrics.k8s.io API is used, typically provided by metrics-server.

• For custom metrics (e.g. QPS), the custom.metrics.k8s.io API is used, provided by an adapter service.

A list of known adapters:

https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api
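Once an adapter is registered, the custom metrics API can be checked the same way the resource metrics API is checked later in this article; the metric names listed depend entirely on which adapter you install:

kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1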

The Kubernetes API aggregation layer:

Introduced in Kubernetes 1.7, the aggregation layer lets third-party applications register themselves with kube-apiserver so that their new APIs can still be accessed and operated through the API Server's HTTP URLs. To support this, Kubernetes added an API Aggregation Layer to the kube-apiserver service that forwards requests for extension APIs to the user's backing service.


Enabling the aggregation layer:

If you deployed the cluster with kubeadm, the aggregation layer is already enabled by default.

If you deployed from binaries, add the following flags to the kube-apiserver startup configuration:

# vim /opt/kubernetes/cfg/kube-apiserver.conf
...
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true \
...

Pod Auto Scaling: Scaling on Resource Metrics

Metrics Server is a data aggregator that collects resource metrics from each kubelet and exposes them through the Metrics API on the Kubernetes apiserver, for HPA to consume.

Project: https://github.com/kubernetes-sigs/metrics-server

Deploying Metrics Server (the manifest below is the v0.4.1 release, matching the image used later):

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml

[root@master-1 yaml]# vim components.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        # Prefer connecting to kubelets by node InternalIP
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        # Do not verify the HTTPS certificate presented by the kubelet
        - --kubelet-insecure-tls
        # The official image registry is unreachable from mainland China;
        # this is a copy re-uploaded to Docker Hub
        image: feixiangkeji974907/metrics-server:v0.4.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

[root@master-1 yaml]# kubectl apply -f components.yaml

[root@master-1 yaml]# kubectl get all -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
pod/coredns-5ffbfd976d-txzpq          1/1     Running   5          7d3h
pod/kube-flannel-ds-amd64-5tnrj       1/1     Running   11         7d3h
pod/kube-flannel-ds-amd64-fx7vc       1/1     Running   11         7d3h
pod/kube-flannel-ds-amd64-kwfxn       1/1     Running   2          5d
pod/kube-flannel-ds-amd64-l4g4t       1/1     Running   7          7d3h
pod/metrics-server-664489867d-5zwpf   1/1     Running   20         7d3h

NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns         ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP,9153/TCP   7d3h
service/metrics-server   ClusterIP   10.0.0.42    <none>        443/TCP                  7d3h

NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/kube-flannel-ds-amd64   4         4         4       4            4           <none>          7d3h

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns          1/1     1            1           7d3h
deployment.apps/metrics-server   1/1     1            1           7d3h

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5ffbfd976d          1         1         1       7d3h
replicaset.apps/metrics-server-664489867d   1         1         1       7d3h

Verify that metrics-server is available:

[root@master-1 yaml]# kubectl get apiservice | grep metric

v1beta1.metrics.k8s.io   kube-system/metrics-server   True   15s

[root@master-1 yaml]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes"},"items":[{"metadata":{"name":"master-1","selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/master-1","creationTimestamp":"2021-01-03T16:03:35Z"},"timestamp":"2021-01-03T16:02:35Z","window":"30s","usage":{"cpu":"433676959n","memory":"1127920Ki"}},{"metadata":{"name":"node-1","selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/node-1","creationTimestamp":"2021-01-03T16:03:35Z"},"timestamp":"2021-01-03T16:02:38Z","window":"30s","usage":{"cpu":"211126782n","memory":"595564Ki"}},{"metadata":{"name":"node-2","selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/node-2","creationTimestamp":"2021-01-03T16:03:35Z"},"timestamp":"2021-01-03T16:02:35Z","window":"30s","usage":{"cpu":"170548141n","memory":"366576Ki"}},{"metadata":{"name":"node-3","selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/node-3","creationTimestamp":"2021-01-03T16:03:35Z"},"timestamp":"2021-01-03T16:02:32Z","window":"30s","usage":{"cpu":"120488736n","memory":"425804Ki"}}]}

[root@master-1 yaml]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods

{"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/pods"},"items":[{"metadata":{"name":"web-test-5cdbd79b55-87pqt","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/web-test-5cdbd79b55-87pqt","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:28Z","window":"30s","containers":[{"name":"nginx","usage":{"cpu":"0","memory":"6420Ki"}}]},{"metadata":{"name":"web-test-5cdbd79b55-p54nq","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/web-test-5cdbd79b55-p54nq","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:35Z","window":"30s","containers":[{"name":"nginx","usage":{"cpu":"0","memory":"5160Ki"}}]},{"metadata":{"name":"web-test-5cdbd79b55-r9swh","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/web-test-5cdbd79b55-r9swh","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:27Z","window":"30s","containers":[{"name":"nginx","usage":{"cpu":"0","memory":"6040Ki"}}]},{"metadata":{"name":"web-test-5cdbd79b55-t8pcx","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/web-test-5cdbd79b55-t8pcx","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:34Z","window":"30s","containers":[{"name":"nginx","usage":{"cpu":"0","memory":"5996Ki"}}]},{"metadata":{"name":"nginx-ingress-controller-766fb9f77-zzwmp","namespace":"ingress-nginx","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/ingress-nginx/pods/nginx-ingress-controller-766fb9f77-zzwmp","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:30Z","window":"30s","containers":[{"name":"nginx-ingress-controller","usage":{"cpu":"21249411n","memory":"96340Ki"}}]},{"metadata":{"name":"coredns-5ffbfd976d-txzpq","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/coredns-5ffbfd976d-txzpq","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:30Z","window":"30s","containers":[{"name":"coredns","usage":{"cpu":"7583457n","memory":"19704Ki"}}]},{"metadata":{"name":"kube-flannel-ds-amd64-5tnrj","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-amd64-5tnrj","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:39Z","window":"30s","containers":[{"name":"kube-flannel","usage":{"cpu":"4707209n","memory":"16228Ki"}}]},{"metadata":{"name":"kube-flannel-ds-amd64-fx7vc","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-amd64-fx7vc","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:34Z","window":"30s","containers":[{"name":"kube-flannel","usage":{"cpu":"4736628n","memory":"16096Ki"}}]},{"metadata":{"name":"kube-flannel-ds-amd64-kwfxn","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-amd64-kwfxn","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:30Z","window":"30s","containers":[{"name":"kube-flannel","usage":{"cpu":"5035329n","memory":"19940Ki"}}]},{"metadata":{"name":"kube-flannel-ds-amd64-l4g4t","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-amd64-l4g4t","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:33Z","window":"30s","containers":[{"name":"kube-flannel","usage":
{"cpu":"5185835n","memory":"15388Ki"}}]},{"metadata":{"name":"metrics-server-664489867d-5zwpf","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/metrics-server-664489867d-5zwpf","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:39Z","window":"30s","containers":[{"name":"metrics-server","usage":{"cpu":"7712269n","memory":"26064Ki"}}]},{"metadata":{"name":"prometheus-5fc97df657-w4qcm","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/prometheus-5fc97df657-w4qcm","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:26Z","window":"30s","containers":[{"name":"prometheus-server-configmap-reload","usage":{"cpu":"0","memory":"2844Ki"}},{"name":"prometheus-server","usage":{"cpu":"15569447n","memory":"124600Ki"}}]},{"metadata":{"name":"dashboard-metrics-scraper-6b4884c9d5-g8dx8","namespace":"kubernetes-dashboard","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kubernetes-dashboard/pods/dashboard-metrics-scraper-6b4884c9d5-g8dx8","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:23Z","window":"30s","containers":[{"name":"dashboard-metrics-scraper","usage":{"cpu":"224169n","memory":"12228Ki"}}]},{"metadata":{"name":"kubernetes-dashboard-7f99b75bf4-ljhtn","namespace":"kubernetes-dashboard","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kubernetes-dashboard/pods/kubernetes-dashboard-7f99b75bf4-ljhtn","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:33Z","window":"30s","containers":[{"name":"kubernetes-dashboard","usage":{"cpu":"1175979n","memory":"21788Ki"}}]}]}

You can also query the Metrics API with kubectl top:

# Check node resource usage

[root@master-1 yaml]# kubectl top node

NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master-1   584m         14%    1101Mi          19%
node-1     219m         5%     567Mi           9%
node-2     175m         4%     363Mi           6%
node-3     146m         3%     408Mi           2%

# Check Pod resource usage:

[root@master-1 yaml]# kubectl top pod

NAME                        CPU(cores)   MEMORY(bytes)
web-test-5cdbd79b55-x6snq   0m           2Mi

Deploy a sample nginx Pod for testing

# Create the Deployment

[root@master-1 yaml]# kubectl create deployment web --image=nginx --dry-run=client -o yaml > deployment.yaml

[root@master-1 yaml]# vim deployment.yaml

# Edit the YAML: set the replica count to 2 and add resources.requests.cpu

.....
spec:
  replicas: 2
.....
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            cpu: "250m"
........
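For reference, the edited deployment.yaml should look roughly like this (a sketch; the file generated by --dry-run also contains empty creationTimestamp and status fields, which can be left as-is):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            # Required: an HPA CPU-percent target is computed against the CPU request
            cpu: "250m"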

# Create the Service

[root@master-1 yaml]# kubectl expose deployment web --port=80 --target-port=80

[root@master-1 yaml]# kubectl get pods,service,deployment

NAME                       READY   STATUS    RESTARTS   AGE
pod/web-6fc98bbd69-dbbxx   1/1     Running   0          21s
pod/web-6fc98bbd69-xzrbm   1/1     Running   0          3m15s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   7d4h
service/web          ClusterIP   10.0.0.205   <none>        80/TCP    7d3h

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   2/2     2            2           7m46s
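Before load testing, it is worth a quick sanity check that the Service responds; for example, from any cluster node (the ClusterIP 10.0.0.205 is only routable inside the cluster):

[root@master-1 yaml]# curl -I http://10.0.0.205/index.html

A 200 OK here confirms the nginx Pods are serving before any load is applied.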

Create the HPA and set the target metric

[root@master-1 yaml]# kubectl autoscale deployment web --min=2 --max=5 --cpu-percent=80

horizontalpodautoscaler.autoscaling/web autoscaled

[root@master-1 yaml]# kubectl get hpa

NAME   REFERENCE        TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
web    Deployment/web   0%/80%    2         5         2          39s

This creates an HPA object for the Deployment named web, with a target CPU utilization of 80% and a replica count bounded between 2 and 5.
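The same HPA can also be written declaratively instead of with kubectl autoscale; a minimal equivalent manifest using the autoscaling/v1 API:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80

Applying it with kubectl apply -f hpa.yaml (the file name is arbitrary) is equivalent to the autoscale command above.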

Load testing

Load test with ab (ApacheBench)

Each iteration sends 100,000 requests at a concurrency of 1000, looping from 1 to 100:

[root@master-1 yaml]# for i in {1..100}

> do

> ab -n 100000 -c 1000 http://10.0.0.205/index.html

> done


Watch the scale-out:

In another terminal, watch the CPU utilization and the scaling status.

# CPU utilization has exceeded 191%, and the Pods have successfully scaled out to 5

[root@master-1 ~]# kubectl get hpa

NAME   REFERENCE        TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
web    Deployment/web   191%/80%   2         5         4          5m43s

[root@master-1 ~]# kubectl get pods -o wide

NAME                        READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
web-6fc98bbd69-dbbxx        1/1     Running   0          9m8s    10.244.0.13   node-2     <none>           <none>
web-6fc98bbd69-lplz8        1/1     Running   0          2m51s   10.244.0.14   node-2     <none>           <none>
web-6fc98bbd69-q5pcs        1/1     Running   0          3m6s    10.244.1.30   node-1     <none>           <none>
web-6fc98bbd69-vncjr        1/1     Running   0          3m6s    10.244.3.15   node-3     <none>           <none>
web-6fc98bbd69-xzrbm        1/1     Running   0          12m     10.244.2.28   master-1   <none>           <none>
web-test-5cdbd79b55-x6snq   1/1     Running   0          22m     10.244.3.14   node-3     <none>           <none>
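This is consistent with the scaling formula shown earlier: applied to the initial 2 replicas, an observed average of 191% against the 80% target gives ceil(2 * 191 / 80) = 5, which is also the maxPods cap, so the Deployment grows to 5 replicas.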

Pod Auto Scaling: Cooldown Period

The cooldown period is an unavoidable topic in autoscaling. Because the evaluated metrics are dynamic, the replica count can fluctuate continuously and drop traffic in the process, so scaling out and in should not happen at arbitrary times.

 

To mitigate this, HPA applies some control by default:

• --horizontal-pod-autoscaler-downscale-delay: how long to wait after the last operation before another scale-in may run; default 5 minutes

• --horizontal-pod-autoscaler-upscale-delay: how long to wait after the last operation before another scale-out may run; default 3 minutes

These can be tuned through the kube-controller-manager startup flags, but the defaults are generally fine.
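On a binary deployment these flags would go in the kube-controller-manager startup configuration; a minimal sketch, assuming the same config layout used for kube-apiserver above (note that newer Kubernetes releases replace these two flags with --horizontal-pod-autoscaler-downscale-stabilization):

# vim /opt/kubernetes/cfg/kube-controller-manager.conf   (assumed path, mirroring the kube-apiserver config above)
...
--horizontal-pod-autoscaler-downscale-delay=5m0s \
--horizontal-pod-autoscaler-upscale-delay=3m0s \
...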

 

To make the scale-in easy to observe, stop the ab load test and note the time:

[root@master-1 yaml]# date

Mon Jan 4 00:44:16 CST 2021

Wait for the cooldown period (5 minutes by default):

[root@master-1 ~]# date
Mon Jan  4 00:49:30 CST 2021
[root@master-1 ~]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
web-6fc98bbd69-vncjr        1/1     Running   0          25m
web-6fc98bbd69-xzrbm        1/1     Running   0          34m

The Pods have scaled back in to the original 2 replicas.

The HPA status also looks normal again:

[root@master-1 ~]# kubectl get hpa
NAME   REFERENCE        TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
web    Deployment/web   0%/80%    2         5         2          31m

That concludes Kubernetes autoscaling: automatic Pod scale-out/scale-in with HPA.
