Kubernetes Autoscaling: Node Auto Scale-Out/Scale-In

Overview of elastic scaling:

Traditionally, elastic scaling solves the tension between capacity planning and actual load. Picture the classic chart: a blue waterline marks the provisioned resource capacity, which grows in coarse steps that leave ever-larger headroom, while a red curve tracks the actual load on those resources. Elastic scaling addresses the moment when actual load has risen but additional capacity has not yet come online.


The autoscaling landscape in Kubernetes:

In Kubernetes, resources come in two dimensions:

  • Node level: K8s abstracts multiple servers into one cluster-wide resource pool, and each Node contributes its resources to that pool
  • Pod level: the Pod is the smallest deployable unit in K8s and runs the actual application; request and limit define a Pod's resource quota

Accordingly, K8s autoscaling works at these same two levels: as long as Node resources are plentiful, Pods can scale freely; when they run short, Nodes must be added to grow the resource pool.

For Pod load: when Pod resources are insufficient, use HPA (Horizontal Pod Autoscaler) to automatically increase the number of Pod replicas.
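
For a quick taste of HPA (covered in detail in the next post), a Deployment can be autoscaled imperatively; the deployment name and thresholds below are illustrative only:

kubectl autoscale deployment web-test --cpu-percent=80 --min=3 --max=10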

For Node load: when the cluster resource pool is insufficient, use CA (Cluster Autoscaler) to automatically add Nodes. (Cluster Autoscaler currently only works on public clouds.)

Node auto scale-out/scale-in:

There are two approaches to Node elasticity:

  • Cluster Autoscaler: a component that automatically resizes a Kubernetes cluster; it must be used with a public cloud such as AWS, Azure, or Aliyun. Project: https://github.com/kubernetes/autoscaler
  • Build it yourself: decide whether to add Nodes based on Node monitoring metrics or Pod scheduling status; this requires some development effort
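
To give a feel for the first option, here is a minimal sketch of the container arguments from a typical Cluster Autoscaler Deployment on AWS; the image tag, ASG name, and node-group bounds are placeholders rather than values from this article:

containers:
- name: cluster-autoscaler
  image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.18.3   # placeholder version
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws                  # pick the provider matching your cloud
  - --nodes=2:10:my-node-asg              # min:max:ASG-name (all placeholders)
  - --skip-nodes-with-system-pods=false   # also allow scale-down of nodes running system Pods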

Node auto scale-out/scale-in, implementation outline:

(The original flow diagram is omitted here; the self-developed workflow is spelled out step by step below.)

Node auto scale-out/scale-in: Cluster Autoscaler

Cloud providers supported by Cluster Autoscaler:

Aliyun: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/alicloud/README.md

AWS: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md

Azure: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/azure/README.md

GCE: https://kubernetes.io/docs/concepts/cluster-administration/cluster-management/

GKE: https://cloud.google.com/container-engine/docs/cluster-autoscaler

Node auto scale-out/scale-in: self-developed (rough idea)

I'd split this into two scenarios:

  • The company has spare capacity and idle machines: whenever Node resources run short, you can expand immediately (the standby Nodes can even be deployed in advance and tainted so that Pods are not scheduled onto them until needed; see the example after this list)
  • A promotion is about to launch with a large expected traffic increase, but the company is short on resources and has no idle machines: you must requisition machines, deploy the Node components automatically, and join them to the k8s cluster
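
For scenario one, a hedged sketch of parking a pre-deployed standby Node behind a taint (the node name and taint key/value are made up for illustration):

# Keep the standby node unschedulable until it is needed:
kubectl taint node standby-node-1 dedicated=standby:NoSchedule
# When capacity runs short, remove the taint (note the trailing "-") so Pods can land on it:
kubectl taint node standby-node-1 dedicated=standby:NoSchedule-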

Rough self-developed flow for adding a Node (a trigger sketch follows the list):

  1. Requisition a server
  2. Run an Ansible playbook to deploy the Node components and join the cluster automatically
  3. Check that the services are healthy and add the node to monitoring (Prometheus)
  4. Node expansion is complete; the node starts accepting new Pods
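
As a rough trigger for this flow, a minimal sketch under the assumption that the new machine is already listed under [newnode] in the Ansible hosts file and that the paths below are illustrative:

#!/bin/bash
# Count Pods stuck in Pending; if any exist, kick off the add-node playbook.
pending=$(kubectl get pods --all-namespaces \
  --field-selector=status.phase=Pending -o name | wc -l)
if [ "$pending" -gt 0 ]; then
  cd /root/ansible-install-k8s && ansible-playbook -i hosts add-node.yml
fi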

This article uses Ansible to add and deploy a Node quickly:

Reference project: https://github.com/lizhenliang/ansible-install-k8s (lizhenliang's Ansible playbooks)

Run the ansible command to add a Node automatically:

Note: the K8S cluster used in this lab was also built automatically with this same project.

Environment:

(The original environment figure is omitted; as the output below shows, the cluster runs Kubernetes v1.18.6 on master-1, node-1, and node-2, each a 4C6G machine.)

To see the effect clearly, let's first create a test Pod workload. Since every machine in the K8s cluster is a 4C6G box, we set the Pod's memory request to 5000Mi and its limit to 5050Mi, with 4 replicas in total.
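
A quick sanity check on the sizing: each node has about 6G of memory in total, and a Pod requesting 5000Mi consumes most of it, so at most one such Pod fits per node. With three schedulable nodes, only three replicas can run, and any further replica must stay Pending.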

First, check how many nodes the current K8S cluster has:

[root@master-1 yaml]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    <none>   26h   v1.18.6
node-1     Ready    <none>   26h   v1.18.6
node-2     Ready    <none>   26h   v1.18.6

Create the test Pod resource:

[root@master-1 yaml]# cat nginx-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web-test
  name: web-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-test
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web-test
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            memory: "5000Mi"
          limits:
            memory: "5050Mi"

[root@master-1 yaml]# kubectl apply -f nginx-test.yaml
deployment.apps/web-test configured

# One of the Pods failed to start and is stuck in Pending:
[root@master-1 yaml]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
web-test-86456767c4-5dvlk   1/1     Running   1          17h
web-test-86456767c4-6m5rb   1/1     Running   0          40m
web-test-86456767c4-s9c5d   1/1     Running   1          17h
web-test-c8c599597-6gzks    0/1     Pending   0          42s

# Take a closer look at the Pod that failed to start:
[root@master-1 yaml]# kubectl describe web-test-c8c599597-6gzks
error: the server doesn't have a resource type "web-test-c8c599597-6gzks"

[root@master-1 yaml]# kubectl describe pod web-test-c8c599597-6gzks
Name:           web-test-c8c599597-6gzks
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=web-test
                pod-template-hash=c8c599597
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/web-test-c8c599597
Containers:
  nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Limits:
      memory:  5050Mi
    Requests:
      memory:  5000Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pz842 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-pz842:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-pz842
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 Insufficient memory.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 Insufficient memory.

# The events clearly say the Nodes are out of resources, so it's time to scale out by adding a Node machine.
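
(Optional) to double-check how much memory each node can still offer, inspect its allocated resources, for example:

[root@master-1 yaml]# kubectl describe node node-1 | grep -A 8 "Allocated resources"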

Following the plan, we first requisition a brand-new machine: 192.168.31.67, node-3, with a 4C8G configuration.

On a separate Ansible control machine (192.168.31.100), use the Ansible project above to deploy node-3 automatically.

Clone the project:

[root@ansible ~]# git clone https://github.com/lizhenliang/ansible-install-k8s.git
Cloning into 'ansible-install-k8s'...
remote: Enumerating objects: 349, done.
remote: Counting objects: 100% (349/349), done.
remote: Compressing objects: 100% (303/303), done.
remote: Total 349 (delta 172), reused 102 (delta 21), pack-reused 0
Receiving objects: 100% (349/349), 152.26 KiB | 0 bytes/s, done.
Resolving deltas: 100% (172/172), done.

# Download the package bundle and extract it into /root/binary_pkg:
[root@ansible ~]# mkdir /root/binary_pkg
[root@ansible ~]# tar zxf binary_pkg.tar.gz -C /root/binary_pkg
[root@ansible ~]# ls -l
total 0
drwxr-xr-x 5 root root 230 Dec 29 16:19 ansible-install-k8s
drwxr-xr-x 2 root root 280 Dec 29 16:18 binary_pkg

# Edit hosts and add the new node's IP:
[root@ansible ~]# vim ansible-install-k8s/hosts
...
[newnode]
192.168.31.67 node_name=node-3
...

# The sections used to create the K8S cluster need no changes; just add the new node's IP and node name under the [newnode] group.

# Deploy with ansible-playbook. (Your mileage may vary depending on the host environment; with some adjustment it will deploy fully automatically.)
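
Before running the playbook, it can be worth verifying that the control machine can reach the new node over SSH (an optional sanity check, assuming key-based access is already set up):

[root@ansible ansible-install-k8s]# ansible -i hosts newnode -m ping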

# Run add-node.yml, the playbook that adds a node; don't mix it up with the full-install playbook.

[root@ansible ansible-install-k8s]# ansible-playbook -i hosts add-node.yml

PLAY [0.System initialization] *************************************************

TASK [common : Add hosts entries] **********************************************
changed: [192.168.31.67]

PLAY [1.Deploy Docker] *********************************************************

TASK [docker : Create temp directory] ******************************************
changed: [192.168.31.67]

TASK [Distribute and extract docker binary package] ****************************
changed: [192.168.31.67] => (item=/root/binary_pkg/docker-19.03.9.tgz)

TASK [Move docker binaries] ****************************************************
changed: [192.168.31.67]

TASK [docker : Distribute service file] ****************************************
changed: [192.168.31.67]

TASK [docker : Create directories] *********************************************
changed: [192.168.31.67]

TASK [Configure docker] ********************************************************
changed: [192.168.31.67]

TASK [Start docker] ************************************************************
changed: [192.168.31.67]

TASK [docker : Check status] ***************************************************
changed: [192.168.31.67]

TASK [docker : debug] **********************************************************
ok: [192.168.31.67] => {
    "docker.stdout_lines": [
        "Client:",
        " Debug Mode: false",
        "",
        "Server:",
        " Containers: 0",
        " Running: 0",
        " Paused: 0",
        " Stopped: 0",
        " Images: 0",
        " Server Version: 19.03.9",
        " Storage Driver: overlay2",
        " Backing Filesystem: xfs",
        " Supports d_type: true",
        " Native Overlay Diff: true",
        " Logging Driver: json-file",
        " Cgroup Driver: cgroupfs",
        " Plugins:",
        " Volume: local",
        " Network: bridge host ipvlan macvlan null overlay",
        " Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog",
        " Swarm: inactive",
        " Runtimes: runc",
        " Default Runtime: runc",
        " Init Binary: docker-init",
        " containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429",
        " runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd",
        " init version: fec3683",
        " Security Options:",
        " seccomp",
        " Profile: default",
        " Kernel Version: 3.10.0-957.el7.x86_64",
        " Operating System: CentOS Linux 7 (Core)",
        " OSType: linux",
        " Architecture: x86_64",
        " CPUs: 4",
        " Total Memory: 5.651GiB",
        " Name: localhost.localdomain",
        " ID: DASE:GZTC:3KCB:5LWP:UT6D:ITEI:OAR4:QLDK:NBJ5:ZGK4:7UZL:BPK2",
        " Docker Root Dir: /var/lib/docker",
        " Debug Mode: false",
        " Registry: https://index.docker.io/v1/",
        " Labels:",
        " Experimental: false",
        " Insecure Registries:",
        " 192.168.31.70",
        " 127.0.0.0/8",
        " Registry Mirrors:",
        " https://b9pmyelo.mirror.aliyuncs.com/",
        " Live Restore Enabled: false",
        " Product License: Community Engine"
    ]
}

PLAY [2.Deploy K8S Node] *******************************************************

TASK [node : Create working directories] ***************************************
changed: [192.168.31.67] => (item=bin)
changed: [192.168.31.67] => (item=cfg)
changed: [192.168.31.67] => (item=ssl)
changed: [192.168.31.67] => (item=logs)

TASK [node : Create cni plugin directories] ************************************
changed: [192.168.31.67] => (item=/opt/cni/bin)
changed: [192.168.31.67] => (item=/etc/cni/net.d)

TASK [node : Create temp directory] ********************************************
ok: [192.168.31.67]

TASK [node : Distribute and extract k8s binary package] ************************
changed: [192.168.31.67] => (item=/root/binary_pkg/kubernetes-server-linux-amd64.tar.gz)

TASK [node : Distribute and extract cni plugin binary package] *****************
changed: [192.168.31.67] => (item=/root/binary_pkg/cni-plugins-linux-amd64-v0.8.6.tgz)

TASK [Move k8s node binaries] **************************************************
changed: [192.168.31.67]

TASK [node : Distribute k8s certificates] **************************************
changed: [192.168.31.67] => (item=ca.pem)
changed: [192.168.31.67] => (item=kube-proxy.pem)
changed: [192.168.31.67] => (item=kube-proxy-key.pem)

TASK [node : Distribute k8s config files] **************************************
changed: [192.168.31.67] => (item=bootstrap.kubeconfig.j2)
changed: [192.168.31.67] => (item=kubelet.conf.j2)
changed: [192.168.31.67] => (item=kubelet-config.yml.j2)
changed: [192.168.31.67] => (item=kube-proxy.kubeconfig.j2)
changed: [192.168.31.67] => (item=kube-proxy.conf.j2)
changed: [192.168.31.67] => (item=kube-proxy-config.yml.j2)

TASK [node : Distribute service files] *****************************************
changed: [192.168.31.67] => (item=kubelet.service.j2)
changed: [192.168.31.67] => (item=kube-proxy.service.j2)

TASK [Start k8s node components] ***********************************************
changed: [192.168.31.67] => (item=kubelet)
changed: [192.168.31.67] => (item=kube-proxy)

TASK [node : Distribute prepared images] ***************************************
changed: [192.168.31.67]

TASK [node : Import images] ****************************************************
changed: [192.168.31.67]

PLAY RECAP *********************************************************************
192.168.31.67 : ok=22 changed=20 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

# Deployment is finished. Next, log in to the Master node and approve the certificate request so the new node joins the cluster:

[root@master-1 ~]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr--W_LjdS8pWW3h18Z_AaQHPl6EH0MTCwHdP7qtXOztW0   17m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

[root@master-1 ~]# kubectl certificate approve node-csr--W_LjdS8pWW3h18Z_AaQHPl6EH0MTCwHdP7qtXOztW0
certificatesigningrequest.certificates.k8s.io/node-csr--W_LjdS8pWW3h18Z_AaQHPl6EH0MTCwHdP7qtXOztW0 approved
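
If several nodes are joining at once, all pending CSRs can be approved in one go (a convenience one-liner, assuming you trust every pending request):

[root@master-1 ~]# kubectl get csr -o name | xargs kubectl certificate approve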

# The newly deployed node-3 has successfully joined the k8s cluster:

[root@master-1 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    <none>   46h   v1.18.6
node-1     Ready    <none>   46h   v1.18.6
node-2     Ready    <none>   46h   v1.18.6
node-3     Ready    <none>   46h   v1.18.6

# After adding the Node, check how the Pods are distributed across nodes:

[root@master-1 yaml]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
web-test-c8c599597-6gzks   1/1     Running   1          156m    10.244.3.4   node-1     <none>           <none>
web-test-c8c599597-cxdr4   1/1     Running   0          36s     10.244.0.9   node-2     <none>           <none>
web-test-c8c599597-mptzk   1/1     Running   0          8m51s   10.244.3.3   master-1   <none>           <none>
web-test-c8c599597-zzqqf   1/1     Running   0          2m41s   10.244.3.5   node-3     <none>           <none>

# The Pod that previously could not start is now automatically running on node-3. That is Node resource scale-out.


Now let's look at the other direction: automatically removing a Node.

Automatically removing a Node:

If you want to remove a node from a Kubernetes cluster, the correct procedure is as follows:

1. Get the node list

kubectl get node

2. Mark the node unschedulable (uncordon reverses this)

kubectl cordon <node_name>

3. Evict the Pods on the node

kubectl drain <node_name> --ignore-daemonsets

4. Remove the node

kubectl delete node <node_name>


Alternatively:

1. Get the node list

kubectl get node

2. Taint the Node with the NoExecute effect, which not only blocks scheduling but also evicts the Pods already running on it

kubectl taint node <node_name> key=value:NoExecute

3. Remove the node

kubectl delete node <node_name>

To automate this, the prerequisite is identifying Nodes that have been idle for a long time, and only then running these steps; a sketch follows.
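
A minimal sketch of such automated removal, assuming the idle node has already been picked out by your monitoring (flag names follow kubectl v1.18; --delete-local-data also wipes emptyDir data, so use it deliberately):

#!/bin/bash
# drain-node.sh <node_name>: safely take one node out of the cluster.
set -e
NODE="$1"
kubectl cordon "$NODE"                                         # stop new Pods from landing here
kubectl drain "$NODE" --ignore-daemonsets --delete-local-data  # evict the Pods it is running
kubectl delete node "$NODE"                                    # remove the Node object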

Case study:

An e-commerce company finishes a promotion; resource utilization drops, and the now-idle nodes need to be kicked out of the cluster to restore the original footprint.

(1) Method 1:

# Before tainting a node, check which nodes the Pods are currently running on:

[root@master-1 yaml]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
web-test-5cdbd79b55-6xtnt   1/1     Running   0          29s   10.244.0.10   node-2     <none>           <none>
web-test-5cdbd79b55-87pqt   1/1     Running   0          34m   10.244.1.16   node-3     <none>           <none>
web-test-5cdbd79b55-p54nq   1/1     Running   0          34m   10.244.1.17   node-3     <none>           <none>
web-test-5cdbd79b55-t8pcx   1/1     Running   0          29s   10.244.2.19   master-1   <none>           <none>

# Taint node-3 with the NoExecute effect: this not only blocks new scheduling but also evicts the Pods already on the Node:

[root@master-1 yaml]# kubectl taint node node-3 repair=yes:NoExecute
node/node-3 tainted

# Verify that the taint on node-3 was set successfully:

[root@master-1 yaml]# kubectl describe nodes node-3 | grep -i taint
Taints: repair=yes:NoExecute
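
Should the node need to come back into service later, the taint can be removed with a trailing "-":

[root@master-1 yaml]# kubectl taint node node-3 repair=yes:NoExecute-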

# Check again which nodes the Pods are on.
# web-test-5cdbd79b55-87pqt and web-test-5cdbd79b55-p54nq have both moved to node-1:

[root@master-1 yaml]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
web-test-5cdbd79b55-6xtnt   1/1     Running   0          29s   10.244.0.10   node-2     <none>           <none>
web-test-5cdbd79b55-87pqt   1/1     Running   0          34m   10.244.1.16   node-1     <none>           <none>
web-test-5cdbd79b55-p54nq   1/1     Running   0          34m   10.244.1.17   node-1     <none>           <none>
web-test-5cdbd79b55-t8pcx   1/1     Running   0          29s   10.244.2.19   master-1   <none>           <none>

# Remove the node: take node-3 out of the cluster:

[root@master-1 yaml]# kubectl delete node node-3

# Check the Node list and the Pods:

[root@master-1 yaml]# kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
master-1   Ready    <none>   2d3h   v1.18.6
node-1     Ready    <none>   2d3h   v1.18.6
node-2     Ready    <none>   2d3h   v1.18.6

[root@master-1 yaml]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
web-test-5cdbd79b55-6xtnt   1/1     Running   0          29s   10.244.0.10   node-2     <none>           <none>
web-test-5cdbd79b55-87pqt   1/1     Running   0          34m   10.244.1.16   node-1     <none>           <none>
web-test-5cdbd79b55-p54nq   1/1     Running   0          34m   10.244.1.17   node-1     <none>           <none>
web-test-5cdbd79b55-t8pcx   1/1     Running   0          29s   10.244.2.19   master-1   <none>           <none>

(2) Method 2:

# First check which nodes the Pods are currently running on:

[root@master-1 yaml]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
web-test-5cdbd79b55-6xtnt   1/1     Running   0          20m   10.244.0.10   node-2     <none>           <none>
web-test-5cdbd79b55-87pqt   1/1     Running   0          54m   10.244.1.16   node-1     <none>           <none>
web-test-5cdbd79b55-p54nq   1/1     Running   0          54m   10.244.1.17   node-1     <none>           <none>
web-test-5cdbd79b55-t8pcx   1/1     Running   0          20m   10.244.2.19   master-1   <none>           <none>

# Current Nodes in the cluster:

[root@master-1 yaml]# kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
master-1   Ready    <none>   2d3h   v1.18.6
node-1     Ready    <none>   2d3h   v1.18.6
node-2     Ready    <none>   2d3h   v1.18.6
node-3     Ready    <none>   5h6m   v1.18.6

# Mark node-2 unschedulable:

[root@master-1 yaml]# kubectl cordon node-2
node/node-2 cordoned

# node-2's STATUS now shows SchedulingDisabled:

[root@master-1 yaml]# kubectl get nodes
NAME       STATUS                     ROLES    AGE    VERSION
master-1   Ready                      <none>   2d3h   v1.18.6
node-1     Ready                      <none>   2d3h   v1.18.6
node-2     Ready,SchedulingDisabled   <none>   2d3h   v1.18.6
node-3     Ready                      <none>   5h6m   v1.18.6

# Evict the Pods on node-2:

[root@master-1 yaml]# kubectl drain node-2 --ignore-daemonsets
node/node-2 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-fx7vc
evicting pod default/web-test-5cdbd79b55-6xtnt
evicting pod ingress-nginx/nginx-ingress-controller-766fb9f77-jn7pc
pod/web-test-5cdbd79b55-6xtnt evicted
pod/nginx-ingress-controller-766fb9f77-jn7pc evicted
node/node-2 evicted

# Check the Pods after the eviction.
# web-test-5cdbd79b55-6xtnt is no longer on node-2:

[root@master-1 yaml]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
web-test-5cdbd79b55-6xtnt   1/1     Running   0          20m   10.244.0.10   node-3     <none>           <none>
web-test-5cdbd79b55-87pqt   1/1     Running   0          54m   10.244.1.16   node-1     <none>           <none>
web-test-5cdbd79b55-p54nq   1/1     Running   0          54m   10.244.1.17   node-1     <none>           <none>
web-test-5cdbd79b55-t8pcx   1/1     Running   0          20m   10.244.2.19   master-1   <none>           <none>

# Remove the node: take node-2 out of the cluster:

[root@master-1 yaml]# kubectl delete node node-2

[root@master-1 yaml]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master-1   Ready    <none>   2d3h    v1.18.6
node-1     Ready    <none>   2d3h    v1.18.6
node-3     Ready    <none>   5h17m   v1.18.6

[root@master-1 yaml]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
web-test-5cdbd79b55-6xtnt   1/1     Running   0          20m   10.244.0.10   node-3     <none>           <none>
web-test-5cdbd79b55-87pqt   1/1     Running   0          54m   10.244.1.16   node-1     <none>           <none>
web-test-5cdbd79b55-p54nq   1/1     Running   0          54m   10.244.1.17   node-1     <none>           <none>
web-test-5cdbd79b55-t8pcx   1/1     Running   0          20m   10.244.2.19   master-1   <none>           <none>

That wraps up this post. Next time: Kubernetes autoscaling with Pod auto scale-out/scale-in (HPA).
