Upgrading a k8s Cluster from v1.15.3 to v1.16.0


This article walks through upgrading a kubeadm-managed Kubernetes cluster from v1.15.3 to v1.16.0, step by step, including the control plane, the worker nodes, and a common CNI pitfall at the end.

 
1. Check the current cluster version and the versions available in the repo
[root@k8s01 ~]# kubectl get nodes    # list the cluster nodes and their versions
NAME  STATUS  ROLES  AGE  VERSION
k8s01  Ready  master  41d  v1.15.3
k8s02  Ready  <none>  41d  v1.15.3
k8s03  Ready  <none>  41d  v1.15.3
[root@k8s01 ~]# kubectl version    # show the client and server versions
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s01 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes    # list the kubeadm versions available in the repo
 
2. Upgrade kubeadm and check that the cluster meets the upgrade requirements
[root@k8s01 ~]# yum install -y kubeadm-1.16.0-0 --disableexcludes=kubernetes    # upgrade kubeadm
[root@k8s01 ~]# kubeadm version    # confirm the new version
kubeadm version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s01 ~]# kubeadm upgrade plan    # check whether the cluster can be upgraded and what each component will be upgraded to
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster…
[upgrade/config] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -oyaml
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.15.3
[upgrade/versions] kubeadm version: v1.16.0
W1019 13:11:18.402833  66426 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL https://dl.k8s.io/release/stable.txt : Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1019 13:11:18.402860  66426 version.go:102] falling back to the local client version: v1.16.0
[upgrade/versions] Latest stable version: v1.16.0
W1019 13:11:28.427246  66426 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL https://dl.k8s.io/release/stable-1.15.txt : Get https://dl.k8s.io/release/stable-1.15.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1019 13:11:28.427289  66426 version.go:102] falling back to the local client version: v1.16.0
[upgrade/versions] Latest version in the v1.15 series: v1.16.0
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT  CURRENT  AVAILABLE
Kubelet  3 x v1.15.3  v1.16.0
Upgrade to the latest version in the v1.15 series:
COMPONENT  CURRENT  AVAILABLE
API Server  v1.15.3  v1.16.0
Controller Manager  v1.15.3  v1.16.0
Scheduler  v1.15.3  v1.16.0
Kube Proxy  v1.15.3  v1.16.0
CoreDNS  1.3.1  1.6.2
Etcd  3.3.10  3.3.15-0
You can now apply the upgrade by executing the following command:
 kubeadm upgrade apply v1.16.0
_____________________________________________________________________
[root@k8s01 ~]# 

3. Pre-pull the upgrade images (pulling the base components from a mirror of Google's registry first speeds up the upgrade)
[root@k8s01 ~]# cat 16.sh

#!/bin/bash
# download the k8s v1.16.0 images
# get the image list with: kubeadm config images list --kubernetes-version=v1.16.0
# gcr.azk8s.cn/google-containers == k8s.gcr.io
images=(
kube-apiserver:v1.16.0
kube-controller-manager:v1.16.0
kube-scheduler:v1.16.0
kube-proxy:v1.16.0
pause:3.1
etcd:3.3.15-0
coredns:1.6.2
)
for imageName in "${images[@]}"; do
  docker pull gcr.azk8s.cn/google-containers/$imageName
  docker tag  gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName
  docker rmi  gcr.azk8s.cn/google-containers/$imageName
done
[root@k8s01 ~]# sh 16.sh

v1.16.0: Pulling from google-containers/kube-apiserver
39fafc05754f: Already exists

f7d981e9e2f5: Pull complete

Digest: sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-apiserver:v1.16.0
gcr.azk8s.cn/google-containers/kube-apiserver:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-apiserver:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-apiserver@sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd
v1.16.0: Pulling from google-containers/kube-controller-manager
39fafc05754f: Already exists

9fc21167a2c9: Pull complete

Digest: sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-controller-manager:v1.16.0
gcr.azk8s.cn/google-containers/kube-controller-manager:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-controller-manager:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-controller-manager@sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e
v1.16.0: Pulling from google-containers/kube-scheduler
39fafc05754f: Already exists

c589747bc37c: Pull complete

Digest: sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-scheduler:v1.16.0
gcr.azk8s.cn/google-containers/kube-scheduler:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-scheduler:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0
v1.16.0: Pulling from google-containers/kube-proxy
39fafc05754f: Already exists

db3f71d0eb90: Already exists

1531d95908fb: Pull complete

Digest: sha256:e7f0f8e320cfeeaafdc9c0cb8e23f51e542fa1d955ae39c8131a0531ba72c794
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-proxy:v1.16.0
gcr.azk8s.cn/google-containers/kube-proxy:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-proxy:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-proxy@sha256:e7f0f8e320cfeeaafdc9c0cb8e23f51e542fa1d955ae39c8131a0531ba72c794
3.1: Pulling from google-containers/pause
Digest: sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/pause:3.1
gcr.azk8s.cn/google-containers/pause:3.1
Untagged: gcr.azk8s.cn/google-containers/pause:3.1
Untagged: gcr.azk8s.cn/google-containers/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea
3.3.15-0: Pulling from google-containers/etcd
39fafc05754f: Already exists

aee6f172d490: Pull complete

e6aae814a194: Pull complete

Digest: sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/etcd:3.3.15-0
gcr.azk8s.cn/google-containers/etcd:3.3.15-0
Untagged: gcr.azk8s.cn/google-containers/etcd:3.3.15-0
Untagged: gcr.azk8s.cn/google-containers/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa
1.6.2: Pulling from google-containers/coredns
c6568d217a00: Pull complete

3970bc7cbb16: Pull complete

Digest: sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/coredns:1.6.2
gcr.azk8s.cn/google-containers/coredns:1.6.2
Untagged: gcr.azk8s.cn/google-containers/coredns:1.6.2
Untagged: gcr.azk8s.cn/google-containers/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5
[root@k8s01 ~]#
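Before letting a script like `16.sh` touch the Docker daemon, it can help to print the exact commands it would run and review them first. A minimal dry-run sketch of the same loop (the mirror prefix is carried over from the script above; substitute whichever mirror is reachable from your network):

```shell
#!/bin/bash
# Sketch: print the pull/tag/rmi commands for review instead of running them.
# The mirror prefix mirrors k8s.gcr.io; swap it for any reachable mirror.
MIRROR="gcr.azk8s.cn/google-containers"
TARGET="k8s.gcr.io"

gen_cmds() {
  # one pull/tag/rmi triple per image name passed as an argument
  for img in "$@"; do
    echo "docker pull $MIRROR/$img"
    echo "docker tag $MIRROR/$img $TARGET/$img"
    echo "docker rmi $MIRROR/$img"
  done
}

gen_cmds kube-apiserver:v1.16.0 kube-proxy:v1.16.0 coredns:1.6.2
```

Once the printed commands look right, the output can be piped to `sh` to actually execute them.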

4. Upgrade the k8s control plane (master node)
[root@k8s01 ~]# kubeadm upgrade apply v1.16.0 -v 5 

I1019 13:37:55.767778  87227 apply.go:118] [upgrade/apply] verifying health of cluster
I1019 13:37:55.767819  87227 apply.go:119] [upgrade/apply] retrieving configuration from cluster
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster…
[upgrade/config] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -oyaml
I1019 13:37:55.803144  87227 common.go:122] running preflight checks
[preflight] Running pre-flight checks.
I1019 13:37:55.803169  87227 preflight.go:78] validating if there are any unsupported CoreDNS plugins in the Corefile
I1019 13:37:55.820014  87227 preflight.go:103] validating if migration can be done for the current CoreDNS release.
[upgrade] Making sure the cluster is healthy:
I1019 13:37:55.837178  87227 apply.go:131] [upgrade/apply] validating requested and actual version
I1019 13:37:55.837241  87227 apply.go:147] [upgrade/version] enforcing version skew policies
[upgrade/version] You have chosen to change the cluster version to v1.16.0
[upgrade/versions] Cluster version: v1.15.3
[upgrade/versions] kubeadm version: v1.16.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
I1019 13:37:58.228724  87227 apply.go:163] [upgrade/apply] creating prepuller
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
I1019 13:38:00.888210  87227 apply.go:174] [upgrade/apply] performing upgrade
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version v1.16.0 …
Static pod: kube-apiserver-k8s01 hash: 5bfb05e7cb17fe8298d61706cb2263b6
Static pod: kube-controller-manager-k8s01 hash: 9c5db0eef4ba8d433ced5874b5688886
Static pod: kube-scheduler-k8s01 hash: 7d5d3c0a6786e517a8973fa06754cb75
I1019 13:38:00.974269  87227 etcd.go:107] etcd endpoints read from pods: https://192.168.54.128:2379
I1019 13:38:01.033816  87227 etcd.go:156] etcd endpoints read from etcd: https://192.168.54.128:2379
I1019 13:38:01.033856  87227 etcd.go:125] update etcd endpoints: https://192.168.54.128:2379
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s01 hash: af0f40c2a1ce2695115431265406ca0d
I1019 13:38:04.475950  87227 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to /etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/etcd.yaml
[upgrade/staticpods] Preparing for etcd upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/etcd.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/etcd.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s01 hash: af0f40c2a1ce2695115431265406ca0d
Static pod: etcd-k8s01 hash: af0f40c2a1ce2695115431265406ca0d
Static pod: etcd-k8s01 hash: 8b854fdc3768d8f9aac3dfb09c123400
[apiclient] Found 1 Pods for label selector component=etcd
[apiclient] Found 0 Pods for label selector component=etcd
[apiclient] Found 1 Pods for label selector component=etcd
[apiclient] Found 0 Pods for label selector component=etcd
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component etcd upgraded successfully!
I1019 13:38:35.604122  87227 etcd.go:107] etcd endpoints read from pods: https://192.168.54.128:2379
I1019 13:38:35.618217  87227 etcd.go:156] etcd endpoints read from etcd: https://192.168.54.128:2379
I1019 13:38:35.618242  87227 etcd.go:125] update etcd endpoints: https://192.168.54.128:2379
[upgrade/etcd] Waiting for etcd to become available
I1019 13:38:35.618254  87227 etcd.go:372] [etcd] attempting to see if all cluster endpoints ([https://192.168.54.128:2379]) are available 1/10
[upgrade/staticpods] Writing new Static Pod manifests to /etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650
I1019 13:38:35.631290  87227 manifests.go:42] [control-plane] creating static Pod files
I1019 13:38:35.631334  87227 manifests.go:91] [control-plane] getting StaticPodSpecs
I1019 13:38:35.639202  87227 manifests.go:116] [control-plane] wrote static Pod manifest for component kube-apiserver to /etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/kube-apiserver.yaml
I1019 13:38:35.639809  87227 manifests.go:116] [control-plane] wrote static Pod manifest for component kube-controller-manager to /etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/kube-controller-manager.yaml
I1019 13:38:35.640103  87227 manifests.go:116] [control-plane] wrote static Pod manifest for component kube-scheduler to /etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/kube-scheduler.yaml
[upgrade/staticpods] Preparing for kube-apiserver upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-apiserver.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/kube-apiserver.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s01 hash: 5bfb05e7cb17fe8298d61706cb2263b6
Static pod: kube-apiserver-k8s01 hash: 5bfb05e7cb17fe8298d61706cb2263b6
Static pod: kube-apiserver-k8s01 hash: c7a6a6cd079e4034a3258c4d94365d5a
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component kube-apiserver upgraded successfully!
[upgrade/staticpods] Preparing for kube-controller-manager upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-controller-manager.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/kube-controller-manager.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s01 hash: 9c5db0eef4ba8d433ced5874b5688886
Static pod: kube-controller-manager-k8s01 hash: a174e7fbc474c3449c0ee50ba7220e8e
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component kube-controller-manager upgraded successfully!
[upgrade/staticpods] Preparing for kube-scheduler upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-scheduler.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/kube-scheduler.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s01 hash: 7d5d3c0a6786e517a8973fa06754cb75
Static pod: kube-scheduler-k8s01 hash: b8e7c07b524b78e0b03577d5f61f79ef
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component kube-scheduler upgraded successfully!
I1019 13:38:54.548181  87227 apply.go:180] [upgrade/postupgrade] upgrading RBAC rules and addons
[upload-config] Storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
[kubelet] Creating a ConfigMap kubelet-config-1.16 in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.16 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
I1019 13:38:54.670273  87227 patchnode.go:30] [patchnode] Uploading the CRI Socket information /var/run/dockershim.sock to the Node API object k8s01 as an annotation
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1019 13:38:55.222748  87227 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I1019 13:38:55.376508  87227 request.go:538] Throttling request took 145.385879ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/kubeadm:bootstrap-signer-clusterinfo
I1019 13:38:55.576272  87227 request.go:538] Throttling request took 197.037818ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings
I1019 13:38:55.776513  87227 request.go:538] Throttling request took 196.268563ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/kubeadm:bootstrap-signer-clusterinfo
I1019 13:38:55.976281  87227 request.go:538] Throttling request took 181.099838ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterroles
I1019 13:38:56.176460  87227 request.go:538] Throttling request took 184.088286ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:coredns
I1019 13:38:56.376299  87227 request.go:538] Throttling request took 196.69921ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings
I1019 13:38:56.576335  87227 request.go:538] Throttling request took 187.238443ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:coredns
[addons] Applied essential addon: CoreDNS
I1019 13:38:56.776530  87227 request.go:538] Throttling request took 122.128829ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings
I1019 13:38:56.976951  87227 request.go:538] Throttling request took 194.821608ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubeadm:node-proxier
I1019 13:38:57.176476  87227 request.go:538] Throttling request took 192.024614ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles
I1019 13:38:57.376234  87227 request.go:538] Throttling request took 180.872083ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kube-proxy
I1019 13:38:57.576309  87227 request.go:538] Throttling request took 197.323877ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings
I1019 13:38:57.776253  87227 request.go:538] Throttling request took 190.156387ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kube-proxy
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@k8s01 ~]# 

5. If there are multiple master nodes, upgrade the other masters (skip this for a single master)
[root@k8s01 ~]# kubeadm upgrade plan    # dry-run check of the upgrade
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster…
[upgrade/config] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -oyaml
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.0
[upgrade/versions] kubeadm version: v1.16.0
W1019 13:46:26.923622  92337 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL https://dl.k8s.io/release/stable.txt : Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1019 13:46:26.923660  92337 version.go:102] falling back to the local client version: v1.16.0
[upgrade/versions] Latest stable version: v1.16.0
W1019 13:46:36.952719  92337 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL https://dl.k8s.io/release/stable-1.16.txt : Get https://dl.k8s.io/release/stable-1.16.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1019 13:46:36.952743  92337 version.go:102] falling back to the local client version: v1.16.0
[upgrade/versions] Latest version in the v1.16 series: v1.16.0
Awesome, you're up-to-date! Enjoy!
[root@k8s01 ~]# kubeadm upgrade node    # upgrade the other master nodes
[upgrade] Reading configuration from the cluster…
[upgrade] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -oyaml
[upgrade] Upgrading your Static Pod-hosted control plane instance to version v1.16.0 …
Static pod: kube-apiserver-k8s01 hash: c7a6a6cd079e4034a3258c4d94365d5a
Static pod: kube-controller-manager-k8s01 hash: a174e7fbc474c3449c0ee50ba7220e8e
Static pod: kube-scheduler-k8s01 hash: b8e7c07b524b78e0b03577d5f61f79ef
[upgrade/staticpods] Writing new Static Pod manifests to /etc/kubernetes/tmp/kubeadm-upgraded-manifests902864317
[upgrade/staticpods] Preparing for kube-apiserver upgrade
[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] Preparing for kube-controller-manager upgrade
[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] Preparing for kube-scheduler upgrade
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upgrade] The control plane instance for this node was successfully updated!
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.16 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@k8s01 ~]# 

6. Upgrade kubelet and kubectl on every master node (with a single master, run it only there)
[root@k8s01 ~]# yum install -y kubelet-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes
[root@k8s01 ~]# systemctl daemon-reload
[root@k8s01 ~]# systemctl restart kubelet

7. Upgrade kubeadm on the worker nodes (run on every worker)
[root@k8s02 ~]# yum install -y kubeadm-1.16.0 --disableexcludes=kubernetes

8. Mark the node unschedulable and drain it for the upgrade (run on the master)
[root@k8s01 ~]# kubectl drain k8s02 --ignore-daemonsets    # upgrade k8s02 first; with multiple nodes, repeat this for each one; if it errors, add --delete-local-data as the message suggests
node/k8s02 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-888x6, kube-system/kube-proxy-xm9fn
evicting pod tiller-deploy-8557598fbc-x96gq
evicting pod myapp-5f57d6857b-xgj8l
evicting pod coredns-5644d7b6d9-lmpd5
evicting pod metrics-server-6b445cb696-zp94w
evicting pod myapp-5f57d6857b-2g8ss
pod/tiller-deploy-8557598fbc-x96gq evicted
pod/metrics-server-6b445cb696-zp94w evicted
pod/myapp-5f57d6857b-2g8ss evicted
pod/coredns-5644d7b6d9-lmpd5 evicted
pod/myapp-5f57d6857b-xgj8l evicted
node/k8s02 evicted
[root@k8s01 ~]# kubectl  get nodes
NAME  STATUS  ROLES  AGE  VERSION
k8s01  Ready  master  41d  v1.16.0
k8s02  Ready,SchedulingDisabled  <none>  41d  v1.15.3
k8s03  Ready  <none>  41d  v1.15.3
[root@k8s01 ~]#

9. Upgrade kubelet and kubectl on the k8s02 node
[root@k8s02 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster…
[upgrade] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -oyaml
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.16 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@k8s02 ~]# yum install -y kubelet-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes
[root@k8s02 ~]# systemctl daemon-reload
[root@k8s02 ~]# systemctl restart kubelet

10. Restore scheduling on k8s02 (run on the master)
[root@k8s01 ~]# kubectl uncordon k8s02
node/k8s02 uncordoned
[root@k8s01 ~]# kubectl  get nodes
NAME  STATUS  ROLES  AGE  VERSION
k8s01  Ready  master  41d  v1.16.0
k8s02  Ready  <none>  41d  v1.16.0
k8s03  Ready  <none>  41d  v1.15.3
[root@k8s01 ~]#

11. Repeat steps 8 to 10 to upgrade the k8s03 node
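Since steps 8 to 10 repeat identically for each worker, they can be driven by a small loop. This is only a sketch: the node list and passwordless ssh access are assumptions, and with DRY_RUN=1 (the default here) the commands are printed for review instead of executed.

```shell
#!/bin/bash
# Sketch of the per-worker upgrade loop (drain -> upgrade -> uncordon).
# NODES and ssh access are assumptions; DRY_RUN=1 only prints the commands.
NODES="k8s02 k8s03"
DRY_RUN="${DRY_RUN:-1}"

run() {
  # print the command in dry-run mode, otherwise execute it
  if [ "$DRY_RUN" = "1" ]; then
    echo "$*"
  else
    "$@"
  fi
}

upgrade_workers() {
  for node in $NODES; do
    run kubectl drain "$node" --ignore-daemonsets
    run ssh "$node" "kubeadm upgrade node"
    run ssh "$node" "yum install -y kubelet-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes"
    run ssh "$node" "systemctl daemon-reload && systemctl restart kubelet"
    run kubectl uncordon "$node"
  done
}

upgrade_workers
```

Upgrading one node at a time this way keeps the rest of the cluster schedulable while each node is drained.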

12. Check the overall cluster status
[root@k8s01 ~]# kubectl  get pods -n kube-system
NAME  READY  STATUS  RESTARTS  AGE
coredns-5644d7b6d9-8wvgt  1/1  Running  0  11m
coredns-5644d7b6d9-pzr7g  1/1  Running  0  11m
coredns-5c98db65d4-rtktb  1/1  Running  0  11m
etcd-k8s01  1/1  Running  0  34m
kube-apiserver-k8s01  1/1  Running  0  34m
kube-controller-manager-k8s01  1/1  Running  0  34m
kube-flannel-ds-amd64-888x6  1/1  Running  5  41d
kube-flannel-ds-amd64-d648v  1/1  Running  15  41d
kube-flannel-ds-amd64-rc9bc  1/1  Running  2  46h
kube-proxy-d4rd5  1/1  Running  1  46h
kube-proxy-wtk2j  1/1  Running  11  41d
kube-proxy-xm9fn  1/1  Running  0  45m
kube-scheduler-k8s01  1/1  Running  0  34m
metrics-server-6b445cb696-65r5k  1/1  Running  0  11m
tiller-deploy-8557598fbc-6jfp7  1/1  Running  0  11m
[root@k8s01 ~]# kubectl  get nodes
NAME  STATUS  ROLES  AGE  VERSION
k8s01  Ready  master  41d  v1.16.0
k8s02  Ready  <none>  41d  v1.16.0
k8s03  Ready  <none>  41d  v1.16.0
[root@k8s01 ~]#
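As a final sanity check, the `kubectl get nodes` output can be verified mechanically: every node should report the same version. A small sketch that reads such output on stdin (fed a sample here so it runs without a cluster; against a live cluster, pipe `kubectl get nodes` into it):

```shell
#!/bin/bash
# Sketch: assert that all nodes report the same kubelet version.
# Reads `kubectl get nodes`-style output on stdin, prints OK or MISMATCH.
check_versions() {
  # skip the header row, take the last column, count distinct versions
  distinct=$(awk 'NR>1 {print $NF}' | sort -u | wc -l)
  if [ "$distinct" -eq 1 ]; then echo OK; else echo MISMATCH; fi
}

# sample input mirroring the transcript above; prints OK
check_versions <<'EOF'
NAME   STATUS  ROLES   AGE  VERSION
k8s01  Ready   master  41d  v1.16.0
k8s02  Ready   <none>  41d  v1.16.0
k8s03  Ready   <none>  41d  v1.16.0
EOF
```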

Troubleshooting:
Oct 19 14:46:20 k8s01 kubelet[653]: E1019 14:46:20.701679  653 pod_workers.go:191] Error syncing pod e641b551-7f22-40fa-b847-658f6c7696fa (tiller-deploy-8557598fbc-6jfp7_kube-system(e641b551-7f22-40fa-b847-658f6c7696fa) ), skipping: network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Oct 19 14:46:20 k8s01 kubelet[653]: E1019 14:46:20.702091  653 pod_workers.go:191] Error syncing pod bd45bbe0-8529-4ee4-9fcf-90528178dc0d (coredns-5c98db65d4-rtktb_kube-system(bd45bbe0-8529-4ee4-9fcf-90528178dc0d) ), skipping: network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Oct 19 14:46:20 k8s01 kubelet[653]: E1019 14:46:20.702396  653 pod_workers.go:191] Error syncing pod 87d24c8c-bba8-420b-8901-9e2b8bc339ac (coredns-5644d7b6d9-8wvgt_kube-system(87d24c8c-bba8-420b-8901-9e2b8bc339ac) ), skipping: network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Fix (if the CNI plugin reports these errors, reapply flannel):
[root@k8s01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s01 ~]# kubectl apply -f  kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds-amd64 configured
daemonset.apps/kube-flannel-ds-arm64 configured
daemonset.apps/kube-flannel-ds-arm configured
daemonset.apps/kube-flannel-ds-ppc64le configured
daemonset.apps/kube-flannel-ds-s390x configured
[root@k8s01 ~]#

That completes the upgrade of the k8s cluster from v1.15.3 to v1.16.0.

Copyright notice: original article by 丸趣, published 2023-08-25.