Steps to upgrade TiDB v3.0.20 to TiDB v5.4.0

TiDB 3.0 clusters were usually managed with tidb-ansible, while newer versions are managed with TiUP, so TiUP and its cluster component need to be installed first.
TiDB 3.0 cannot be upgraded directly to 5.4; the cluster must first be upgraded to a 4.0 release.

$ curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
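If `tiup` is not found right after running the install script, source the profile the script updated so the new PATH entry takes effect (the exact file is printed by install.sh; the path below is an assumption for this environment):
$ source /home/tidb/.bash_profile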
$ tiup cluster
[tidb@tidbser1 ~]$ tiup cluster import -d /home/tidb/tidb-ansible
tiup is checking updates for component cluster ...timeout!
starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.4/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.4/tiup-cluster import -d /home/tidb/tidb-ansible
found inventory file /home/tidb/tidb-ansible/inventory.ini, parsing...
found cluster "test-cluster" (v3.0.20), deployed with user tidb.
tidb-ansible and tiup cluster can not be used together, please do not try to use ansible to manage the imported cluster anymore to avoid metadata conflict.
the ansible directory will be moved to /home/tidb/.tiup/storage/cluster/clusters/test-cluster/ansible-backup after import.
do you want to continue? [y/n]: (default=n) y       
prepared to import tidb v3.0.20 cluster test-cluster.
do you want to continue? [y/n]:(default=n) y
imported 2 tidb node(s).
imported 2 tikv node(s).
imported 2 pd node(s).
imported 1 monitoring node(s).
imported 1 alertmanager node(s).
imported 1 grafana node(s).
imported 1 pump node(s).
imported 1 drainer node(s).
copying config file(s) of pd...
copying config file(s) of tikv...
copying config file(s) of pump...
copying config file(s) of tidb...
copying config file(s) of tiflash...
copying config file(s) of drainer...
copying config file(s) of cdc...
copying config file(s) of prometheus...
copying config file(s) of grafana...
copying config file(s) of alertmanager...
copying config file(s) of tispark...
copying config file(s) of tispark...
  [ serial ] - sshkeyset: privatekey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publickey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
  [ serial ] - sshkeyset: privatekey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publickey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
  [ serial ] - sshkeyset: privatekey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publickey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
  [ serial ] - userssh: user=tidb, host=192.168.40.62
  [ serial ] - copyfile: remote=192.168.40.62:/u01/deploy2/conf/pd.toml, local=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ansible-imported-configs/pd-192.168.40.62-2381.toml
  [ serial ] - userssh: user=tidb, host=192.168.40.62
  [ serial ] - copyfile: remote=192.168.40.62:/u01/deploy/pump/conf/pump.toml, local=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ansible-imported-configs/pump-192.168.40.62-8250.toml
  [ serial ] - userssh: user=tidb, host=192.168.40.62
  [ serial ] - sshkeyset: privatekey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publickey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
  [ serial ] - userssh: user=tidb, host=192.168.40.62
  [ serial ] - copyfile: remote=192.168.40.62:/u01/deploy2/conf/tikv.toml, local=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ansible-imported-configs/tikv-192.168.40.62-20161.toml
  [ serial ] - copyfile: remote=192.168.40.62:/u01/deploy/conf/tikv.toml, local=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ansible-imported-configs/tikv-192.168.40.62-20160.toml
  [ serial ] - sshkeyset: privatekey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publickey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
  [ serial ] - userssh: user=tidb, host=192.168.40.62
  [ serial ] - copyfile: remote=192.168.40.62:/u01/deploy/conf/pd.toml, local=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ansible-imported-configs/pd-192.168.40.62-2379.toml
  [ serial ] - sshkeyset: privatekey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publickey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
  [ serial ] - userssh: user=tidb, host=192.168.40.62
  [ serial ] - copyfile: remote=192.168.40.62:/u01/deploy/conf/tidb.toml, local=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ansible-imported-configs/tidb-192.168.40.62-4000.toml
  [ serial ] - sshkeyset: privatekey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publickey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
  [ serial ] - userssh: user=tidb, host=192.168.40.62
  [ serial ] - copyfile: remote=192.168.40.62:/u01/deploy2/conf/tidb.toml, local=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ansible-imported-configs/tidb-192.168.40.62-4001.toml
  [ serial ] - sshkeyset: privatekey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publickey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
  [ serial ] - userssh: user=tidb, host=192.168.40.62
  [ serial ] - copyfile: remote=192.168.40.62:/u01/deploy/drainer/conf/drainer.toml, local=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ansible-imported-configs/drainer-192.168.40.62-8249.toml
finished copying configs.
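Before starting the upgrade it is worth confirming that the imported topology looks right and that every instance is up, for example:
$ tiup cluster list
$ tiup cluster display test-cluster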
[tidb@tidbser1 config-cache]$ tiup cluster upgrade test-cluster v4.0.16
tiup is checking updates for component cluster ...
starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.4/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.4/tiup-cluster upgrade test-cluster v4.0.16
this operation will upgrade tidb v3.0.20 cluster test-cluster to v4.0.16.
do you want to continue? [y/n]:(default=n) y
upgrading cluster...
  [ serial ] - sshkeyset: privatekey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publickey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [ serial ] - download: component=drainer, version=v4.0.16, os=linux, arch=amd64
  [ serial ] - download: component=tikv, version=v4.0.16, os=linux, arch=amd64
  [ serial ] - download: component=pump, version=v4.0.16, os=linux, arch=amd64
  [ serial ] - download: component=tidb, version=v4.0.16, os=linux, arch=amd64
  [ serial ] - download: component=pd, version=v4.0.16, os=linux, arch=amd64
  [ serial ] - download: component=prometheus, version=v4.0.16, os=linux, arch=amd64
  [ serial ] - download: component=grafana, version=v4.0.16, os=linux, arch=amd64
  [ serial ] - download: component=alertmanager, version=, os=linux, arch=amd64
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/pump/data.pump'
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/data.pd'
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy2/data'
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/data'
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy2/data.pd'
  [ serial ] - backupcomponent: component=tikv, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy
  [ serial ] - backupcomponent: component=tikv, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy2
  [ serial ] - backupcomponent: component=pd, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy
  [ serial ] - backupcomponent: component=pd, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy2
  [ serial ] - copycomponent: component=tikv, version=v4.0.16, remote=192.168.40.62:/u01/deploy2 os=linux, arch=amd64
  [ serial ] - copycomponent: component=tikv, version=v4.0.16, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - copycomponent: component=pd, version=v4.0.16, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - copycomponent: component=pd, version=v4.0.16, remote=192.168.40.62:/u01/deploy2 os=linux, arch=amd64
  [ serial ] - backupcomponent: component=pump, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy/pump
  [ serial ] - copycomponent: component=pump, version=v4.0.16, remote=192.168.40.62:/u01/deploy/pump os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/pump-8250.service, deploy_dir=/u01/deploy/pump, data_dir=[/u01/deploy/pump/data.pump], log_dir=/u01/deploy/pump/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/pd-2381.service, deploy_dir=/u01/deploy2, data_dir=[/u01/deploy2/data.pd], log_dir=/u01/deploy2/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/pd-2379.service, deploy_dir=/u01/deploy, data_dir=[/u01/deploy/data.pd], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - mkdir: host=192.168.40.62, directories=''
  [ serial ] - backupcomponent: component=tidb, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy
  [ serial ] - copycomponent: component=tidb, version=v4.0.16, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/tikv-20160.service, deploy_dir=/u01/deploy, data_dir=[/u01/deploy/data], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/tikv-20161.service, deploy_dir=/u01/deploy2, data_dir=[/u01/deploy2/data], log_dir=/u01/deploy2/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - mkdir: host=192.168.40.62, directories=''
  [ serial ] - backupcomponent: component=tidb, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy2
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/drainer/data.drainer'
  [ serial ] - copycomponent: component=tidb, version=v4.0.16, remote=192.168.40.62:/u01/deploy2 os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/tidb-4000.service, deploy_dir=/u01/deploy, data_dir=[], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - backupcomponent: component=drainer, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy/drainer
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/prometheus2.0.0.data.metrics'
  [ serial ] - mkdir: host=192.168.40.62, directories=''
  [ serial ] - copycomponent: component=grafana, version=v4.0.16, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - copycomponent: component=drainer, version=v4.0.16, remote=192.168.40.62:/u01/deploy/drainer os=linux, arch=amd64
  [ serial ] - copycomponent: component=prometheus, version=v4.0.16, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/tidb-4001.service, deploy_dir=/u01/deploy2, data_dir=[], log_dir=/u01/deploy2/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/drainer-8249.service, deploy_dir=/u01/deploy/drainer, data_dir=[/u01/deploy/drainer/data.drainer], log_dir=/u01/deploy/drainer/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/data.alertmanager'
  [ serial ] - copycomponent: component=alertmanager, version=, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - backupcomponent: component=prometheus, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy
  [ serial ] - backupcomponent: component=grafana, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy
  [ serial ] - copycomponent: component=prometheus, version=v4.0.16, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - copycomponent: component=grafana, version=v4.0.16, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - backupcomponent: component=alertmanager, currentversion=v3.0.20, remote=192.168.40.62:/u01/deploy
  [ serial ] - copycomponent: component=alertmanager, version=, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/prometheus-9090.service, deploy_dir=/u01/deploy, data_dir=[/u01/deploy/prometheus2.0.0.data.metrics], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/grafana-3000.service, deploy_dir=/u01/deploy, data_dir=[], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/alertmanager-9093.service, deploy_dir=/u01/deploy, data_dir=[/u01/deploy/data.alertmanager], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - upgradecluster
upgrading component pd
        restarting instance 192.168.40.62:2381
        restart instance 192.168.40.62:2381 success
        restarting instance 192.168.40.62:2379
        restart instance 192.168.40.62:2379 success
upgrading component tikv
        evicting 91 leaders from store 192.168.40.62:20161...
          still waitting for 91 store leaders to transfer...
          ... (the same message repeats until the eviction times out after 5 minutes) ...
ignore evicting store leader from 192.168.40.62:20161, error evicting store leader from 192.168.40.62:20161, operation timed out after 5m0s
        restarting instance 192.168.40.62:20161
        restart instance 192.168.40.62:20161 success
        restarting instance 192.168.40.62:20160
        restart instance 192.168.40.62:20160 success
upgrading component pump
        restarting instance 192.168.40.62:8250
        restart instance 192.168.40.62:8250 success
upgrading component tidb
        restarting instance 192.168.40.62:4000
        restart instance 192.168.40.62:4000 success
        restarting instance 192.168.40.62:4001
        restart instance 192.168.40.62:4001 success
upgrading component drainer
        restarting instance 192.168.40.62:8249
        restart instance 192.168.40.62:8249 success
upgrading component prometheus
        restarting instance 192.168.40.62:9090
        restart instance 192.168.40.62:9090 success
upgrading component grafana
        restarting instance 192.168.40.62:3000
        restart instance 192.168.40.62:3000 success
upgrading component alertmanager
        restarting instance 192.168.40.62:9093
        restart instance 192.168.40.62:9093 success
stopping component node_exporter
        stopping instance 192.168.40.62
        stop 192.168.40.62 success
stopping component blackbox_exporter
        stopping instance 192.168.40.62
        stop 192.168.40.62 success
starting component node_exporter
        starting instance 192.168.40.62
        start 192.168.40.62 success
starting component blackbox_exporter
        starting instance 192.168.40.62
        start 192.168.40.62 success
upgraded cluster `test-cluster` successfully
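TiUP evicts the region leaders from each TiKV store before restarting it; when the eviction does not finish within the timeout (5 minutes in the log above), it logs the error, skips the eviction and restarts the instance anyway. If a longer wait is preferred, the timeout can be raised when launching the upgrade (a hedged sketch; --transfer-timeout is given in seconds):
$ tiup cluster upgrade test-cluster v4.0.16 --transfer-timeout 600
After the upgrade finishes, the running version and instance status can be checked again with tiup cluster display test-cluster.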

4.1. Remove parameters not supported by v5.4.0

pessimistic-txn.enabled  -- this parameter is not supported in v5.4.0 and must be removed before the upgrade
tiup cluster edit-config test-cluster
tiup cluster reload test-cluster
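To double-check where the deprecated setting was originally defined before removing it, the config files copied during the import can be searched (a hedged sketch; the ansible-imported-configs path is taken from the import log above):
$ grep -rn "pessimistic-txn" /home/tidb/.tiup/storage/cluster/clusters/test-cluster/ansible-imported-configs/
If only the tidb servers need to pick up the change, the reload can also be restricted to that role with tiup cluster reload test-cluster -R tidb.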

4.2. Check whether the cluster meets the upgrade requirements

[tidb@tidbser1 manifests]$ tiup cluster check test-cluster --cluster
tiup is checking updates for component cluster ...
starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.4/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.4/tiup-cluster check test-cluster --cluster
  download necessary tools
  - downloading check tools for linux/amd64 ... done
  collect basic system information
  collect basic system information
  - getting system info of 192.168.40.62:22 ... done
  check system requirements
  check system requirements
  check system requirements
  - checking node 192.168.40.62 ... done
  - checking node 192.168.40.62 ... done
  - checking node 192.168.40.62 ... done
  - checking node 192.168.40.62 ... done
  - checking node 192.168.40.62 ... done
  - checking node 192.168.40.62 ... done
  - checking node 192.168.40.62 ... done
  - checking node 192.168.40.62 ... done
  - checking node 192.168.40.62 ... done
  - checking node 192.168.40.62 ... done
  - checking node 192.168.40.62 ... done
  cleanup check files
  - cleanup check files on 192.168.40.62:22 ... done
  - cleanup check files on 192.168.40.62:22 ... done
  - cleanup check files on 192.168.40.62:22 ... done
  - cleanup check files on 192.168.40.62:22 ... done
  - cleanup check files on 192.168.40.62:22 ... done
  - cleanup check files on 192.168.40.62:22 ... done
  - cleanup check files on 192.168.40.62:22 ... done
  - cleanup check files on 192.168.40.62:22 ... done
  - cleanup check files on 192.168.40.62:22 ... done
  - cleanup check files on 192.168.40.62:22 ... done
  - cleanup check files on 192.168.40.62:22 ... done
node           check         result  message
----           -----         ------  -------
192.168.40.62  os-version    pass    os is red hat enterprise linux server 7.6 (maipo) 7.6
192.168.40.62  cpu-governor  warn    unable to determine current cpu frequency governor policy
192.168.40.62  memory        pass    memory size is 16384mb
192.168.40.62  selinux       pass    selinux is disabled
192.168.40.62  command       pass    numactl: policy: default
192.168.40.62  permission    pass    /u01/deploy2 is writable
192.168.40.62  permission    pass    /u01/deploy/data is writable
192.168.40.62  permission    pass    /u01/deploy/data.pd is writable
192.168.40.62  permission    pass    /u01/deploy2/data is writable
192.168.40.62  permission    pass    /u01/deploy2/data.pd is writable
192.168.40.62  permission    pass    /u01/deploy/prometheus2.0.0.data.metrics is writable
192.168.40.62  permission    pass    /u01/deploy is writable
192.168.40.62  permission    pass    /u01/deploy/pump is writable
192.168.40.62  permission    pass    /u01/deploy/pump/data.pump is writable
192.168.40.62  permission    pass    /u01/deploy/drainer is writable
192.168.40.62  permission    pass    /u01/deploy/drainer/data.drainer is writable
192.168.40.62  permission    pass    /u01/deploy/data.alertmanager is writable
192.168.40.62  network       pass    network speed of ens33 is 1000mb
192.168.40.62  network       pass    network speed of ens32 is 1000mb
192.168.40.62  disk          fail    multiple components tikv:/u01/deploy2/data,tikv:/u01/deploy/data are using the same partition 192.168.40.62:/u01 as data dir
192.168.40.62  thp           pass    thp is disabled
192.168.40.62  cpu-cores     pass    number of cpu cores / threads: 8
checking region status of the cluster test-cluster...
all regions are healthy.
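The only failed item is the disk check: both TiKV instances use the same /u01 partition, which is acceptable on a single-host test cluster but should be avoided in production. For items that tiup can repair automatically (sysctl settings, ulimit and the like), the check can be re-run with the repair option (a hedged sketch):
$ tiup cluster check test-cluster --cluster --apply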

4.3. Upgrade TiDB to v5.4.0

[tidb@tidbser1 data]$ tiup cluster upgrade test-cluster v5.4.0
tiup is checking updates for component cluster ...
starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.4/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.4/tiup-cluster upgrade test-cluster v5.4.0
this operation will upgrade tidb v4.0.16 cluster test-cluster to v5.4.0.
do you want to continue? [y/n]:(default=n) y
upgrading cluster...
  [ serial ] - sshkeyset: privatekey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publickey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [parallel] - userssh: user=tidb, host=192.168.40.62
  [ serial ] - download: component=drainer, version=v5.4.0, os=linux, arch=amd64
  [ serial ] - download: component=tikv, version=v5.4.0, os=linux, arch=amd64
  [ serial ] - download: component=pump, version=v5.4.0, os=linux, arch=amd64
  [ serial ] - download: component=tidb, version=v5.4.0, os=linux, arch=amd64
  [ serial ] - download: component=pd, version=v5.4.0, os=linux, arch=amd64
failed to download /pd-v5.4.0-linux-amd64.tar.gz(download from https://tiup-mirrors.pingcap.com/pd-v5.4.0-linux-amd64.tar.gz failed: stream error: stream id 1; internal_error; received from peer), retrying...
  [ serial ] - download: component=prometheus, version=v5.4.0, os=linux, arch=amd64
  [ serial ] - download: component=grafana, version=v5.4.0, os=linux, arch=amd64
failed to download /pump-v5.4.0-linux-amd64.tar.gz(download from https://tiup-mirrors.pingcap.com/pump-v5.4.0-linux-amd64.tar.gz failed: stream error: stream id 1; internal_error; received from peer), retrying...
failed to download /tikv-v5.4.0-linux-amd64.tar.gz(download from https://tiup-mirrors.pingcap.com/tikv-v5.4.0-linux-amd64.tar.gz failed: stream error: stream id 1; internal_error; received from peer), retrying...
failed to download /tidb-v5.4.0-linux-amd64.tar.gz(download from https://tiup-mirrors.pingcap.com/tidb-v5.4.0-linux-amd64.tar.gz failed: stream error: stream id 1; internal_error; received from peer), retrying...
  [ serial ] - download: component=alertmanager, version=, os=linux, arch=amd64
failed to download /prometheus-v5.4.0-linux-amd64.tar.gz(download from https://tiup-mirrors.pingcap.com/prometheus-v5.4.0-linux-amd64.tar.gz failed: stream error: stream id 1; internal_error; received from peer), retrying...
failed to download /grafana-v5.4.0-linux-amd64.tar.gz(download from https://tiup-mirrors.pingcap.com/grafana-v5.4.0-linux-amd64.tar.gz failed: stream error: stream id 1; internal_error; received from peer), retrying...
failed to download /tikv-v5.4.0-linux-amd64.tar.gz(download from https://tiup-mirrors.pingcap.com/tikv-v5.4.0-linux-amd64.tar.gz failed: stream error: stream id 3; internal_error; received from peer), retrying...
failed to download /prometheus-v5.4.0-linux-amd64.tar.gz(download from https://tiup-mirrors.pingcap.com/prometheus-v5.4.0-linux-amd64.tar.gz failed: stream error: stream id 3; internal_error; received from peer), retrying...
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/pump/data.pump'
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/data.pd'
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy2/data'
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/data'
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy2/data.pd'
  [ serial ] - backupcomponent: component=tikv, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy
  [ serial ] - backupcomponent: component=pd, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy2
  [ serial ] - backupcomponent: component=tikv, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy2
  [ serial ] - backupcomponent: component=pd, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy
  [ serial ] - copycomponent: component=pd, version=v5.4.0, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - copycomponent: component=tikv, version=v5.4.0, remote=192.168.40.62:/u01/deploy2 os=linux, arch=amd64
  [ serial ] - backupcomponent: component=pump, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy/pump
  [ serial ] - copycomponent: component=pump, version=v5.4.0, remote=192.168.40.62:/u01/deploy/pump os=linux, arch=amd64
  [ serial ] - copycomponent: component=pd, version=v5.4.0, remote=192.168.40.62:/u01/deploy2 os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/pd-2379.service, deploy_dir=/u01/deploy, data_dir=[/u01/deploy/data.pd], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/pump-8250.service, deploy_dir=/u01/deploy/pump, data_dir=[/u01/deploy/pump/data.pump], log_dir=/u01/deploy/pump/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/tikv-20161.service, deploy_dir=/u01/deploy2, data_dir=[/u01/deploy2/data], log_dir=/u01/deploy2/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - copycomponent: component=tikv, version=v5.4.0, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/pd-2381.service, deploy_dir=/u01/deploy2, data_dir=[/u01/deploy2/data.pd], log_dir=/u01/deploy2/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - mkdir: host=192.168.40.62, directories=''
  [ serial ] - backupcomponent: component=tidb, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy
  [ serial ] - copycomponent: component=tidb, version=v5.4.0, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/tikv-20160.service, deploy_dir=/u01/deploy, data_dir=[/u01/deploy/data], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - mkdir: host=192.168.40.62, directories=''
  [ serial ] - backupcomponent: component=tidb, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy2
  [ serial ] - copycomponent: component=tidb, version=v5.4.0, remote=192.168.40.62:/u01/deploy2 os=linux, arch=amd64
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/drainer/data.drainer'
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/prometheus2.0.0.data.metrics'
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/tidb-4000.service, deploy_dir=/u01/deploy, data_dir=[], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - copycomponent: component=prometheus, version=v5.4.0, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/tidb-4001.service, deploy_dir=/u01/deploy2, data_dir=[], log_dir=/u01/deploy2/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - backupcomponent: component=drainer, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy/drainer
  [ serial ] - mkdir: host=192.168.40.62, directories=''
  [ serial ] - copycomponent: component=grafana, version=v5.4.0, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - backupcomponent: component=prometheus, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy
  [ serial ] - copycomponent: component=drainer, version=v5.4.0, remote=192.168.40.62:/u01/deploy/drainer os=linux, arch=amd64
  [ serial ] - mkdir: host=192.168.40.62, directories='/u01/deploy/data.alertmanager'
  [ serial ] - copycomponent: component=prometheus, version=v5.4.0, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/drainer-8249.service, deploy_dir=/u01/deploy/drainer, data_dir=[/u01/deploy/drainer/data.drainer], log_dir=/u01/deploy/drainer/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - copycomponent: component=alertmanager, version=, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - backupcomponent: component=grafana, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/prometheus-9090.service, deploy_dir=/u01/deploy, data_dir=[/u01/deploy/prometheus2.0.0.data.metrics], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - copycomponent: component=grafana, version=v5.4.0, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - backupcomponent: component=alertmanager, currentversion=v4.0.16, remote=192.168.40.62:/u01/deploy
  [ serial ] - copycomponent: component=alertmanager, version=, remote=192.168.40.62:/u01/deploy os=linux, arch=amd64
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/alertmanager-9093.service, deploy_dir=/u01/deploy, data_dir=[/u01/deploy/data.alertmanager], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - initconfig: cluster=test-cluster, user=tidb, host=192.168.40.62, path=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/grafana-3000.service, deploy_dir=/u01/deploy, data_dir=[], log_dir=/u01/deploy/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache
  [ serial ] - upgradecluster
upgrading component pd
        restarting instance 192.168.40.62:2379
        restart instance 192.168.40.62:2379 success
        restarting instance 192.168.40.62:2381
        restart instance 192.168.40.62:2381 success
upgrading component tikv
        evicting 91 leaders from store 192.168.40.62:20161...
          still waitting for 91 store leaders to transfer...
          ... (the same message repeats until the eviction times out after 5 minutes) ...
ignore evicting store leader from 192.168.40.62:20161, error evicting store leader from 192.168.40.62:20161, operation timed out after 5m0s
        restarting instance 192.168.40.62:20161
        restart instance 192.168.40.62:20161 success
        restarting instance 192.168.40.62:20160
        restart instance 192.168.40.62:20160 success
upgrading component pump
        restarting instance 192.168.40.62:8250
        restart instance 192.168.40.62:8250 success
upgrading component tidb
        restarting instance 192.168.40.62:4000
        restart instance 192.168.40.62:4000 success
        restarting instance 192.168.40.62:4001
        restart instance 192.168.40.62:4001 success
upgrading component drainer
        restarting instance 192.168.40.62:8249
        restart instance 192.168.40.62:8249 success
upgrading component prometheus
        restarting instance 192.168.40.62:9090
        restart instance 192.168.40.62:9090 success
upgrading component grafana
        restarting instance 192.168.40.62:3000
        restart instance 192.168.40.62:3000 success
upgrading component alertmanager
        restarting instance 192.168.40.62:9093
        restart instance 192.168.40.62:9093 success
stopping component node_exporter
        stopping instance 192.168.40.62
        stop 192.168.40.62 success
stopping component blackbox_exporter
        stopping instance 192.168.40.62
        stop 192.168.40.62 success
starting component node_exporter
        starting instance 192.168.40.62
        start 192.168.40.62 success
starting component blackbox_exporter
        starting instance 192.168.40.62
        start 192.168.40.62 success
upgraded cluster `test-cluster` successfully
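Besides tiup cluster display, the version reported by the SQL layer can be confirmed from any MySQL client (host and port taken from the topology above; the root password prompt is an assumption for this environment):
$ mysql -h 192.168.40.62 -P 4000 -u root -p -e "select tidb_version()\G"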

One remaining issue with the PD location-labels setting (shown as a screenshot in the original post) could not be fixed by editing the configuration file with tiup and then reloading; it was finally resolved by connecting to PD directly:

$ pd-ctl -u http://192.168.40.62:2379 -i 
»  config set location-labels dc,zone,host
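The new value can be verified without entering interactive mode, using pd-ctl's single-command form (a hedged sketch):
$ pd-ctl -u http://192.168.40.62:2379 config show | grep location-labels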