
TiDB 7.5 LTS Cluster Installation and Configuration Guide

Original by 潇湘秦, 2023-12-20

A project was recently scheduled to go live. While evaluating the architecture with the development team, we learned that the application is OLTP but with highly concentrated data: a handful of extremely large tables. To keep transactions efficient on a MySQL cluster, we would inevitably have to shard those tables, which creates a heavy operations burden down the road. TiDB, on the other hand, is a distributed cluster, and TiKV's row-store model is well suited to transactional workloads on large tables, so we chose TiDB as the underlying database architecture for this application.

For the basic hardware and software requirements of the cluster, refer to the official documentation below; they are not repeated here.

https://docs.pingcap.com/zh/tidb/stable/hardware-and-software-requirements

The features of the TiDB 7.5 LTS (long-term support) release are introduced at the following link:

https://www.modb.pro/db/1734025930354548736

PS: In this guide, commands to be executed are shown in bold italic by default.

II. Pre-installation Preparation

This installation is a test environment consisting of 1 TiDB/PD node and 3 TiKV nodes.

Configuration:

Role      Spec                                   IP
-------   ------------------------------------   ----------------------
TiDB/PD   1 × 8C/16GB RAM, 200GB, CentOS 7.9     10.189.60.201
TiKV      3 × 8C/32GB RAM, 200GB, CentOS 7.9     10.189.60.202/203/204

  1. Internet access must be available, with an external yum repository configured (the dependency packages, TiUP, the MySQL client, etc. are all pulled over the network).
  2. Install the dependency packages.

Libraries required to compile and build TiDB, and their versions:

Dependency   Version
----------   ---------------------------
golang       1.21 or later
rust         nightly-2022-07-31 or later
gcc          7.x
llvm         13.0 or later
ntp          none
ntpdate      none
sshpass      1.06 or later
numactl      2.0.12 or later

2.1 Install the dependency packages

yum install -y gcc llvm sshpass numactl ntp ntpdate

2.2 Install the Go package, 1.21 or later

Download go1.21.5.linux-amd64.tar.gz from the official Go website and upload it to every host in the cluster.

chown root:root go1.21.5.linux-amd64.tar.gz ## fix ownership
tar -C /usr/local -xzf go1.21.5.linux-amd64.tar.gz ## extract into the target directory
vi .bash_profile ## edit root's environment variables

PATH=$PATH:$HOME/bin:/usr/local/go/bin

# go version ## after reloading the environment, check the Go version
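To confirm the toolchain is actually visible in the current shell, a minimal check (the expected output assumes the go1.21.5 tarball used above):

source ~/.bash_profile ## reload the environment in the current shell

go version ## should print something like: go version go1.21.5 linux/amd64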

2.3 Install the Rust toolchain

curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh

After installation completes, confirm the version:

# rustc --version

3. Set up the temporary directory

sudo mkdir /tmp/tidb

If the directory /tmp/tidb already exists, make sure it is writable.

sudo chmod -R 777 /tmp/tidb

4. Disable the firewall

Check the firewall status (CentOS 7.x as an example):

sudo firewall-cmd --state

sudo systemctl status firewalld.service

Stop the firewall service:

sudo systemctl stop firewalld.service

Disable the firewall service from starting at boot:

sudo systemctl disable firewalld.service

Check the firewall status again:

sudo systemctl status firewalld.service

5. Configure the NTP service

yum install -y ntp ntpdate

systemctl start ntpd.service

systemctl enable ntpd.service

systemctl status ntpd.service

6. Check and disable swap

echo "vm.swappiness = 0">> /etc/sysctl.conf

swapoff -a

sysctl -p

vi /etc/fstab

# comment out the line that mounts the swap partition, e.g.:

#UUID=4f863b5f-20b3-4a99-a680-ddf84a3602a4 swap swap defaults 0 0
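If you would rather not edit fstab by hand, a sed one-liner can comment out the swap entry. This is only a sketch that assumes a conventional fstab layout, so review the file afterwards (a .bak backup is kept):

sudo sed -ri.bak 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /etc/fstab ## comment out active swap entries

free -h ## after swapoff -a the Swap line should read 0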

III. Checking and Configuring OS Optimization Parameters

For TiDB in production, the following operating system optimizations are recommended:

  1. Disable transparent huge pages (THP). Database memory access patterns tend to be sparse rather than contiguous. When high-order memory is badly fragmented, allocating THP pages incurs high latency.
  2. Set the I/O scheduler of the storage media to noop. For high-speed SSD storage, the kernel's I/O scheduling adds overhead. With noop, the kernel passes I/O requests straight to the hardware, which yields better performance; the noop scheduler also generalizes well.
  3. Select the performance mode for the cpufreq module that governs CPU frequency. Pinning the CPU at its highest supported frequency, with no dynamic scaling, gives the best performance.

Because this deployment runs on virtual machines without SSDs, items 2 and 3 do not need to be adjusted here.

3.1 Modify the running kernel configuration to disable transparent huge pages immediately.

echo never > /sys/kernel/mm/transparent_hugepage/enabled

echo never > /sys/kernel/mm/transparent_hugepage/defrag

Check the state after the change:

cat /sys/kernel/mm/transparent_hugepage/enabled

always madvise [never]
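Note that the echo commands above do not survive a reboot. One common way to make the setting persistent on CentOS 7 (an assumption on my part, not part of the steps above) is to replay them from /etc/rc.d/rc.local:

cat << 'EOF' >> /etc/rc.d/rc.local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF

chmod +x /etc/rc.d/rc.local ## rc.local only runs at boot if it is executable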

3.2 Run the following commands to modify the sysctl parameters.

echo "fs.file-max = 1000000">> /etc/sysctl.conf

echo "net.core.somaxconn = 32768">> /etc/sysctl.conf

echo "net.ipv4.tcp_tw_recycle = 0">> /etc/sysctl.conf

echo "net.ipv4.tcp_syncookies = 0">> /etc/sysctl.conf

echo "vm.overcommit_memory = 1">> /etc/sysctl.conf

sysctl -p

3.3 Run the following command to configure the user limits in limits.conf.

cat << EOF >> /etc/security/limits.conf
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
EOF
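To sanity-check that the limits apply to the tidb user (the tidb user is created in section IV below, so run this once it exists; a quick sketch, not an official verification step):

su - tidb -c 'ulimit -n -s' ## expect open files 1000000 and stack size 32768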

IV. Manually Configuring SSH Mutual Trust and Passwordless sudo

This section applies when you need to configure mutual trust manually from the control machine to the target nodes. In the usual, recommended flow, the TiUP deployment tool configures SSH mutual trust and passwordless login automatically, in which case this section can be skipped.

Configuring mutual trust here is similar to configuring it for Oracle 11g RAC.

Log in to each target machine as root, create the tidb user, and set its login password.

useradd tidb && \

passwd tidb

Configure passwordless sudo:

visudo

tidb ALL=(ALL) NOPASSWD: ALL

Configure mutual trust:

ssh-keygen -t rsa

ssh-copy-id -i ~/.ssh/id_rsa.pub 10.189.60.201 ## repeat for each node's IP

Because TiDB and PD share one host in this deployment, mutual trust to the local machine itself also needs to be configured.

Note: because the default SSH port has been changed, ssh must be invoked with the port flag:

ssh -p xxxx ip/hostname

To avoid passing -p every time, edit the client config:

vi /etc/ssh/ssh_config

Uncomment the port line and set it to the new port number.
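Alternatively, the port can be pinned per user instead of system-wide. A minimal sketch, assuming all nodes share the 10.189.60.* prefix and port 11122:

cat >> ~/.ssh/config << 'EOF'
Host 10.189.60.*
    Port 11122
EOF

chmod 600 ~/.ssh/config ## ssh refuses config files with loose permissions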


Confirm that mutual trust works:

[tidb@yzptltidb01t ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.189.60.204
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/tidb/.ssh/id_rsa.pub"
The authenticity of host '[10.189.60.204]:11122 ([10.189.60.204]:11122)' can't be established.
ECDSA key fingerprint is SHA256:bq5xo2 g76dkfqsjx heznuwutknsfukyy6wrwu3lyc.
ECDSA key fingerprint is MD5:5f:dc:02:69:20:92:cf:4d:56:26:f0:5c:bd:f5:56:ee.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
tidb@10.189.60.204's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '10.189.60.204'"
and check to make sure that only the key(s) you wanted were added.
[tidb@yzptltidb01t ~]$ ssh 10.189.60.204 ## log in directly to the target host
[tidb@yzptltikv03t ~]$
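With keys distributed to every node, a short loop verifies passwordless login from the control machine in one pass (this assumes the port handling above; otherwise add -p 11122):

for ip in 10.189.60.201 10.189.60.202 10.189.60.203 10.189.60.204; do ssh tidb@$ip hostname; done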

V. Deploying the Cluster

TiUP is the cluster operations tool introduced in TiDB 4.0. TiUP cluster is a cluster management component, written in Golang, provided by TiUP. With the tiup cluster component you can carry out day-to-day operations: deploying, starting, stopping, and destroying a TiDB cluster, elastic scale-out/scale-in, upgrades, and managing cluster parameters.

Currently TiUP can deploy TiDB, TiFlash, TiDB Binlog, TiCDC, and the monitoring system.

This article walks through the concrete steps of deploying a TiDB cluster topology.

5.1 Download and install TiUP

Download and install TiUP on the control machine; in this test cluster that is the TiDB/PD host.

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

[root@yzptltidb01t ~]# source /root/.bash_profile ## reload the environment variables

[root@yzptltidb01t ~]# tiup cluster ## install the tiup cluster component

tiup is checking updates for component cluster ...timeout(2s)!

the component `cluster` version is not installed; downloading from repository.

download https://tiup-mirrors.pingcap.com/cluster-v1.14.0-linux-amd64.tar.gz 8.75 mib / 8.75 mib 100.00% 38.12 mib/s ## a package was reported missing; I downloaded and installed it afterwards, though other posts suggest it also works without doing so

starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster

deploy a tidb cluster for production

usage:

tiup cluster [command]

available commands:

check perform preflight checks for the cluster.

deploy deploy a cluster for production

start start a tidb cluster

stop stop a tidb cluster

restart restart a tidb cluster

scale-in scale in a tidb cluster

scale-out scale out a tidb cluster

destroy destroy a specified cluster

clean (experimental) cleanup a specified cluster

upgrade upgrade a specified tidb cluster

display display information of a tidb cluster

prune destroy and remove instances that is in tombstone state

list list all clusters

audit show audit log of cluster operation

import import an exist tidb cluster from tidb-ansible

edit-config edit tidb cluster config

show-config show tidb cluster config

reload reload a tidb cluster's config and restart if needed

patch replace the remote package with a specified package and restart the service

rename rename the cluster

enable enable a tidb cluster automatically at boot

disable disable automatic enabling of tidb clusters at boot

replay replay previous operation and skip successed steps

template print topology template

tls enable/disable tls between tidb components

meta backup/restore meta information

rotatessh rotate ssh keys on all nodes

help help about any command

completion generate the autocompletion script for the specified shell

flags:

-c, --concurrency int max number of parallel tasks allowed (default 5)

--format string (experimental) the format of output, available values are [default, json] (default "default")

-h, --help help for tiup

--ssh string (experimental) the executor type: 'builtin', 'system', 'none'.

--ssh-timeout uint timeout in seconds to connect host via ssh, ignored for operations that don't need an ssh connection. (default 5)

-v, --version version for tiup

--wait-timeout uint timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)

-y, --yes skip all confirmations and assumes 'yes'

use "tiup cluster help [command]" for more information about a command.

[root@yzptltidb01t ~]# wget https://tiup-mirrors.pingcap.com/cluster-v1.14.0-linux-amd64.tar.gz ## download this package

--2023-12-18 16:41:12-- https://tiup-mirrors.pingcap.com/cluster-v1.14.0-linux-amd64.tar.gz

resolving tiup-mirrors.pingcap.com (tiup-mirrors.pingcap.com)... 120.240.109.47, 120.241.84.45, 111.48.217.20

connecting to tiup-mirrors.pingcap.com (tiup-mirrors.pingcap.com)|120.240.109.47|:443... connected.

http request sent, awaiting response... 200 ok

length: 9178241 (8.8m) [application/x-compressed]

saving to: ‘cluster-v1.14.0-linux-amd64.tar.gz’

100%[======================================================================================================================>] 9,178,241 16.2mb/s in 0.5s

2023-12-18 16:41:13 (16.2 mb/s) - ‘cluster-v1.14.0-linux-amd64.tar.gz’ saved [9178241/9178241]

[root@yzptltidb01t ~]# tar -xzf cluster-v1.14.0-linux-amd64.tar.gz

[root@yzptltidb01t ~]#

5.2 Update TiUP

[root@yzptltidb01t ~]# tiup update --self && tiup update cluster

download https://tiup-mirrors.pingcap.com/tiup-v1.14.0-linux-amd64.tar.gz 4.83 mib / 4.83 mib 100.00% 26.31 mib/s

updated successfully!

component cluster version v1.14.0 is already installed

updated successfully!

[root@yzptltidb01t ~]#

[root@yzptltidb01t ~]# tiup --binary cluster ## show the path of the updated version

/root/.tiup/components/cluster/v1.14.0/tiup-cluster

5.3 Configure the topology file

[root@yzptltidb01t ~]# cat topo.yaml

# # global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 11122
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

pd_servers:
  - host: 10.189.60.201

tidb_servers:
  - host: 10.189.60.201

tikv_servers:
  - host: 10.189.60.202
  - host: 10.189.60.203
  - host: 10.189.60.204

monitoring_servers:
  - host: 10.189.60.201

grafana_servers:
  - host: 10.189.60.201

alertmanager_servers:
  - host: 10.189.60.201
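Rather than writing this file from scratch, tiup can print a full topology template to start from (remember that ssh_port must match the non-default port 11122 used in this environment):

tiup cluster template > topo.yaml ## then trim the template down to the hosts above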

5.4 Pre-deployment checks

tiup cluster check ./topo.yaml --user root -p

[root@yzptltidb01t ~]# tiup cluster check ./topo.yaml --user root -p

tiup is checking updates for component cluster ...

starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster check ./topo.yaml --user root -p

input ssh password:

detect cpu arch name

- detecting node 10.189.60.201 arch info ... done

- detecting node 10.189.60.202 arch info ... done

- detecting node 10.189.60.203 arch info ... done

- detecting node 10.189.60.204 arch info ... done

detect cpu os name

- detecting node 10.189.60.201 os info ... done

- detecting node 10.189.60.202 os info ... done

- detecting node 10.189.60.203 os info ... done

- detecting node 10.189.60.204 os info ... done

download necessary tools

- downloading check tools for linux/amd64 ... done

collect basic system information

- getting system info of 10.189.60.201:11122 ... done

- getting system info of 10.189.60.202:11122 ... done

- getting system info of 10.189.60.203:11122 ... done

- getting system info of 10.189.60.204:11122 ... done

check time zone

- checking node 10.189.60.201 ... done

- checking node 10.189.60.202 ... done

- checking node 10.189.60.203 ... done

- checking node 10.189.60.204 ... done

check system requirements

- checking node 10.189.60.201 ... done

- checking node 10.189.60.202 ... done

- checking node 10.189.60.203 ... done

- checking node 10.189.60.204 ... done

cleanup check files

- cleanup check files on 10.189.60.201:11122 ... done

- cleanup check files on 10.189.60.202:11122 ... done

- cleanup check files on 10.189.60.203:11122 ... done

- cleanup check files on 10.189.60.204:11122 ... done

node check result message

---- ----- ------ -------

10.189.60.202 timezone pass time zone is the same as the first pd machine: asia/shanghai

10.189.60.202 cpu-governor warn unable to determine current cpu frequency governor policy

10.189.60.202 swap warn swap is enabled, please disable it for best performance

10.189.60.202 memory pass memory size is 32768mb

10.189.60.202 thp pass thp is disabled

10.189.60.202 command pass numactl: policy: default

10.189.60.202 os-version pass os is centos linux 7 (core) 7.9.2009

10.189.60.202 cpu-cores pass number of cpu cores / threads: 8

10.189.60.202 network pass network speed of ens192 is 10000mb

10.189.60.202 disk warn mount point / does not have 'noatime' option set

10.189.60.202 selinux pass selinux is disabled

10.189.60.203 swap warn swap is enabled, please disable it for best performance

10.189.60.203 memory pass memory size is 32768mb

10.189.60.203 disk warn mount point / does not have 'noatime' option set

10.189.60.203 command pass numactl: policy: default

10.189.60.203 timezone pass time zone is the same as the first pd machine: asia/shanghai

10.189.60.203 os-version pass os is centos linux 7 (core) 7.9.2009

10.189.60.203 network pass network speed of ens192 is 10000mb

10.189.60.203 selinux pass selinux is disabled

10.189.60.203 thp pass thp is disabled

10.189.60.203 cpu-cores pass number of cpu cores / threads: 8

10.189.60.203 cpu-governor warn unable to determine current cpu frequency governor policy

10.189.60.204 selinux pass selinux is disabled

10.189.60.204 cpu-cores pass number of cpu cores / threads: 8

10.189.60.204 swap warn swap is enabled, please disable it for best performance

10.189.60.204 network pass network speed of ens192 is 10000mb

10.189.60.204 disk warn mount point / does not have 'noatime' option set

10.189.60.204 thp pass thp is disabled

10.189.60.204 command pass numactl: policy: default

10.189.60.204 timezone pass time zone is the same as the first pd machine: asia/shanghai

10.189.60.204 os-version pass os is centos linux 7 (core) 7.9.2009

10.189.60.204 cpu-governor warn unable to determine current cpu frequency governor policy

10.189.60.204 memory pass memory size is 32768mb

10.189.60.201 os-version pass os is centos linux 7 (core) 7.9.2009

10.189.60.201 cpu-governor warn unable to determine current cpu frequency governor policy

10.189.60.201 swap warn swap is enabled, please disable it for best performance

10.189.60.201 memory pass memory size is 16384mb

10.189.60.201 sysctl fail vm.swappiness = 60, should be 0

10.189.60.201 command fail numactl not usable, bash: numactl: command not found

## the failures above: swap is not disabled and numactl is not installed on this node

10.189.60.201 cpu-cores pass number of cpu cores / threads: 8

10.189.60.201 network pass network speed of ens192 is 10000mb

10.189.60.201 disk warn mount point / does not have 'noatime' option set

10.189.60.201 selinux pass selinux is disabled

10.189.60.201 thp pass thp is disabled

After fixing these items (disable swap, install numactl), run the check again and make sure there are no fail entries.
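tiup can also attempt the repairs itself via the --apply flag; it fixes what it can (for example the sysctl settings), while items such as missing packages may still need manual handling:

tiup cluster check ./topo.yaml --apply --user root -p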

5.5 Install TiDB

[root@yzptltidb01t ~]# tiup list tidb ## list the versions available to install

v6.1.2   2022-10-24T15:16:17+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.1.3   2022-12-05T11:50:23+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.1.4   2023-02-08T11:34:10+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.1.5   2023-02-28T11:23:57+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.1.6   2023-04-12T11:05:35+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.1.7   2023-07-12T11:22:57+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.2.0   2022-08-23T09:14:36+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.3.0   2022-09-30T10:59:36+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.4.0   2022-11-17T11:26:23+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.5.0   2022-12-29T11:32:06+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.5.1   2023-03-10T13:36:50+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.5.2   2023-04-21T10:52:46+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.5.3   2023-06-14T14:36:43+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.5.4   2023-08-28T11:40:24+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.5.5   2023-09-21T11:51:14+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.5.6   2023-12-07T07:12:10Z        darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.6.0   2023-02-20T16:43:16+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v7.0.0   2023-03-30T10:33:19+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v7.1.0   2023-05-31T14:49:49+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v7.1.1   2023-07-24T11:39:38+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v7.1.2   2023-10-25T03:58:13Z        darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v7.2.0   2023-06-29T11:57:48+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v7.3.0   2023-08-14T12:41:31+08:00   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v7.4.0   2023-10-12T04:07:12Z        darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v7.5.0   2023-12-01T03:55:55Z        darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v7.6.0-alpha-nightly-20231216   2023-12-16T15:17:07Z   darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

Now install TiDB itself.

# this installation uses TiDB 7.5.0, the second LTS (long-term support) release of the 7.x line

tiup cluster deploy test-tidb v7.5.0 ./topo.yaml --user root -p

## test-tidb is the name of the cluster being deployed

[root@yzptltidb01t ~]# tiup cluster deploy test-tidb v7.5.0 ./topo.yaml --user root -p

tiup is checking updates for component cluster ...

starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster deploy test-tidb v7.5.0 ./topo.yaml --user root -p

input ssh password:

detect cpu arch name

- detecting node 10.189.60.201 arch info ... done

- detecting node 10.189.60.202 arch info ... done

- detecting node 10.189.60.203 arch info ... done

- detecting node 10.189.60.204 arch info ... done

detect cpu os name

- detecting node 10.189.60.201 os info ... done

- detecting node 10.189.60.202 os info ... done

- detecting node 10.189.60.203 os info ... done

- detecting node 10.189.60.204 os info ... done

please confirm your topology:

cluster type: tidb

cluster name: test-tidb

cluster version: v7.5.0

role host ports os/arch directories

---- ---- ----- ------- -----------

pd 10.189.60.201 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379

tikv 10.189.60.202 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160

tikv 10.189.60.203 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160

tikv 10.189.60.204 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160

tidb 10.189.60.201 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000

prometheus 10.189.60.201 9090/12020 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090

grafana 10.189.60.201 3000 linux/x86_64 /tidb-deploy/grafana-3000

alertmanager 10.189.60.201 9093/9094 linux/x86_64 /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093

attention:

1. if the topology is not what you expected, check your yaml file.

2. please confirm there is no port/directory conflicts in same host.

do you want to continue? [y/n]: (default=n) y

generate ssh keys ... done

download tidb components

- download pd:v7.5.0 (linux/amd64) ... done

- download tikv:v7.5.0 (linux/amd64) ... done

- download tidb:v7.5.0 (linux/amd64) ... done

- download prometheus:v7.5.0 (linux/amd64) ... done

- download grafana:v7.5.0 (linux/amd64) ... done

- download alertmanager: (linux/amd64) ... done

- download node_exporter: (linux/amd64) ... done

- download blackbox_exporter: (linux/amd64) ... done

initialize target host environments

- prepare 10.189.60.203:11122 ... done

- prepare 10.189.60.204:11122 ... done

- prepare 10.189.60.201:11122 ... done

- prepare 10.189.60.202:11122 ... done

deploy tidb instance

- copy pd -> 10.189.60.201 ... done

- copy tikv -> 10.189.60.202 ... done

- copy tikv -> 10.189.60.203 ... done

- copy tikv -> 10.189.60.204 ... done

- copy tidb -> 10.189.60.201 ... done

- copy prometheus -> 10.189.60.201 ... done

- copy grafana -> 10.189.60.201 ... done

- copy alertmanager -> 10.189.60.201 ... done

- deploy node_exporter -> 10.189.60.201 ... done

- deploy node_exporter -> 10.189.60.202 ... done

- deploy node_exporter -> 10.189.60.203 ... done

- deploy node_exporter -> 10.189.60.204 ... done

- deploy blackbox_exporter -> 10.189.60.202 ... done

- deploy blackbox_exporter -> 10.189.60.203 ... done

- deploy blackbox_exporter -> 10.189.60.204 ... done

- deploy blackbox_exporter -> 10.189.60.201 ... done

copy certificate to remote host

init instance configs

- generate config pd -> 10.189.60.201:2379 ... done

- generate config tikv -> 10.189.60.202:20160 ... done

- generate config tikv -> 10.189.60.203:20160 ... done

- generate config tikv -> 10.189.60.204:20160 ... done

- generate config tidb -> 10.189.60.201:4000 ... done

- generate config prometheus -> 10.189.60.201:9090 ... done

- generate config grafana -> 10.189.60.201:3000 ... done

- generate config alertmanager -> 10.189.60.201:9093 ... done

init monitor configs

- generate config node_exporter -> 10.189.60.204 ... done

- generate config node_exporter -> 10.189.60.201 ... done

- generate config node_exporter -> 10.189.60.202 ... done

- generate config node_exporter -> 10.189.60.203 ... done

- generate config blackbox_exporter -> 10.189.60.203 ... done

- generate config blackbox_exporter -> 10.189.60.204 ... done

- generate config blackbox_exporter -> 10.189.60.201 ... done

- generate config blackbox_exporter -> 10.189.60.202 ... done

enabling component pd

enabling instance 10.189.60.201:2379

enable instance 10.189.60.201:2379 success

enabling component tikv

enabling instance 10.189.60.204:20160

enabling instance 10.189.60.202:20160

enabling instance 10.189.60.203:20160

enable instance 10.189.60.202:20160 success

enable instance 10.189.60.204:20160 success

enable instance 10.189.60.203:20160 success

enabling component tidb

enabling instance 10.189.60.201:4000

enable instance 10.189.60.201:4000 success

enabling component prometheus

enabling instance 10.189.60.201:9090

enable instance 10.189.60.201:9090 success

enabling component grafana

enabling instance 10.189.60.201:3000

enable instance 10.189.60.201:3000 success

enabling component alertmanager

enabling instance 10.189.60.201:9093

enable instance 10.189.60.201:9093 success

enabling component node_exporter

enabling instance 10.189.60.204

enabling instance 10.189.60.202

enabling instance 10.189.60.201

enabling instance 10.189.60.203

enable 10.189.60.204 success

enable 10.189.60.203 success

enable 10.189.60.201 success

enable 10.189.60.202 success

enabling component blackbox_exporter

enabling instance 10.189.60.204

enabling instance 10.189.60.202

enabling instance 10.189.60.201

enabling instance 10.189.60.203

enable 10.189.60.204 success

enable 10.189.60.203 success

enable 10.189.60.201 success

enable 10.189.60.202 success

cluster `test-tidb` deployed successfully, you can start it with command: `tiup cluster start test-tidb --init`

The deployment succeeded.

[root@yzptltidb01t ~]# tiup cluster list

tiup is checking updates for component cluster ...

starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster list

name user version path privatekey

---- ---- ------- ---- ----------

test-tidb tidb v7.5.0 /root/.tiup/storage/cluster/clusters/test-tidb /root/.tiup/storage/cluster/clusters/test-tidb/ssh/id_rsa

Check the status of the newly installed cluster:

[root@yzptltidb01t ~]# tiup cluster display test-tidb

tiup is checking updates for component cluster ...

starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster display test-tidb

cluster type: tidb

cluster name: test-tidb

cluster version: v7.5.0

deploy user: tidb

ssh type: builtin

grafana url: http://10.189.60.201:3000

id role host ports os/arch status data dir deploy dir

-- ---- ---- ----- ------- ------ -------- ----------

10.189.60.201:9093 alertmanager 10.189.60.201 9093/9094 linux/x86_64 down /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093

10.189.60.201:3000 grafana 10.189.60.201 3000 linux/x86_64 down - /tidb-deploy/grafana-3000

10.189.60.201:2379 pd 10.189.60.201 2379/2380 linux/x86_64 down /tidb-data/pd-2379 /tidb-deploy/pd-2379

10.189.60.201:9090 prometheus 10.189.60.201 9090/12020 linux/x86_64 down /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090

10.189.60.201:4000 tidb 10.189.60.201 4000/10080 linux/x86_64 down - /tidb-deploy/tidb-4000

10.189.60.202:20160 tikv 10.189.60.202 20160/20180 linux/x86_64 n/a /tidb-data/tikv-20160 /tidb-deploy/tikv-20160

10.189.60.203:20160 tikv 10.189.60.203 20160/20180 linux/x86_64 n/a /tidb-data/tikv-20160 /tidb-deploy/tikv-20160

10.189.60.204:20160 tikv 10.189.60.204 20160/20180 linux/x86_64 n/a /tidb-data/tikv-20160 /tidb-deploy/tikv-20160

total nodes: 8

As you can see, every component in the cluster is currently down.

5.6 Initialize and start the cluster

[root@yzptltidb01t ~]# tiup cluster start test-tidb --init

tiup is checking updates for component cluster ...

starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster start test-tidb --init

starting cluster test-tidb...

[ serial ] - sshkeyset: privatekey=/root/.tiup/storage/cluster/clusters/test-tidb/ssh/id_rsa, publickey=/root/.tiup/storage/cluster/clusters/test-tidb/ssh/id_rsa.pub

[parallel] - userssh: user=tidb, host=10.189.60.201

[parallel] - userssh: user=tidb, host=10.189.60.202

[parallel] - userssh: user=tidb, host=10.189.60.201

[parallel] - userssh: user=tidb, host=10.189.60.204

[parallel] - userssh: user=tidb, host=10.189.60.201

[parallel] - userssh: user=tidb, host=10.189.60.201

[parallel] - userssh: user=tidb, host=10.189.60.203

[parallel] - userssh: user=tidb, host=10.189.60.201

[ serial ] - startcluster

starting component pd

starting instance 10.189.60.201:2379

start instance 10.189.60.201:2379 success

starting component tikv

starting instance 10.189.60.204:20160

starting instance 10.189.60.202:20160

starting instance 10.189.60.203:20160

start instance 10.189.60.204:20160 success

start instance 10.189.60.202:20160 success

start instance 10.189.60.203:20160 success

starting component tidb

starting instance 10.189.60.201:4000

start instance 10.189.60.201:4000 success

starting component prometheus

starting instance 10.189.60.201:9090

start instance 10.189.60.201:9090 success

starting component grafana

starting instance 10.189.60.201:3000

start instance 10.189.60.201:3000 success

starting component alertmanager

starting instance 10.189.60.201:9093

start instance 10.189.60.201:9093 success

starting component node_exporter

starting instance 10.189.60.204

starting instance 10.189.60.201

starting instance 10.189.60.202

starting instance 10.189.60.203

start 10.189.60.204 success

start 10.189.60.203 success

start 10.189.60.202 success

start 10.189.60.201 success

starting component blackbox_exporter

starting instance 10.189.60.204

starting instance 10.189.60.202

starting instance 10.189.60.203

starting instance 10.189.60.201

start 10.189.60.202 success

start 10.189.60.203 success

start 10.189.60.201 success

start 10.189.60.204 success

[ serial ] - updatetopology: cluster=test-tidb

started cluster `test-tidb` successfully

the root password of tidb database has been changed.

the new password is: 'n_mz@vp^17tg6e 504'.

copy and record it to somewhere safe, it is only displayed once, and will not be stored.

the generated password can not be get and shown again.

[root@yzptltidb01t ~]#

Record the password generated at initialization (this behaves much like MySQL's temporary root password).

5.7 Check the cluster status

tiup cluster display test-tidb

Now the status of every component in the cluster shows up.

VI. Starting and Stopping the Cluster

Start the cluster:

tiup cluster start test-tidb

Stop the cluster:

tiup cluster stop test-tidb

Stop a single component.

For example, the following command stops only the tidb component:

tiup cluster stop test-tidb -R tidb

Stop a single node of a component (e.g. a TiKV node):

tiup cluster stop test-tidb -N 10.189.60.202:20160

Cluster status after the node has been stopped.

Start a single node of a component (TiKV):

tiup cluster start test-tidb -N 10.189.60.202:20160

After changing a component's configuration, restart the component with:

tiup cluster reload ${cluster-name} -R <component>

for example:

tiup cluster reload test-tidb -R tidb
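The usual flow is to edit the configuration first and then reload only the affected role; tikv below is just an illustrative choice:

tiup cluster edit-config test-tidb ## opens the cluster configuration in an editor

tiup cluster reload test-tidb -R tikv ## rolling-restarts only the tikv nodes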

VII. Connecting to the TiDB Cluster from the Command Line

TiDB speaks the MySQL protocol, so a MySQL client is needed to connect; install one if it is not already present.

CentOS 7 systems ship with MariaDB libraries by default, which must be removed first.

Remove the bundled MariaDB:

[root@yzptltidb01t ~]# rpm -qa |grep mariadb

mariadb-libs-5.5.68-1.el7.x86_64

[root@yzptltidb01t ~]#

[root@yzptltidb01t ~]# rpm -e --nodeps mariadb-libs-5.5.68-1.el7.x86_64

[root@yzptltidb01t ~]# rpm -qa |grep mariadb

Install the MySQL client:

#yum -y install http://dev.mysql.com/get/mysql80-community-release-el7-10.noarch.rpm

# rpm --import https://repo.mysql.com/rpm-gpg-key-mysql-2023

# yum -y install mysql

[root@yzptltidb01t ~]# yum -y install http://dev.mysql.com/get/mysql80-community-release-el7-10.noarch.rpm

loaded plugins: fastestmirror

mysql80-community-release-el7-10.noarch.rpm | 14 kb 00:00:00

examining /var/tmp/yum-root-26cqau/mysql80-community-release-el7-10.noarch.rpm: mysql80-community-release-el7-11.noarch

marking /var/tmp/yum-root-26cqau/mysql80-community-release-el7-10.noarch.rpm to be installed

resolving dependencies

--> running transaction check

---> package mysql80-community-release.noarch 0:el7-11 will be installed

--> finished dependency resolution

dependencies resolved

================================================================================================================================================================

package arch version repository size

================================================================================================================================================================

installing:

mysql80-community-release noarch el7-11 /mysql80-community-release-el7-10.noarch 17 k

transaction summary

================================================================================================================================================================

install 1 package

total size: 17 k

installed size: 17 k

downloading packages:

running transaction check

running transaction test

transaction test succeeded

running transaction

warning: rpmdb altered outside of yum.

** found 2 pre-existing rpmdb problem(s), 'yum check' output follows:

2:postfix-2.10.1-9.el7.x86_64 has missing requires of libmysqlclient.so.18()(64bit)

2:postfix-2.10.1-9.el7.x86_64 has missing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)

installing : mysql80-community-release-el7-11.noarch 1/1

verifying : mysql80-community-release-el7-11.noarch 1/1

installed:

mysql80-community-release.noarch 0:el7-11

complete!

[root@yzptltidb01t ~]# rpm --import https://repo.mysql.com/rpm-gpg-key-mysql-2023

[root@yzptltidb01t ~]# yum -y install mysql

loaded plugins: fastestmirror

loading mirror speeds from cached hostfile

* base: mirrors.bfsu.edu.cn

* extras: mirrors.bfsu.edu.cn

* updates: mirrors.bfsu.edu.cn

base | 3.6 kb 00:00:00

not using downloaded base/repomd.xml because it is older than what we have:

current : mon mar 20 23:22:29 2023

downloaded: fri oct 30 04:03:00 2020

extras | 2.9 kb 00:00:00

mysql-connectors-community | 2.6 kb 00:00:00

mysql-tools-community | 2.6 kb 00:00:00

mysql80-community | 2.6 kb 00:00:00

updates | 2.9 kb 00:00:00

(1/3): mysql-tools-community/x86_64/primary_db | 95 kb 00:00:00

(2/3): mysql-connectors-community/x86_64/primary_db | 102 kb 00:00:00

(3/3): mysql80-community/x86_64/primary_db | 266 kb 00:00:00

resolving dependencies

--> running transaction check

---> package mysql-community-client.x86_64 0:8.0.35-1.el7 will be installed

--> processing dependency: mysql-community-client-plugins = 8.0.35-1.el7 for package: mysql-community-client-8.0.35-1.el7.x86_64

--> processing dependency: mysql-community-libs(x86-64) >= 8.0.11 for package: mysql-community-client-8.0.35-1.el7.x86_64

--> running transaction check

---> package mysql-community-client-plugins.x86_64 0:8.0.35-1.el7 will be installed

---> package mysql-community-libs.x86_64 0:8.0.35-1.el7 will be installed

--> processing dependency: mysql-community-common(x86-64) >= 8.0.11 for package: mysql-community-libs-8.0.35-1.el7.x86_64

--> running transaction check

---> package mysql-community-common.x86_64 0:8.0.35-1.el7 will be installed

--> finished dependency resolution

dependencies resolved

================================================================================================================================================================

package arch version repository size

================================================================================================================================================================

installing:

mysql-community-client x86_64 8.0.35-1.el7 mysql80-community 16 m

installing for dependencies:

mysql-community-client-plugins x86_64 8.0.35-1.el7 mysql80-community 3.5 m

mysql-community-common x86_64 8.0.35-1.el7 mysql80-community 665 k

mysql-community-libs x86_64 8.0.35-1.el7 mysql80-community 1.5 m

transaction summary

================================================================================================================================================================

install 1 package ( 3 dependent packages)

total download size: 22 m

installed size: 116 m

downloading packages:

warning: /var/cache/yum/x86_64/7/mysql80-community/packages/mysql-community-client-plugins-8.0.35-1.el7.x86_64.rpm: header v4 rsa/sha256 signature, key id 3a79bd29: nokey

public key for mysql-community-client-plugins-8.0.35-1.el7.x86_64.rpm is not installed

(1/4): mysql-community-client-plugins-8.0.35-1.el7.x86_64.rpm | 3.5 mb 00:00:00

(2/4): mysql-community-common-8.0.35-1.el7.x86_64.rpm | 665 kb 00:00:00

(3/4): mysql-community-client-8.0.35-1.el7.x86_64.rpm | 16 mb 00:00:01

(4/4): mysql-community-libs-8.0.35-1.el7.x86_64.rpm | 1.5 mb 00:00:00

----------------------------------------------------------------------------------------------------------------------------------------------------------------

total 13 mb/s | 22 mb 00:00:01

retrieving key from file:///etc/pki/rpm-gpg/rpm-gpg-key-mysql-2023

retrieving key from file:///etc/pki/rpm-gpg/rpm-gpg-key-mysql-2022

importing gpg key 0x3a79bd29:

userid : "mysql release engineering "

fingerprint: 859b e8d7 c586 f538 430b 19c2 467b 942d 3a79 bd29

package : mysql80-community-release-el7-11.noarch (@/mysql80-community-release-el7-10.noarch)

from : /etc/pki/rpm-gpg/rpm-gpg-key-mysql-2022

retrieving key from file:///etc/pki/rpm-gpg/rpm-gpg-key-mysql

importing gpg key 0x5072e1f5:

userid : "mysql release engineering "

fingerprint: a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5

package : mysql80-community-release-el7-11.noarch (@/mysql80-community-release-el7-10.noarch)

from : /etc/pki/rpm-gpg/rpm-gpg-key-mysql

running transaction check

running transaction test

transaction test succeeded

running transaction

installing : mysql-community-client-plugins-8.0.35-1.el7.x86_64 1/4

installing : mysql-community-common-8.0.35-1.el7.x86_64 2/4

installing : mysql-community-libs-8.0.35-1.el7.x86_64 3/4

installing : mysql-community-client-8.0.35-1.el7.x86_64 4/4

verifying : mysql-community-client-plugins-8.0.35-1.el7.x86_64 1/4

verifying : mysql-community-libs-8.0.35-1.el7.x86_64 2/4

verifying : mysql-community-client-8.0.35-1.el7.x86_64 3/4

verifying : mysql-community-common-8.0.35-1.el7.x86_64 4/4

installed:

mysql-community-client.x86_64 0:8.0.35-1.el7

dependency installed:

mysql-community-client-plugins.x86_64 0:8.0.35-1.el7 mysql-community-common.x86_64 0:8.0.35-1.el7 mysql-community-libs.x86_64 0:8.0.35-1.el7

complete!

[root@yzptltidb01t ~]#

Connect to the database and change the initial root password:

mysql -h 10.189.60.201 -P 4000 -u root -p

[root@yzptltidb01t ~]# mysql -h 10.189.60.201 -P 4000 -u root -p

enter password:

welcome to the mysql monitor. commands end with ; or \g.

your mysql connection id is 1543503878

server version: 8.0.11-tidb-v7.5.0 tidb server (apache license 2.0) community edition, mysql 8.0 compatible

copyright (c) 2000, 2023, oracle and/or its affiliates.

oracle is a registered trademark of oracle corporation and/or its

affiliates. other names may be trademarks of their respective

owners.

type 'help;' or '\h' for help. type '\c' to clear the current input statement.

mysql>

mysql> show databases;

+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.00 sec)

mysql> use mysql

reading table information for completion of table and column names

you can turn off this feature to get a quicker startup with -a

database changed

mysql> alter user 'root'@'%' identified by 'tidb';

query ok, 0 rows affected (0.03 sec)
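A quick non-interactive check that the new password works and that the server is the expected TiDB build (the password is passed inline only because this is a test environment):

mysql -h 10.189.60.201 -P 4000 -u root -ptidb -e 'select tidb_version()\G'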


VIII. Cluster Monitoring

The monitoring URLs are listed in the cluster status output.

The cluster's Dashboard monitors the state of the whole cluster; its password is the (changed) MySQL root password.

It gives a view of the overall state of the entire cluster.
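The Dashboard URL can also be printed directly; it is served through the PD endpoint, so it typically looks like http://10.189.60.201:2379/dashboard:

tiup cluster display test-tidb --dashboard ## prints the Dashboard access URL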

Grafana's default credentials are admin/admin.

You are required to change the password at first login.

On first use, create a new dashboard.
