Hello everyone. Today I'd like to share a real production case: a primary database that failed to shut down because of replication lag to a remote standby.
A quick bit of background:
We had a planned maintenance window in which to do two things:
1. An Oracle-to-PostgreSQL migration project: synchronize the data, roughly a few dozen GB (including indexes).
2. A PostgreSQL minor-version upgrade from 15.2 to 15.3, which requires stopping the database, running make install-world, and restarting the instance.
The production database architecture: Wuhan PDC (1 primary, 2 standbys) plus the Tianjin DRC disaster-recovery center (1 standby).
Planned downtime window: 23:00-00:00, one hour.
On the night of the deployment:
1. The Oracle-to-PostgreSQL data synchronization went smoothly, exactly as expected.
2. When shutting down the PostgreSQL instance, pg_ctl stop -D $PGDATA hung and kept waiting for a long time...
The server log filled up with shutdown-failure messages:
2023-07-17 23:45:05.033 CST [109594] repmgr@repmgr-10.25.14.149/[unknown]FATAL: the database system is shutting down
2023-07-17 23:45:05.035 CST [109595] repmgr@repmgr-10.25.14.149/[unknown]FATAL: the database system is shutting down
2023-07-17 23:45:05.613 CST [109603] capp_lm@cappcore-10.26.212.19/[unknown]FATAL: the database system is shutting down
2023-07-17 23:45:05.822 CST [109605] capp_lm@cappcore-10.26.212.49/[unknown]FATAL: the database system is shutting down
At this point, ps -ef | grep postgres showed the backend processes were still alive, yet attempts to connect with psql also failed.
Because the maintenance window was only one hour and time was tight, we resorted to the -m immediate option to shut the instance down at once. (This approach is not recommended for production.)
pg_ctl stop -D $PGDATA -m immediate
That brought the instance down right away; we then applied the minor-version upgrade to the PostgreSQL binaries and started the instance, and everything went smoothly. (The three pg_ctl shutdown modes are compared in the quick sketch below.)
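For reference, the three pg_ctl shutdown modes behave quite differently in this situation. A quick sketch, not an exhaustive description (see the pg_ctl documentation for the authoritative details):

# smart     (SIGTERM): wait for all clients to disconnect on their own
# fast      (SIGINT) : the default; terminate client sessions, write a shutdown
#                      checkpoint, and wait for walsenders to finish streaming
# immediate (SIGQUIT): abort all processes without a shutdown checkpoint; the
#                      next startup has to perform crash recovery
pg_ctl stop -D "$PGDATA" -m fast        # what our plain "pg_ctl stop" amounted to
pg_ctl stop -D "$PGDATA" -m immediate   # the escape hatch we used under time pressure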
For an operations DBA, when something breaks in production, restoring availability always comes first; investigating the root cause comes second.
Initial suspicion: the problem was related to primary-standby replication lag. The cross-region network throughput was poor, and a walsender process was blocking the primary's shutdown.
Let's reproduce the problem in a test environment:
Test environment setup: 1 primary: pref170; 2 local standbys: pref171, pref173; 1 remote standby: pgdr (in a different data center).
postgres=# select pid,usename,application_name,replay_lag,reply_time from pg_stat_replication;
  pid   | usename | application_name |   replay_lag    |          reply_time
--------+---------+------------------+-----------------+-------------------------------
  22250 | repmgr  | pref173          | 00:00:00.000816 | 2023-08-19 10:43:49.650063+08
  22252 | repmgr  | pref171          | 00:00:00.000923 | 2023-08-19 10:43:49.650243+08
 126245 | repmgr  | pgdr             | 00:00:00.020848 | 2023-08-19 10:43:49.661431+08
(3 rows)
Log in to the primary pref170 and simulate a bulk load of 80 million rows to create replication lag.
postgres=# create table t_load (id bigint, context text);
CREATE TABLE
postgres=# insert into t_load select generate_series(1,80000000),md5('a')||md5('b')||md5('c')||md5('d')||md5('e');
INSERT 0 80000000
Check the lag: on the Grafana dashboard for the remote node, the replication lag had already climbed past 10 minutes.
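If you do not have a dashboard handy, the same information can be pulled straight from the primary. A minimal sketch using the standard pg_stat_replication columns and pg_wal_lsn_diff() (the exact columns you report are a matter of taste):

-- on the primary: how far behind each standby is, in bytes and in replay time
select application_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn))   as sending_lag,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) as replay_lag_bytes,
       replay_lag
from pg_stat_replication;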
Now let's try to shut down the primary:
infra [postgres@ljzdccapp006 ~]# /opt/pgsql-15/bin/pg_ctl stop -D /data/pref6005/data
waiting for server to shut down............................................................... failed
pg_ctl: server does not shut down
Check the background processes on the primary: sure enough, the walsender process is still alive, and its peer is the standby in the remote data center.
infra [postgres@ljzdccapp006 ~]# ps -ef | grep postgres
postgres 13444     1 0 Aug19 ? 00:01:07 /opt/pgsql-15/bin/postgres -D /data/pref6005/data
postgres 17151 13444 0 Aug19 ? 00:00:40 postgres: pref170: walsender repmgr 10.25.15.85(54434) streaming C/F5F06000
We try to log in again to the primary whose shutdown has not completed: the database is in the "shutting down" state and refuses any new connections.
infra [postgres@ljzdccapp006 ~]# psql
psql: error: connection to server on socket "/tmp/.s.PGSQL.6005" failed: FATAL: the database system is shutting down
The primary's log, too, is filled with "the database system is shutting down" connection-failure messages:
2023-08-20 10:21:40.162 CST [96140] repmgr@[unknown]-10.67.38.171/[unknown]FATAL: the database system is shutting down
2023-08-20 10:21:40.211 CST [96141] [unknown]@[unknown]-10.67.38.173/[unknown]LOG: connection received: host=10.67.38.173 port=56246
2023-08-20 10:21:40.212 CST [96141] repmgr@[unknown]-10.67.38.173/[unknown]FATAL: the database system is shutting down
2023-08-20 10:21:40.282 CST [96143] [unknown]@[unknown]-10.67.38.173/[unknown]LOG: connection received: host=10.67.38.173 port=56248
2023-08-20 10:21:40.283 CST [96143] repmgr@repmgr-10.67.38.173/[unknown]FATAL: the database system is shutting down
2023-08-20 10:21:40.284 CST [96144] [unknown]@[unknown]-10.67.38.173/[unknown]LOG: connection received: host=10.67.38.173 port=56250
2023-08-20 10:21:40.284 CST [96144] repmgr@repmgr-10.67.38.173/[unknown]FATAL: the database system is shutting down
At this point we have fully reproduced the production failure: cross-DC replication lag makes the primary's shutdown hang and fail, and it is indeed the walsender background process that blocks the instance from shutting down.
Let's go one step further and look at the source code to understand why the walsender process gets in the way of the shutdown.
The shutdown command we ran, /opt/pgsql-15/bin/pg_ctl stop -D /data/pref6005/data, maps to do_stop(void) in pg_ctl.c.
pg_ctl translates the -m option into a signal for the postmaster: smart sends SIGTERM, fast sends SIGINT, and immediate sends SIGQUIT. The default mode is fast, so kill((pid_t) pid, sig) delivers SIGINT to the postmaster process: #define SIGINT 2 /* Interrupt (ANSI). */
static void
do_stop(void)
{
    pgpid_t     pid;

    pid = get_pgpid(false);

    if (pid == 0)               /* no pid file */
    {
        write_stderr(_("%s: PID file \"%s\" does not exist\n"), progname, pid_file);
        write_stderr(_("Is server running?\n"));
        exit(1);
    }
    else if (pid < 0)           /* standalone backend, not postmaster */
    {
        pid = -pid;
        write_stderr(_("%s: cannot stop server; "
                       "single-user server is running (PID: %ld)\n"),
                     progname, pid);
        exit(1);
    }

    if (kill((pid_t) pid, sig) != 0)
    {
        write_stderr(_("%s: could not send stop signal (PID: %ld): %s\n"), progname, pid,
                     strerror(errno));
        exit(1);
    }

    if (!do_wait)
    {
        print_msg(_("server shutting down\n"));
        return;
    }
    else
    {
        print_msg(_("waiting for server to shut down..."));

        if (!wait_for_postmaster_stop())
        {
            print_msg(_(" failed\n"));

            write_stderr(_("%s: server does not shut down\n"), progname);
            if (shutdown_mode == SMART_MODE)
                write_stderr(_("HINT: The \"-m fast\" option immediately disconnects sessions rather than\n"
                               "waiting for session-initiated disconnection.\n"));
            exit(1);
        }
        print_msg(_(" done\n"));

        print_msg(_("server stopped\n"));
    }
}
Now let's see how the postmaster, the main background process, handles this signal: postmaster.c -> pmdie(SIGNAL_ARGS).
/*
 * pmdie -- signal handler for processing various postmaster signals.
 */
static void
pmdie(SIGNAL_ARGS)
{
    ...
        case SIGINT:

            /*
             * Fast Shutdown:
             *
             * Abort all children with SIGTERM (rollback active transactions
             * and exit) and shut down when they are gone.
             */
            if (Shutdown >= FastShutdown)
                break;
            Shutdown = FastShutdown;
            ereport(LOG,
                    (errmsg("received fast shutdown request")));

            /* Report status */
            AddToDataDirLockFile(LOCK_FILE_LINE_PM_STATUS, PM_STATUS_STOPPING);
#ifdef USE_SYSTEMD
            sd_notify(0, "STOPPING=1");
#endif

            if (pmState == PM_STARTUP || pmState == PM_RECOVERY)
            {
                /* Just shut down background processes silently */
                pmState = PM_STOP_BACKENDS;
            }
            else if (pmState == PM_RUN ||
                     pmState == PM_HOT_STANDBY)
            {
                /* Report that we're about to zap live client sessions */
                ereport(LOG,
                        (errmsg("aborting any active transactions")));
                pmState = PM_STOP_BACKENDS;
            }

            /*
             * PostmasterStateMachine will issue any necessary signals, or
             * take the next step if no child processes need to be killed.
             */
            PostmasterStateMachine();
            break;
The logic that shuts down the postmaster's child processes lives in postmaster.c -> static void PostmasterStateMachine(void).
There we find the comment /* Signal all backend children except walsenders */, which says it all: the source confirms that walsender processes are deliberately excluded when the termination signal goes out to the backends (an abridged excerpt follows below).
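For context, here is that branch, lightly trimmed from PostgreSQL 15's src/backend/postmaster/postmaster.c (surrounding details omitted; check the actual source for your exact minor version):

if (pmState == PM_STOP_BACKENDS)
{
    /* Signal all backend children except walsenders */
    SignalSomeChildren(SIGTERM,
                       BACKEND_TYPE_ALL - BACKEND_TYPE_WALSND);
    /* and the autovac launcher too */
    if (AutoVacPID != 0)
        signal_child(AutoVacPID, SIGTERM);
    /* and the bgwriter too */
    if (BgWriterPID != 0)
        signal_child(BgWriterPID, SIGTERM);
    /* and the walwriter too */
    if (WalWriterPID != 0)
        signal_child(WalWriterPID, SIGTERM);
    ...
    pmState = PM_WAIT_BACKENDS;
}

The walsenders are only dealt with later in the shutdown sequence: once the regular backends are gone and the checkpointer has written the shutdown checkpoint, the postmaster signals the walsenders to send any outstanding WAL, and each walsender exits only after its standby has confirmed receipt of everything that was sent. With a remote standby lagging by more than ten minutes over a slow WAN link, that final hand-off is exactly where our shutdown sat waiting.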
To wrap up, a few takeaways:
1) If the cluster has a remote standby and the network between sites is poor, check the replication lag before shutting down the primary: select pid,usename,application_name,replay_lag,reply_time from pg_stat_replication;
2) When a maintenance restart and a bulk data-migration task share the same window, reorder them: first do the maintenance restart, then run the bulk data load.
3) If steps 1) and 2) were overlooked, you can shut down the lagging remote standby directly; the primary's walsender will then exit on its own, and the postmaster will follow and finish shutting down (see the sketch after this list).
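A minimal sketch of that recovery path. The standby host and data directory below (pgdr, /data/pgdr/data) are illustrative, matching the test environment in this article:

# 1) The hung primary refuses new connections, so find the surviving walsender
#    and its peer standby from the process list instead
ps -ef | grep walsender

# 2) On the lagging remote standby (pgdr), stop it; the primary's walsender
#    loses its connection and exits, so the pending shutdown of the primary
#    can finally complete
/opt/pgsql-15/bin/pg_ctl stop -D /data/pgdr/data -m fast    # path is illustrative

# 3) After maintenance, start the primary first, then the remote standby;
#    it reconnects and catches up by streaming the WAL it missed
/opt/pgsql-15/bin/pg_ctl start -D /data/pgdr/data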
Have fun 🙂 !