
[The Most Detailed Guide on the Web] Building DM DSC + DW on Local Virtual Machines

Samurai 2023/07/03

Building a DSC shared-storage cluster plus a standby database

1. Preliminary Preparation

1.1 Adding a NIC in Linux (optional)

1. Run ifconfig to find the name of the newly added NIC
2. Make a copy of the ifcfg-ens33 file for the new NIC
3. Generate a UUID for the new NIC with uuidgen
4. In the new NIC's configuration file, update the UUID, NAME, and DEVICE entries
5. Restart the network service
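As a sketch of what steps 2-4 produce: assuming the new NIC shows up as ens37 (the device name and the IP address below are placeholder assumptions; substitute your own values), the copied configuration file would end up looking roughly like this:

```ini
# /etc/sysconfig/network-scripts/ifcfg-ens37 -- sketch only; the device
# name ens37 and the IP address are assumptions, not values from the guide
TYPE=Ethernet
BOOTPROTO=static
NAME=ens37
DEVICE=ens37
UUID=<paste the output of uuidgen here>
ONBOOT=yes
IPADDR=192.168.223.100
NETMASK=255.255.255.0
```

After saving the file, `systemctl restart network` applies the change on CentOS 7 (step 5).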

1.2 Changing the Hostname (optional)

This makes it easier to tell which role each virtual machine plays.
1. Edit the /etc/hostname file
2. Delete the existing content
3. Enter the new hostname
4. Save the file
5. Reboot the VM for the change to take effect (typing bash in the console also refreshes the prompt, but this is not recommended)
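Steps 1-4 can be rehearsed safely on a temporary copy of /etc/hostname; the name Node0 below is just an example. (On CentOS 7, `hostnamectl set-hostname Node0` performs the same edit in a single command.)

```shell
# Practise steps 1-4 on a temporary file instead of the real /etc/hostname.
tmp_hostname=$(mktemp)
echo "localhost.localdomain" > "$tmp_hostname"  # pretend old content

# Steps 2-4: replace the old content with the new hostname and save.
echo "Node0" > "$tmp_hostname"

cat "$tmp_hostname"   # prints: Node0
# On the real /etc/hostname you would now reboot for the change to apply.
```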

2. Building the Two-Node DSC Cluster

2.1 Test Environment

Note: every machine in this walkthrough uses a single NIC.

2.2 Adding a NIC (optional)

See 1.1, Adding a NIC in Linux (optional).

2.3 Adding the Shared Disk

CentOS 7-DSC01 (original machine)
CentOS 7-DSC02 (cloned machine)
1. Add a new 50 GB disk to the original machine
2. Attach the original machine's 50 GB disk to the cloned machine

| HostName | External IP | Intranet IP | Instance Name |
| :--: | :--: | :--: | :--: |
| Node0 | 192.168.233.100 | 192.168.223.100 | DSC0 |
| Node1 | 192.168.233.110 | 192.168.223.110 | DSC1 |
| Standby | 192.168.223.6 | 192.168.223.6 | DW |

3. Add the following to the original machine's .vmx file (otherwise the original and the clone cannot be powered on at the same time):

disk.locking="FALSE"
disk.SharedBus="Virtual"
disk.enableUUID="TRUE"
disk.shared="TRUE"

4. Run fdisk -l on both machines to confirm the disk was added

[root@Node0 ~]# fdisk -l


[root@Node1 ~]# fdisk -l


2.4 Partitioning the Shared Disk

Run on either Node0 or Node1.

Partition the disk with parted (unlike fdisk, parted does not limit the number of primary partitions and supports partitions larger than 2 TB):

[root@Node0 ~]# parted
GNU Parted 3.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) select /dev/sdb
Using /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary 0 2G
(parted) mkpart primary 2G 4G
(parted) mkpart primary 4G 14G
(parted) mkpart primary 14G 24G
(parted) mkpart primary 24G -1
(parted) quit
[root@Node0 ~]# parted /dev/sdb print


[root@Node0 ~]# blkid


2.5 Creating Disk Symlinks

Run on both Node0 and Node1; the configuration is identical.

[root@Node0 ~]# cd /etc/udev/rules.d
[root@Node0 rules.d]# vim 88-dm-asmdevices.rules
[root@Node0 rules.d]# cat 88-dm-asmdevices.rules
KERNEL=="sdb1",SUBSYSTEM=="block",OWNER="dmdba",GROUP="dinstall",SYMLINK+="DM/dmasm_vote",MODE="0660"
KERNEL=="sdb2",SUBSYSTEM=="block",OWNER="dmdba",GROUP="dinstall",SYMLINK+="DM/dmasm_dcr",MODE="0660"
KERNEL=="sdb3",SUBSYSTEM=="block",OWNER="dmdba",GROUP="dinstall",SYMLINK+="DM/dmasm_log",MODE="0660"
KERNEL=="sdb4",SUBSYSTEM=="block",OWNER="dmdba",GROUP="dinstall",SYMLINK+="DM/dmasm_arch",MODE="0660"
KERNEL=="sdb5",SUBSYSTEM=="block",OWNER="dmdba",GROUP="dinstall",SYMLINK+="DM/dmasm_data",MODE="0660"


2.6 Reloading the Disk Information

Run on both Node0 and Node1; the configuration is identical.

[root@Node0 rules.d]# /sbin/udevadm control --reload
[root@Node0 rules.d]# /sbin/udevadm trigger --action change
[root@Node0 rules.d]# /sbin/udevadm trigger --action add
[root@Node0 rules.d]# chown -R dmdba:dinstall /dev/DM

After the rules are reloaded, a DM directory appears under /dev:

[root@Node0 ~]# cd /dev/DM/
[root@Node0 DM]# ll


2.7 Building the Cluster

2.7.1 Installing the DM Database

Run on both Node0 and Node1; the configuration is identical.

[root@Node0 ~]# groupadd dinstall
[root@Node0 ~]# useradd -g dinstall -m -s /bin/bash dmdba
[root@Node0 ~]# passwd dmdba
[root@Node0 ~]# vim /etc/security/limits.conf
dmdba hard nofile 65536
dmdba soft nofile 65536
dmdba hard stack 32768
dmdba soft stack 16384
[root@Node0 ~]# mount -o loop /opt/dm8_20230104_x86_rh6_64.iso /mnt
[root@Node0 ~]# mkdir /dm8
[root@Node0 ~]# chown -R dmdba:dinstall /dm8
[root@Node0 ~]# systemctl status firewalld
[root@Node0 ~]# systemctl stop firewalld
[root@Node0 ~]# systemctl disable firewalld
[root@Node0 ~]# vim /etc/selinux/config
SELINUX=disabled
[root@Node0 ~]# su - dmdba
[dmdba@Node0 ~]$ cd /mnt
[dmdba@Node0 mnt]$ ./DMInstall.bin -i
[root@Node0 ~]# /dm8/script/root/root_installer.sh

2.7.2 Configuration Files

2.7.2.1 Creating dmdcr_cfg.ini

Run on both Node0 and Node1; the configuration is identical.

This file records the cluster-wide settings, the cluster group definitions (CSS, ASM, and DB), and the nodes within each group.

[dmdba@Node0 conf]$ vim dmdcr_cfg.ini
[dmdba@Node0 conf]$ cat dmdcr_cfg.ini
DCR_N_GRP = 3
DCR_VTD_PATH = /dev/DM/dmasm_vote
DCR_OGUID = 20230612
[GRP]
DCR_GRP_TYPE = CSS
DCR_GRP_NAME = GRP_CSS
DCR_GRP_N_EP = 2
DCR_GRP_DSKCHK_CNT = 65
[GRP_CSS]
DCR_EP_NAME = CSS0
DCR_EP_HOST = 192.168.223.100
DCR_EP_PORT = 11286
[GRP_CSS]
DCR_EP_NAME = CSS1
DCR_EP_HOST = 192.168.223.110
DCR_EP_PORT = 11287
[GRP]
DCR_GRP_TYPE = ASM
DCR_GRP_NAME = GRP_ASM
DCR_GRP_N_EP = 2
DCR_GRP_DSKCHK_CNT = 61
[GRP_ASM]
DCR_EP_NAME = ASM0
DCR_EP_SHM_KEY = 2424
DCR_EP_SHM_SIZE = 1024
DCR_EP_HOST = 192.168.223.100
DCR_EP_PORT = 11276
DCR_EP_ASM_LOAD_PATH = /dev/DM
[GRP_ASM]
DCR_EP_NAME = ASM1
DCR_EP_SHM_KEY = 2425
DCR_EP_SHM_SIZE = 1024
DCR_EP_HOST = 192.168.223.110
DCR_EP_PORT = 11277
DCR_EP_ASM_LOAD_PATH = /dev/DM
[GRP]
DCR_GRP_TYPE = DB
DCR_GRP_NAME = GRP_DSC
DCR_GRP_N_EP = 2
DCR_GRP_DSKCHK_CNT = 20
[GRP_DSC]
DCR_EP_NAME = DSC0
DCR_EP_SEQNO = 0
DCR_EP_PORT = 5236
DCR_CHECK_PORT = 11256
[GRP_DSC]
DCR_EP_NAME = DSC1
DCR_EP_SEQNO = 1
DCR_EP_PORT = 5236
DCR_CHECK_PORT = 11257
2.7.2.2 Creating dmdcr.ini

Run on both Node0 and Node1; the configuration differs per node.

This file records the node's sequence number and the path of the DCR disk.

Node0

[dmdba@Node0 conf]$ vim dmdcr.ini
[dmdba@Node0 conf]$ cat dmdcr.ini
DMDCR_PATH = /dev/DM/dmasm_dcr
DMDCR_MAL_PATH = /dm8/conf/dmasvrmal.ini
DMDCR_SEQNO = 0
DMDCR_ASM_RESTART_INTERVAL = 0
DMDCR_ASM_STARTUP_CMD = /dm8/bin/dmasmsvr dcr_ini=/dm8/conf/dmdcr.ini
DMDCR_DB_RESTART_INTERVAL = 0
DMDCR_DB_STARTUP_CMD = /dm8/bin/dmserver path=/dm8/conf/DSC0_config/dm.ini dcr_ini=/dm8/conf/dmdcr.ini

Node1

[dmdba@Node1 conf]$ vim dmdcr.ini
[dmdba@Node1 conf]$ cat dmdcr.ini
DMDCR_PATH = /dev/DM/dmasm_dcr
DMDCR_MAL_PATH = /dm8/conf/dmasvrmal.ini
DMDCR_SEQNO = 1
DMDCR_ASM_RESTART_INTERVAL = 0
DMDCR_ASM_STARTUP_CMD = /dm8/bin/dmasmsvr dcr_ini=/dm8/conf/dmdcr.ini
DMDCR_DB_RESTART_INTERVAL = 0
DMDCR_DB_STARTUP_CMD = /dm8/bin/dmserver path=/dm8/conf/DSC1_config/dm.ini dcr_ini=/dm8/conf/dmdcr.ini
2.7.2.3 Creating dmasvrmal.ini

This file configures the internal network communication of the ASM cluster.

Run on both Node0 and Node1; the configuration is identical.

[dmdba@Node0 ~]$ cd /dm8/conf/
[dmdba@Node0 conf]$ cat dmasvrmal.ini
[MAL_INST1]
MAL_INST_NAME = ASM0
MAL_HOST = 192.168.223.100
MAL_PORT = 5244
[MAL_INST2]
MAL_INST_NAME = ASM1
MAL_HOST = 192.168.223.110
MAL_PORT = 5244
2.7.2.4 Creating dmcssm.ini

Run on the Standby node.

[dmdba@Standby ~]$ cd /dm8/conf
[dmdba@Standby conf]$ cat dmcssm.ini
CSSM_OGUID=20230612
CSSM_CSS_IP=192.168.223.100:11286
CSSM_CSS_IP=192.168.223.110:11287
CSSM_LOG_PATH=/dm8/log
CSSM_LOG_FILE_SIZE=32
CSSM_LOG_SPACE_LIMIT=0

2.7.3 Creating the dcr, vote, and asm Disks

Run on either Node0 or Node1.

[root@Node0 ~]# cd /dm8/bin
[root@Node0 bin]# ./dmasmcmd
DMASMCMD V8
ASM>listdisks /dev/DM
[/dev/DM/dmasm_vote]: Normal disk
[/dev/DM/dmasm_log]: Normal disk
[/dev/DM/dmasm_data]: Normal disk
[/dev/DM/dmasm_arch]: Normal disk
[/dev/DM/dmasm_dcr]: Normal disk
Used time: 1.312(ms).
ASM>create dcrdisk '/dev/DM/dmasm_dcr' 'DCR'
[TRACE]The ASM initialize dcrdisk /dev/DM/dmasm_dcr to name DMASMDCR
Used time: 1.928(ms).
ASM>create votedisk '/dev/DM/dmasm_vote' 'VOTE'
[TRACE]The ASM initialize votedisk /dev/DM/dmasm_vote to name DMASMVOTE
Used time: 11.233(ms).
ASM>create asmdisk '/dev/DM/dmasm_data' 'DATA'
[TRACE]The ASM initialize asmdisk /dev/DM/dmasm_data to name DMASMDATA
Used time: 12.769(ms).
ASM>create asmdisk '/dev/DM/dmasm_log' 'LOG'
[TRACE]The ASM initialize asmdisk /dev/DM/dmasm_log to name DMASMLOG
Used time: 14.482(ms).
ASM>create asmdisk '/dev/DM/dmasm_arch' 'ARCH'

Note: if you make a mistake (for example, running the create dcrdisk command twice), use dd to wipe the disk header. Example:

[root@Node0 bin]# dd if=/dev/zero of=/dev/DM/dmasm_vote bs=1024k count=1
[root@Node0 bin]# dd if=/dev/zero of=/dev/DM/dmasm_dcr bs=1024k count=1

2.7.4 Initializing the dcr and vote Disks

Run on either Node0 or Node1.

[root@Node0 ~]# cd /dm8/bin
[root@Node0 bin]# ./dmasmcmd
ASM>init dcrdisk '/dev/DM/dmasm_dcr' from '/dm8/conf/dmdcr_cfg.ini' identified by 'dmdsc'
ASM>init votedisk '/dev/DM/dmasm_vote' from '/dm8/conf/dmdcr_cfg.ini'
ASM>listdisks /dev/DM

2.7.5 Registering and Starting the CSS Service

Run on both Node0 and Node1; the steps are the same.

Node0

[root@Node0 root]# ./dm_service_installer.sh -t dmcss -p DSC0 -dcr_ini /dm8/conf/dmdcr.ini
[root@Node0 bin]# ./DmCSSServiceDSC0 start

Node1

[root@Node1 root]# ./dm_service_installer.sh -t dmcss -p DSC1 -dcr_ini /dm8/conf/dmdcr.ini
[root@Node1 bin]# ./DmCSSServiceDSC1 start

2.7.6 Registering and Starting the ASM Service

Run on both Node0 and Node1; the steps are the same.

Node0

[root@Node0 root]# ./dm_service_installer.sh -t dmasmsvr -p DSC0 -y DmCSSServiceDSC0 -dcr_ini /dm8/conf/dmdcr.ini
[root@Node0 bin]# ./DmASMSvrServiceDSC0 start

Node1

[root@Node1 root]# ./dm_service_installer.sh -t dmasmsvr -p DSC1 -y DmCSSServiceDSC1 -dcr_ini /dm8/conf/dmdcr.ini
[root@Node1 bin]# ./DmASMSvrServiceDSC1 start

2.7.7 Creating the ASM Disk Groups

Run on either Node0 or Node1.

[dmdba@Node0 ~]$ cd /dm8/bin
[dmdba@Node0 bin]$ ./dmasmtool dcr_ini=/dm8/conf/dmdcr.ini
DMASMTOOL V8
ASM>create diskgroup 'DGDATA01' asmdisk '/dev/DM/dmasm_data'
Used time: 25.426(ms).
ASM>create diskgroup 'DGLOG01' asmdisk '/dev/DM/dmasm_log'
Used time: 29.763(ms).
ASM>create diskgroup 'DGARCH01' asmdisk '/dev/DM/dmasm_arch'
Used time: 20.525(ms).


2.7.8 Initializing the Database (Creating the Instances)

Run on either Node0 or Node1.

[dmdba@Node0 ~]$ cd /dm8/conf/
[dmdba@Node0 conf]$ cat dminit.ini
db_name = DSC
system_path = +DGDATA01
system = +DGDATA01/DM/system.dbf
system_size = 128
roll = +DGDATA01/DM/roll.dbf
roll_size = 128
main = +DGDATA01/DM/main.dbf
main_size = 128
ctl_path = +DGDATA01/DM/dm.ctl
ctl_size = 8
log_size = 256
dcr_path = /dev/DM/dmasm_dcr
dcr_seqno = 0
auto_overwrite = 1
[DSC0]
config_path = /dm8/conf/DSC0_config
port_num = 5236
mal_host = 192.168.223.100
mal_port = 9340
log_path = +DGLOG01/DSC0/log01.log
log_path = +DGLOG01/DSC0/log02.log
[DSC1]
config_path = /dm8/conf/DSC1_config
port_num = 5236
mal_host = 192.168.223.110
mal_port = 9341
log_path = +DGLOG01/DSC1/log01.log
log_path = +DGLOG01/DSC1/log02.log

Note on instance ports: the database instance port in dmdcr_cfg.ini must match the port used when initializing the instance (port_num in dminit.ini). If the two ports differ, one of the DSC nodes will fail to start its database service later, when the real-time standby is configured. Also note that connections to the database use the instance port from dmdcr_cfg.ini.
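The port-consistency requirement above can be checked mechanically. The sketch below is a hypothetical helper (not part of the DM toolchain): it writes sample fragments of the two files to a temporary directory and compares the DSC0 ports; on a real system you would point the awk commands at /dm8/conf/dmdcr_cfg.ini and /dm8/conf/dminit.ini instead.

```shell
# Sketch: verify DCR_EP_PORT (dmdcr_cfg.ini) matches port_num (dminit.ini).
# Sample data is written to a temp dir so the check is self-contained.
workdir=$(mktemp -d)

cat > "$workdir/dmdcr_cfg.ini" <<'EOF'
[GRP_DSC]
DCR_EP_NAME = DSC0
DCR_EP_PORT = 5236
EOF

cat > "$workdir/dminit.ini" <<'EOF'
[DSC0]
config_path = /dm8/conf/DSC0_config
port_num = 5236
EOF

# Extract the first matching value from each file, stripping spaces.
dcr_port=$(awk -F= '/DCR_EP_PORT/ {gsub(/ /, "", $2); print $2; exit}' "$workdir/dmdcr_cfg.ini")
init_port=$(awk -F= '/port_num/ {gsub(/ /, "", $2); print $2; exit}' "$workdir/dminit.ini")

if [ "$dcr_port" = "$init_port" ]; then
    echo "ports match: $dcr_port"
else
    echo "port mismatch: dmdcr_cfg.ini=$dcr_port dminit.ini=$init_port"
fi
```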

[dmdba@Node0 bin]$ ./dminit control=/dm8/conf/dminit.ini

After initialization, the two configured directories appear under the conf directory:

[dmdba@Node0 bin]$ cd /dm8/conf/
[dmdba@Node0 conf]$ ll

Send one of the directories to the other node:

[dmdba@Node0 conf]$ scp -r DSC1_config/ 192.168.223.110:/dm8/conf/

2.7.9 Registering and Starting the dmserver Service

Run on both Node0 and Node1; the steps are the same.

Node0

[root@Node0 root]# ./dm_service_installer.sh -t dmserver -y DmASMSvrServiceDSC0 -p DSC0 -dm_ini /dm8/conf/DSC0_config/dm.ini -dcr_ini /dm8/conf/dmdcr.ini
[dmdba@Node0 bin]$ ./DmServiceDSC0 start

Node1

[root@Node1 root]# ./dm_service_installer.sh -t dmserver -y DmASMSvrServiceDSC1 -p DSC1 -dm_ini /dm8/conf/DSC1_config/dm.ini -dcr_ini /dm8/conf/dmdcr.ini
[dmdba@Node1 bin]$ ./DmServiceDSC1 start

2.7.10 Registering the CSSM Monitor Service (optional)

The monitor is usually started in the foreground, which makes it easy to watch the status of each cluster group.

Run on Node0, Node1, or Standby.

[root@Standby root]# ./dm_service_installer.sh -t dmcssm -cssm_ini /dm8/conf/dmcssm.ini -p Standby

2.7.11 Starting the CSSM Monitor (foreground)

Run on the Standby node.

[root@Standby bin]# ./dmcssm ini_path=/dm8/conf/dmcssm.ini

Enter the show command to check the DSC cluster status: the CSS, ASM, and DB groups should all report that they have started normally.

2.7.12 Verifying Data Synchronization

[dmdba@Node0 bin]$ ./disql sysdba/SYSDBA
SQL> select * from v$dsc_ep_info;

Create a tablespace and a table on node one, insert some rows, then query them from node two.
Question 1: where are the datafiles placed when a tablespace is created in a DSC environment?

SQL> create tablespace dsc_tb datafile '/dev/DM/dmasm_data/dsc.dbf' size 100;
create tablespace dsc_tb datafile '/dev/DM/dmasm_data/dsc.dbf' size 100;
[-1512]:Not support local path when create tablespace files on DSC system.
used time: 61.655(ms). Execute id is 0.

Create the tablespace:

SQL> create tablespace dsc_tb datafile 'dsc.dbf' size 100;
executed successfully
used time: 213.875(ms). Execute id is 58702.

Create a user with this default tablespace:

SQL> create user dmtest identified by dameng123 default tablespace dsc_tb;
SQL> grant dba to dmtest;
SQL> conn dmtest/dameng123;
SQL> create table student(
2 id int primary key,
3 name varchar(20) not null,
4 age int not null);
SQL> insert into student values(1,'zhangsan',23);
SQL> insert into student values(2,'xiaoming',22);


SQL> select * from student;
SQL> commit;

Note: until the transaction is committed, node two cannot see the rows.

[dmdba@Node1 bin]$ ./disql dmtest/dameng123


3. Building the Real-Time Standby Database

3.1 Environment

◼ OS: CentOS 7
◼ IP address: 192.168.223.6
Note: shut down the DSC cluster's database services before starting the standby setup:

[dmdba@Node0 bin]$ ./DmServiceDSC0 stop
[dmdba@Node1 bin]$ ./DmServiceDSC1 stop

3.2 Initializing the Database Instance and Registering the Service

Note: for installing the DM database, see 2.7.1, Installing the DM Database.

[dmdba@Standby bin]$ ./dminit path=/dm8/data db_name=DSC instance_name=DW
[root@Standby root]# ./dm_service_installer.sh -t dmserver -p Standby -dm_ini /dm8/data/DSC/dm.ini

3.3 Offline Backup of the DSC Cluster

Run on either Node0 or Node1.

Note: the backup requires the ASM service to be running, and the ASM service in turn requires the CSS service to be started first.

[dmdba@Node0 bin]$ ./dmrman dcr_ini=/dm8/conf/dmdcr.ini
dmrman V8
RMAN> backup database '/dm8/conf/DSC0_config/dm.ini' full backupset '/dm8/dmbak/BACKUP_FILE'
RMAN> check backupset '/dm8/dmbak/BACKUP_FILE';

3.4 Restoring on the Standby

Run on the Standby node.

Send Node0's backup set to the standby machine and restore it with the dmrman tool.
First create the dmbak directory on the standby (as the dmdba user):

[dmdba@localhost dm8]$ mkdir dmbak
[dmdba@Node0 bin]$ scp -r /dm8/dmbak/BACKUP_FILE/ dmdba@192.168.223.6:/dm8/dmbak/BACKUP_FILE/
[dmdba@Standby bin]$ ./dmrman
RMAN> check backupset '/dm8/dmbak/BACKUP_FILE';
RMAN> restore database '/dm8/data/DSC/dm.ini' from backupset '/dm8/dmbak/BACKUP_FILE';
RMAN> recover database '/dm8/data/DSC/dm.ini' from backupset '/dm8/dmbak/BACKUP_FILE';
RMAN> recover database '/dm8/data/DSC/dm.ini' update DB_MAGIC;


3.5 Modifying the dm.ini File

Run on Node0, Node1, and the standby node; the configuration is identical.

Set the following parameters to the same values on all nodes:

ALTER_MODE_STATUS = 0
ENABLE_OFFLINE_TS = 2
MAL_INI = 1
ARCH_INI = 1

3.6 Modifying/Creating the dmmal.ini File

Run on Node0, Node1, and the standby node; the configuration is identical.

Add the standby node's entry to the cluster's dmmal.ini, then copy the file to the other nodes:

[dmdba@Node0 DSC0_config]$ vim dmmal.ini
[dmdba@Node0 DSC0_config]$ cat dmmal.ini
MAL_CHECK_INTERVAL=87
MAL_CONN_FAIL_INTERVAL=180
MAL_SYS_BUF_SIZE=600
MAL_BUF_SIZE=300
MAL_VPOOL_SIZE=500
MAL_COMPRESS_LEVEL=0
[MAL_INST0]
MAL_INST_NAME = DSC0
MAL_HOST = 192.168.223.100
MAL_PORT = 9340
MAL_INST_HOST = 192.168.223.100
MAL_INST_PORT = 5236
MAL_DW_PORT = 52141
MAL_INST_DW_PORT = 5276
[MAL_INST1]
MAL_INST_NAME = DSC1
MAL_HOST = 192.168.223.110
MAL_PORT = 9340
MAL_INST_HOST = 192.168.223.110
MAL_INST_PORT = 5236
MAL_DW_PORT = 52141
MAL_INST_DW_PORT = 5276
[MAL_INST2]
MAL_INST_NAME = DW
MAL_HOST = 192.168.223.6
MAL_PORT = 9340
MAL_INST_HOST = 192.168.223.6
MAL_INST_PORT = 5236
MAL_DW_PORT = 52141
MAL_INST_DW_PORT = 5276
[dmdba@Node0 DSC0_config]$ scp dmmal.ini dmdba@192.168.223.6:/dm8/data/DSC/
[dmdba@Node0 DSC0_config]$ scp dmmal.ini dmdba@192.168.223.110:/dm8/conf/DSC1_config

3.7 Creating the dmarch.ini File

Run on Node0, Node1, and the standby node; the configuration differs per node.

Node0

[dmdba@Node0 DSC0_config]$ cat dmarch.ini
ARCH_WAIT_APPLY = 0
ARCH_LOCAL_SHARE = 1
ARCH_LOCAL_SHARE_CHECK=0
[ARCHIVE_LOCAL1]
ARCH_TYPE = LOCAL
ARCH_DEST = +DGARCH01/DSC0/ARCH
ARCH_FILE_SIZE = 2048
ARCH_SPACE_LIMIT = 102400
[ARCHIVE_REMOTE1]
ARCH_TYPE = REMOTE
ARCH_DEST = DSC1
ARCH_INCOMING_PATH = +DGARCH01/DSC1/ARCH
ARCH_FILE_SIZE = 2048
ARCH_SPACE_LIMIT = 102400
[ARCHIVE_REALTIME1]
ARCH_TYPE = REALTIME
ARCH_DEST = DW

Node1

[dmdba@Node1 DSC1_config]$ cat dmarch.ini
ARCH_WAIT_APPLY = 0
ARCH_LOCAL_SHARE = 1
ARCH_LOCAL_SHARE_CHECK=0
[ARCHIVE_LOCAL1]
ARCH_TYPE = LOCAL
ARCH_DEST = +DGARCH01/DSC1/ARCH
ARCH_FILE_SIZE = 2048
ARCH_SPACE_LIMIT = 102400
[ARCHIVE_REMOTE1]
ARCH_TYPE = REMOTE
ARCH_DEST = DSC0
ARCH_INCOMING_PATH = +DGARCH01/DSC0/ARCH
ARCH_FILE_SIZE = 2048
ARCH_SPACE_LIMIT = 102400
[ARCHIVE_REALTIME1]
ARCH_TYPE = REALTIME
ARCH_DEST = DW

Standby

[dmdba@Standby DSC]$ cat dmarch.ini
[ARCHIVE_REALTIME]
ARCH_TYPE = REALTIME
ARCH_DEST = DSC0/DSC1
[ARCHIVE_LOCAL1]
ARCH_TYPE = LOCAL
ARCH_DEST = /dm8/dmarch
ARCH_FILE_SIZE = 2048
ARCH_SPACE_LIMIT = 204800

3.8 Creating the dmwatcher.ini File and Registering the Service

Run on Node0, Node1, and the standby node; the configuration differs per node.

Node0

[dmdba@Node0 DSC0_config]$ vim dmwatcher.ini
[dmdba@Node0 DSC0_config]$ cat dmwatcher.ini
[GRP1]
DW_TYPE = GLOBAL
DW_MODE = MANUAL
DW_ERROR_TIME = 120
INST_RECOVER_TIME = 60
INST_ERROR_TIME = 120
INST_OGUID = 453331
INST_INI = /dm8/conf/DSC0_config/dm.ini
DCR_INI = /dm8/conf/dmdcr.ini
INST_STARTUP_CMD = /dm8/bin/DmServiceDSC0 start
INST_AUTO_RESTART = 0
RLOG_SEND_THRESHOLD = 0
RLOG_APPLY_THRESHOLD = 0

Register the service:

[root@Node0 root]# ./dm_service_installer.sh -t dmwatcher -p DSC0 -watcher_ini /dm8/conf/DSC0_config/dmwatcher.ini

Node1

[dmdba@Node1 DSC1_config]$ vim dmwatcher.ini
[dmdba@Node1 DSC1_config]$ cat dmwatcher.ini
[GRP1]
DW_TYPE = GLOBAL
DW_MODE = MANUAL
DW_ERROR_TIME = 120
INST_RECOVER_TIME = 60
INST_ERROR_TIME = 120
INST_OGUID = 453331
INST_INI = /dm8/conf/DSC1_config/dm.ini
DCR_INI = /dm8/conf/dmdcr.ini
INST_STARTUP_CMD = /dm8/bin/DmServiceDSC1 start
INST_AUTO_RESTART = 0
RLOG_SEND_THRESHOLD = 0
RLOG_APPLY_THRESHOLD = 0

Register the service:

[root@Node1 root]# ./dm_service_installer.sh -t dmwatcher -p DSC1 -watcher_ini /dm8/conf/DSC1_config/dmwatcher.ini

Standby

[dmdba@Standby DSC]$ vim dmwatcher.ini
[dmdba@Standby DSC]$ cat dmwatcher.ini
[GRP1]
DW_TYPE = GLOBAL
DW_MODE = MANUAL
DW_ERROR_TIME = 120
INST_RECOVER_TIME = 60
INST_ERROR_TIME = 120
INST_OGUID = 453331
INST_INI = /dm8/data/DSC/dm.ini
INST_AUTO_RESTART = 0
INST_STARTUP_CMD = /dm8/bin/dmserver
RLOG_SEND_THRESHOLD = 0
RLOG_APPLY_THRESHOLD = 0

Register the service:

[root@Standby root]# ./dm_service_installer.sh -t dmwatcher -p DW -watcher_ini /dm8/data/DSC/dmwatcher.ini

3.9 Creating the dmmonitor.ini File

Run on Node0, Node1, or Standby.

[dmdba@Node0 DSC0_config]$ vim dmmonitor.ini
[dmdba@Node0 DSC0_config]$ cat dmmonitor.ini
MON_DW_CONFIRM = 1
MON_LOG_PATH = /dm8/dmmlog
MON_LOG_INTERVAL = 0
MON_LOG_FILE_SIZE = 32
MON_LOG_SPACE_LIMIT = 0
[GRP1]
MON_INST_OGUID = 453331
MON_DW_IP = 192.168.223.100:52141/192.168.223.110:52141
MON_DW_IP = 192.168.223.6:52141

3.10 Starting the Database Services and Setting Parameters (mount)

Run on Node0, Node1, and the standby node; the steps are the same.

Node0

[dmdba@Node0 bin]$ ./DmServiceDSC0 start mount
[dmdba@Node0 bin]$ ./disql sysdba/SYSDBA
Server[LOCALHOST:5236]:mode is normal, state is mount
SQL> sp_set_oguid(453331);
DMSQL executed successfully
used time: 101.363(ms). Execute id is 0.
SQL> alter database primary;
executed successfully
used time: 86.672(ms). Execute id is 0.

Node1

[dmdba@Node1 bin]$ ./DmServiceDSC1 start mount
[dmdba@Node1 bin]$ ./disql sysdba/SYSDBA
Server[LOCALHOST:5236]:mode is primary, state is mount
login used time : 24.262(ms)
disql V8
SQL> sp_set_oguid(453331);
sp_set_oguid(453331);
[-720]:Dmwatcher is active, or current configuration(ALTER_MODE_STATUS) not allowed to alter database.
used time: 18.933(ms). Execute id is 0.

Note: Node1 has already become the primary automatically, so no parameters need to be changed on it.

Standby

[dmdba@Standby bin]$ ./DmServiceStandby start mount
[dmdba@Standby bin]$ ./disql sysdba/SYSDBA
Server[LOCALHOST:5236]:mode is normal, state is mount
SQL> sp_set_oguid(453331);
DMSQL executed successfully
used time: 8.690(ms). Execute id is 0.
SQL> alter database standby;
executed successfully
used time: 9.400(ms). Execute id is 0.


3.11 Starting the Watcher Services

Run on Node0, Node1, and the standby node; the steps are the same.

[dmdba@Node0 bin]$ ./DmWatcherServiceDSC0 start
[dmdba@Node1 bin]$ ./DmWatcherServiceDSC1 start
[dmdba@Standby bin]$ ./DmWatcherServiceDW start

3.12 Checking the Status in the CSSM Monitor

See 2.7.11, Starting the CSSM Monitor (foreground).

3.13 Starting the Monitor

Run on Node0.

[dmdba@Node0 bin]$ ./dmmonitor /dm8/conf/DSC0_config/dmmonitor.ini

Enter the show command to view the cluster and standby information. Once every node's database has moved from the mount state to the open state, the real-time standby has been set up successfully.
