Overview
This walkthrough covers the procedure from a functional-verification perspective only: scaling in a mixed node and a pure data node on GBase v952 in compatibility mode.
Scale-in target
The cluster currently consists of four machines hosting two coordinator (management) nodes and four data nodes; two of the machines are mixed (coordinator + data) deployments. We will scale in one mixed node and one pure data node.
Current cluster state as reported by gcadmin:
[gbase@node1 gcinstall]$ gcadmin
CLUSTER STATE: ACTIVE
VIRTUAL CLUSTER MODE: NORMAL
=================================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
=================================================================
| NodeName | IpAddress | gcware | gcluster | DataState |
-----------------------------------------------------------------
| coordinator1 | 192.168.110.21 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
| coordinator2 | 192.168.110.22 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
=========================================================================================================
| GBASE DATA CLUSTER INFORMATION |
=========================================================================================================
| NodeName | IpAddress | DistributionId | gnode | syncserver | DataState |
---------------------------------------------------------------------------------------------------------
| node1 | 192.168.110.21 | 1 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
| node2 | 192.168.110.22 | 1 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
| node3 | 192.168.110.23 | 1 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
| node4 | 192.168.110.24 | 1 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
We will scale in 192.168.110.22 (a mixed node) and 192.168.110.23 (a pure data node).
Create the new distribution
[gbase@node1 gcinstall]$ gcadmin distribution gcChangeInfo_20240221.xml p 1 d 1
gcadmin generate distribution ...
gcadmin generate distribution successful
[gbase@node1 gcinstall]$
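The content of gcChangeInfo_20240221.xml is not shown in the capture above. As a minimal sketch only, assuming the standard GBase 8a rack/node gcChangeInfo format and that for a scale-in it lists just the nodes that remain, the file might look roughly like this (verify against your version's documentation):
<?xml version="1.0" encoding="utf-8"?>
<servers>
  <rack>
    <node ip="192.168.110.21"/>
    <node ip="192.168.110.24"/>
  </rack>
</servers>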
Only two nodes remain in this test environment, so p 2 is no longer possible; in production, pay attention to the number of primary and standby segments.
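Judging from the showdistribution output below, p appears to control the number of primary segments per node and d the number of duplicate copies per segment: the new p 1 d 1 distribution on two nodes has 2 segments, each mirrored on the other node, while the old four-node distribution has 8 segments (two primaries per node), each with one duplicate.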
Check the new distribution
[gbase@node1 gcinstall]$ gcadmin showdistribution
Distribution ID: 2 | State: new | Total segment num: 2
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.110.21 | 1 | 192.168.110.24 |
------------------------------------------------------------------------------------------------------------------------
| 192.168.110.24 | 2 | 192.168.110.21 |
========================================================================================================================
Distribution ID: 1 | State: old | Total segment num: 8
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.110.21 | 1 | 192.168.110.22 |
------------------------------------------------------------------------------------------------------------------------
| 192.168.110.22 | 2 | 192.168.110.23 |
------------------------------------------------------------------------------------------------------------------------
| 192.168.110.23 | 3 | 192.168.110.24 |
------------------------------------------------------------------------------------------------------------------------
| 192.168.110.24 | 4 | 192.168.110.21 |
------------------------------------------------------------------------------------------------------------------------
| 192.168.110.21 | 5 | 192.168.110.23 |
------------------------------------------------------------------------------------------------------------------------
| 192.168.110.22 | 6 | 192.168.110.24 |
------------------------------------------------------------------------------------------------------------------------
| 192.168.110.23 | 7 | 192.168.110.21 |
------------------------------------------------------------------------------------------------------------------------
| 192.168.110.24 | 8 | 192.168.110.22 |
========================================================================================================================
Two distributions are now visible; the new one contains only .21 and .24.
Initialize the node data map (hash map)
[gbase@node1 ~]$ gccli -uroot
GBase client 9.5.2.45.7_patch.10350d612. Copyright (c) 2004-2024, GBase. All Rights Reserved.
gbase> initnodedatamap;
Query OK, 1 row affected, 30 warnings (Elapsed: 00:00:02.46)
Set the rebalance concurrency to 0, so that rebalance tasks will be queued but not started yet
gbase> set global gcluster_rebalancing_concurrent_count = 0;
Query OK, 0 rows affected (Elapsed: 00:00:00.03)
gbase>
gbase> show variables like 'gcluster_rebalancing_concurrent_count';
+---------------------------------------+-------+
| Variable_name | Value |
+---------------------------------------+-------+
| gcluster_rebalancing_concurrent_count | 0 |
+---------------------------------------+-------+
1 row in set (Elapsed: 00:00:00.00)
Initialize the rebalance
gbase> rebalance instance;
Query OK, 13 rows affected (Elapsed: 00:00:00.32)
Verify gclusterdb.rebalancing_status; in this test environment there is no need to adjust the redistribution priorities for now.
gbase> select count(1) from gclusterdb.rebalancing_status;
+----------+
| count(1) |
+----------+
| 13 |
+----------+
1 row in set (Elapsed: 00:00:00.03)
gbase> select * from gclusterdb.rebalancing_status;
The redistribution priority can be modified in the gclusterdb.rebalancing_status table; it is not covered in detail here, but a brief sketch follows.
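As an illustrative sketch only (not executed here), assuming the priority is adjusted directly via an UPDATE on the priority column shown above (default 5), a single table could be reprioritized like this; check your version's documentation for the valid range and which direction means higher priority:
update gclusterdb.rebalancing_status set priority = 1 where index_name = 'tanshuang.tans_test_001';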
Start the redistribution
gbase> set global gcluster_rebalancing_concurrent_count =2;
Query OK, 0 rows affected (Elapsed: 00:00:00.01)
Check the redistribution progress in the gclusterdb.rebalancing_status table
gbase> select index_name,end_time,status,percentage,priority,distribution_id from gclusterdb.rebalancing_status;
+------------------------------------+----------------------------+-----------+------------+----------+-----------------+
| index_name | end_time | status | percentage | priority | distribution_id |
+------------------------------------+----------------------------+-----------+------------+----------+-----------------+
| tanshuang.tans_test_001 | 2024-02-22 00:14:58.478000 | COMPLETED | 100 | 5 | 2 |
| gclusterdb.import_audit_log_errors | 2024-02-22 00:14:57.830000 | COMPLETED | 100 | 5 | 2 |
| gclusterdb.audit_log_express | 2024-02-22 00:15:10.757000 | COMPLETED | 100 | 5 | 2 |
| tanshuang.tans_test_004 | 2024-02-22 00:15:09.454000 | COMPLETED | 100 | 5 | 2 |
| test.test_20240125_tmp1 | NULL | RUNNING | 90 | 5 | 2 |
| test.test_20240125_tmp2 | NULL | RUNNING | 10 | 5 | 2 |
| test.test | NULL | STARTING | 0 | 5 | 2 |
| tanshuang.tans_test_002 | 2024-02-22 00:15:13.104000 | COMPLETED | 100 | 5 | 2 |
| tanshuang.tans_test_003 | NULL | STARTING | 0 | 5 | 2 |
| test.tan_load_test | NULL | STARTING | 0 | 5 | 2 |
| test.test_sm3_encrypt | NULL | STARTING | 0 | 5 | 2 |
| test.test_20240125_tmp | NULL | STARTING | 0 | 5 | 2 |
| test.test_20240125_tmp3 | NULL | STARTING | 0 | 5 | 2 |
+------------------------------------+----------------------------+-----------+------------+----------+-----------------+
Check progress
gbase> select count(1) from gclusterdb.rebalancing_status where status <> 'COMPLETED';
+----------+
| count(1) |
+----------+
| 8 |
+----------+
1 row in set (Elapsed: 00:00:00.04)
Eight tables have not yet finished redistributing.
Confirm that the redistribution has completed:
gbase> select count(1) from gclusterdb.rebalancing_status where status <> 'COMPLETED';
+----------+
| count(1) |
+----------+
| 0 |
+----------+
1 row in set (Elapsed: 00:00:00.02)
Check that no user tables still use data_distribution_id 1:
gbase> select index_name,tbname,data_distribution_id,vc_id from gbase.table_distribution;
+------------------------------------+-------------------------+----------------------+---------+
| index_name | tbname | data_distribution_id | vc_id |
+------------------------------------+-------------------------+----------------------+---------+
| gclusterdb.nodedatamap | nodedatamap | 1 | vc00001 |
| gclusterdb.rebalancing_status | rebalancing_status | 2 | vc00001 |
| gclusterdb.dual | dual | 2 | vc00001 |
| gclusterdb.audit_log_express | audit_log_express | 2 | vc00001 |
| tanshuang.tans_test_002 | tans_test_002 | 2 | vc00001 |
| test.tan_load_test | tan_load_test | 2 | vc00001 |
| tanshuang.tans_test_003 | tans_test_003 | 2 | vc00001 |
| gclusterdb.import_audit_log_errors | import_audit_log_errors | 2 | vc00001 |
| tanshuang.tans_test_001 | tans_test_001 | 2 | vc00001 |
| tanshuang.tans_test_004 | tans_test_004 | 2 | vc00001 |
| test.test_sm3_encrypt | test_sm3_encrypt | 2 | vc00001 |
+------------------------------------+-------------------------+----------------------+---------+
11 rows in set (Elapsed: 00:00:00.00)
The gclusterdb.nodedatamap table can be ignored for now; it is taken care of (refreshnodedatamap drop) when the old distribution is removed in the next step.
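If the table list is long, a filtered check on the same catalog columns may be handier; a sketch (not run in the session above):
select index_name, tbname from gbase.table_distribution where data_distribution_id = 1 and tbname <> 'nodedatamap';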
After redistribution completes, remove the old distribution
[gbase@node1 gcinstall]$ gcadmin rmdistribution 1
cluster distribution ID [1]
it will be removed now
please ensure this is ok, input [Y,y] or [N,n]: y
select count(*) from gbase.nodedatamap where data_distribution_id=1 result is not 0
refreshnodedatamap drop 1 success
gcadmin remove distribution [1] success
[gbase@node1 gcinstall]$
[gbase@node1 gcinstall]$
[gbase@node1 gcinstall]$ gcadmin showdistribution
Distribution ID: 2 | State: new | Total segment num: 2
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.110.21 | 1 | 192.168.110.24 |
------------------------------------------------------------------------------------------------------------------------
| 192.168.110.24 | 2 | 192.168.110.21 |
========================================================================================================================
[gbase@node1 gcinstall]$
Remove the node information from the cluster:
Write the new gcChangeInfo_20240221_rm.xml (its content is not shown in the capture below; a sketch follows the listing)
[gbase@node1 gcinstall]$ cat gcChangeInfo_20240221_rm.xml
[gbase@node1 gcinstall]$
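As a hypothetical sketch only, assuming gcadmin rmnodes takes the same rack/node XML format and that the file lists the nodes to be removed, gcChangeInfo_20240221_rm.xml might look like this:
<?xml version="1.0" encoding="utf-8"?>
<servers>
  <rack>
    <node ip="192.168.110.22"/>
    <node ip="192.168.110.23"/>
  </rack>
</servers>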
Now remove the nodes from the cluster
[gbase@node1 gcinstall]$ gcadmin rmnodes gcChangeInfo_20240221_rm.xml
gcadmin remove nodes ...
gcadmin rmnodes from cluster success
[gbase@node1 gcinstall]$ gcadmin
CLUSTER STATE: ACTIVE
VIRTUAL CLUSTER MODE: NORMAL
=================================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
=================================================================
| NodeName | IpAddress | gcware | gcluster | DataState |
-----------------------------------------------------------------
| coordinator1 | 192.168.110.21 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
| coordinator2 | 192.168.110.22 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
=========================================================================================================
| GBASE DATA CLUSTER INFORMATION |
=========================================================================================================
| NodeName | IpAddress | DistributionId | gnode | syncserver | DataState |
---------------------------------------------------------------------------------------------------------
| node1 | 192.168.110.21 | 2 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
| node4 | 192.168.110.24 | 2 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
[gbase@node1 gcinstall]$
At this point the data-node entries have been removed, but the coordinator node on 192.168.110.22 is still listed, so the software on the removed machines needs to be uninstalled.
Uninstall the nodes
Write demo.options (coordinateHost/dataHost list the nodes being uninstalled, existCoordinateHost/existDataHost the nodes that remain)
[gbase@node1 gcinstall]$ cat demo.options
installPrefix= /opt
coordinateHost = 192.168.110.22
coordinateHostNodeID = 22
dataHost = 192.168.110.22,192.168.110.23
existCoordinateHost = 192.168.110.21
existDataHost = 192.168.110.21,192.168.110.24
dbaUser = gbase
dbaGroup = gbase
dbaPwd = 'gbase'
rootPwd = 'tansh111111'
#dbRootPwd = ''
#rootPwdFile = rootPwd.json
[gbase@node1 gcinstall]$
Stop the services:
cexec data: 'gcluster_services all stop'
cexec data: 'gcluster_services all info'
Uninstall
[gbase@node1 gcinstall]$ ./unInstall.py --silent=demo.options
These GCluster nodes will be uninstalled.
CoordinateHost:
192.168.110.22
DataHost:
192.168.110.22 192.168.110.23
Are you sure to uninstall GCluster ([Y,y]/[N,n])? [Y,y] or [N,n] : y
192.168.110.23 unInstall 192.168.110.23 successfully.
192.168.110.22 unInstall 192.168.110.22 successfully.
Update all coordinator gcware conf.
192.168.110.21 update gcware conf successfully.
Start the services
cexec data: 'gcluster_services all start'
[gbase@node1 gcinstall]$ gcadmin
CLUSTER STATE: ACTIVE
VIRTUAL CLUSTER MODE: NORMAL
=================================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
=================================================================
| NodeName | IpAddress | gcware | gcluster | DataState |
-----------------------------------------------------------------
| coordinator1 | 192.168.110.21 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
=========================================================================================================
| GBASE DATA CLUSTER INFORMATION |
=========================================================================================================
| NodeName | IpAddress | DistributionId | gnode | syncserver | DataState |
---------------------------------------------------------------------------------------------------------
| node1 | 192.168.110.21 | 2 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
| node4 | 192.168.110.24 | 2 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
A coordinator node has to be uninstalled to drop it from the cluster; for a pure data node, uninstalling can be decided based on practical needs.
The scale-in test is complete.