Author: Billmay表妹


Background

This walkthrough builds an incremental sync pipeline from OceanBase (OB) to TiDB using OceanBase Binlog Server + Canal + Canal Adapter. The core workflow covers deployment, configuration, service startup, and sync verification, as follows.



Setting up the OceanBase Binlog Server



Prerequisites

Before deploying the Binlog Server (obbinlog), make sure the following conditions are met:

  1. obconfig_url is configured on the OceanBase cluster. Log in to the OceanBase cluster and run:
SHOW PARAMETERS LIKE 'obconfig_url';
     If it is not configured, install and configure obconfigserver manually. For details, see: Deploy obconfigserver from the command line.
  2. ODP (OBProxy) is deployed and version-compatible. The Binlog service relies on ODP for connection support, and both the ODP and OceanBase database versions must be within the supported range. See: Release notes.
  3. Network connectivity. Make sure the Binlog Server can reach the OceanBase instance's SQL/RPC ports and the metadata database port, and that ODP can reach binlog_service_ip
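The connectivity requirement above can be spot-checked from the Binlog Server host before deploying. A minimal sketch using bash's /dev/tcp redirection (the addresses and ports in the usage comments are placeholders; substitute your own OB/ODP endpoints):

```shell
# check_port HOST PORT: succeed if a TCP connection can be opened within 2s
check_port() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Usage (placeholder endpoints; replace with your OB / ODP addresses and ports):
# check_port 10.10.10.101 2881 && echo "OB SQL port reachable"
# check_port 10.10.10.101 2882 && echo "OB RPC port reachable"
```

Run it once per required port from each host that needs access; a silent non-zero exit means the port is unreachable.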


Step 1: Installation
  1. Community edition (yum install shown here)
# Install after adding the software repository
yum install -y obbinlog

After installation, the default path is /home/ds/oblogproxy

Note: Enterprise edition users should contact OceanBase technical support for the installation package. For details, see: Binlog service overview

  2. Manual extraction (optional)

You can also download the RPM package and extract it to a target directory with rpm2cpio.



Step 2: Initialize and start the node

On first startup, initialize the metadata tables; subsequent nodes do not need to repeat this step.

After startup, query node status with:

SHOW NODES;

For details, see: Node management



How an OceanBase tenant subscribes to the Binlog Server



Step: Create a Binlog task

First, confirm the tenant information:

-- Check the cluster name
SHOW PARAMETERS LIKE 'cluster';
-- Get the config_url
SHOW PARAMETERS LIKE 'obconfig_url';

Then run the CREATE BINLOG command on the Binlog Server, for example:

CREATE BINLOG INSTANCE binlog1 FOR `demo`.`obmysql` CLUSTER_URL='http://1xx.xx.xx.1:8080/services?Action=ObRootServiceInfo';

Parameters:

  • ${cluster_name}: the actual cluster name
  • ${tenant_name}: the tenant name
  • ${config_url}: the value returned by SHOW PARAMETERS LIKE 'obconfig_url'

Reference: Create a Binlog instance
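The statement can be assembled mechanically from the three values above. A small sketch (the helper name is my own; the argument values are the example values from this article):

```shell
# build_create_binlog INSTANCE CLUSTER TENANT CONFIG_URL
# Prints the CREATE BINLOG statement with the placeholders filled in.
build_create_binlog() {
  printf "CREATE BINLOG INSTANCE %s FOR \`%s\`.\`%s\` CLUSTER_URL='%s';\n" \
    "$1" "$2" "$3" "$4"
}

build_create_binlog binlog1 demo obmysql \
  "http://1xx.xx.xx.1:8080/services?Action=ObRootServiceInfo"
```

Paste the printed statement into a session connected to the Binlog Server.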



How to check whether an OceanBase instance is generating Binlog normally



Method 1: Check the logs

Check the obbinlog runtime log, usually located at:

/home/ds/oblogproxy/log/logproxy.log

Search for key errors or status messages, for example whether clog pulls are succeeding.

If a resource-shortage error appears, such as:

[error] selection_strategy.cpp(519): [ResourcesFilter] The resource threshold of node ... does not meet requirements

check whether CPU, memory, or disk usage exceeds the thresholds.

For details, see: Troubleshooting guide
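A quick way to apply this check is to grep the log for the error markers shown above. A minimal sketch (the path in the usage comment is the default from this article; adjust it for your install):

```shell
# scan_log FILE: print the most recent error / resource-filter lines
scan_log() {
  grep -nE '\[error\]|ResourcesFilter' "$1" | tail -n 20
}

# Usage against the default log location:
# scan_log /home/ds/oblogproxy/log/logproxy.log
```

Empty output is a good sign; any hit warrants checking the node's CPU, memory, and disk headroom.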



Method 2: Monitoring and diagnostic tools

You can run one-click diagnostics with the obdiag tool to collect cluster and Binlog status information.



How to enter the OceanBase Binlog Server install directory and its run subdirectory and check the files they contain

Default install path

The community edition installs to:

/home/ds/oblogproxy

Enter the run directory and list the files

cd /home/ds/oblogproxy/run
ls -la

Common subdirectories and files under the install directory include:

  • bin/: executables, such as the logproxy main process
  • conf/: configuration directory
  • log/: log files, notably logproxy.log
  • run/: runtime artifacts such as PID files and socket files
  • lib/: dependency libraries

You can check the running process with:

ps -ef | grep logproxy



Additional notes

  • Unsupported scenarios: OceanBase's Binlog service is not yet suitable for primary/standby setups or incremental recovery. See: Binlog service overview
  • Version compatibility: different obbinlog versions support different OceanBase versions. If your version is outside the supported range, you can manually install the matching obcdc dependency. See: obbinlog V4.3.2


Summary

  • Check obconfig_url: SHOW PARAMETERS LIKE 'obconfig_url';
  • Create a Binlog instance: CREATE BINLOG INSTANCE ... FOR `cluster`.`tenant` CLUSTER_URL='...'
  • Install directory: /home/ds/oblogproxy
  • Log path: /home/ds/oblogproxy/log/logproxy.log
  • Query nodes: SHOW NODES;

Consider pairing this with OCP or the obd tool for visual management and automated deployment to improve operational efficiency.

For more details, see the official documentation:

  • Deployment guide
  • Troubleshooting guide


Install ZooKeeper

Kafka itself runs on ZooKeeper, and the Kafka tarball ships scripts that can start ZooKeeper directly:

wget https://archive.apache.org/dist/kafka/3.9.0/kafka_2.13-3.9.0.tgz 
tar zxvf kafka_2.13-3.9.0.tgz
cd kafka_2.13-3.9.0
bin/zookeeper-server-start.sh config/zookeeper.properties



Install Java

yum -y install java
java --version

Output:

#openjdk 11.0.21 2023-10-17
#OpenJDK Runtime Environment Bisheng (build 11.0.21+9)
#OpenJDK 64-Bit Server VM Bisheng (build 11.0.21+9, mixed mode, sharing)



Install Canal

Install canal.deployer-1.1.8.tar.gz and canal.adapter-1.1.8.tar.gz:

wget https://github.com/alibaba/canal/releases/download/canal-1.1.8/canal.deployer-1.1.8.tar.gz
wget https://github.com/alibaba/canal/releases/download/canal-1.1.8/canal.adapter-1.1.8.tar.gz



Modify the deployer configuration

Two configuration files need changes: canal.properties and instance.properties.

Edit the canal.properties file:

vi /root/canal-for-ob-1.1.8/conf/canal.properties

canal.properties contents:

#################################################
# common argument
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
canal.user = canal
canal.passwd =
 
# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd =
# admin auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =
 
canal.zkServers = 127.0.0.1:2181  <--- set this to the ZooKeeper address
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
canal.serverMode = tcp  <--- set this to tcp
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
# memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
# memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024
# memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true
 
# detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false
 
# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size =  1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60
 
# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30
 
# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false
 
# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB
 
# binlog ddl isolation
canal.instance.get.ddl.isolation = false
 
# parallel parser config
canal.instance.parser.parallel = true
# concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
# disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256
 
# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360
 
#################################################
# destinations
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to 'true' means that when binlog pos not found, skip to latest.
# WARN: pls keep 'false' in production env, or if you know what you want.
canal.auto.reset.latest.pos.mode = false
 
canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml
 
canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
#canal.instance.global.spring.xml = classpath:spring/file-instance.xml
canal.instance.global.spring.xml = classpath:spring/default-instance.xml
#canal.instance.global.spring.xml = classpath:spring/ob-default-instance.xml
 
##################################################
# MQ Properties
##################################################
# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid=
 
canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local
 
canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8
 
##################################################
# Kafka
##################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0
 
kafka.kerberos.enable = false
kafka.kerberos.krb5.file = ../conf/kerberos/krb5.conf
kafka.kerberos.jaas.file = ../conf/kerberos/jaas.conf
 
# sasl demo
# kafka.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \\n username=\"alice\" \\npassword=\"alice-secret\";
kafka.sasl.mechanism = SCRAM-SHA-512
kafka.security.protocol = SASL_PLAINTEXT
 
##################################################
# RocketMQ
##################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag =
 
##################################################
# RabbitMQ
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.queue =
rabbitmq.routingKey =
rabbitmq.deliveryMode =
 
 
##################################################
# Pulsar
##################################################
pulsarmq.serverUrl =
pulsarmq.roleToken =
pulsarmq.topicTenantPrefix =

Configure the instance.properties file

vi /root/canal-for-ob-1.1.8/conf/example/instance.properties

Set the parameters below; note the required ones:

#################################################
# mysql serverId , v1.0.26+ will autoGen
canal.instance.mysql.slaveId=0
 
# enable gtid use true/false
canal.instance.gtidon=false
 
# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=
 
# position info
canal.instance.master.address=10.10.10.101:2883   <--- the OBProxy address
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=
 
# multi stream for polardbx
canal.instance.multi.stream.on=false
 
# ssl
#canal.instance.master.sslMode=DISABLED
#canal.instance.master.tlsVersions=
#canal.instance.master.trustCertificateKeyStoreType=
#canal.instance.master.trustCertificateKeyStoreUrl=
#canal.instance.master.trustCertificateKeyStorePassword=
#canal.instance.master.clientCertificateKeyStoreType=
#canal.instance.master.clientCertificateKeyStoreUrl=
#canal.instance.master.clientCertificateKeyStorePassword=
 
# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal
 
#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=
 
# username/password
canal.instance.dbUsername=root@ob_user1#ob_test1   <--- the OB user
canal.instance.dbPassword=PassworD123              <--- the OB password
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==
 
# table regex
canal.instance.filter.regex=.*\\..*
# table black regex
canal.instance.filter.black.regex=mysql\\.slave_.*
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch
 
# mq config
canal.mq.topic=example
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,topic2:mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.enableDynamicQueuePartition=false
#canal.mq.partitionsNum=3
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#################################################



Start the Canal server

sh /root/canal-for-ob-1.1.8/bin/startup.sh

The log output should contain no errors; if any appear, analyze and resolve them:

2025-12-11 17:18:50.995 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2025-12-11 17:18:51.001 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2025-12-11 17:18:51.008 [main] INFO com.alibaba.otter.canal.deployer.CanalStarter - ## start the canal server.
2025-12-11 17:18:51.089 [main] INFO com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[172.17.0.1(172.17.0.1):11111]
2025-12-11 17:18:52.038 [main] INFO com.alibaba.otter.canal.deployer.CanalStarter - ## the canal server is running now ......

Modify the Canal adapter configuration

vi /root/canal-for-adapter-ob-1.1.8/conf/application.yml

Canal adapter configuration file (application.yml):

server:
  port: 8081
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null
 
canal.conf:
  mode: tcp #tcp kafka rocketMQ rabbitMQ
  flatMessage: true
  zookeeperHosts:
  syncBatchSize: 1000
  retries: -1
  timeout:
  accessKey:
  secretKey:
  consumerProperties:
    # canal tcp consumer
    canal.tcp.server.host: 127.0.0.1:11111  <--- the canal server address
    canal.tcp.zookeeper.hosts:
    canal.tcp.batch.size: 500
    canal.tcp.username:
    canal.tcp.password:
    # kafka consumer
    # rocketMQ consumer
    # rabbitMQ consumer

  srcDataSources:
    defaultDS:
      url: jdbc:mysql://xx.xxx.xx.203:2883/db1?useUnicode=true  <-- the source-side address
      username: root@ob_user1#ob_test1  <-- the OB username
      password: PassworD123   <-- the OB password
  canalAdapters:
  - instance: example # canal instance Name or mq topic name
    groups:
    - groupId: g1
      outerAdapters:
      - name: rdb
        key: mysql1  <--- remember this key; it is used again in a later config file
        properties:
          jdbc.driverClassName: com.mysql.jdbc.Driver
          jdbc.url: jdbc:mysql://xx.xxx.xxx.247:4000/db1?useUnicode=true  <-- the target-side address
          jdbc.username: tidb_test1
          jdbc.password: PassworD123

Edit mytest_user.yml to configure the subscription sync mapping:

vi /root/canal-for-adapter-ob-1.1.8/conf/rdb/mytest_user.yml

mytest_user.yml parameters:

dataSourceKey: defaultDS
destination: example
groupId: g1
outerAdapterKey: mysql1   <--- must match the key configured earlier
concurrent: true
dbMapping:
  mirrorDb: true
  database: db1
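With mirrorDb: true, the whole db1 database is mirrored 1:1 to the target. If you only need specific tables, the rdb adapter also accepts table-level mappings instead; a hypothetical sketch (the table and column names are illustrative, not from this setup):

```yaml
dataSourceKey: defaultDS
destination: example
groupId: g1
outerAdapterKey: mysql1
concurrent: true
dbMapping:
  database: db1
  table: t1
  targetTable: db1.t1
  targetPk:
    id: id
  mapAll: true   # map all columns by name; set false to list column mappings explicitly
```

Each such mapping lives in its own yml file under conf/rdb/.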



Start the canal-adapter

sh /root/canal-for-adapter-ob-1.1.8/bin/startup.sh

The log should show no errors:

2025-12-11 15:30:28.800 [SpringApplicationShutdownHook] INFO  ru.yandex.clickhouse.ClickHouseDriver - Driver registered
2025-12-11 15:30:29.885 [SpringApplicationShutdownHook] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## stop the canal client adapters
2025-12-11 15:30:29.886 [pool-9-thread-1] INFO  c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example is waiting for adapters' worker thread die!
2025-12-11 15:30:29.961 [pool-9-thread-1] INFO  c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example adapters worker thread dead!
2025-12-11 15:30:30.158 [pool-9-thread-1] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-2} closing ...
2025-12-11 15:30:30.162 [pool-9-thread-1] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-2} closed
2025-12-11 15:30:30.162 [pool-9-thread-1] INFO  c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example all adapters destroyed!
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - All canal adapters destroyed
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-1} closing ...
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-1} closed
2025-12-11 15:30:30.163 [SpringApplicationShutdownHook] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## canal client adapters are down.
2025-12-11 17:26:01.842 [main] INFO  c.a.otter.canal.adapter.launcher.CanalAdapterApplication - Starting CanalAdapterApplication using Java xx.0.21 on tidbxxx.xxx.xxx.xxx.net with PID 3965171 (/root/canal-for-adapter-ob-1.1.8/lib/client-adapter.launcher-1.1.8.jar started by root in /root/canal-for-adapter-ob-1.1.8/bin)
2025-12-11 17:26:01.847 [main] INFO  c.a.otter.canal.adapter.launcher.CanalAdapterApplication - No active profile set, falling back to 1 default profile: "default"
2025-12-11 17:26:02.300 [main] INFO  org.springframework.cloud.context.scope.GenericScope - BeanFactory id=d4f2b56b-aacd-327d-9217-5ce4cfc37805
2025-12-11 17:26:02.480 [main] INFO  o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port(s): 8081 (http)
2025-12-11 17:26:02.487 [main] INFO  org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-8081"]
2025-12-11 17:26:02.487 [main] INFO  org.apache.catalina.core.StandardService - Starting service [Tomcat]
2025-12-11 17:26:02.487 [main] INFO  org.apache.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.75]
2025-12-11 17:26:02.570 [main] INFO  o.a.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
2025-12-11 17:26:02.570 [main] INFO  o.s.b.w.s.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 692 ms
2025-12-11 17:26:02.806 [main] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-1} inited
2025-12-11 17:26:03.104 [main] INFO  org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8081"]
2025-12-11 17:26:03.115 [main] INFO  o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 8081 (http) with context path ''
2025-12-11 17:26:03.118 [main] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## syncSwitch refreshed.
2025-12-11 17:26:03.118 [main] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## start the canal client adapters.
2025-12-11 17:26:03.119 [main] INFO  c.a.otter.canal.client.adapter.support.ExtensionLoader - extension classpath dir: /root/canal-for-adapter-ob-1.1.8/plugin
2025-12-11 17:26:03.166 [main] INFO  c.a.otter.canal.client.adapter.rdb.config.ConfigLoader - ## Start loading rdb mapping config ...
2025-12-11 17:26:03.174 [main] INFO  c.a.otter.canal.client.adapter.rdb.config.ConfigLoader - ## Rdb mapping config loaded
2025-12-11 17:26:03.198 [main] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-2} inited
2025-12-11 17:26:03.202 [main] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Load canal adapter: rdb succeed
2025-12-11 17:26:03.207 [main] INFO  c.alibaba.otter.canal.connector.core.spi.ExtensionLoader - extension classpath dir: /root/canal-for-adapter-ob-1.1.8/plugin
2025-12-11 17:26:03.221 [main] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Start adapter for canal-client mq topic: example-g1 succeed
2025-12-11 17:26:03.222 [main] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## the canal client adapters are running now ......
2025-12-11 17:26:03.222 [Thread-3] INFO  c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - =============> Start to connect destination: example <=============
2025-12-11 17:26:03.228 [main] INFO  c.a.otter.canal.adapter.launcher.CanalAdapterApplication - Started CanalAdapterApplication in 1.697 seconds (JVM running for 2.164)
2025-12-11 17:26:03.354 [Thread-3] INFO  c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - =============> Subscribe destination: example succeed <=============



Verify OceanBase incremental sync

Insert data on the OB side to verify that incremental data syncs:

mysql> select version();
+------------------------------+
| version()                    |
+------------------------------+
| 5.7.25-OceanBase_CE-v4.3.5.4 |
+------------------------------+
1 row in set (0.00 sec)
 
mysql> use db1;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
 
Database changed
mysql> show tables;
+---------------+
| Tables_in_db1 |
+---------------+
| t1            |
+---------------+
1 row in set (0.00 sec)
 
mysql> desc t1;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id    | int(11)     | NO   | PRI | NULL    |       |
| col1  | varchar(20) | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.01 sec)
 
mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
|  1 | ccc  |
|  2 | ccc  |
|  3 | ccc  |
+----+------+
3 rows in set (0.00 sec)
 
mysql> insert into \c
mysql> insert into t1 (id,col1) values (4,'ddd');
Query OK, 1 row affected (0.01 sec)
 
mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
|  1 | ccc  |
|  2 | ccc  |
|  3 | ccc  |
|  4 | ddd  |
+----+------+
4 rows in set (0.00 sec)
 
On the TiDB side, the data has been synced:
mysql> select version();
+--------------------+
| version()          |
+--------------------+
| 8.0.11-TiDB-v7.5.5 |
+--------------------+
1 row in set (0.00 sec)
 
mysql> use db1;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
 
Database changed
mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
|  1 | ccc  |
|  2 | ccc  |
|  3 | ccc  |
|  4 | ddd  |
+----+------+
4 rows in set (0.00 sec)
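Beyond eyeballing the two SELECTs, you can diff full dumps of both sides. A minimal sketch, assuming you export each table ordered by primary key (the file names and export commands in the comments are illustrative):

```shell
# rows_match FILE_A FILE_B: succeed iff the two dumps are identical.
# Produce the dumps with something like:
#   mysql -h <ob_host>   -P 2883 ... -e 'SELECT * FROM db1.t1 ORDER BY id' > ob.txt
#   mysql -h <tidb_host> -P 4000 ... -e 'SELECT * FROM db1.t1 ORDER BY id' > tidb.txt
rows_match() {
  diff -q "$1" "$2" > /dev/null
}

# rows_match ob.txt tidb.txt && echo "in sync" || echo "DIFFERS"
```

Ordering both dumps by primary key makes the comparison deterministic even if the two sides return rows in different physical order.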



Notes

  • Version compatibility: make sure the obbinlog, OB cluster, ODP, and Canal versions match.
  • Log monitoring: regularly check logproxy.log and the Canal Server/Adapter logs, and promptly investigate resource shortages (CPU/memory/disk) or connection errors.
  • Operational efficiency: consider OCP or the obd tool for visual management and automated deployment.


Summary

TiDB and OceanBase, both flagship domestic distributed databases, are strong choices for operations DBAs on the strength of their respective technologies. In recent years, more and more OceanBase users have adopted TiDB as a downstream database, a trend that reflects differences between the two in features, ecosystem, and fit for user needs. The core motivations include simplifying the tech stack and reducing operations cost, TiDB's friendliness to business workloads and development, cross-region sync and stability requirements, and an active community with long-term momentum.

As enterprises focus more on technical flexibility, operational efficiency, and long-term cost, TiDB, with its compatibility, scalability, and ecosystem advantages, is becoming a preferred downstream database for OceanBase users looking to broaden their stack and reduce lock-in risk. This trend reflects both the diversifying demands of the distributed-database market and TiDB's overall competitiveness in complex scenarios.
隨着企業對技術靈活性、運維效率及長期成本的關注,TiDB 憑藉兼容性、擴展性與生態優勢,正成為 OceanBase 用户拓展技術棧、降低綁定風險的優選下游數據庫。這一趨勢不僅體現了分佈式數據庫市場的多元化需求,也驗證了 TiDB 在複雜場景下的綜合競爭力。