incloud Development Docs 5.1.0 Help

Middleware

Databases

mysql

Default run command

docker run -d --name db --privileged=true --restart=always -p 3306:3306 \
  -v ~/docker_store/mysql_db/data:/var/lib/mysql \
  -v ~/docker_store/mysql_db/log:/var/log/mysql \
  -e TZ=Asia/Shanghai -e MYSQL_ROOT_PASSWORD="Netwisd*8" \
  mysql:8.1.0 --lower_case_table_names=1 --default-authentication-plugin=mysql_native_password

Change access restrictions

By default, the command above is all you need. If remote access does not work, run the following:

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'Netwisd*8';
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'Netwisd*8';

postgresql

Default run command

docker run -d --name pg \
  -p 5432:5432 --restart=always --privileged \
  -e POSTGRES_PASSWORD="Netwisd*8" \
  -e POSTGRES_USER=root \
  -e POSTGRES_DB=incloud5 \
  -e TZ=Asia/Shanghai \
  -v ~/docker_store/pg/data:/var/lib/postgresql/data \
  -v ~/docker_store/pg/log:/var/log/postgresql \
  postgres:14.6 -c max_connections=500

Additional settings

client_min_messages = warning
lc_messages = 'C'
max_connections = 5000

Get the full configuration

docker run -i --rm postgres cat /usr/share/postgresql/postgresql.conf.sample > my-postgres.conf
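The additional settings above can then be appended to the extracted sample and mounted back into the container. A minimal sketch; the in-container mount path and the `-c config_file=` flag are assumptions based on standard postgres image usage:

```shell
# Append the overrides from the "Additional settings" section to the extracted sample
cat >> my-postgres.conf <<'EOF'
client_min_messages = warning
lc_messages = 'C'
max_connections = 5000
EOF

# Then mount it back into the container (merge with the run command above):
#   -v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf \
#   postgres:14.6 -c config_file=/etc/postgresql/postgresql.conf
```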

redis

docker run -d --name redis -p 6379:6379 \
  -e TZ=Asia/Shanghai \
  --restart=always redis:7.2.2 --requirepass "Netwisd*8"

elasticsearch

Official image

Run command

docker run -d --name es -p 9200:9200 -p 9300:9300 \
  -e TZ=Asia/Shanghai \
  -e "discovery.type=single-node" -e "node.name=incloud" -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
  --restart=always elasticsearch:7.9.1

Install the ik analyzer plugin:

docker exec -it es /bin/bash
cd /usr/share/elasticsearch/bin
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.9.1/elasticsearch-analysis-ik-7.9.1.zip

If the download fails, you can get the archive directly from here: ik.tar

Once downloaded, if the es container has its disks mapped, just drop the archive into the plugins directory and extract it there. If there is no mapping, you can do:

docker cp ./ik.tar es:/usr/share/elasticsearch/plugins/
docker exec -it es /bin/bash
cd /usr/share/elasticsearch/plugins/
tar -xvf ik.tar
rm -rf ik.tar

Image with authentication

Run command

docker run -d --name es -p 9200:9200 -p 9300:9300 \
  -e TZ=Asia/Shanghai \
  -e "discovery.type=single-node" -e "node.name=incloud" -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
  --restart=always dockerhub.kubekey.local/netwisd/elasticsearch:7.9.1-secret

Install the analyzer plugin the same way as above.

Set the passwords:

docker exec -it es /bin/bash
cd /usr/share/elasticsearch/bin
./elasticsearch-setup-passwords interactive

Follow the prompts to set a password for each user.

mq

The mq subsystem uses rocketmq by default, but rabbitmq is also supported.

rocketmq

Since rocketmq runs in cluster mode, the nameserver must be started before the broker. For convenience, we packaged a unified startup command; just run the commands below.

The macos instructions here mean arm64. Because the current rocketmq version provides no official arm64 image, this image was built by the platform from the official installation package, with the relevant variables wrapped in.

macOS (arm64)

Option 1

  1. Create a config file ~/docker_store/mq/conf/broker.conf with the following content:

     brokerClusterName = DefaultCluster
     brokerName = broker-a
     brokerId = 0
     deleteWhen = 04
     fileReservedTime = 48
     brokerRole = ASYNC_MASTER
     flushDiskType = ASYNC_FLUSH
     autoCreateTopicEnable=true
     # Using the host's hostname for brokerIP1 fully avoids problems when the host IP changes frequently:
     brokerIP1=zoulimingdeMacBook-Pro.local

  2. Run the following command:

     docker run --name mq -d \
       -e TZ=Asia/Shanghai \
       --restart=always \
       --publish 9876:9876 \
       --publish 10909:10909 \
       --publish 10911-10912:10911-10912 \
       -v ~/docker_store/mq/conf/broker.conf:/root/incloud/rocketmq-4.9.3/conf/broker.conf \
       zouliming/rocketmq4-arm64:4.9.3

Option 2

Pass the value via bip directly; bip is a variable we wrapped ourselves, the official image does not have it.

docker run --name mq -d \
  -e TZ=Asia/Shanghai \
  -e bip=192.168.3.93 \
  --restart=always \
  --publish 9876:9876 \
  --publish 10909:10909 \
  --publish 10911-10912:10911-10912 \
  zouliming/rocketmq-arm64:4.9.3

Linux

Option 1

  1. Create a config file ~/docker_store/mq/conf/broker.conf with the following content; be sure to change brokerIP1 to the real IP:

     brokerClusterName = DefaultCluster
     brokerName = broker-a
     brokerId = 0
     deleteWhen = 04
     fileReservedTime = 48
     brokerRole = ASYNC_MASTER
     flushDiskType = ASYNC_FLUSH
     autoCreateTopicEnable=true
     brokerIP1=xxx

  2. Run the following command:

     docker run --name mq -d -e TZ=Asia/Shanghai --restart=always \
       --publish 9876:9876 \
       --publish 10909:10909 \
       --publish 10911-10912:10911-10912 \
       -v ~/docker_store/mq/conf/broker.conf:/home/rocketmq/rocketmq-4.9.3/conf/broker.conf \
       apache/rocketmq:4.9.3 \
       sh -c "./mqnamesrv & ./mqbroker -n localhost:9876 -c ../conf/broker.conf"
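The config-file step can also be done non-interactively; a sketch that writes the same config via a heredoc (BROKER_IP is a placeholder for the real host IP):

```shell
BROKER_IP=192.168.1.10          # placeholder: set to the real host IP
CONF_DIR=~/docker_store/mq/conf
mkdir -p "$CONF_DIR"

# Write broker.conf with brokerIP1 filled in
cat > "$CONF_DIR/broker.conf" <<EOF
brokerClusterName = DefaultCluster
brokerName = broker-a
brokerId = 0
deleteWhen = 04
fileReservedTime = 48
brokerRole = ASYNC_MASTER
flushDiskType = ASYNC_FLUSH
autoCreateTopicEnable=true
brokerIP1=${BROKER_IP}
EOF

grep brokerIP1 "$CONF_DIR/broker.conf"   # → brokerIP1=192.168.1.10
```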

Option 2

Add the parameters directly in the command:

docker run --name mq -d -e TZ=Asia/Shanghai --restart=always \
  --publish 9876:9876 \
  --publish 10909:10909 \
  --publish 10911-10912:10911-10912 \
  apache/rocketmq:4.9.3 \
  sh -c 'echo -e brokerIP1=192.168.41.134"\n"autoCreateTopicEnable=true >> ../conf/broker.conf && ./mqnamesrv & ./mqbroker -n localhost:9876 -c ../conf/broker.conf'
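The quoted "\n" in that command relies on the shell's echo -e expanding it into a real newline, so two separate lines are appended to broker.conf. The effect can be checked locally against a throwaway file (bash assumed):

```shell
# Reproduce the in-container append against a temp file
conf=$(mktemp)
echo -e brokerIP1=192.168.41.134"\n"autoCreateTopicEnable=true >> "$conf"
cat "$conf"
# → brokerIP1=192.168.41.134
# → autoCreateTopicEnable=true
```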

Option 3

Use the wrapped mq image; bip is a variable we wrapped ourselves, the official image does not have it.

docker run --name mq -d \
  -e TZ=Asia/Shanghai \
  -e bip=192.168.124.165 \
  --restart=always \
  --publish 9876:9876 \
  --publish 10909:10909 \
  --publish 10911-10912:10911-10912 \
  zouliming/rocketmq-linux64:4.9.3
Windows

  1. Create a config file d:/docker_store/mq/conf/broker.conf with the following content; be sure to change brokerIP1 to the real IP:

     brokerClusterName = DefaultCluster
     brokerName = broker-a
     brokerId = 0
     deleteWhen = 04
     fileReservedTime = 48
     brokerRole = ASYNC_MASTER
     flushDiskType = ASYNC_FLUSH
     autoCreateTopicEnable=true
     brokerIP1=xxx

  2. Run the following command (note the mount source matches the path from step 1):

     docker run --name mq -d -e TZ=Asia/Shanghai --restart=always \
       --publish 9876:9876 \
       --publish 10909:10909 \
       --publish 10911-10912:10911-10912 \
       -v d:/docker_store/mq/conf/broker.conf:/home/rocketmq/rocketmq-4.9.3/conf/broker.conf \
       apache/rocketmq:4.9.3 \
       sh -c "./mqnamesrv & ./mqbroker -n localhost:9876 -c ../conf/broker.conf"

rabbitmq

rabbitmq is used in much the same way as rocketmq. Note, however, that the rabbitmq image has no bundled startup command, so you need to write your own, as follows.

Single node:

docker run --name mq -d -p 15672:15672 -p 5672:5672 --restart=always \
  -e TZ=Asia/Shanghai --net=netwisd rabbitmq:3.8.2-management

Cluster (three hosts: dev01, dev02, dev03):

# Node dev01
hostnamectl set-hostname dev01
docker run -d --hostname dev01 --net=netwisd --name mq --privileged=true --restart=always \
  -p 15672:15672 -p 5672:5672 -p 25672:25672 -p 4369:4369 \
  --add-host=dev02:10.255.0.42 --add-host=dev03:10.255.0.43 \
  -e RABBITMQ_ERLANG_COOKIE='rabbitcookie' -e TZ=Asia/Shanghai rabbitmq:3.8.9-management
docker exec -it mq bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
exit

# Node dev02
docker run -d --hostname dev02 --net=netwisd --name mq --privileged=true --restart=always \
  -p 15672:15672 -p 5672:5672 -p 25672:25672 -p 4369:4369 \
  --add-host=dev01:10.255.0.41 --add-host=dev03:10.255.0.43 \
  -e RABBITMQ_ERLANG_COOKIE='rabbitcookie' -e TZ=Asia/Shanghai rabbitmq:3.8.9-management
  # --link mq_node1:mq_node1
docker exec -it mq bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --ram rabbit@dev01
rabbitmqctl start_app
exit

# Node dev03
docker run -d --hostname dev03 --net=netwisd --name mq --privileged=true --restart=always \
  -p 15672:15672 -p 5672:5672 -p 25672:25672 -p 4369:4369 \
  --add-host=dev01:10.255.0.41 --add-host=dev02:10.255.0.42 \
  -e RABBITMQ_ERLANG_COOKIE='rabbitcookie' -e TZ=Asia/Shanghai rabbitmq:3.8.9-management
  # --link mq_node1:mq_node1
docker exec -it mq bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --ram rabbit@dev01
rabbitmqctl start_app
exit

# Mirror all queues across the cluster
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'

nacos

docker run --name nacos2 -d \
  -p 8848:8848 \
  -p 9848:9848 \
  -p 9849:9849 \
  --privileged=true \
  --restart=always \
  -e MODE=standalone \
  -e TZ=Asia/Shanghai \
  nacos/nacos-server:v2.2.3

seata

seata uses the latest 1.7.x release. Map the corresponding config file first, then start the seata service.

server:
  port: 7091

spring:
  application:
    name: seata-server

logging:
  config: classpath:logback-spring.xml
  file:
    path: ${user.home}/logs/seata
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash

console:
  user:
    username: seata
    password: seata

seata:
  config:
    type: file
  registry:
    type: nacos
    preferred-networks: 30.240.*
    nacos:
      application: seata-server
      server-addr: zoulimingdeMacBook-Pro.local:8848
      group: DEFAULT_GROUP
      namespace: incloud5
      cluster: default
      username: nacos
      password: nacos
      context-path:
  store:
    mode: file
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.jpeg,/**/*.ico,/api/v1/auth/login

docker run -d --name seata \
  -p 8091:8091 -p 7091:7091 --restart=always \
  -e SEATA_IP=host.docker.internal \
  -e TZ=Asia/Shanghai \
  -v ~/docker_store/seata/sessionStore:/seata-server/sessionStore \
  -v ~/docker_store/seata/conf/application.yml:/seata-server/resources/application.yml \
  seataio/seata-server:1.7.1

minio

Two mainstream minio images are provided: the official one and the bitnami one.

docker run -d -p 9000:9000 -p 9001:9001 --name minio --restart=always \
  -e "MINIO_ACCESS_KEY=root" \
  -e "MINIO_SECRET_KEY=Netwisd*8" \
  -v /root/docker_store/minio/data:/data \
  -v /root/docker_store/minio/conf:/root/.minio \
  -e TZ=Asia/Shanghai minio/minio server /data --console-address ":9001"

docker run -d --name minio \
  --publish 9000:9000 \
  --publish 9001:9001 \
  --env MINIO_ROOT_USER="root" \
  --env MINIO_ROOT_PASSWORD="Netwisd*8" \
  -e TZ=Asia/Shanghai \
  --restart=always \
  --volume /path/to/minio-persistence:/data \
  bitnami/minio:latest

rocketmq local installation

If you want to install rocketmq on a bare machine, taking 4.9.3 as the example, first download it from the official site:

https://rocketmq.apache.org/download

For broker.conf, refer to the docker configuration above.

Write the script

Write the run script, and remember to adjust ROCKETMQ_HOME:

#!/bin/bash
# Set environment variables (adjust to the actual path)
export ROCKETMQ_HOME=/root/dev/rocketmq-4.9.3

# Define the NameServer and Broker startup commands
NAMESRV_CMD="$ROCKETMQ_HOME/bin/mqnamesrv"
BROKER_CMD="$ROCKETMQ_HOME/bin/mqbroker -n localhost:9876 -c $ROCKETMQ_HOME/conf/broker.conf"

# Find and kill any running mqnamesrv and mqbroker processes
NAMESRV_PID=$(jps | grep NamesrvStartup | awk '{print $1}')
if [[ ! -z "$NAMESRV_PID" ]]; then
  echo "Killing running NameServer process, PID: $NAMESRV_PID"
  kill -9 $NAMESRV_PID
fi

BROKER_PID=$(jps | grep BrokerStartup | awk '{print $1}')
if [[ ! -z "$BROKER_PID" ]]; then
  echo "Killing running Broker process, PID: $BROKER_PID"
  kill -9 $BROKER_PID
fi

# Wait for the processes to terminate completely
sleep 5

# Start the NameServer
echo "Starting NameServer..."
nohup $NAMESRV_CMD > /dev/null 2>&1 &

# Give the NameServer a moment to come up
sleep 10
echo "NameServer started!"

# Start the Broker
echo "Starting Broker..."
nohup $BROKER_CMD > /dev/null 2>&1 &
echo "Broker started!"

# Print the RocketMQ processes running in the background
echo "RocketMQ processes running in background:"
jps | grep -E 'NamesrvStartup|BrokerStartup'

echo "RocketMQ has been started successfully."

Set up as a system service

Register it as a system service so it starts automatically on boot.

vi /etc/systemd/system/rocketmq-netwisd.service

Then add the following content to the file:

[Unit]
Description=RocketMQ netwisd service
After=network.target

[Service]
Type=forking
ExecStart=/root/dev/rocketmq-4.9.3/bin/netwisd.sh
Restart=on-failure
User=root
Group=root

[Install]
WantedBy=multi-user.target

Make the service take effect: first, reload systemd so it recognizes the new service file:

systemctl daemon-reload

Enable the service so it starts at boot:

systemctl enable rocketmq-netwisd.service

Start the service:

systemctl start rocketmq-netwisd.service

Finally, check that the service is running:

systemctl status rocketmq-netwisd.service
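The whole subsection can be scripted; a sketch that writes the unit file into the current directory first and leaves the privileged steps as comments (paths as above):

```shell
# Write the unit file locally; install it to /etc/systemd/system as root afterwards
cat > rocketmq-netwisd.service <<'EOF'
[Unit]
Description=RocketMQ netwisd service
After=network.target

[Service]
Type=forking
ExecStart=/root/dev/rocketmq-4.9.3/bin/netwisd.sh
Restart=on-failure
User=root
Group=root

[Install]
WantedBy=multi-user.target
EOF

# As root:
#   cp rocketmq-netwisd.service /etc/systemd/system/
#   systemctl daemon-reload
#   systemctl enable rocketmq-netwisd.service
#   systemctl start rocketmq-netwisd.service
```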

Finally, remember to adjust rocketmq's JVM parameters.

canal

This middleware is optional: it is only needed if you want to sync data to es; otherwise there is no need to start it.

Corresponding database configuration

# canal-related configuration
# First enable mysql binlog in my.cnf:
[mysqld]
log-bin=mysql-bin   # enable binlog
binlog-format=ROW   # use ROW mode
server_id=1         # needed for MySQL replication; must not clash with canal's slaveId

# Run: show variables like 'log_bin'; to check that binlog is really enabled

# Create an account with replication privileges:
CREATE USER canal IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
-- or: GRANT ALL PRIVILEGES ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;

How it runs alongside the company's products

Canal

The official project only provides a shell script for running it, which is inconvenient on windows unless a Linux subsystem is installed. Below are relatively complete docker commands as actually used:

canal-admin

Backed by the embedded database

Not recommended: changing the configuration is inconvenient, and cluster deployment on k8s even more so; it is fine for local testing.

docker run -d -it \
  --name=canal-admin -m 1024m \
  -h 192.168.192.129 \
  -p 8089:8089 \
  -e server.port=8089 \
  -e canal.adminUser=admin \
  -e canal.adminPasswd=admin \
  canal/canal-admin:v1.1.5

Backed by mysql

docker run -d -it -h 1000 -p 8089:8089 \
  --name=canal-admin -m 1024m \
  -e server.port=8089 \
  -e spring.datasource.address=192.168.192.129 \
  -e spring.datasource.database=canal_manager \
  -e spring.datasource.username=root \
  -e spring.datasource.password=Netwisd*8 \
  canal/canal-admin

sql

CREATE DATABASE /*!32312 IF NOT EXISTS*/ `canal_manager` /*!40100 DEFAULT CHARACTER SET utf8 COLLATE utf8_bin */;

USE `canal_manager`;

SET NAMES utf8;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for canal_adapter_config
-- ----------------------------
DROP TABLE IF EXISTS `canal_adapter_config`;
CREATE TABLE `canal_adapter_config` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `category` varchar(45) NOT NULL,
  `name` varchar(45) NOT NULL,
  `status` varchar(45) DEFAULT NULL,
  `content` text NOT NULL,
  `modified_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for canal_cluster
-- ----------------------------
DROP TABLE IF EXISTS `canal_cluster`;
CREATE TABLE `canal_cluster` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(63) NOT NULL,
  `zk_hosts` varchar(255) NOT NULL,
  `modified_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for canal_config
-- ----------------------------
DROP TABLE IF EXISTS `canal_config`;
CREATE TABLE `canal_config` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `cluster_id` bigint(20) DEFAULT NULL,
  `server_id` bigint(20) DEFAULT NULL,
  `name` varchar(45) NOT NULL,
  `status` varchar(45) DEFAULT NULL,
  `content` text NOT NULL,
  `content_md5` varchar(128) NOT NULL,
  `modified_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `sid_UNIQUE` (`server_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for canal_instance_config
-- ----------------------------
DROP TABLE IF EXISTS `canal_instance_config`;
CREATE TABLE `canal_instance_config` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `cluster_id` bigint(20) DEFAULT NULL,
  `server_id` bigint(20) DEFAULT NULL,
  `name` varchar(45) NOT NULL,
  `status` varchar(45) DEFAULT NULL,
  `content` text NOT NULL,
  `content_md5` varchar(128) DEFAULT NULL,
  `modified_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name_UNIQUE` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for canal_node_server
-- ----------------------------
DROP TABLE IF EXISTS `canal_node_server`;
CREATE TABLE `canal_node_server` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `cluster_id` bigint(20) DEFAULT NULL,
  `name` varchar(63) NOT NULL,
  `ip` varchar(63) NOT NULL,
  `admin_port` int(11) DEFAULT NULL,
  `tcp_port` int(11) DEFAULT NULL,
  `metric_port` int(11) DEFAULT NULL,
  `status` varchar(45) DEFAULT NULL,
  `modified_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for canal_user
-- ----------------------------
DROP TABLE IF EXISTS `canal_user`;
CREATE TABLE `canal_user` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `username` varchar(31) NOT NULL,
  `password` varchar(128) NOT NULL,
  `name` varchar(31) NOT NULL,
  `roles` varchar(31) NOT NULL,
  `introduction` varchar(255) DEFAULT NULL,
  `avatar` varchar(255) DEFAULT NULL,
  `creation_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

SET FOREIGN_KEY_CHECKS = 1;

-- ----------------------------
-- Records of canal_user
-- ----------------------------
BEGIN;
INSERT INTO `canal_user` VALUES (1, 'admin', '6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9', 'Canal Manager', 'admin', NULL, NULL, '2019-07-14 00:05:28');
COMMIT;

canal-server

docker run -d -it -h 192.168.192.129 \
  --name=canal-server -m 4096m \
  -p 11110:11110 -p 11111:11111 -p 11112:11112 \
  -e canal.admin.manager=192.168.192.129:8089 \
  -e canal.admin.port=11110 \
  -e canal.admin.user=admin \
  -e canal.admin.passwd=4ACFE3202A5FF5CF467898FC58AAB1D615029441 \
  dockerhub.kubekey.local/incloud/canal-server

Pitfalls and notes

Avoiding the pitfalls

For self-hosted MySQL, first enable binlog writing and set binlog-format to ROW; configure my.cnf as follows:

[mysqld]
log-bin=mysql-bin   # enable binlog
binlog-format=ROW   # use ROW mode
server_id=1         # needed for MySQL replication; must not clash with canal's slaveId

Grant the canal account the privileges required to act as a MySQL slave; if the account already exists, you can grant directly:

CREATE USER canal IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
-- GRANT ALL PRIVILEGES ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;

canal-server configuration

# Configuration
#################################################
#########     common argument   #############
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
# canal.user = canal
# canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441
# admin auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =

canal.zkServers =
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
canal.serverMode = rocketMQ
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size, default 1kb
canal.instance.memory.buffer.memunit = 1024
## memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size = 1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire, default 360 hour (15 days)
canal.instance.tsdb.snapshot.expire = 360

#################################################
#########       destinations      #############
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to 'true' means that when binlog pos not found, skip to latest.
# WARN: pls keep 'false' in production env, or if you know what you want.
canal.auto.reset.latest.pos.mode = false

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

##################################################
#########        MQ Properties       #############
##################################################
# aliyun ak/sk, support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid=

canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local

canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8

##################################################
#########            Kafka           #############
##################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0

kafka.kerberos.enable = false
kafka.kerberos.krb5.file = ../conf/kerberos/krb5.conf
kafka.kerberos.jaas.file = ../conf/kerberos/jaas.conf

# sasl demo
# kafka.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \\n username=\"alice\" \\npassword=\"alice-secret\";
# kafka.sasl.mechanism = SCRAM-SHA-512
# kafka.security.protocol = SASL_PLAINTEXT

##################################################
#########           RocketMQ         #############
##################################################
rocketmq.producer.group = canalGroup
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag = incloud4BinLog

##################################################
#########           RabbitMQ         #############
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.deliveryMode =

##################################################
#########            Pulsar          #############
##################################################
pulsarmq.serverUrl =
pulsarmq.roleToken =
pulsarmq.topicTenantPrefix =

Instance configuration

## mysql serverId
canal.instance.mysql.slaveId = 1234
# position info; change to your own database
canal.instance.master.address = 127.0.0.1:3306
canal.instance.master.journal.name =
canal.instance.master.position =
canal.instance.master.timestamp =
#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
# username/password; change to your own database
canal.instance.dbUsername = canal
canal.instance.dbPassword = canal
canal.instance.defaultDatabaseName =
canal.instance.connectionCharset = UTF-8
# table regex
canal.instance.filter.regex = .*\\..*
Last modified: 20 January 2025