Seata officially provides demo source code for all kinds of scenarios; if you are interested, pull it down and try it yourself. Since my day-to-day stack is Spring Boot + Dubbo + MySQL, and for the sake of practicality and hands-on practice, I did not run the official demos. Instead I chose to build a demo on top of my usual project environment to get familiar with the various Seata scenarios, hoping to run into, and solve, whatever problems come up along the way.

First, the simplest case: integrating AT mode.

Tools and environment

IDEA 2019, Spring Boot 2.3.0.RELEASE, JDK 8, Dubbo 2.7.1, Maven 3.6.2, MySQL 8.0.20.

Official example

The business logic of a user purchasing a commodity. The whole flow is backed by three microservices:

  • Storage service: deducts the stock count for a given commodity.
  • Order service: creates an order according to the purchase request.
  • Account service: debits money from the user's account.

Architecture diagram

(architecture diagram omitted)

Storage service

public interface StorageService {

    /**
     * Deduct the stock count
     */
    void deduct(String commodityCode, int count);
}

Order service

public interface OrderService {

    /**
     * Create an order
     */
    Order create(String userId, String commodityCode, int orderCount);
}

Account service

public interface AccountService {

    /**
     * Debit from the user's account
     */
    void debit(String userId, int money);
}

Main business logic

public class BusinessServiceImpl implements BusinessService {

    private StorageService storageService;

    private OrderService orderService;

    /**
     * Purchase
     */
    public void purchase(String userId, String commodityCode, int orderCount) {

        storageService.deduct(commodityCode, orderCount);

        orderService.create(userId, commodityCode, orderCount);
    }
}
public class OrderServiceImpl implements OrderService {

    private OrderDAO orderDAO;

    private AccountService accountService;

    public Order create(String userId, String commodityCode, int orderCount) {

        int orderMoney = calculate(commodityCode, orderCount);

        accountService.debit(userId, orderMoney);

        Order order = new Order();
        order.userId = userId;
        order.commodityCode = commodityCode;
        order.count = orderCount;
        order.money = orderMoney;

        // INSERT INTO orders ...
        return orderDAO.insert(order);
    }
}

Seata's distributed transaction solution

(solution diagram omitted)

All we need is a @GlobalTransactional annotation on the business method:

@GlobalTransactional
    public void purchase(String userId, String commodityCode, int orderCount) {
        ......
    }

Database preparation

Create the databases order, account and storage, and in each of them create the corresponding business tables plus Seata's undo_log table. (See the attachment for the SQL scripts.)

Building the services

1. Create a Spring Boot POM project zhengcs-seata

2. Create a new module zhengcs-seata-account (a Spring Boot project)

Make zhengcs-seata-account a sub-module of zhengcs-seata.

Build a complete Maven application based on Spring Boot + Dubbo + MySQL

1) Configure the pom file

<!--dubbo-->
		<dependency>
			<groupId>org.apache.dubbo</groupId>
			<artifactId>dubbo</artifactId>
			<version>2.7.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.dubbo</groupId>
			<artifactId>dubbo-spring-boot-starter</artifactId>
			<version>2.7.1</version>
		</dependency>

		<!--zk-->
		<dependency>
			<groupId>org.apache.curator</groupId>
			<artifactId>curator-framework</artifactId>
			<version>2.13.0</version>
		</dependency>
		<dependency>
			<groupId>org.apache.curator</groupId>
			<artifactId>curator-recipes</artifactId>
			<version>2.13.0</version>
		</dependency>

		<!--DB-->
		<dependency>
			<groupId>com.alibaba</groupId>
			<artifactId>druid</artifactId>
			<version>1.1.10</version>
		</dependency>
		<dependency>
			<groupId>org.mybatis.spring.boot</groupId>
			<artifactId>mybatis-spring-boot-starter</artifactId>
			<version>1.3.2</version>
		</dependency>
		<dependency>
			<groupId>mysql</groupId>
			<artifactId>mysql-connector-java</artifactId>
			<scope>runtime</scope>
		</dependency>

2) Configure application.yml

server:
  port: 8083
spring:
  application:
    name: zhengcs-seata-account
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://localhost:3306/account
    username: test
    password: 123456
    filters: stat,slf4j
    maxActive: 5
    maxWait: 60000
    minIdle: 1
    initialSize: 1
    timeBetweenEvictionRunsMillis: 60000
    minEvictableIdleTimeMillis: 300000
    validationQuery: select 1
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    poolPreparedStatements: true
    maxOpenPreparedStatements: 20

dubbo:
  application:
    name: zhengcs-seata-account
  protocol:
    name: dubbo
    port: 20883
  registry:
    address: N/A
    check: false
  consumer:
    check: false
    timeout: 10000

To keep things simple I did not set up a ZooKeeper environment; Dubbo RPC calls are made with direct point-to-point connections instead, so registry.address is set to N/A.
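
With the registry disabled, the provider still exports its service on the configured dubbo protocol port, and the consumer bypasses the registry by pointing @Reference at the provider's address directly. A minimal sketch of the two sides, assuming a made-up DemoService interface that is not part of this project:

// Hypothetical provider/consumer pair, only to illustrate direct connection (registry.address = N/A).
public interface DemoService {
    String sayHello(String name);
}

// Provider side: export the implementation over the dubbo protocol port configured in application.yml (20883 here).
// Note this is Dubbo's @Service (org.apache.dubbo.config.annotation.Service), not Spring's.
@org.apache.dubbo.config.annotation.Service
public class DemoServiceImpl implements DemoService {
    @Override
    public String sayHello(String name) {
        return "hello, " + name;
    }
}

// Consumer side: skip the registry and point straight at the provider's address.
public class DemoConsumer {
    @org.apache.dubbo.config.annotation.Reference(url = "dubbo://localhost:20883", check = false)
    private DemoService demoService;
}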

3) Generate the mapper, XML, service, entity, etc. for the account table

You can do this in whatever way you are used to; to keep it simple, just copy them from the official demo source code. A rough sketch of what the account-side pieces end up looking like follows below.
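The sketch below assumes the column layout of the official sample's account_tbl (id / user_id / money); adjust it to whatever the attached SQL actually defines. Annotation-based statements are used here only to keep the sketch compact, while the project itself loads XML mappers from classpath*:/mapper/*.xml:

import lombok.Data;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;
import org.apache.ibatis.annotations.Update;

// Account.java
@Data
public class Account {
    private Long id;
    private String userId;
    private Integer money; // account balance
}

// AccountMapper.java
@Mapper
public interface AccountMapper {

    @Select("SELECT id, user_id, money FROM account_tbl WHERE user_id = #{userId}")
    Account selectByUserId(@Param("userId") String userId);

    // the "money >= #{money}" guard keeps the balance from going negative
    @Update("UPDATE account_tbl SET money = money - #{money} WHERE user_id = #{userId} AND money >= #{money}")
    int reduceMoney(@Param("userId") String userId, @Param("money") int money);
}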

4) Configure DBConfig

@Configuration
public class DBConfig {

    // bind the spring.datasource.* properties from application.yml onto a DruidDataSource
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource(){
        return new DruidDataSource();
    }


    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception{
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSource);
        factoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath*:/mapper/*.xml"));
        factoryBean.setConfigLocation(new ClassPathResource("mybatis-configuration.xml"));
        return factoryBean.getObject();
    }

    @Bean
    public DataSourceTransactionManager dataSourceTransactionManager(DataSource dataSource){
        return new DataSourceTransactionManager(dataSource);
    }

}

5) Register the Dubbo interfaces

To keep the Dubbo interfaces in one place, a separate Maven module, zhengcs-seata-interface, holds all of the project's Dubbo interface definitions.

The account module then implements the DubboAccountService interface, roughly as sketched below.
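A hedged sketch of the shared contract and its account-side implementation; the method name decreaseAccount, the AccountRequest DTO and the Result.success(...) factory are placeholders rather than names taken from the original source (Result, StorageRequest and OrderRequest do appear later in the BusiService code):

// zhengcs-seata-interface: only interfaces and request/response DTOs live here,
// so every service can depend on it without pulling in any implementation.
public interface DubboAccountService {
    Result<Boolean> decreaseAccount(AccountRequest request);
}

// zhengcs-seata-account: implement the contract and export it via Dubbo.
@org.apache.dubbo.config.annotation.Service
public class DubboAccountServiceImpl implements DubboAccountService {

    @Autowired
    private AccountService accountService; // local business service backed by MyBatis

    @Override
    public Result<Boolean> decreaseAccount(AccountRequest request) {
        accountService.debit(request.getUserId(), request.getMoney());
        return Result.success(true);
    }
}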

6) Configure the run configuration for the application

At this point a complete Spring Boot + Dubbo project is in place; run a quick test to check that it starts and behaves correctly, for example something like the sketch below.
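A minimal smoke test, assuming the account module kept the AccountService#debit(String, int) method from the official sample; it only verifies that the data source, mapper and service wiring work before Seata enters the picture:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class AccountServiceTest {

    @Autowired
    private AccountService accountService;

    @Test
    void debitShouldUpdateBalance() {
        // deduct 10 from user 001's balance; check the account table afterwards
        accountService.debit("001", 10);
    }
}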

Integrating Seata AT

Integrating Seata AT is the main event. Seata offers different integration options for different environments, and the awkward part is that the official demo source covers many scenarios without much detailed documentation, so you end up digging through the demos and drawing your own conclusions. Seata also exposes a lot of parameters, and in particular its registry and configuration centers support many third-party frameworks. For a first hands-on demo it is best to keep everything as simple as possible: get the skeleton built and the services running first, and only then layer more advanced pieces on top.

1) Start the TC server

Usage: sh seata-server.sh(for linux and mac) or cmd seata-server.bat(for windows) [options]
  Options:
    --host, -h
      The host to bind.
      Default: 0.0.0.0
    --port, -p
      The port to listen.
      Default: 8091
    --storeMode, -m
      log store mode : file、db
      Default: file
    --help

e.g.

sh seata-server.sh -p 8091 -h 127.0.0.1 -m file

There is no need to worry about server-side parameter tuning yet; start it with the default configuration and focus on the client side first.

2) Add Seata to the services

For Spring Boot there are two main ways to bring Seata in: seata-all and seata-spring-boot-starter. Both are described below.

seata-all:

<!--seata-->
		<dependency>
			<groupId>io.seata</groupId>
			<artifactId>seata-all</artifactId>
			<version>1.2.0</version>
		</dependency>

		<!--jackson: needed for undo_log serialization (jackson is Seata's default undo-log serializer)-->
		<dependency>
			<groupId>com.fasterxml.jackson.core</groupId>
			<artifactId>jackson-databind</artifactId>
			<version>2.11.0</version>
		</dependency>

seata-all is the traditional way to integrate Seata and relies on .conf configuration files on the classpath (typically under src/main/resources). registry.conf is the entry point of Seata's configuration; its contents look like this:

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "file"

  nacos {
    application = "seata-server"
    serverAddr = "localhost"
    namespace = ""
    username = ""
    password = ""
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = "0"
    password = ""
    timeout = "0"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3、springCloudConfig
  type = "file"

  nacos {
    serverAddr = "localhost"
    namespace = ""
    group = "SEATA_GROUP"
    username = ""
    password = ""
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    appId = "seata-server"
    apolloMeta = "http://192.168.1.204:8801"
    namespace = "application"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}

The important settings here are registry.type and config.type, which specify the type of registry and configuration center respectively. The client and the server must be configured consistently; here both stay on the default, file. For the meaning of each individual parameter, refer to the official documentation.

When the type is file, the file.name parameter matters: it points to file.conf, so a file.conf file is also required. file.conf covers three areas of configuration:

  • transport: this section maps to the NettyServerConfig class and defines the Netty-related parameters; the TM and RM talk to seata-server over Netty.
transport {
  # tcp udt unix-domain-socket
  type = "TCP"
  #NIO NATIVE
  server = "NIO"
  #enable heartbeat
  heartbeat = true
  #thread factory for netty
  thread-factory {
    boss-thread-prefix = "NettyBoss"
    worker-thread-prefix = "NettyServerNIOWorker"
    server-executor-thread-prefix = "NettyServerBizHandler"
    share-boss-worker = false
    client-selector-thread-prefix = "NettyClientSelector"
    client-selector-thread-size = 1
    client-worker-thread-prefix = "NettyClientWorkerThread"
    # netty boss thread size,will not be used for UDT
    boss-thread-size = 1
    #auto default pin or 8
    worker-thread-size = 8
  }
  shutdown {
    # when destroy server, wait seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}
  • service
service {
  #vgroup->rgroup
  vgroup_mapping.my_test_tx_group = "default"
  #only support single node
  # the address the client uses to connect to the TC
  default.grouplist = "127.0.0.1:8091"
  #degrade current not support
  enableDegrade = false
  #disable
  disable = false
  #unit ms,s,m,h,d represents milliseconds, seconds, minutes, hours, days, default permanent
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
}
  • client
client {
  # upper limit of the RM's async-commit buffer after it receives the TC's commit notification
  async.commit.buffer.limit = 10000
  lock {
    retry.internal = 10
    retry.times = 30
  }
  report.retry.count = 5
  tm.commit.retry.count = 1
  tm.rollback.retry.count = 1
}

Configure the data source proxy

Seata AT works through a JDBC data source proxy: the proxy parses the business SQL and writes the corresponding undo_log records, which are what Seata uses to generate compensating SQL if the global transaction has to roll back. The proxied data source therefore has to be wired in.

    // wrap the raw DataSource with Seata's DataSourceProxy
    @Bean
    public DataSourceProxy dataSourceProxy(DataSource dataSource){
        return new DataSourceProxy(dataSource);
    }

    // inject the proxy (not the raw DataSource) so every MyBatis statement goes through Seata
    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSourceProxy dataSource) throws Exception{
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSource);
        factoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath*:/mapper/*.xml"));
        factoryBean.setConfigLocation(new ClassPathResource("mybatis-configuration.xml"));
        return factoryBean.getObject();
    }

In the DBConfig configuration class, add a DataSourceProxy wrapping the dataSource, and inject the proxy instance (rather than the raw DataSource) into the SqlSessionFactory.

Configure the global transaction scanner, GlobalTransactionScanner

@Bean
    public GlobalTransactionScanner globalTransactionScanner(){
        return new GlobalTransactionScanner("zhengcs-seata-account", "my_test_tx_group");
    }

GlobalTransactionScanner is Seata's configuration entry point and client bootstrap class; the TM and RM are initialized inside it (worth a read through the source if you are interested). Its two constructor arguments are the application id and the transaction service group. The transaction group must match the key under service.vgroup_mapping in file.conf; if it is not configured, it defaults to the value of spring.application.name plus "-fescar-service-group". Given the group name "my_test_tx_group", Seata looks up "service.vgroupMapping.my_test_tx_group" to get the TC cluster name, then looks up "service." + clusterName + ".grouplist" to resolve the actual TC address.

All of this is relatively involved, and file-based configuration does not match the usual habit of putting everything straight into application.yml. So does Seata support configuring it directly in application.yml? It does, and that is exactly what seata-spring-boot-starter is for.

seata-spring-boot-starter:

seata-spring-boot-starter was added after Seata 1.0. It auto-configures the entire Seata/Spring Boot integration, including automatic data source proxying and GlobalTransactionScanner initialization.

<dependency>
			<groupId>io.seata</groupId>
			<artifactId>seata-spring-boot-starter</artifactId>
			<version>1.2.0</version>
		</dependency>

Configure application.yml

seata:
  enabled: true
  application-id: account-service
  tx-service-group: my_test_tx_group
  #enable-auto-data-source-proxy: true
  #use-jdk-proxy: false
  client:
    rm:
      async-commit-buffer-limit: 1000
      report-retry-count: 5
      table-meta-check-enable: false
      report-success-enable: false
      lock:
        retry-interval: 10
        retry-times: 30
        retry-policy-branch-rollback-on-conflict: true
    tm:
      commit-retry-count: 5
      rollback-retry-count: 5
    undo:
      data-validation: true
      log-serialization: jackson
      log-table: undo_log
    log:
      exceptionRate: 100
  service:
    vgroup-mapping:
      my_test_tx_group: default
    default:
      grouplist: 127.0.0.1:8091
    #enable-degrade: false
    #disable-global-transaction: false
  transport:
    shutdown:
      wait: 3
    thread-factory:
      boss-thread-prefix: NettyBoss
      worker-thread-prefix: NettyServerNIOWorker
      server-executor-thread-prefix: NettyServerBizHandler
      share-boss-worker: false
      client-selector-thread-prefix: NettyClientSelector
      client-selector-thread-size: 1
      client-worker-thread-prefix: NettyClientWorkerThread
      worker-thread-size: default
      boss-thread-size: 1
    type: TCP
    server: NIO
    heartbeat: true
    serialization: seata
    compressor: none
    enable-client-batch-send-request: true
  config:
    type: file
  registry:
    type: file

With just these two steps, Seata is fully configured.
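
One point worth calling out: because the starter auto-proxies the data source and registers GlobalTransactionScanner itself, the manual DataSourceProxy and GlobalTransactionScanner beans from the seata-all approach should be removed when switching over, otherwise the data source may end up being proxied twice. Under that assumption, DBConfig shrinks back to the plain version, a sketch of which is shown here:

@Configuration
public class DBConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource(){
        return new DruidDataSource();
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception{
        // inject the raw DataSource; the starter wraps it in a DataSourceProxy automatically
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSource);
        factoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath*:/mapper/*.xml"));
        factoryBean.setConfigLocation(new ClassPathResource("mybatis-configuration.xml"));
        return factoryBean.getObject();
    }
}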

3) Enable the global transaction

A global transaction is enabled with the @GlobalTransactional annotation on the business method (the concrete usage is shown in the BusiService code below).

Start the service locally and confirm it boots cleanly.

A single service can act as the TM as well as the RM; which role it plays in a given global transaction depends on where the @GlobalTransactional annotation is placed.

3. Following the steps above, create the module zhengcs-seata-order (a Spring Boot project)

4. Following the steps above, create the module zhengcs-seata-storage (a Spring Boot project)

5. Create the module zhengcs-seata-busi (a Spring Boot project)

zhengcs-seata-busi is the externally facing service and simulates the ordering flow.

@Service
@Slf4j
public class BusiService {

    @Reference(url = "dubbo://localhost:20882", check = false)
    private DubboStorageService dubboStorageService;
    @Reference(url = "dubbo://localhost:20881", check = false)
    private DubboOrderService dubboOrderService;

    /**
     * Deduct stock, then create the order
     *
     * @param userId
     * @param commodityCode
     * @param orderCount
     */
    @GlobalTransactional(name = "purchase")
    public void purchase(String userId, String commodityCode, int orderCount) {
        log.info("purchase begin ... xid: " + RootContext.getXID());

        StorageRequest storageRequest = StorageRequest.builder()
                .commodityCode(commodityCode)
                .count(orderCount)
                .build();
        Result<Boolean> storageResult = dubboStorageService.decreaseStorage(storageRequest);
        log.info("库存扣减结果:{}", JSON.toJSONString(storageResult));
        if(!storageResult.isSuccess()){
            throw new RuntimeException("库存扣减异常");
        }

        OrderRequest orderRequest = OrderRequest.builder()
                .userId(userId)
                .commodityCode(commodityCode)
                .count(orderCount)
                .build();
        Result orderResult = dubboOrderService.createOrder(orderRequest);
        log.info("订单创建结果:{}", JSON.toJSONString(orderResult));
        if(!orderResult.isSuccess()){
            throw new RuntimeException("订单创建异常");
        }

        log.info("事务ID[{}],下单成功", RootContext.getXID());
    }
}

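The browser requests used below hit http://localhost:8080/purchase, which implies a small REST entry point in zhengcs-seata-busi. The original controller is not shown, so the following is only a sketch inferred from the URL parameters (userId, code, count), not the author's actual code:

@RestController
@Slf4j
public class BusiController {

    @Autowired
    private BusiService busiService;

    // e.g. http://localhost:8080/purchase?userId=001&code=123&count=1
    @GetMapping("/purchase")
    public String purchase(@RequestParam("userId") String userId,
                           @RequestParam("code") String commodityCode,
                           @RequestParam("count") int count) {
        try {
            busiService.purchase(userId, commodityCode, count);
            return "order placed successfully";
        } catch (Exception e) {
            log.error("purchase failed", e);
            return "order failed: " + e.getMessage();
        }
    }
}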

6. Start the services and simulate placing an order

Start zhengcs-seata-account, zhengcs-seata-storage, zhengcs-seata-order and zhengcs-seata-busi in turn.

Request http://localhost:8080/purchase?userId=001&code=123&count=1 in the browser  ---> the order is placed successfully.

Tracing the flow through the logs of zhengcs-seata-busi, zhengcs-seata-storage, zhengcs-seata-order and zhengcs-seata-account (log screenshots omitted).

Request http://localhost:8080/purchase?userId=001&code=123&count=100 in the browser  ---> the order fails.

Tracing the flow (log screenshots of zhengcs-seata-busi, zhengcs-seata-storage, zhengcs-seata-order and zhengcs-seata-account omitted): the stock deduction succeeds, but placing the order fails because the balance deduction fails; a sketch of the kind of account-side check that produces this failure follows below.
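
The original account-side code is not shown, so the guard below is only a guess at what makes count=100 fail; it reuses the hypothetical reduceMoney mapper method from the earlier sketch. Once the failure surfaces in BusiService.purchase() (which throws when the order Result is not successful), the TM reports it to the TC and every branch, including the already committed stock deduction, is rolled back through its undo_log records.

    // Hypothetical guard inside the account service; the real demo code may differ.
    public void debit(String userId, int money) {
        int updated = accountMapper.reduceMoney(userId, money); // UPDATE ... WHERE money >= #{money}
        if (updated == 0) {
            // insufficient balance: failing this branch ultimately triggers a global rollback
            throw new RuntimeException("balance deduction failed for user " + userId);
        }
    }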

At this point the project skeleton is complete, and basic AT mode works as expected.

Closing remarks

Plenty of questions and doubts came up while building and running this project. Some were resolved along the way; others are not fully sorted out yet or have not been explored in depth. A few of the more memorable ones are noted here for further study.

1. What role does the transaction group play in the overall design, and how does it relate to clusters?

2. How are the registry and configuration center hooked up to third-party frameworks?

3. How does the TC achieve high availability, and how is it deployed as a cluster?

4. What happens to the system if the TC goes down, and how does Seata deal with it?

...