[Hoxton.SR1] Spring Cloud Stream Message-Driven Microservices

Table of Contents

1. Introduction

2. Building the Message Producer

3. Building the Message Consumer

4. The Duplicate Consumption Problem

5. Message Persistence

6. Summary


1. Introduction

In real projects, communication between services is often handled through message middleware such as RabbitMQ or Kafka. This brings two problems: the application becomes coupled to the message middleware, and switching to another broker (say, replacing RabbitMQ with Kafka) requires substantial changes. Spring Cloud therefore provides the Spring Cloud Stream component to integrate message middleware for us. It hides the differences between brokers under the hood, lowers the cost of switching, and unifies the messaging programming model, which reduces the coupling between our system and the middleware. In one sentence: Spring Cloud Stream helps decouple the application from the message middleware.

 

  • What is Spring Cloud Stream?

The official definition: Spring Cloud Stream is a framework for building message-driven microservices. The application interacts with Spring Cloud Stream's binder objects through inputs and outputs, which we wire up through binding configuration, and the binder in turn is responsible for communicating with the message middleware. So we only need to understand how to interact with Spring Cloud Stream itself in order to use the message-driven style conveniently.

Spring Cloud Stream uses Spring Integration to connect to message brokers and implement message- and event-driven behavior. It provides vendor-specific auto-configuration for several message middleware products and introduces three core concepts: publish-subscribe, consumer groups, and partitions. At present, however, Spring Cloud Stream only provides auto-configuration for RabbitMQ and Kafka.

 

  • Official Spring Cloud Stream documentation

https://spring.io/projects/spring-cloud-stream

https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.6.RELEASE/reference/html/

 

  • Spring Cloud Stream architecture diagram (from the official site)

As the diagram shows, by introducing the Binder as an intermediate layer, the application is isolated from the implementation details of the message middleware.

  • Spring Cloud Stream APIs and common annotations (a minimal binding-interface sketch follows this list)
  1. Middleware: the message broker; currently only RabbitMQ and Kafka are supported.
  2. Binder: the abstraction layer between the application and the message middleware; binders are currently provided for Kafka and RabbitMQ. Through a binder you can connect to the middleware easily and change the message destination (a Kafka topic or a RabbitMQ exchange) dynamically, all via configuration.
  3. @Input: marks an input channel; messages received on this channel enter the application.
  4. @Output: marks an output channel; published messages leave the application through it.
  5. @StreamListener: listens on a queue; used on the consumer side to receive messages.
  6. @EnableBinding: binds the channels to the exchange/destination.
  • Key Stream concepts
  1. Destination Binders: components that encapsulate the target middleware (Kafka or RabbitMQ). If you work with Kafka you use the Kafka binder; if you work with RabbitMQ you use the RabbitMQ binder.
  2. Destination Bindings: the bridge between the external messaging system and the application, providing the message "producers" and "consumers" (created by the destination binders).
  3. Message: the canonical data structure that producers and consumers use to communicate with the destination binders, and thus with other applications, through the external messaging system.
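To make the annotations above concrete, below is a minimal sketch of a custom binding interface; the interface and channel names (MyChannels, myInput, myOutput) are illustrative only and are not part of this article's project. The built-in Source and Sink interfaces used later in this article are defined in essentially the same way.

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;

// Illustrative custom binding interface; channel names are hypothetical.
public interface MyChannels {

    String MY_INPUT = "myInput";
    String MY_OUTPUT = "myOutput";

    // Messages received from the middleware enter the application through this channel.
    @Input(MY_INPUT)
    SubscribableChannel myInput();

    // Messages published by the application leave through this channel.
    @Output(MY_OUTPUT)
    MessageChannel myOutput();
}

Putting @EnableBinding(MyChannels.class) on a configuration class would create and bind these channels, and spring.cloud.stream.bindings.myInput / spring.cloud.stream.bindings.myOutput entries in application.yml would map them to concrete destinations.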

2. Building the Message Producer

Create a new module, springcloud-stream-rabbitmq-provider8801.

[a] pom.xml: add the spring-cloud-starter-stream-rabbit dependency

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>springcloud2020</artifactId>
        <groupId>com.wsh.springcloud</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>springcloud-stream-rabbitmq-provider8801</artifactId>

    <dependencies>
        <!-- Spring Cloud Stream RabbitMQ binder (message-driven support) -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

[b] application.yml: the relevant settings are explained in the comments below

server:
  port: 8801
spring:
  application:
    name: springcloud-stream-rabbitmq-provider
  cloud:
    stream:
      binders: # Configure the RabbitMQ broker to bind to
        rabbitmq_binder: # Binder name, referenced by the bindings below
          type: rabbit # Middleware type; use "kafka" when the middleware is Kafka
          environment: # RabbitMQ connection settings
            spring:
              rabbitmq:
                host: localhost  # RabbitMQ host
                port: 5672 # RabbitMQ port
                username: guest # RabbitMQ username
                password: guest # RabbitMQ password
      bindings: # Binding configuration
        output: # Output channel, i.e. the message producer side
          destination: rabbitmq_stream_exchange   # Name of the exchange to publish to
          content-type: application/json  # Message content type; use "text/plain" for plain text
          binder: rabbitmq_binder   # Binder to use; must match a key under spring.cloud.stream.binders above
eureka:
  client:
    service-url:
      defaultZone: http://springcloud-eureka7001.com:7001/eureka/,http://springcloud-eureka7002.com:7002/eureka/   # Clustered Eureka registry
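As a side note, when only the RabbitMQ binder is on the classpath, the named binders section is optional: the connection can also be configured through the standard spring.rabbitmq.* properties, and the default binder is used. A minimal sketch of that equivalent form (illustrative only, not the configuration used in this article):

server:
  port: 8801
spring:
  application:
    name: springcloud-stream-rabbitmq-provider
  rabbitmq:
    host: localhost
    port: 5672
    username: guest
    password: guest
  cloud:
    stream:
      bindings:
        output:
          destination: rabbitmq_stream_exchange   # the default (rabbit) binder is used, no binders section needed
          content-type: application/json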

[c] Main application class

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringCloudStreamMQServiceApplication8801 {
    public static void main(String[] args) {
        SpringApplication.run(SpringCloudStreamMQServiceApplication8801.class, args);
    }
}

[d] Define the message-sending interface

package com.wsh.springcloud.service;

/**
 * @Description Message-sending interface
 * @Date 2020/8/27 21:37
 * @Author weishihuai
 */
public interface IMessageProvider {
    /**
     * Send a message.
     */
    String sendMessage();
}

The implementation class:

package com.wsh.springcloud.service.impl;

import com.wsh.springcloud.service.IMessageProvider;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;

import javax.annotation.Resource;
import java.util.UUID;

/**
 * @Description Message-sending implementation
 * @Date 2020/8/27 21:38
 * @Author weishihuai
 * Note: @EnableBinding binds the message channel to the destination (exchange).
 */
@EnableBinding(Source.class)
public class MessageProviderImpl implements IMessageProvider {

    private static final Logger logger = LoggerFactory.getLogger(MessageProviderImpl.class);

    /**
     * Message output channel (resolved by field name to the "output" channel of Source)
     */
    @Resource
    private MessageChannel output;

    @Override
    public String sendMessage() {
        String uuid = UUID.randomUUID().toString();
        output.send(MessageBuilder.withPayload(uuid).build());
        logger.info("消息发送者发送消息: {}", uuid);
        return "消息发送者发送消息: " + uuid;
    }

}
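As an aside, the output channel does not have to be injected by field name; the Source binding interface itself can be injected and its output() method used. Below is a minimal, illustrative sketch of that equivalent variant (the class name SourceBasedProvider is hypothetical and not part of this project):

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.integration.support.MessageBuilder;

import javax.annotation.Resource;
import java.util.UUID;

// Illustrative variant: obtain the output channel through the Source binding interface.
@EnableBinding(Source.class)
public class SourceBasedProvider {

    @Resource
    private Source source;

    public String send() {
        String uuid = UUID.randomUUID().toString();
        // Source.output() returns the same "output" MessageChannel configured in application.yml.
        source.output().send(MessageBuilder.withPayload(uuid).build());
        return uuid;
    }
}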

[e] Define the message-sending controller

package com.wsh.springcloud.controller;

import com.wsh.springcloud.service.IMessageProvider;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.annotation.Resource;

/**
 * @Description Controller for testing message sending
 * @Date 2020/8/27 21:39
 * @Author weishihuai
 */
@RestController
public class SendMessageController {
    @Resource
    private IMessageProvider messageProvider;

    @GetMapping(value = "/sendMessage")
    public String sendMessage() {
        return messageProvider.sendMessage();
    }

}

[f] Testing

Start the Eureka registry and the producer service, then open http://localhost:8801/sendMessage in a browser to send a test message, and check the traffic in the RabbitMQ management UI:

Note: the rabbitmq_stream_exchange shown here is the destination exchange we configured in application.yml, i.e. the exchange to which messages are published.

Check the producer's application log:

The message was successfully published to the broker and is now waiting to be consumed. The producer side is complete; next we build the consumer service.

3. Building the Message Consumer

Create a new module, springcloud-stream-rabbitmq-consumer8802.

[a] pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>springcloud2020</artifactId>
        <groupId>com.wsh.springcloud</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>springcloud-stream-rabbitmq-consumer8802</artifactId>

    <dependencies>
        <!-- Spring Cloud Stream RabbitMQ binder dependency -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

[b] application.yml

server:
  port: 8802
spring:
  application:
    name: springcloud-stream-rabbitmq-consumer
  cloud:
    stream:
      binders:
        rabbitmq_binder:  # Binder name, referenced by the bindings below
          type: rabbit # Middleware type; use "kafka" when the middleware is Kafka
          environment: # RabbitMQ connection settings
            spring:
              rabbitmq:
                host: localhost  # RabbitMQ host
                port: 5672 # RabbitMQ port
                username: guest # RabbitMQ username
                password: guest # RabbitMQ password
      bindings: # Binding configuration
        input: # Input channel, i.e. the message consumer side
          destination: rabbitmq_stream_exchange  # Exchange to consume from; must match the producer's destination
          content-type: application/json # Message content type; use "text/plain" for plain text
          binder: rabbitmq_binder # Binder to use; must match a key under spring.cloud.stream.binders above
eureka:
  client:
    service-url:
      defaultZone: http://springcloud-eureka7001.com:7001/eureka/,http://springcloud-eureka7002.com:7002/eureka/   # Clustered Eureka registry

[c] Main application class

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class RabbitMQStreamServiceApplication8802 {
    public static void main(String[] args) {
        SpringApplication.run(RabbitMQStreamServiceApplication8802.class, args);
    }
}

[d] Add a method that receives messages from the producer

package com.wsh.springcloud.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

/**
 * @version V1.0
 * @ClassName: com.wsh.springcloud.controller.ReceiveMessageController.java
 * @Description: Receives messages sent by the producer
 * @author: weishihuai
 * @date: 2020/8/28 10:55
 */
@Component
@EnableBinding(Sink.class)
public class ReceiveMessageController {

    private static final Logger logger = LoggerFactory.getLogger(ReceiveMessageController.class);

    @Value("${server.port}")
    private String serverPort;

    /**
     * Receive a message sent by the producer.
     *
     * @param message the incoming message
     * @StreamListener listens on the Sink.INPUT channel bound to the exchange configured above
     */
    @StreamListener(Sink.INPUT)
    public void receiveMessage(Message<String> message) {
        String payload = message.getPayload();
        logger.info("Consumer received message: {}, server port: {}", payload, serverPort);
    }

}
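For reference, @StreamListener does not have to receive the full Message wrapper; it can also bind the converted payload directly. A minimal, illustrative sketch of such a variant (the class name PayloadReceiver is hypothetical; behavior is equivalent to the handler above under the default content-type conversion):

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.stereotype.Component;

// Illustrative variant of the listener above: bind the converted payload directly.
@Component
@EnableBinding(Sink.class)
public class PayloadReceiver {

    @StreamListener(Sink.INPUT)
    public void receive(String payload) {
        System.out.println("Consumer received message: " + payload);
    }
}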

[e] Testing

Start the consumer service, then hit http://localhost:8801/sendMessage twice in a browser to publish two messages to RabbitMQ and verify that the consumer processes them.

Producer log:

Consumer log:

The consumer successfully received the messages sent by the producer, and the RabbitMQ web UI shows the corresponding activity as well:

4. The Duplicate Consumption Problem

To reproduce the duplicate consumption problem we need a second consumer, so create another module, springcloud-stream-rabbitmq-consumer8803. Apart from the port number it is identical to springcloud-stream-rabbitmq-consumer8802, so its code is not repeated here.

Start the 8802 and 8803 consumers, then hit http://localhost:8801/sendMessage twice to send two messages.

(1) Producer log

(2) Consumer 8802 log

(3) Consumer 8803 log

As the logs show, the same message is processed by both consumers, which is not what we want.

Consider a scenario where an order service sends payment messages to a payment service deployed as a cluster. If the payment service consumes the same payment message more than once, the data becomes wrong; think of deducting a user's money twice. That clearly must not happen.

Next, let's look at how Stream deals with duplicate consumption. Spring Cloud Stream provides the concept of consumer groups, and we can use message grouping to solve the problem.

Note that in Stream, multiple consumers within the same group compete for messages, which guarantees that each message is consumed by only one of them. Consumers in different groups all receive every message (i.e., duplicate consumption).

Root cause: by default each consumer instance is assigned a different, auto-generated group; since the group identifiers differ, the two instances are treated as separate groups and both consume the message.

In the RabbitMQ management UI we can see the default groups assigned to 8802 and 8803:

The two consumers have different group names, which is what causes the duplicate consumption. Spring Cloud Stream lets us configure the group explicitly, so we give 8802 and 8803 the same group name. The configuration is as follows:

In the application.yml of both 8802 and 8803, add group: group1 under the input binding so that the two consumers share the same group name, as shown in the snippet below:
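A minimal sketch of the consumer-side binding with the group added (only the group line is new compared with the consumer configuration shown earlier; the name group1 is arbitrary, it just has to be identical on both instances):

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: rabbitmq_stream_exchange
          content-type: application/json
          binder: rabbitmq_binder
          group: group1   # 8802 and 8803 share this group, so they compete for each message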

Restart 8802 and 8803, then hit http://localhost:8801/sendMessage twice to send two messages.

(1) Producer log

(2) Consumer 8802 log

(3) Consumer 8803 log

Now each message is processed by only one of the consumers, so the duplicate consumption problem is solved. The RabbitMQ web UI reflects this as well:

5. Message Persistence

Besides preventing duplicate consumption, the group setting also gives us message persistence. Let's test it.

(1) Stop both consumer services, 8802 and 8803.

(2) Comment out the group property in 8802's configuration, but keep it in 8803.

(3) Hit http://localhost:8801/sendMessage twice to send two messages.

     (a) Producer log

Now restart 8802 and 8803 and watch their logs:

     (b) Consumer 8802 log

There is no log output for any consumed message.

     (c) Consumer 8803 log

Consumer 8803, which kept the group property, benefits from message persistence: after restarting it automatically fetches and consumes the messages that were published while it was down. Consumer 8802, which no longer has a group, does not go back and pull those earlier messages. (With the RabbitMQ binder, a named group is backed by a durable queue that survives while the consumer is offline, whereas a consumer without a group gets an anonymous auto-delete queue.)

6. Summary

This article showed how Spring Cloud Stream's message-driven model shields us from the underlying implementation of the message middleware, which is a great convenience for developers, and how consumer groups are used to avoid duplicate consumption and to provide message persistence. Spring Cloud Stream achieves a high degree of decoupling between the message middleware and the application. The code for the projects above is available on Gitee: https://gitee.com/weixiaohuai/springcloud_Hoxton. My knowledge is limited, so if anything here is wrong, corrections are welcome; let's learn from each other and improve together.
