
Discovery | Nepxion Discovery is a solution for Spring Cloud | Microservice library

by Nepxion | Java | Version: 6.12.1 | License: Apache-2.0



kandi X-RAY | Discovery Summary

Discovery is a Java library typically used in Architecture, Microservice, and Spring applications. Discovery has no bugs or vulnerabilities, a build file is available, it has a Permissive License, and it has medium support. You can download it from GitHub or Maven.

Support

  • Discovery has a medium active ecosystem.
  • It has 4518 star(s) with 1227 fork(s). There are 238 watchers for this library.
  • There were 4 major release(s) in the last 12 months.
  • There is 1 open issue and 94 have been closed. On average issues are closed in 7 days. There are no pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of Discovery is 6.12.1.

Quality

  • Discovery has 0 bugs and 0 code smells.

Security

  • Discovery has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • Discovery code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • Discovery is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • Discovery releases are available to install and integrate.
  • Deployable package is available in Maven.
  • Build file is available. You can build the component from source.
  • Installation instructions are not available. Examples and code snippets are available.
  • It has 29723 lines of code, 2815 functions and 669 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed Discovery and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality Discovery implements, and to help you decide if it suits your requirements.

  • Create Nacos properties.
  • Parse the weight filter entity.
  • Apply the region filter.
  • Apply the version filter.
  • Create a rule entity from the partial rule entity.
  • Load the rules from the file.
  • Apply the host filter to the provider service.
  • Return information about a plugin.
  • Execute the REST call.

Discovery Key Features

Founder of the Nepxion open-source community

Producer of the 2020 Alibaba China Cloud Native Summit

Adopted in 2020 as a related open-source project by Nacos and Spring Cloud Alibaba

Speaker at the 2021 Alibaba Technology Summit in Shanghai

In 2021, received venture-capital attention and due diligence from MiraclePlus (奇绩资本), chaired by Dr. Qi Lu

Selected as a Gitee Most Valuable Open Source Project in 2021

Nacos Group Member, Spring Cloud Alibaba Member

Committer & Contributor to Spring Cloud Alibaba, Nacos, Sentinel, and OpenTracing

Integration and customization based on Nepxion Discovery

Traffic scheduling across multiple clouds, active sites, and data centers

Cross-cloud dynamic domain names and cross-environment adaptation

DCN, DSU, and SET unitized deployment

Flexible component assembly, with configuration shielded from the outside

Minimalist low-code PaaS platform

Solution (WIKI version)

Solution (PPT version)

Platform UI (WIKI version)

Quick start (GitHub version)

Quick start (Gitee version)

Framework source code (GitHub version)

Framework source code (Gitee version)

Guide example source code (GitHub version)

Guide example source code (Gitee version)

For entry-level users, refer to the 6.x.x guide example, simple edition, branch 6.x.x-simple

For proficient users, refer to the 6.x.x guide example, advanced edition, branch 6.x.x. Besides the simple-edition features above, it covers most of the advanced features in the guide

For expert users, refer to the 6.x.x guide example, complex edition, branch 6.x.x-complex. Besides the advanced-edition features above, it integrates the advanced call chains and blue-green/gray call chains for ActiveMQ, MongoDB, RabbitMQ, Redis, RocketMQ, MySQL, and more

The guide example branches above target older Spring Cloud versions. For Spring Cloud 202x versions, refer to the 7.x.x guide example, advanced edition, branch master

Solution (WIKI version)

Framework source code (GitHub version)

Framework source code (Gitee version)

Guide example source code (GitHub version)

Guide example source code (Gitee version)

For older Spring Cloud versions, refer to the 1.x.x guide example, branch 1.x.x

For newer Spring Cloud versions, refer to the 2.x.x guide example, branch master

Dependency import

<dependency>
    <groupId>com.nepxion</groupId>
    <!-- Choose exactly one artifactId, matching your registry center -->
    <artifactId>discovery-plugin-register-center-starter-nacos</artifactId>
    <!-- <artifactId>discovery-plugin-register-center-starter-eureka</artifactId> -->
    <!-- <artifactId>discovery-plugin-register-center-starter-consul</artifactId> -->
    <!-- <artifactId>discovery-plugin-register-center-starter-zookeeper</artifactId> -->
    <version>${discovery.version}</version>
</dependency>

Starting the services

API gateway -> Service A (two instances) -> Service B (two instances)

Environment verification

gateway 
-> [ID=discovery-guide-service-a][T=service][P=Nacos][H=192.168.0.107:3001][V=1.0][R=dev][E=env1][Z=zone1][G=discovery-guide-group][TID=48682.7508.15870951148324081][SID=49570.77.15870951148480000] 
-> [ID=discovery-guide-service-b][T=service][P=Nacos][H=192.168.0.107:4001][V=1.0][R=qa][E=env1][Z=zone2][G=discovery-guide-group][TID=48682.7508.15870951148324081][SID=49571.85.15870951189970000]
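Each hop in the verification output above is a run of `[key=value]` fields (service ID, version V, region R, environment E, zone Z, group G, and trace/span IDs). As a quick illustration only (this parser is not part of Discovery itself), one hop can be decoded with a few lines of Java:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TraceHopParser {
    // One field looks like "[V=1.0]": a key, '=', then a value up to ']'.
    private static final Pattern FIELD = Pattern.compile("\\[([^=\\]]+)=([^\\]]*)\\]");

    // Parse one hop such as "[ID=...][V=1.0][R=dev]" into an ordered map.
    public static Map<String, String> parse(String hop) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher m = FIELD.matcher(hop);
        while (m.find()) {
            fields.put(m.group(1), m.group(2));
        }
        return fields;
    }

    public static void main(String[] args) {
        Map<String, String> f = parse("[ID=discovery-guide-service-a][V=1.0][R=dev][Z=zone1]");
        System.out.println(f.get("V") + " / " + f.get("R"));
    }
}
```

Comparing the V/R/E/Z fields of consecutive hops is a quick way to confirm that the routing rules steered traffic to the intended versions and regions.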

Full-link blue-green release

<?xml version="1.0" encoding="UTF-8"?>
<rule>
    <strategy>
        <version>1.0</version>
    </strategy>
</rule>

Full-link conditional blue-green release

[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}
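The pattern above is an email-style regular expression that a condition expression can match a header value against; note that the dot before the top-level domain needs to be escaped. A quick sanity check of the escaped pattern with `java.util.regex` (illustrative only):

```java
import java.util.regex.Pattern;

public class EmailPatternCheck {
    // The email-style pattern from the conditional release example,
    // with the dot before the TLD escaped.
    static final Pattern EMAIL =
            Pattern.compile("[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,4}");

    public static void main(String[] args) {
        System.out.println(EMAIL.matcher("user@example.com").matches()); // true
        System.out.println(EMAIL.matcher("not-an-email").matches());     // false
    }
}
```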

Full-link gray release

<?xml version="1.0" encoding="UTF-8"?>
<rule>
    <strategy>
        <version-weight>1.0=90;1.1=10</version-weight>
    </strategy>
</rule>
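The `version-weight` value `1.0=90;1.1=10` sends roughly 90% of the traffic to version 1.0 and 10% to version 1.1. A minimal sketch of how such a weight string can be parsed and applied (illustrative only, not Discovery's actual implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

public class WeightPicker {
    // Parse "1.0=90;1.1=10" into an ordered version -> weight map.
    public static Map<String, Integer> parse(String spec) {
        Map<String, Integer> weights = new LinkedHashMap<>();
        for (String pair : spec.split(";")) {
            String[] kv = pair.split("=");
            weights.put(kv[0], Integer.parseInt(kv[1]));
        }
        return weights;
    }

    // Pick a version with probability proportional to its weight.
    public static String pick(Map<String, Integer> weights, Random random) {
        int total = weights.values().stream().mapToInt(Integer::intValue).sum();
        int r = random.nextInt(total);
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            r -= e.getValue();
            if (r < 0) {
                return e.getKey();
            }
        }
        throw new IllegalStateException("weights must be positive");
    }

    public static void main(String[] args) {
        Map<String, Integer> w = parse("1.0=90;1.1=10");
        System.out.println(pick(w, new Random()));
    }
}
```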

Full-link conditional gray release

<?xml version="1.0" encoding="UTF-8"?>
<rule>
    <!-- Global default route; when a fallback route exists, the global default route does not need to be configured -->
    <!-- If the fallback route gives the global-default route 100% of the traffic and all other routes 0%, it is equivalent to the global default route below -->
    <!-- <strategy>
        <version>{"discovery-guide-service-a":"1.0", "discovery-guide-service-b":"1.0"}</version>
    </strategy> -->

    <strategy-release>
        <conditions type="gray">
            <!-- Gray route 1, driven by a condition expression -->
            <!-- <condition id="gray-condition-1" expression="#H['a'] == '1'" version-id="gray-route=10;stable-route=90"/> -->
            <!-- Gray route 2, driven by a condition expression -->
            <!-- <condition id="gray-condition-2" expression="#H['a'] == '1' and #H['b'] == '2'" version-id="gray-route=85;stable-route=15"/> -->
            <!-- Fallback route, not driven by any condition expression -->
            <condition id="basic-condition" version-id="gray-route=0;stable-route=100"/>
        </conditions>

        <routes>
            <route id="gray-route" type="version">{"discovery-guide-service-a":"1.1", "discovery-guide-service-b":"1.1"}</route>
            <route id="stable-route" type="version">{"discovery-guide-service-a":"1.0", "discovery-guide-service-b":"1.0"}</route>
        </routes>
    </strategy-release>
</rule>

Full-link end-to-end mixed blue-green and gray release

# When an external caller passes a header and the gateway also sets and forwards a header with the same name, one of them must be chosen to forward to downstream services. If the switch below is true, the gateway's setting takes priority; otherwise the external value does. Defaults to true if missing
spring.application.strategy.gateway.header.priority=false
# When the gateway's setting takes priority but the gateway has not configured the header while the external caller has, the external header is still ignored. Defaults to true if missing
spring.application.strategy.gateway.original.header.ignored=true

# When an external caller passes a header and the gateway also sets and forwards a header with the same name, one of them must be chosen to forward to downstream services. If the switch below is true, the gateway's setting takes priority; otherwise the external value does. Defaults to true if missing
spring.application.strategy.zuul.header.priority=false
# When the gateway's setting takes priority but the gateway has not configured the header while the external caller has, the external header is still ignored. Defaults to true if missing
spring.application.strategy.zuul.original.header.ignored=true

Full-link front-end-triggered back-end blue-green and gray release

n-d-version=1.0
n-d-version={"discovery-guide-service-a":"1.0", "discovery-guide-service-b":"1.0"}
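The `n-d-version` header shown above is what the front end attaches to steer the whole back-end chain. A minimal Java sketch of attaching it to an outbound request (the URL below is illustrative, not a real endpoint):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class HeaderTriggerSketch {
    // Open a connection with the routing header set, so that a gateway
    // honoring n-d-version routes the chain to version 1.0.
    public static HttpURLConnection open(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty("n-d-version", "1.0");
        return conn;
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                open("http://localhost:5001/discovery-guide-service-a/invoke");
        System.out.println(conn.getRequestProperty("n-d-version"));
    }
}
```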

Global subscription-based blue-green and gray release

<?xml version="1.0" encoding="UTF-8"?>
<rule>
    <strategy>
        <version>{"discovery-guide-service-a":"1.0", "discovery-guide-service-b":"1.1"}</version>
    </strategy>
</rule>

Full-link custom blue-green and gray release

public String getRouteVersion();

public String getRouteRegion();

public String getRouteEnvironment();

public String getRouteAddress();

public String getRouteVersionWeight();

public String getRouteRegionWeight();

public String getRouteIdBlacklist();

public String getRouteAddressBlacklist();
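The methods above are the routing hooks a caller can override to return version, region, address, weight, and blacklist routes; the returned strings use the same formats as the XML rules. As an illustration only (the `RouteHooks` interface below is a hypothetical stand-in; the real base class and its wiring live in the framework):

```java
// Hypothetical stand-in for the framework's routing hook contract,
// mirroring two of the method signatures listed above.
interface RouteHooks {
    String getRouteVersion();
    String getRouteRegion();
}

public class MyRouteHooks implements RouteHooks {
    @Override
    public String getRouteVersion() {
        // Route the whole chain to version 1.0 of both guide services.
        return "{\"discovery-guide-service-a\":\"1.0\", \"discovery-guide-service-b\":\"1.0\"}";
    }

    @Override
    public String getRouteRegion() {
        // Route the whole chain to the dev region.
        return "dev";
    }

    public static void main(String[] args) {
        RouteHooks hooks = new MyRouteHooks();
        System.out.println(hooks.getRouteVersion());
    }
}
```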

Full-link blue-green and gray release with dynamic metadata changes

spring.cloud.nacos.discovery.metadata.version=stable

Version failover when a release fails

# Enable or disable version failover. Defaults to false if missing
spring.application.strategy.version.failover.enabled=true

Version preference under parallel releases

# Enable or disable version preference. Defaults to false if missing
spring.application.strategy.version.prefer.enabled=true

Global unique ID blacklisting

year-month-day (8 digits) - hour-minute-second (6 digits) - millisecond (3 digits) - random (4 digits) - random (3 digits) - random (3 digits)
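A globally unique ID therefore looks like `20210601-222214-909-1146-372-698`. A minimal sketch that generates an ID in this shape (illustrative only, not Discovery's actual generator):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.concurrent.ThreadLocalRandom;

public class UniqueIdSketch {
    // Build an ID in the documented shape:
    // yyyyMMdd-HHmmss-SSS followed by random groups of 4, 3, and 3 digits.
    public static String next() {
        LocalDateTime now = LocalDateTime.now();
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        return now.format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss-SSS"))
                + String.format("-%04d-%03d-%03d",
                        rnd.nextInt(10000), rnd.nextInt(1000), rnd.nextInt(1000));
    }

    public static void main(String[] args) {
        System.out.println(next());
    }
}
```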

IP address and port blacklisting

<?xml version="1.0" encoding="UTF-8"?>
<rule>
    <strategy-blacklist>
        <!-- <address>127.0.0.1:3001</address> -->
        <!-- <address>127.0.0.1</address> -->
        <address>3001</address>
    </strategy-blacklist>
</rule>

DiscoveryAgent solution for asynchronous scenarios

# Base thread scan packages
agent.plugin.thread.scan.packages=reactor.core.publisher;org.springframework.aop.interceptor;com.netflix.hystrix

Hystrix thread-pool isolation solution for asynchronous scenarios

<!-- Import the dependency below only when the service uses Hystrix for thread isolation -->
<dependency>
    <groupId>com.nepxion</groupId>
    <artifactId>discovery-plugin-strategy-starter-hystrix</artifactId>
    <version>${discovery.version}</version>
</dependency>

Full-link blue-green release for databases and message queues

<?xml version="1.0" encoding="UTF-8"?>
<rule>
    <parameter>
        <service service-name="discovery-guide-service-a" tag-key="version" tag-value="1.0" key="ShardingSphere" value="db1"/>
        <service service-name="discovery-guide-service-a" tag-key="version" tag-value="1.1" key="ShardingSphere" value="db2"/>
        <service service-name="discovery-guide-service-b" tag-key="region" tag-value="dev" key="RocketMQ" value="queue1"/>
        <service service-name="discovery-guide-service-b" tag-key="region" tag-value="qa" key="RocketMQ" value="queue2"/>
        <service service-name="discovery-guide-service-c" tag-key="env" tag-value="env1" key="ShardingSphere" value="db1"/>
        <service service-name="discovery-guide-service-c" tag-key="env" tag-value="env2" key="ShardingSphere" value="db2"/>
        <service service-name="discovery-guide-service-d" tag-key="zone" tag-value="zone1" key="RocketMQ" value="queue1"/>
        <service service-name="discovery-guide-service-d" tag-key="zone" tag-value="zone2" key="RocketMQ" value="queue2"/>
        <service service-name="discovery-guide-service-e" tag-key="address" tag-value="192.168.43.101:1201" key="ShardingSphere" value="db1"/>
        <service service-name="discovery-guide-service-e" tag-key="address" tag-value="192.168.43.102:1201" key="ShardingSphere" value="db2"/>
    </parameter>
</rule>

Spring Cloud Gateway dynamic routing

[
    {
        "id": "route0", 
        "uri": "lb://discovery-guide-service-a", 
        "predicates": [
            "Path=/discovery-guide-service-a/**,/x/**,/y/**"
        ], 
        "filters": [
            "StripPrefix=1"
        ]
    }
]

Zuul gateway dynamic routing

[
    {
        "id": "route0",
        "serviceId": "discovery-guide-service-a",
        "path": "/discovery-guide-service-a/**"
    },
    {
        "id": "route1",
        "serviceId": "discovery-guide-service-a",
        "path": "/x/**"
    },
    {
        "id": "route2",
        "serviceId": "discovery-guide-service-a",
        "path": "/y/**"
    }
]

Unified configuration subscription processor

// To switch the config center, change the extended class (extends) to any of the following; no other code changes are needed
// 1. NacosProcessor
// 2. ApolloProcessor
// 3. ConsulProcessor
// 4. EtcdProcessor
// 5. ZookeeperProcessor
// 6. RedisProcessor
// Choose Group and DataId yourself, and note:
// 1. For the Nacos, Redis, and Zookeeper config centers, Group and DataId must match the console
// 2. For the Apollo, Consul, and Etcd config centers, the key format is Group-DataId
// Subscriptions to multiple config centers are supported at the same time: create multiple different Processors and register each into the Spring container via @Bean
public class MyConfigProcessor extends NacosProcessor {
    @Override
    public void beforeInitialization() {
        System.out.println("Before subscriber initialization; preparatory work can be done here");
    }

    @Override
    public void afterInitialization() {
        System.out.println("After subscriber initialization; follow-up work can be done here");
    }

    @Override
    public String getGroup() {
        return "b";
    }

    @Override
    public String getDataId() {
        return "a";
    }

    @Override
    public String getDescription() {
        // description is used for log output, helping users confirm from the logs whether the subscriber is running
        return "My subscription";
    }

    @Override
    public void callbackConfig(String config) {
        // config is the changed content of the corresponding key in the config center; use it to trigger callbacks in business modules
        System.out.println("Configuration change detected: config=" + config);
    }
}

Rule and strategy example

<?xml version="1.0" encoding="UTF-8"?>
<rule>
    <!-- To disable a feature, simply delete its node; for example, to drop the blacklist feature, delete the blacklist node -->
    <register>
        <!-- Black/white-list registration filtering, effective only at service startup. A whitelist allows only the specified IP address prefixes to register; a blacklist forbids them. Each service may enable either a whitelist or a blacklist, not both -->
        <!-- filter-type: either blacklist or whitelist -->
        <!-- service-name: the service name -->
        <!-- filter-value: the black/white-listed IP address list. IP addresses are usually given as prefixes; separate multiple values with ";", no spaces allowed -->
        <!-- For all services below, registration is forbidden for IP addresses prefixed 10.10 or 11.11 (global filter) -->
        <blacklist filter-value="10.10;11.11">
            <!-- For the service below, registration is forbidden for IP addresses prefixed 172.16, 10.10, or 11.11 -->
            <service service-name="discovery-springcloud-example-a" filter-value="172.16"/>
        </blacklist>

        <!-- <whitelist filter-value="">
            <service service-name="" filter-value=""/>
        </whitelist>  -->

        <!-- Registration count limiting, effective only at service startup. Once a service reaches the specified number of registered instances, additional instances cannot register -->
        <!-- service-name: the service name -->
        <!-- filter-value: the maximum number of registered instances -->
        <!-- For all services below, at most 10000 instances may register (global configuration) -->
        <count filter-value="10000">
            <!-- For the service below, at most 5000 instances may register; the global value 10000 does not apply, the local value wins -->
            <service service-name="discovery-springcloud-example-a" filter-value="5000"/>
        </count>
    </register>

    <discovery>
        <!-- Black/white-list discovery filtering, used the same way as "black/white-list registration filtering" -->
        <!-- For all services below, instances with IP addresses prefixed 10.10 or 11.11 cannot be discovered (global filter) -->
        <blacklist filter-value="10.10;11.11">
            <!-- For the service below, instances with IP addresses prefixed 172.16, 10.10, or 11.11 cannot be discovered -->
            <service service-name="discovery-springcloud-example-b" filter-value="172.16"/>
        </blacklist>

        <!-- Multi-version gray matching control for service discovery -->
        <!-- service-name: the service name -->
        <!-- version-value: the versions that may be accessed; separate multiple values with ";", no spaces allowed -->
        <!-- Version strategies -->
        <!-- 1. Standard configuration, for example -->
        <!--    <service consumer-service-name="a" provider-service-name="b" consumer-version-value="1.0" provider-version-value="1.0;1.1"/> means a consumer at version 1.0 may access providers at versions 1.0 and 1.1 -->
        <!-- 2. Version value omitted, for example -->
        <!--    <service consumer-service-name="a" provider-service-name="b" provider-version-value="1.0;1.1"/> means a consumer at any version may access providers at versions 1.0 and 1.1 -->
        <!--    <service consumer-service-name="a" provider-service-name="b" consumer-version-value="1.0"/> means a consumer at version 1.0 may access providers at any version -->
        <!--    <service consumer-service-name="a" provider-service-name="b"/> means a consumer at any version may access providers at any version -->
        <!-- 3. Version value as an empty string, for example -->
        <!--    <service consumer-service-name="a" provider-service-name="b" consumer-version-value="" provider-version-value="1.0;1.1"/> means a consumer at any version may access providers at versions 1.0 and 1.1 -->
        <!--    <service consumer-service-name="a" provider-service-name="b" consumer-version-value="1.0" provider-version-value=""/> means a consumer at version 1.0 may access providers at any version -->
        <!--    <service consumer-service-name="a" provider-service-name="b" consumer-version-value="" provider-version-value=""/> means a consumer at any version may access providers at any version -->
        <!-- 4. If no version mapping is defined, by default a consumer at any version may access providers at any version -->
        <!-- Special cases; avoid them as much as possible in practice -->
        <!-- 1. If the consumer's application.properties defines no version, that consumer may access providers at any version -->
        <!-- 2. If the provider's application.properties defines no version, it can only be accessed when the consumer has no version configuration in the XML -->
        <version>
            <!-- Gateway g at version 1.0 may access provider service a at version 1.0 -->
            <service consumer-service-name="discovery-springcloud-example-gateway" provider-service-name="discovery-springcloud-example-a" consumer-version-value="1.0" provider-version-value="1.0"/>
            <!-- Gateway g at version 1.1 may access provider service a at version 1.1 -->
            <service consumer-service-name="discovery-springcloud-example-gateway" provider-service-name="discovery-springcloud-example-a" consumer-version-value="1.1" provider-version-value="1.1"/>
            <!-- Gateway z at version 1.0 may access provider service a at version 1.0 -->
            <service consumer-service-name="discovery-springcloud-example-zuul" provider-service-name="discovery-springcloud-example-a" consumer-version-value="1.0" provider-version-value="1.0"/>
            <!-- Gateway z at version 1.1 may access provider service a at version 1.1 -->
            <service consumer-service-name="discovery-springcloud-example-zuul" provider-service-name="discovery-springcloud-example-a" consumer-version-value="1.1" provider-version-value="1.1"/>
            <!-- Consumer service a at version 1.0 may access provider service b at version 1.0 -->
            <service consumer-service-name="discovery-springcloud-example-a" provider-service-name="discovery-springcloud-example-b" consumer-version-value="1.0" provider-version-value="1.0"/>
            <!-- Consumer service a at version 1.1 may access provider service b at version 1.1 -->
            <service consumer-service-name="discovery-springcloud-example-a" provider-service-name="discovery-springcloud-example-b" consumer-version-value="1.1" provider-version-value="1.1"/>
            <!-- Consumer service b at version 1.0 may access provider service c at versions 1.0 and 1.1 -->
            <service consumer-service-name="discovery-springcloud-example-b" provider-service-name="discovery-springcloud-example-c" consumer-version-value="1.0" provider-version-value="1.0;1.1"/>
            <!-- Consumer service b at version 1.1 may access provider service c at version 1.2 -->
            <service consumer-service-name="discovery-springcloud-example-b" provider-service-name="discovery-springcloud-example-c" consumer-version-value="1.1" provider-version-value="1.2"/>
        </version>

        <!-- Multi-region gray matching control for service discovery -->
        <!-- service-name: the service name -->
        <!-- region-value: the regions that may be accessed; separate multiple values with ";", no spaces allowed -->
        <!-- Region strategies -->
        <!-- 1. Standard configuration, for example -->
        <!--    <service consumer-service-name="a" provider-service-name="b" consumer-region-value="dev" provider-region-value="dev"/> means consumers in region dev may access providers in region dev -->
        <!-- 2. Region value omitted, for example -->
        <!--    <service consumer-service-name="a" provider-service-name="b" provider-region-value="dev;qa"/> means consumers in any region may access providers in regions dev and qa -->
        <!--    <service consumer-service-name="a" provider-service-name="b" consumer-region-value="dev"/> means consumers in region dev may access providers in any region -->
        <!--    <service consumer-service-name="a" provider-service-name="b"/> means consumers in any region may access providers in any region -->
        <!-- 3. Region value as an empty string, for example -->
        <!--    <service consumer-service-name="a" provider-service-name="b" consumer-region-value="" provider-region-value="dev;qa"/> means consumers in any region may access providers in regions dev and qa -->
        <!--    <service consumer-service-name="a" provider-service-name="b" consumer-region-value="dev" provider-region-value=""/> means consumers in region dev may access providers in any region -->
        <!--    <service consumer-service-name="a" provider-service-name="b" consumer-region-value="" provider-region-value=""/> means consumers in any region may access providers in any region -->
        <!-- 4. If no region mapping is defined, by default consumers in any region may access providers in any region -->
        <!-- Special cases; avoid them as much as possible in practice -->
        <!-- 1. If the consumer's application.properties defines no region value, that consumer may access providers in any region -->
        <!-- 2. If the provider's application.properties defines no region value, it can only be accessed when the consumer has no region configuration in the XML -->
        <region>
            <!-- Consumer service a in region dev may access provider service b in region dev -->
            <service consumer-service-name="discovery-springcloud-example-a" provider-service-name="discovery-springcloud-example-b" consumer-region-value="dev" provider-region-value="dev"/>
            <!-- Consumer service a in region qa may access provider service b in region qa -->
            <service consumer-service-name="discovery-springcloud-example-a" provider-service-name="discovery-springcloud-example-b" consumer-region-value="qa" provider-region-value="qa"/>
            <!-- Consumer service b in region dev may access provider service c in region dev -->
            <service consumer-service-name="discovery-springcloud-example-b" provider-service-name="discovery-springcloud-example-c" consumer-region-value="dev" provider-region-value="dev"/>
            <!-- Consumer service b in region qa may access provider service c in region qa -->
            <service consumer-service-name="discovery-springcloud-example-b" provider-service-name="discovery-springcloud-example-c" consumer-region-value="qa" provider-region-value="qa"/>
        </region>

        <!-- Multi-version or multi-region gray weight control for service discovery -->
        <!-- service-name: the service name -->
        <!-- weight-value: the weight per version/region, in the format "version/region=weight"; separate multiple values with ";", no spaces allowed -->
        <!-- Version weight strategies -->
        <!-- 1. Standard configuration, for example -->
        <!--    <service consumer-service-name="a" provider-service-name="b" provider-weight-value="1.0=90;1.1=10"/> means that when the consumer accesses the provider, the provider's 1.0 version serves 90% of the traffic and the 1.1 version serves 10% -->
        <!--    <service provider-service-name="b" provider-weight-value="1.0=90;1.1=10"/> means that when all consumers access the provider, the provider's 1.0 version serves 90% of the traffic and the 1.1 version serves 10% -->
        <!-- 2. Local configuration specifies consumer-service-name to configure weights for that consumer only; global configuration omits consumer-service-name and applies the same weights to all consumers. When both exist, the local configuration wins -->
        <!-- 3. Try to assign a weight to every version in production -->
        <!-- Global version weight strategies -->
        <!-- 1. Standard configuration, for example -->
        <!--    <version provider-weight-value="1.0=85;1.1=15"/> means services at version 1.0 serve 85% of the traffic and services at version 1.1 serve 15% -->
        <!-- 2. Global version weights can switch the weight ratio of the whole call chain -->
        <!-- 3. Try to assign a weight to every version in production -->

        <!-- Region weight strategies -->
        <!-- 1. Standard configuration, for example -->
        <!--    <service consumer-service-name="a" provider-service-name="b" provider-weight-value="dev=85;qa=15"/> means that when the consumer accesses the provider, services in region dev serve 85% of the traffic and services in region qa serve 15% -->
        <!--    <service provider-service-name="b" provider-weight-value="dev=85;qa=15"/> means that when all consumers access the provider, services in region dev serve 85% of the traffic and services in region qa serve 15% -->
        <!-- 2. Local configuration specifies consumer-service-name to configure weights for that consumer only; global configuration omits consumer-service-name and applies the same weights to all consumers. When both exist, the local configuration wins -->
        <!-- 3. Try to assign a weight to every version in production -->
        <!-- Global region weight strategies -->
        <!-- 1. Standard configuration, for example -->
        <!--    <region provider-weight-value="dev=85;qa=15"/> means services in region dev serve 85% of the traffic and services in region qa serve 15% -->
        <!-- 2. Global region weights can switch the weight ratio of the whole call chain -->
        <!-- 3. Try to assign a weight to every region in production -->
        <weight>
            <!-- There are six ways to configure weighted traffic, in descending priority: the weight is taken from the first way, then from the second if not found, and so on; if still not found, weights are ignored. Pick the one that fits your situation -->
            <!-- When consumer service b accesses provider service c, the provider's 1.0 version serves 90% of the traffic and the 1.1 version serves 10% -->
            <service consumer-service-name="discovery-springcloud-example-b" provider-service-name="discovery-springcloud-example-c" provider-weight-value="1.0=90;1.1=10" type="version"/>
            <!-- When all consumer services access provider service c, the provider's 1.0 version serves 90% of the traffic and the 1.1 version serves 10% -->
            <service provider-service-name="discovery-springcloud-example-c" provider-weight-value="1.0=90;1.1=10" type="version"/>
            <!-- All services at version 1.0 serve 90% of the traffic and services at version 1.1 serve 10% -->
            <version provider-weight-value="1.0=90;1.1=10"/>

            <!-- When consumer service b accesses provider service c, the provider's dev region serves 85% of the traffic and the qa region serves 15% -->
            <service consumer-service-name="discovery-springcloud-example-b" provider-service-name="discovery-springcloud-example-c" provider-weight-value="dev=85;qa=15" type="region"/>
            <!-- When all consumer services access provider service c, the provider's dev region serves 85% of the traffic and the qa region serves 15% -->
            <service provider-service-name="discovery-springcloud-example-c" provider-weight-value="dev=85;qa=15" type="region"/>
            <!-- All services in region dev serve 85% of the traffic and services in region qa serve 15% -->
            <region provider-weight-value="dev=85;qa=15"/>
        </weight>
    </discovery>

    <!-- HTTP-header-based strategy routing: global default routes (third priority) -->
    <strategy>
        <!-- Version routing -->
        <version>{"discovery-springcloud-example-a":"1.0", "discovery-springcloud-example-b":"1.0", "discovery-springcloud-example-c":"1.0;1.2"}</version>
        <!-- <version>1.0</version> -->
        <!-- Region routing -->
        <region>{"discovery-springcloud-example-a":"qa;dev", "discovery-springcloud-example-b":"dev", "discovery-springcloud-example-c":"qa"}</region>
        <!-- <region>dev</region> -->
        <!-- IP address and port routing -->
        <address>{"discovery-springcloud-example-a":"192.168.43.101:1100", "discovery-springcloud-example-b":"192.168.43.101:1201", "discovery-springcloud-example-c":"192.168.43.101:1300"}</address>
        <!-- There are four ways to configure weighted traffic, in descending priority: the weight is taken from the first way, then from the second if not found, and so on; if still not found, weights are ignored. Pick the one that fits your situation -->
        <!-- Version weight routing -->
        <version-weight>{"discovery-springcloud-example-a":"1.0=90;1.1=10", "discovery-springcloud-example-b":"1.0=90;1.1=10", "discovery-springcloud-example-c":"1.0=90;1.1=10"}</version-weight>
        <!-- <version-weight>1.0=90;1.1=10</version-weight> -->
        <!-- Region weight routing -->
        <region-weight>{"discovery-springcloud-example-a":"dev=85;qa=15", "discovery-springcloud-example-b":"dev=85;qa=15", "discovery-springcloud-example-c":"dev=85;qa=15"}</region-weight>
        <!-- <region-weight>dev=85;qa=15</region-weight> -->
    </strategy>

    <!-- HTTP-header-based strategy routing, supporting both blue-green and gray release modes. If neither matches, the global default routes above apply -->
    <strategy-release>
        <!-- SpEL expression escape characters in XML: -->
        <!-- ampersand & escapes to &amp; (escaping required) -->
        <!-- less-than < escapes to &lt; (escaping required) -->
        <!-- double quote " escapes to &quot; (escaping required) -->
        <!-- greater-than > escapes to &gt; -->
        <!-- single quote ' escapes to &apos; -->

        <!-- Full-link blue-green release: condition matching (first priority); supports matching by version, region, IP address and port, version weight, and region weight -->
        <!-- The expression attribute may be omitted; when configurations with and without an expression coexist, those with an expression take priority -->
        <conditions type="blue-green">
            <condition id="1" expression="#H['a'] == '1' and #H['b'] == '2'" version-id="a-1" region-id="b-1" address-id="c-1" version-weight-id="d-1" region-weight-id="e-1"/>
            <condition id="2" expression="#H['c'] == '3'" version-id="a-2" region-id="b-2" address-id="c-2" version-weight-id="d-2" region-weight-id="e-2"/>
            <condition id="3" version-id="a-2" region-id="b-2" address-id="c-2" version-weight-id="d-2" region-weight-id="e-2"/>
        </conditions>

        <!-- Full-link gray release: conditional random weights (second priority); supports matching by version, region, and IP address and port -->
        <!-- The expression attribute may be omitted; when configurations with and without an expression coexist, those with an expression take priority -->
        <conditions type="gray">
            <condition id="1" expression="#H['a'] == '1' and #H['b'] == '2'" version-id="a-1=10;a-2=90" region-id="b-1=20;b-2=80" address-id="c-1=30;c-2=70"/>
            <condition id="2" expression="#H['c'] == '3'" version-id="a-1=90;a-2=10" region-id="b-1=80;b-2=20" address-id="c-1=70;c-2=30"/>
            <condition id="3" version-id="a-1=5;a-2=95" region-id="b-1=5;b-2=95" address-id="c-1=5;c-2=95"/>
        </conditions>

        <routes>
            <route id="a-1" type="version">{"discovery-springcloud-example-a":"1.0", "discovery-springcloud-example-b":"1.0", "discovery-springcloud-example-c":"1.0;1.2"}</route>
            <route id="a-2" type="version">{"discovery-springcloud-example-a":"1.1", "discovery-springcloud-example-b":"1.1", "discovery-springcloud-example-c":"1.2"}</route>
            <route id="b-1" type="region">{"discovery-springcloud-example-a":"qa;dev", "discovery-springcloud-example-b":"dev", "discovery-springcloud-example-c":"qa"}</route>
            <route id="b-2" type="region">{"discovery-springcloud-example-a":"qa", "discovery-springcloud-example-b":"qa", "discovery-springcloud-example-c":"qa"}</route>
            <route id="c-1" type="address">{"discovery-springcloud-example-a":"192.168.43.101:1100", "discovery-springcloud-example-b":"192.168.43.101:1201", "discovery-springcloud-example-c":"192.168.43.101:1300"}</route>
            <route id="c-2" type="address">{"discovery-springcloud-example-a":"192.168.43.101:1101", "discovery-springcloud-example-b":"192.168.43.101:1201", "discovery-springcloud-example-c":"192.168.43.101:1301"}</route>
            <route id="d-1" type="version-weight">{"discovery-springcloud-example-a":"1.0=90;1.1=10", "discovery-springcloud-example-b":"1.0=90;1.1=10", "discovery-springcloud-example-c":"1.0=90;1.1=10"}</route>
            <route id="d-2" type="version-weight">{"discovery-springcloud-example-a":"1.0=10;1.1=90", "discovery-springcloud-example-b":"1.0=10;1.1=90", "discovery-springcloud-example-c":"1.0=10;1.1=90"}</route>
            <route id="e-1" type="region-weight">{"discovery-springcloud-example-a":"dev=85;qa=15", "discovery-springcloud-example-b":"dev=85;qa=15", "discovery-springcloud-example-c":"dev=85;qa=15"}</route>
            <route id="e-2" type="region-weight">{"discovery-springcloud-example-a":"dev=15;qa=85", "discovery-springcloud-example-b":"dev=15;qa=85", "discovery-springcloud-example-c":"dev=15;qa=85"}</route>
        </routes>

        <!-- Headers configured here feed the condition expressions that decide blue-green and gray routing, and can replace externally passed headers -->
        <header>{"a":"1", "b":"2", "c":"3"}</header>
    </strategy-release>

    <!-- Service blacklist for strategy routing. Typically used when taking a service offline, achieving truly lossless real-time traffic: before going offline, add the service instance to the blacklist below so the load balancer stops addressing it; after going offline, clear the entry -->
    <strategy-blacklist>
        <!-- Blacklist by globally unique ID, which corresponds to the metadata field spring.application.uuid -->
        <id>{"discovery-springcloud-example-a":"20210601-222214-909-1146-372-698", "discovery-springcloud-example-b":"20210601-222623-277-4978-633-279", "discovery-springcloud-example-c":"20210601-222728-133-2597-222-609"}</id>
        <!-- Blacklist by IP address, by port, or by IP address + port -->
        <address>{"discovery-springcloud-example-a":"192.168.43.101:1100", "discovery-springcloud-example-b":"192.168.43.101:1201", "discovery-springcloud-example-c":"192.168.43.101:1300"}</address>        
    </strategy-blacklist>

    <!-- Parameter control: parameters pushed remotely enable specialized blue-green releases, e.g., database- and message-queue-based blue-green release -->
    <parameter>
        <!-- When service a is at version 1.0, its datasource points to db1; at version 1.1, to db2 -->
        <!-- When service b is in region dev, its message queue points to queue1; in region qa, to queue2 -->
        <!-- When service c is in environment env1, its datasource points to db1; in environment env2, to db2 -->
        <!-- When service d is in availability zone zone1, its message queue points to queue1; in zone zone2, to queue2 -->
        <!-- When service e is at IP address and port 192.168.43.101:1201, its datasource points to db1; at 192.168.43.102:1201, to db2 -->
        <service service-name="discovery-springcloud-example-a" tag-key="version" tag-value="1.0" key="ShardingSphere" value="db1"/>
        <service service-name="discovery-springcloud-example-a" tag-key="version" tag-value="1.1" key="ShardingSphere" value="db2"/>
        <service service-name="discovery-springcloud-example-b" tag-key="region" tag-value="dev" key="RocketMQ" value="queue1"/>
        <service service-name="discovery-springcloud-example-b" tag-key="region" tag-value="qa" key="RocketMQ" value="queue2"/>
        <service service-name="discovery-springcloud-example-c" tag-key="env" tag-value="env1" key="ShardingSphere" value="db1"/>
        <service service-name="discovery-springcloud-example-c" tag-key="env" tag-value="env2" key="ShardingSphere" value="db2"/>
        <service service-name="discovery-springcloud-example-d" tag-key="zone" tag-value="zone1" key="RocketMQ" value="queue1"/>
        <service service-name="discovery-springcloud-example-d" tag-key="zone" tag-value="zone2" key="RocketMQ" value="queue2"/>
        <service service-name="discovery-springcloud-example-e" tag-key="address" tag-value="192.168.43.101:1201" key="ShardingSphere" value="db1"/>
        <service service-name="discovery-springcloud-example-e" tag-key="address" tag-value="192.168.43.102:1201" key="ShardingSphere" value="db2"/>
    </parameter>
</rule>

Rule and strategy push based on Swagger and REST

swagger.service.scan.group=your-scan-group
swagger.service.scan.packages=your-scan-packages

Full-link environment isolation

spring.cloud.nacos.discovery.metadata.env=env1

Full-link environment routing

# Route traffic to the specified environment. The reserved value default is not allowed; defaults to common if missing
spring.application.strategy.environment.route=common

Full-link availability-zone affinity isolation

spring.cloud.nacos.discovery.metadata.zone=zone

Full-link availability-zone affinity routing

# Enable or disable routing when availability-zone affinity fails, i.e., when the calling instance finds no provider instance in the same availability zone: if the switch is on, traffic may route to other zones or to instances belonging to no zone; if off, the call fails directly. Defaults to true if missing
spring.application.strategy.zone.route.enabled=true

Consumer-side service isolation

# Enable or disable consumer-side service isolation (based on whether the Group matches). Defaults to false if missing
spring.application.strategy.consumer.isolation.enabled=true

Provider-side service isolation

# Enable/disable provider-side service isolation (based on whether the Group is the same). Defaults to false if missing
spring.application.strategy.provider.isolation.enabled=true

# For routing strategies, the scan path for business RestController classes must be specified. This configuration serves both RPC-style invocation interception and consumer-side service isolation
spring.application.strategy.scan.packages=com.nepxion.discovery.guide.service.feign
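The isolation switches above gate calls on group equality. A one-line sketch of that predicate, purely to illustrate the documented strategy (the name `GroupIsolation` is mine):

```java
// Sketch of Group-based service isolation: a call is allowed only when
// consumer and provider carry the same group metadata.
public class GroupIsolation {
    public static boolean allowed(String consumerGroup, String providerGroup) {
        // Isolation strategy: groups must match exactly
        return consumerGroup != null && consumerGroup.equals(providerGroup);
    }
}
```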

Full-link service rate limiting, circuit breaking, degradation and authorization

# Enable/disable the data-source extension for Sentinel's native rate limiting, degradation, circuit breaking, authorization and other features. Defaults to false if missing
spring.application.strategy.sentinel.datasource.enabled=true

Native Sentinel annotation

@RestController
@ConditionalOnProperty(name = DiscoveryConstant.SPRING_APPLICATION_NAME, havingValue = "discovery-guide-service-b")
public class BFeignImpl extends AbstractFeignImpl implements BFeign {
    private static final Logger LOG = LoggerFactory.getLogger(BFeignImpl.class);

    @Override
    @SentinelResource(value = "sentinel-resource", blockHandler = "handleBlock", fallback = "handleFallback")
    public String invoke(@PathVariable(value = "value") String value) {
        value = doInvoke(value);

        LOG.info("Invocation path: {}", value);

        return value;
    }

    public String handleBlock(String value, BlockException e) {
        return value + "-> B server sentinel block, cause=" + e.getClass().getName() + ", rule=" + e.getRule() + ", limitApp=" + e.getRuleLimitApp();
    }

    public String handleFallback(String value) {
        return value + "-> B server sentinel fallback";
    }
}

Native Sentinel rule

[
    {
        "resource": "sentinel-resource",
        "limitApp": "default",
        "grade": 1,
        "count": 1,
        "strategy": 0,
        "refResource": null,
        "controlBehavior": 0,
        "warmUpPeriodSec": 10,
        "maxQueueingTimeMs": 500,
        "clusterMode": false,
        "clusterConfig": null
    }
]

Protection based on the Sentinel LimitApp extension

# Enable/disable Sentinel LimitApp rate limiting and related features. Defaults to false if missing
spring.application.strategy.sentinel.limit.app.enabled=true

Full-link call-chain monitoring

n-d-service-group - group or application the service belongs to
n-d-service-type - service type: gateway | service | console | test; users only need to care about the first two
n-d-service-id - service ID
n-d-service-address - service address, including host and port
n-d-service-version - service version
n-d-service-region - region the service belongs to
n-d-service-env - environment the service belongs to
n-d-version - version routing value
n-d-region - region routing value
n-d-env - environment routing value
n-d-address - address routing value
n-d-version-weight - version-weight routing value
n-d-region-weight - region-weight routing value
n-d-id-blacklist - global unique-ID blacklist value
n-d-address-blacklist - IP address and port blacklist value
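These `n-d-*` headers travel with every call in the chain. A minimal sketch of the propagation step: copy any present `n-d-*` headers from the inbound request to the outbound request. Illustrative only; Discovery performs this inside its Feign/RestTemplate/WebClient interceptors, and the class name here is mine:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: forward all n-d-* strategy headers from an inbound request to an outbound one.
public class StrategyHeaderPropagator {
    public static Map<String, String> propagate(Map<String, String> inbound) {
        Map<String, String> outbound = new HashMap<>();
        for (Map.Entry<String, String> entry : inbound.entrySet()) {
            if (entry.getKey().startsWith("n-d-")) {
                outbound.put(entry.getKey(), entry.getValue());
            }
        }
        return outbound;
    }
}
```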

Full-link log monitoring

<!-- Logback configuration. See http://logback.qos.ch/manual/index.html -->
<configuration scan="true" scanPeriod="10 seconds">
    <!-- Simple file output -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- encoder defaults to ch.qos.logback.classic.encoder.PatternLayoutEncoder -->
        <encoder>
            <pattern>discovery %date %level [%thread] [%X{trace-id}] [%X{span-id}] [%X{n-d-service-group}] [%X{n-d-service-type}] [%X{n-d-service-app-id}] [%X{n-d-service-id}] [%X{n-d-service-address}] [%X{n-d-service-version}] [%X{n-d-service-region}] [%X{n-d-service-env}] [%X{n-d-service-zone}] [%X{mobile}] [%X{user}] %logger{10} [%file:%line] - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>log/discovery-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <maxFileSize>50MB</maxFileSize>
        </rollingPolicy>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <!-- Safely log to the same file from multiple JVMs. Degrades performance! -->
        <prudent>true</prudent>
    </appender>

    <appender name="FILE_ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <discardingThreshold>0</discardingThreshold>
        <queueSize>512</queueSize>
        <appender-ref ref="FILE" />
    </appender>

    <!-- Console output -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoder defaults to ch.qos.logback.classic.encoder.PatternLayoutEncoder -->
        <encoder>
            <pattern>discovery %date %level [%thread] [%X{trace-id}] [%X{span-id}] [%X{n-d-service-group}] [%X{n-d-service-type}] [%X{n-d-service-app-id}] [%X{n-d-service-id}] [%X{n-d-service-address}] [%X{n-d-service-version}] [%X{n-d-service-region}] [%X{n-d-service-env}] [%X{n-d-service-zone}] [%X{mobile}] [%X{user}] %logger{10} [%file:%line] - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <!-- Only log level INFO and above -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
    </appender>

    <!-- For loggers in these namespaces, log at all levels. -->
    <logger name="pedestal" level="ALL" />
    <logger name="hammock-cafe" level="ALL" />
    <logger name="user" level="ALL" />

    <root level="INFO">
        <!-- <appender-ref ref="FILE_ASYNC" /> -->
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

Full-link metrics monitoring

# Enable/disable output of Sentinel metric pass-count statistics. Defaults to true if missing
spring.application.strategy.metric.sentinel.pass.qps.output.enabled=true
# Enable/disable output of Sentinel metric block-count statistics. Defaults to true if missing
spring.application.strategy.metric.sentinel.block.qps.output.enabled=true
# Enable/disable output of Sentinel metric success-count statistics. Defaults to true if missing
spring.application.strategy.metric.sentinel.success.qps.output.enabled=true
# Enable/disable output of Sentinel metric exception-count statistics. Defaults to true if missing
spring.application.strategy.metric.sentinel.exception.qps.output.enabled=true

Full-link alarm monitoring

@EventBus
public class MySubscriber {
    @Subscribe
    public void onAlarm(StrategyAlarmEvent strategyAlarmEvent) {
        // In this alarm the type is the value of the static constant StrategyConstant.STRATEGY_CONTEXT_ALARM, indicating a blue-green/gray context alarm
        String alarmType = strategyAlarmEvent.getAlarmType();

        // Store the alarm data alarmMap via the event bus into Elasticsearch, a message queue, a database, etc.
        Map<String, String> alarmMap = strategyAlarmEvent.getAlarmMap();
    }
}

Full-link service-side annotation

# For routing strategies, the scan path for business RestController classes must be specified. This configuration serves RPC-style invocation interception, consumer-side service isolation and the call chain
spring.application.strategy.scan.packages=com.nepxion.discovery.guide.service.feign

Automatic version-number generation via the Git plugin

<plugin>
    <groupId>pl.project13.maven</groupId>
    <artifactId>git-commit-id-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>revision</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <!-- Required; must be set to true -->
        <generateGitPropertiesFile>true</generateGitPropertiesFile>
        <!-- Specify the date format -->
        <dateFormat>yyyyMMdd</dateFormat>
        <!-- <dateFormat>yyyy-MM-dd-HH:mm:ss</dateFormat> -->
    </configuration>
</plugin>

Automatic group-name generation from the service-name prefix

# Enable/disable using the service-name prefix as the service group name. Defaults to false if missing
spring.application.group.generator.enabled=true
# Truncation length of the service-name prefix; must be greater than 0
spring.application.group.generator.length=15
# Truncation marker of the service-name prefix. If a truncation length is configured, the length-based mode is used; otherwise the marker-based mode is used
spring.application.group.generator.character=-
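A sketch of the two generation modes described above. Assumption on my part: in length mode the first `length` characters of the service name are used, and in marker mode the prefix before the first occurrence of the marker is used; the class is illustrative, not Discovery's implementation:

```java
// Sketch of group-name generation from a service-name prefix (assumed semantics).
public class GroupNameGenerator {
    // Length mode: take the first <length> characters of the service name
    public static String byLength(String serviceName, int length) {
        return serviceName.substring(0, Math.min(length, serviceName.length()));
    }

    // Marker mode: take the prefix before the first occurrence of the marker
    public static String byCharacter(String serviceName, char marker) {
        int index = serviceName.indexOf(marker);
        return index < 0 ? serviceName : serviceName.substring(0, index);
    }
}
```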

Automatic package scanning

# For routing strategies, the scan path for business RestController classes must be specified. This configuration serves RPC-style invocation interception, consumer-side service isolation and the call chain
spring.application.strategy.scan.packages=com.nepxion.discovery.guide.service

Traffic coloring configuration

spring.cloud.discovery.metadata.group=xxx-service-group
spring.cloud.discovery.metadata.version=1.0
spring.cloud.discovery.metadata.region=dev
spring.cloud.discovery.metadata.env=env1
spring.cloud.discovery.metadata.zone=zone1

Middleware property configuration

# Nacos config for discovery
spring.cloud.nacos.discovery.server-addr=localhost:8848
# spring.cloud.nacos.discovery.namespace=discovery

Feature switch configuration

# Plugin core config
# Enable/disable control at the service-registration level. Once disabled, the blacklist/whitelist filtering for service registration and the maximum-registration-count limit stop working. Defaults to true if missing
spring.application.register.control.enabled=true
# Enable/disable control at the service-discovery level. Once disabled, multi-version invocation control and dynamic hiding of service instances by IP address stop working. Defaults to true if missing
spring.application.discovery.control.enabled=true
# Enable/disable control and push of rule configuration via REST. Once disabled, rules can only be controlled and pushed through the remote configuration center. Defaults to true if missing
spring.application.config.rest.control.enabled=true
# Weighted-random algorithm. Defaults to MapWeightRandom if missing
spring.application.weight.random.type=MapWeightRandom
# Format of the rule file; xml and json are supported. Defaults to xml if missing
spring.application.config.format=xml
# spring.application.config.format=json
# Path of the local rule file. Two forms are supported: classpath:rule.xml (rule.json) - the rule file sits under the resources directory so it can be packaged into the jar; file:rule.xml (rule.json) - the rule file sits in the project root directory, kept external for easy modification. If missing, no local rules are loaded
spring.application.config.path=classpath:rule.xml
# spring.application.config.path=classpath:rule.json
# Key used to categorize microservices, usually the group field, e.g. eureka.instance.metadataMap.group=xxx-group or eureka.instance.metadataMap.application=xxx-application. Defaults to group if missing
spring.application.group.key=group
# spring.application.group.key=application
# Business systems usually want the basic Spring, Spring Boot or Spring Cloud configuration and tuning parameters (not business parameters) kept out of the business side and integrated into the base framework. In special cases, however, a business system may want to override parameters configured in the base framework with its own values
# Such configuration can be placed in the file below, usually under the resources directory. Defaults to spring-application-default.properties if missing
spring.application.default.properties.path=spring-application-default.properties
# Under load balancing, retry when the consumer finds the provider's initial service-instance list empty. Defaults to false if missing
spring.application.no.servers.retry.enabled=false
# Under load balancing, the number of retries when the consumer finds the provider's initial service-instance list empty. Defaults to 5 if missing
spring.application.no.servers.retry.times=5
# Under load balancing, the interval between retries when the consumer finds the provider's initial service-instance list empty. Defaults to 2000 if missing
spring.application.no.servers.retry.await.time=2000
# Under load balancing, notify via log when the consumer finds the provider's service-instance list empty. Defaults to false if missing
spring.application.no.servers.notify.enabled=false
# The Nacos registry automatically rewrites service names into the GROUP@@SERVICE_ID format, so looking up metadata by service name fails. The switch below controls whether the GROUP prefix is filtered out. Defaults to true if missing
spring.application.nacos.service.id.filter.enabled=true

# Plugin strategy config
# Enable/disable Ribbon's default ZoneAvoidanceRule load-balancing strategy. Once disabled, the simple RoundRobin strategy is used instead. Defaults to true if missing
spring.application.strategy.zone.avoidance.rule.enabled=true
# Enable/disable interception of REST-style calls when routing strategies are applied. Defaults to true if missing
spring.application.strategy.rest.intercept.enabled=true
# Enable/disable decoration of the server-side Request for REST-style calls in asynchronous scenarios. When the main thread finishes before a child thread, the Request is destroyed and headers can no longer be read; enabling decoration guarantees they can. Defaults to true if missing
spring.application.strategy.rest.request.decorator.enabled=true
# Enable/disable debug logging of header transmission. Note: it logs once per call and affects performance; disabling it in stress-test and production environments is recommended. Defaults to false if missing
spring.application.strategy.rest.intercept.debug.enabled=true
# Execution order (Order) of the routing-strategy filter. Defaults to 0 if missing
spring.application.strategy.service.route.filter.order=0
# When a header is passed in from outside and the service also sets and passes a header of the same name, decide which one is passed downstream. If the switch below is true, the service's value wins; otherwise the external value wins. Defaults to true if missing
spring.application.strategy.service.header.priority=true
# Enable/disable transmission of core strategy headers on Feign; defaults to true if missing. When global subscription is enabled, core strategy header transmission can be turned off to shrink the transmitted data and improve performance somewhat. The core strategy headers are:
# 1. n-d-version
# 2. n-d-region
# 3. n-d-address
# 4. n-d-version-weight
# 5. n-d-region-weight
# 6. n-d-id-blacklist
# 7. n-d-address-blacklist
# 8. n-d-env (not part of blue-green/gray release; it is transmitted end to end whenever passed in from outside)
spring.application.strategy.feign.core.header.transmission.enabled=true
# Enable/disable transmission of core strategy headers on RestTemplate; defaults to true if missing. When global subscription is enabled, core strategy header transmission can be turned off to shrink the transmitted data and improve performance somewhat. The core strategy headers are:
# 1. n-d-version
# 2. n-d-region
# 3. n-d-address
# 4. n-d-version-weight
# 5. n-d-region-weight
# 6. n-d-id-blacklist
# 7. n-d-address-blacklist
# 8. n-d-env (not part of blue-green/gray release; it is transmitted end to end whenever passed in from outside)
spring.application.strategy.rest.template.core.header.transmission.enabled=true
# Enable/disable transmission of core strategy headers on WebClient; defaults to true if missing. When global subscription is enabled, core strategy header transmission can be turned off to shrink the transmitted data and improve performance somewhat. The core strategy headers are:
# 1. n-d-version
# 2. n-d-region
# 3. n-d-address
# 4. n-d-version-weight
# 5. n-d-region-weight
# 6. n-d-id-blacklist
# 7. n-d-address-blacklist
# 8. n-d-env (not part of blue-green/gray release; it is transmitted end to end whenever passed in from outside)
spring.application.strategy.web.client.core.header.transmission.enabled=true
# When intercepting REST-style calls (Feign, RestTemplate or WebClient) for routing strategies, configure the values below to pass externally supplied custom header parameters (the framework's built-in context headers, e.g. trace-id, span-id) into the service. Separate multiple values with ";"; spaces are not allowed
spring.application.strategy.context.request.headers=trace-id;span-id
# When intercepting REST-style calls (Feign, RestTemplate or WebClient) for routing strategies, configure the values below to pass externally supplied custom header parameters (business-defined headers, e.g. mobile) into the service. Separate multiple values with ";"; spaces are not allowed
spring.application.strategy.business.request.headers=token
# When routing strategies execute request filtering, URIs containing the specified fields are excluded. Defaults to /actuator/ if missing; separate multiple values with ";"; spaces are not allowed
spring.application.strategy.uri.filter.exclusion=/actuator/
# Enable/disable interception of RPC-style calls when routing strategies are applied. Defaults to false if missing
spring.application.strategy.rpc.intercept.enabled=true
# For routing strategies, the scan path for business RestController classes must be specified. This configuration serves RPC-style invocation interception, consumer-side service isolation and the call chain
spring.application.strategy.scan.packages=com.nepxion.discovery.plugin.example.service.feign
# Enable/disable consumer-side service isolation (based on whether the Group is the same). Defaults to false if missing
spring.application.strategy.consumer.isolation.enabled=true
# Enable/disable provider-side service isolation (based on whether the Group is the same). Defaults to false if missing
spring.application.strategy.provider.isolation.enabled=true

# Enable/disable monitoring. Once disabled, both call-chain and log output are turned off. Defaults to false if missing
spring.application.strategy.monitor.enabled=true
# Enable/disable alarms. Once disabled, blue-green/gray context output is turned off. Defaults to false if missing
spring.application.strategy.alarm.enabled=true
# Enable/disable log output. Defaults to false if missing
spring.application.strategy.logger.enabled=true
# Whether log output shows the key in front of each MDC value. Defaults to true if missing
spring.application.strategy.logger.mdc.key.shown=true
# Enable/disable debug logging. Note: it logs once per call and affects performance; disabling it in stress-test and production environments is recommended. Defaults to false if missing
spring.application.strategy.logger.debug.enabled=true
# Enable/disable call-chain output. Defaults to false if missing
spring.application.strategy.tracer.enabled=true
# Enable/disable output of the call chain's blue-green/gray information as a separate span node; if disabled, the information is written into the native span node (SkyWalking does not support the native mode). Defaults to true if missing
spring.application.strategy.tracer.separate.span.enabled=true
# Enable/disable output of the call chain's blue-green/gray rule-strategy information. Defaults to true if missing
spring.application.strategy.tracer.rule.output.enabled=true
# Enable/disable detailed output of exception information in the call chain. Defaults to false if missing
spring.application.strategy.tracer.exception.detail.output.enabled=true
# Enable/disable output of method input and output parameters to the call chain. Defaults to false if missing
spring.application.strategy.tracer.method.context.output.enabled=true
# Name of the blue-green/gray span shown in the call-chain UI; changing it to your company's own framework product name is recommended. Defaults to NEPXION if missing
spring.application.strategy.tracer.span.value=NEPXION
# Plugin name of the blue-green/gray span tag shown in the call-chain UI; changing it to a description of your company's own framework product is recommended. Defaults to Nepxion Discovery if missing
spring.application.strategy.tracer.span.tag.plugin.value=Nepxion Discovery
# Enable/disable output of Sentinel rules on the call chain's span. Defaults to true if missing
spring.application.strategy.tracer.sentinel.rule.output.enabled=true
# Enable/disable output of Sentinel method input parameters on the call chain's span. Defaults to false if missing
spring.application.strategy.tracer.sentinel.args.output.enabled=true

# Enable/disable output of Sentinel metric pass-count statistics. Defaults to true if missing
spring.application.strategy.metric.sentinel.pass.qps.output.enabled=true
# Enable/disable output of Sentinel metric block-count statistics. Defaults to true if missing
spring.application.strategy.metric.sentinel.block.qps.output.enabled=true
# Enable/disable output of Sentinel metric success-count statistics. Defaults to true if missing
spring.application.strategy.metric.sentinel.success.qps.output.enabled=true
# Enable/disable output of Sentinel metric exception-count statistics. Defaults to true if missing
spring.application.strategy.metric.sentinel.exception.qps.output.enabled=true

# Enable/disable the data-source extension for Sentinel's native rate limiting, degradation, circuit breaking, authorization and other features. Defaults to false if missing
spring.application.strategy.sentinel.datasource.enabled=true
# Path of the flow-control rule file. Defaults to classpath:sentinel-flow.json if missing
spring.application.strategy.sentinel.flow.path=classpath:sentinel-flow.json
# Path of the degrade rule file. Defaults to classpath:sentinel-degrade.json if missing
spring.application.strategy.sentinel.degrade.path=classpath:sentinel-degrade.json
# Path of the authority rule file. Defaults to classpath:sentinel-authority.json if missing
spring.application.strategy.sentinel.authority.path=classpath:sentinel-authority.json
# Path of the system rule file. Defaults to classpath:sentinel-system.json if missing
spring.application.strategy.sentinel.system.path=classpath:sentinel-system.json
# Path of the hot-parameter flow rule file. Defaults to classpath:sentinel-param-flow.json if missing
spring.application.strategy.sentinel.param.flow.path=classpath:sentinel-param-flow.json

# Enable/disable Sentinel LimitApp advanced rate limiting and circuit breaking. Defaults to false if missing
spring.application.strategy.sentinel.limit.app.enabled=true
# When executing Sentinel LimitApp advanced rate limiting, use the value of an HTTP request header as the key. Defaults to n-d-service-id if missing, i.e. the service name is the key
spring.application.strategy.sentinel.request.origin.key=n-d-service-id

# Route traffic to the specified environment. The reserved value default is not allowed; defaults to common if missing
spring.application.strategy.environment.route=common

# Enable/disable availability-zone affinity, i.e. only services in the same availability zone may call each other; "same zone" means the zone value in the metadata of the caller instance equals that of the provider instance. Defaults to false if missing
spring.application.strategy.zone.affinity.enabled=true
# Enable/disable routing after availability-zone affinity fails, i.e. when the caller instance cannot find a provider instance in the same availability zone: if the switch is on, the call can be routed to another zone or to instances not belonging to any zone; if off, the call fails directly. Defaults to true if missing
spring.application.strategy.zone.route.enabled=true

# Version failover: when no instance of the requested version can be found, route to instances of the old stable version. This guards against traffic loss caused by human error in blue-green/gray version release, or by a catastrophic total outage of that version's instances
# Enable/disable version failover. Defaults to false if missing
spring.application.strategy.version.failover.enabled=true
# Version preference: outside blue-green/gray release scenarios, route to instances of the old stable version. This prevents confusion when blue-green/gray releases run in parallel on several gateways: a service not in blue-green/gray state is only called through its old stable-version instances, while a service in blue-green/gray state is still matched by the version number passed in the header
# Enable/disable version preference. Defaults to false if missing
spring.application.strategy.version.prefer.enabled=true

# Enable/disable sending of the parameter subscription event at service startup. Defaults to true if missing
spring.application.parameter.event.onstart.enabled=true

# Enable/disable automatic package scanning: when the scan path is not configured manually, it can be determined by automatic scanning. Defaults to true if missing
spring.application.strategy.auto.scan.packages.enabled=true
# Enable/disable nested scanning, i.e. scanning packages of external artifacts outside this project; multiple levels of nesting are supported. Defaults to false if missing
spring.application.strategy.auto.scan.recursion.enabled=false

# Enable/disable using the service-name prefix as the service group name. Defaults to false if missing
spring.application.group.generator.enabled=true
# Truncation length of the service-name prefix; must be greater than 0
spring.application.group.generator.length=15
# Truncation marker of the service-name prefix. If a truncation length is configured, the length-based mode is used; otherwise the marker-based mode is used
spring.application.group.generator.character=-

# Enable/disable using one or more fields from the Git information, combined, as the service version number. Defaults to false if missing
spring.application.git.generator.enabled=true
# Output path of the git information file produced by the git-commit-id-plugin. The properties and json formats and the classpath:xxx and file:xxx path forms are supported; these must match the plugin configuration. Defaults to classpath:git.properties if missing
spring.application.git.generator.path=classpath:git.properties
# spring.application.git.generator.path=classpath:git.json
# Use one or more fields from the Git information, combined, as the service version number. Defaults to {git.commit.time}-{git.total.commit.count} if missing
spring.application.git.version.key={git.commit.id.abbrev}-{git.commit.time}
# spring.application.git.version.key={git.build.version}-{git.commit.time}
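The version key is a template over keys from the generated git.properties file. A self-contained sketch of how such a template could be resolved; the class is illustrative, not Discovery's implementation, and the property values in the usage are made up:

```java
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: resolve {key} placeholders in a version-key template against git.properties.
public class GitVersionResolver {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\{([^}]+)\\}");

    public static String resolve(String template, Properties gitProps) {
        Matcher matcher = PLACEHOLDER.matcher(template);
        StringBuffer result = new StringBuffer();
        while (matcher.find()) {
            // Replace each {key} with its value from git.properties (empty if absent)
            matcher.appendReplacement(result, Matcher.quoteReplacement(gitProps.getProperty(matcher.group(1), "")));
        }
        matcher.appendTail(result);
        return result.toString();
    }
}
```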

# When the service side implements isolation with the Hystrix thread-isolation mode, spring.application.strategy.hystrix.threadlocal.supported must be set to true and the discovery-plugin-strategy-starter-hystrix package must be imported; otherwise the ThreadLocal context object is lost on thread switches. Defaults to false if missing
spring.application.strategy.hystrix.threadlocal.supported=true

# Enable/disable Swagger. Defaults to true if missing
swagger.service.enabled=true
# Swagger base Docket group name
swagger.service.base.group=Nepxion Discovery
# Swagger custom Docket group name
swagger.service.scan.group=Admin Center Restful APIs
# Swagger custom scan packages
swagger.service.scan.packages=your-scan-packages
# Swagger description
swagger.service.description=your-description
# Swagger version
swagger.service.version=6.11.0
# Swagger license name
swagger.service.license.name=Apache License 2.0
# Swagger license URL
swagger.service.license.url=http://www.apache.org/licenses/LICENSE-2.0
# Swagger contact name
swagger.service.contact.name=Nepxion
# Swagger contact URL
swagger.service.contact.url=https://github.com/Nepxion/Discovery
# Swagger contact email
swagger.service.contact.email=1394997@qq.com
# Swagger terms-of-service URL
swagger.service.termsOfService.url=http://www.nepxion.com

Docker containerization

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <executable>true</executable>
        <mainClass>com.nepxion.discovery.guide.gateway.DiscoveryGuideGateway</mainClass>
        <layout>JAR</layout>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>repackage</goal>
            </goals>
            <configuration>
                <attach>false</attach>
            </configuration>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <configuration>
        <imageName>${ImageName}</imageName>
        <baseImage>openjdk:8-jre-alpine</baseImage>
        <entryPoint>["java", "-jar", "/${project.build.finalName}.jar"]</entryPoint>
        <exposes>
            <expose>${ExposePort}</expose>
        </exposes>
        <resources>
            <resource>
                <targetPath>/</targetPath>
                <directory>${project.build.directory}</directory>
                <include>${project.build.finalName}.jar</include>
            </resource>
        </resources>
    </configuration>
</plugin>

Configuration file

# Built-in configuration of the automated test framework
# Scan path for test-case classes
spring.application.test.scan.packages=com.nepxion.discovery.guide.test
# Whether to log the configuration when test-case configuration content is pushed. Defaults to true if missing
spring.application.test.config.print.enabled=true
# Time to wait for the pushed test-case configuration to take effect. After a push to the remote configuration center, notifying every service to refresh its own configuration cache takes a while. Defaults to 3000 if missing
spring.application.test.config.operation.await.time=5000
# Console address for pushing test-case configuration. The console is the link between the service registry/discovery center, the remote configuration center and the services
spring.application.test.console.url=http://localhost:6001/

# Business test configuration
# Spring Cloud Gateway configuration
gateway.group=discovery-guide-group
gateway.service.id=discovery-guide-gateway
gateway.test.url=http://localhost:5001/discovery-guide-service-a/invoke/gateway
gateway.route0.test.url=http://localhost:5001/x/invoke/gateway
gateway.route1.test.url=http://localhost:5001/y/invoke/gateway
gateway.route2.test.url=http://localhost:5001/z/invoke/gateway
gateway.inspect.url=http://localhost:5001/discovery-guide-service-a/inspector/inspect

# Zuul gateway configuration
zuul.group=discovery-guide-group
zuul.service.id=discovery-guide-zuul
zuul.test.url=http://localhost:5002/discovery-guide-service-a/invoke/zuul
zuul.route0.test.url=http://localhost:5002/x/invoke/zuul
zuul.route1.test.url=http://localhost:5002/y/invoke/zuul
zuul.route2.test.url=http://localhost:5002/z/invoke/zuul
zuul.inspect.url=http://localhost:5002/discovery-guide-service-a/inspector/inspect

# Number of times each test case is executed in a loop
testcase.loop.times=1

# Switch for gray-weight test cases. Weight testing needs a large number of sampling calls, which makes the whole automated test run very long; it can be enabled/disabled with the switch below. Defaults to true if missing
gray.weight.testcases.enabled=true
# Total number of samples for gray-weight test cases. The larger the sample count, the more accurate the gray weight, but the longer it takes
gray.weight.testcase.sample.count=1500
# Allowed deviation of the gray-weight accuracy. The larger the sample count, the smaller the deviation
gray.weight.testcase.result.offset=5
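The weight check described above can be sketched as a simple percentage comparison: the measured hit ratio of a version must fall within the configured offset of the desired weight. The class is illustrative, not the test framework's API:

```java
// Sketch of the gray-weight assertion: |measured% - desired%| <= offset%.
public class WeightAssertion {
    public static boolean withinOffset(int hits, int sampleCount, double desiredPercent, double offsetPercent) {
        double measuredPercent = hits * 100.0 / sampleCount;
        return Math.abs(measuredPercent - desiredPercent) <= offsetPercent;
    }
}
```

For example, 2688 hits out of 3000 samples is 89.6%, which passes a 90% desired weight with a 5% offset.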

Test cases

<dependency>
    <groupId>com.nepxion</groupId>
    <artifactId>discovery-plugin-test-starter</artifactId>
    <version>${discovery.version}</version>
</dependency>

Test report

---------- Run automation testcase :: testNoGray() ----------
Result1 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4001][V=1.0][R=qa][G=discovery-guide-group]
Result2 : gateway -> discovery-guide-service-a[192.168.0.107:3002][V=1.1][R=qa][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4001][V=1.0][R=qa][G=discovery-guide-group]
Result3 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4002][V=1.1][R=dev][G=discovery-guide-group]
Result4 : gateway -> discovery-guide-service-a[192.168.0.107:3002][V=1.1][R=qa][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4002][V=1.1][R=dev][G=discovery-guide-group]
* Passed
---------- Run automation testcase :: testEnabledStrategyGray1() ----------
Header : [mobile:"138"]
Result1 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4001][V=1.0][R=qa][G=discovery-guide-group]
Result2 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4001][V=1.0][R=qa][G=discovery-guide-group]
Result3 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4001][V=1.0][R=qa][G=discovery-guide-group]
Result4 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4001][V=1.0][R=qa][G=discovery-guide-group]
* Passed
---------- Run automation testcase :: testVersionStrategyGray1() ----------
Result1 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4001][V=1.0][R=qa][G=discovery-guide-group]
Result2 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4001][V=1.0][R=qa][G=discovery-guide-group]
Result3 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4001][V=1.0][R=qa][G=discovery-guide-group]
Result4 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4001][V=1.0][R=qa][G=discovery-guide-group]
* Passed
---------- Run automation testcase :: testRegionStrategyGray1() ----------
Result1 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4002][V=1.1][R=dev][G=discovery-guide-group]
Result2 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4002][V=1.1][R=dev][G=discovery-guide-group]
Result3 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4002][V=1.1][R=dev][G=discovery-guide-group]
Result4 : gateway -> discovery-guide-service-a[192.168.0.107:3001][V=1.0][R=dev][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4002][V=1.1][R=dev][G=discovery-guide-group]
* Passed
---------- Run automation testcase :: testVersionWeightStrategyGray() ----------
Sample count=3000
Weight result offset desired=5%
A service desired : 1.0 version weight=90%, 1.1 version weight=10%
B service desired : 1.0 version weight=20%, 1.1 version weight=80%
Result : A service 1.0 version weight=89.6%
Result : A service 1.1 version weight=10.4%
Result : B service 1.0 version weight=20.1333%
Result : B service 1.1 version weight=79.8667%
* Passed
---------- Run automation testcase :: testRegionWeightStrategyGray() ----------
Sample count=3000
Weight result offset desired=5%
A service desired : dev region weight=85%, qa region weight=15%
B service desired : dev region weight=85%, qa region weight=15%
Result : A service dev region weight=83.7667%
Result : A service qa region weight=16.2333%
Result : B service dev region weight=86.2%
Result : B service qa region weight=13.8%
* Passed
---------- Run automation testcase :: testStrategyGray1() ----------
Header : [a:"1", b:"2"]
Result1 : gateway -> discovery-guide-service-a[192.168.0.107:3002][V=1.1][R=qa][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4002][V=1.1][R=dev][G=discovery-guide-group]
Result2 : gateway -> discovery-guide-service-a[192.168.0.107:3002][V=1.1][R=qa][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4002][V=1.1][R=dev][G=discovery-guide-group]
Result3 : gateway -> discovery-guide-service-a[192.168.0.107:3002][V=1.1][R=qa][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4002][V=1.1][R=dev][G=discovery-guide-group]
Result4 : gateway -> discovery-guide-service-a[192.168.0.107:3002][V=1.1][R=qa][G=discovery-guide-group] -> discovery-guide-service-b[192.168.0.107:4002][V=1.1][R=dev][G=discovery-guide-group]

Test environment

zuul.host.max-per-route-connections=1000
zuul.host.max-total-connections=1000
zuul.semaphore.max-semaphores=5000

Test steps

Usage: wrk <options> <URL of the HTTP service under test>
  Options:
    -c, --connections Number of TCP connections to open and keep to the server
    -d, --duration    Duration of the test, e.g. 2s, 2m, 2h
    -t, --threads     Number of threads to use for the test
    -s, --script      Path to a Lua script
    -H, --header      Add an HTTP header to every request, e.g. -H "id: 123" -H "token: abc"; a space is required after the colon
        --latency     Print latency statistics after the test
        --timeout     Timeout

Could not locate MSBuild instance to register with OmniSharp

"omnisharp.enableEditorConfigSupport": true,
"omnisharp.path": "latest",
"omnisharp.projectLoadTimeout": 60,
"omnisharp.useGlobalMono": "always",
"omnisharp.useModernNet": true,

Unable to inject @Stateless EJB into CDI bean (multi-module) Jakarta EE 8

<sub-deployment name="WebApp-web.war">
    <dependencies>
        <module name="deployment.WebApp.ear.WebApp-entities.jar" />
        <module name="deployment.WebApp.ear.WebApp-ejb.jar" />
    </dependencies>
  </sub-deployment>

Enable use of images from the local library on Kubernetes

containers:
  - name: test-container
    image: testImage:latest
    imagePullPolicy: Never
-----------------------
# Start minikube and set docker env
minikube start
eval $(minikube docker-env)

# Build image
docker build -t foo:1.0 .

# Run in minikube
kubectl run hello-foo --image=foo:1.0 --image-pull-policy=Never
-----------------------
    image: wm/hello-openfaas2:0.1
    imagePullPolicy: Always

How to use scoped APIs with (GSI) Google Identity Services

const client = google.accounts.oauth2.initTokenClient({
  client_id: 'YOUR_GOOGLE_CLIENT_ID',
  scope: 'https://www.googleapis.com/auth/calendar.readonly',
  callback: (response) => {
    ...
  },
});
client.requestAccessToken();

Downloading files from public Google Drive in python: scoping issues?

regex = "(?<=https://drive.google.com/file/d/)[a-zA-Z0-9]+"
regex_rkey = "(?<=resourcekey=)[a-zA-Z0-9-]+"
for i, l in enumerate(links_to_download):
    url = l
    file_id = re.search(regex, url)[0]
    resource_key = re.search(regex_rkey, url)[0]
    request = drive_service.files().get_media(fileId=file_id)
    request.headers["X-Goog-Drive-Resource-Keys"] = f"{file_id}/{resource_key}"
    fh = io.FileIO(f"file_{i}", mode='wb')
    downloader = MediaIoBaseDownload(fh, request)
    done = False
    while done is False:
        status, done = downloader.next_chunk()
        print("Download %d%%." % int(status.progress() * 100))

EFK system is built on docker but fluentd can't start up

docker pull kurraj/fluentd_castom:latest
FROM fluent/fluentd:v1.14-1

USER root

RUN gem update --system && \
gem install fluent-plugin-elasticsearch --source http://rubygems.org
-----------------------
FROM fluent/fluentd:v1.12.0-debian-1.0
USER root
RUN gem uninstall -I elasticsearch && gem install elasticsearch -v 7.17.0
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "5.0.3"]
USER fluent

org.junit.platform.commons.JUnitException: TestEngine with ID 'junit-jupiter' failed to discover tests

test {
  useJUnitPlatform()
  include '**/*Pdf'
}
-----------------------
project-dir
 -- src
     -- main
        -- java
            -- classes
     -- test
        -- java
            -- test-classes

How can I get notified when money has been sent to a particular Bitcoin address on a local regtest network?

docker run -p 18444:19000 -d ulamlabs/bitcoind-custom-regtest:latest
docker exec -it 0aa2e863cd9927 bash
peerbloomfilters=1
docker exec -it 0aa2e863cd9927 bitcoin-cli -regtest getnewaddress
2N23tWAFEtBtTgxNjBNmnwzsiPdLcNek181
docker exec -it 0aa2e863cd9927 bitcoin-cli -regtest generatetoaddress 101 2N23tWAFEtBtTgxNjBNmnwzsiPdLcNek181 
import java.io.File;

import org.bitcoinj.core.Address;
import org.bitcoinj.core.NetworkParameters;
import org.bitcoinj.kits.WalletAppKit;
import org.bitcoinj.params.RegTestParams;

public class Kit {

  public static void main(String[] args) {
    Kit kit  = new Kit();
    kit.run();
  }

  private synchronized void run(){
    NetworkParameters params = RegTestParams.get();
    WalletAppKit kit = new WalletAppKit(params, new File("."), "walletappkit-example");
    kit.connectToLocalHost();

    kit.startAsync();
    kit.awaitRunning();

    kit.wallet().addWatchedAddress(Address.fromString(params, "2N23tWAFEtBtTgxNjBNmnwzsiPdLcNek181"));

    kit.wallet().addCoinsReceivedEventListener((wallet, tx, prevBalance, newBalance) -> {
      System.out.println("-----> coins received: " + tx.getTxId());
    });

    while (true) {
      try {
        this.wait(2000);
      } catch (InterruptedException e) {
        e.printStackTrace();
      }
    }
  }
}
docker exec -it 0aa2e863cd9927 bitcoin-cli -regtest sendtoaddress 2N23tWAFEtBtTgxNjBNmnwzsiPdLcNek181 0.00001
0f972642713c72ae0fe03fe51818b9ea4d483720b69b90e795f35eb80a587c26
2021-11-09 23:51:20.537 INFO  [NioClientManager][Wallet] Received a pending transaction 0f972642713c72ae0fe03fe51818b9ea4d483720b69b90e795f35eb80a587c26 that spends 0.00 BTC from our own wallet, and sends us 0.00001 BTC
2021-11-09 23:51:20.537 INFO  [NioClientManager][Wallet] commitTx of 0f972642713c72ae0fe03fe51818b9ea4d483720b69b90e795f35eb80a587c26
...
2021-11-09 23:51:20.537 INFO  [NioClientManager][Wallet] ->pending: 0f972642713c72ae0fe03fe51818b9ea4d483720b69b90e795f35eb80a587c26
2021-11-09 23:51:20.537 INFO  [NioClientManager][Wallet] Estimated balance is now: 0.00001 BTC
-----> coins received: 0f972642713c72ae0fe03fe51818b9ea4d483720b69b90e795f35eb80a587c26
2021-11-09 23:51:20.538 INFO  [NioClientManager][WalletFiles] Saving wallet; last seen block is height 165, date 2021-11-09T22:50:48Z, hash 23451521947bc5ff098c088ae0fc445becca8837d39ee8f6dd88f2c47ad5ac23
2021-11-09 23:51:20.543 INFO  [NioClientManager][WalletFiles] Save completed in 4.736 ms

Splunk: Return One or True from a search, use that result in another search

index="myIndex" "started with profile" BD_L* 
| eval Platform=case(match(_raw,"LINUX"),"LINUX",match(_raw,"AIX"),"AIX",match(_raw,"DB2"),"DB2", match(_raw,"SQL"),"SQL", match(_raw,"WEBSPHERE"),"WEBSPHERE", match(_raw,"SYBASE"),"SYBASE", match(_raw,"WINDOWS"),"WINDOWS", true(),"ZLINUX") 
| stats count by Platform RUNID
| join type=left RUNID
    [ search index="myIndex" source="/*/RUNID/*" CASE("ERROR") CTJT*
        | stats count by RUNID
    ]
| stats count by Platform
index=ndx sourcetype=srctp 
| rex field=_raw " (?<runid>\d{10,})[\s\#]"
| rex field=_raw "BD_\w+_(?<platform>\w+)"
| rex field=_raw "(?<error>[eEoOrR]{5})"
| stats values(error) as error values(platform) as platform by runid
| where isnotnull(error)
index=ndx sourcetype=srctp 
| rex field=_raw " (?<runid>\d{10,})[\s\#]"
| rex field=_raw "BD_\w+_(?<platform>\w+)"
| rex field=_raw "(?<error>[eEoOrR]{5})"
| stats values(error) as error values(platform) as platform by runid
| where isnotnull(error)
| stats count by platform
-----------------------
index="myIndex" "started with profile" BD_L* 
| eval platform=case(searchmatch("LINUX"),"LINUX",searchmatch("AIX"),"AIX",searchmatch("DB2"),"DB2", searchmatch("SQL"),"SQL", searchmatch("WEBSPHERE"),"WEBSPHERE", searchmatch("SYBASE"),"SYBASE", searchmatch("WINDOWS"),"WINDOWS", true(),"ZLINUX")
| rex "Discovery run, (?<RUNID>.+) started with profile"
| stats count by platform, RUNID
| join RUNID 
    [ search index="myIndex" source="/opt/XXX/XXXXX/XXXX/log/sensors/*/*" CASE("ERROR") CTJT* 
    | rex field=source "^/opt/XXX/XXXXX/XXXX/log/sensors/(?<RUNID>.+)/" 
    | stats count by RUNID ] 
| stats count by platform
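The `rex` extractions above can be sanity-checked outside Splunk. A small Python sketch applying the same three patterns (the sample log line is a hypothetical shape for illustration, not taken from the original data):

```python
import re

# Hypothetical raw event; its exact shape is an assumption for illustration only.
raw = "Discovery run BD_L1_LINUX 1234567890# ERROR sensor failed"

# Mirrors: | rex field=_raw " (?<runid>\d{10,})[\s\#]"
runid = re.search(r" (\d{10,})[\s#]", raw).group(1)
# Mirrors: | rex field=_raw "BD_\w+_(?<platform>\w+)"
platform = re.search(r"BD_\w+_(\w+)", raw).group(1)
# Mirrors: | rex field=_raw "(?<error>[eEoOrR]{5})"
error = re.search(r"[eEoOrR]{5}", raw).group(0)

print(runid, platform, error)  # 1234567890 LINUX ERROR
```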

How to async handle callback from keyboard hotkeys?

from threading import Thread
from queue import Queue
from time import sleep
from random import randint


def process(task):
    print(task["name"])
    sleep(3 + randint(0, 100) / 100)
    print(f"Task {task['name']} done")


class WorkerThread(Thread):
    def __init__(self, queue):
        super().__init__()
        self.queue = queue
        print("Started worker thread")
        self._open = True

    def run(self):
        while self._open:
            task = self.queue.get()
            process(task)
            self.queue.task_done()

    def close(self):
        print("Closing", self)
        self._open = False


task_queue = Queue()
THREADS = 6
worker_threads = [WorkerThread(task_queue) for _ in range(THREADS)]
for worker in worker_threads:
    worker.daemon = True  # setDaemon() is deprecated; set the attribute instead
    worker.start()


print("Sending one task")
task_queue.put({"name": "Task 1"})
sleep(1)

print("Sending a bunch of tasks")
for i in range(1, 15):
    task_queue.put({"name": f"Task {i}"})
print("Sleeping for a bit")
sleep(2)

print("Shutting down")

# wrap this in your exit code
task_queue.join()  # wait for everything to be done
for worker in worker_threads:
    worker.close()
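The same pattern can be expressed more compactly with the standard library's `concurrent.futures`, which manages the worker pool, queueing, and shutdown for you. A hedged sketch of an equivalent (task payloads are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def process(task):
    # Simulate a slow task, as in the threaded example above
    time.sleep(0.01)
    return f"{task['name']} done"

# The with-block waits for all submitted tasks on exit,
# replacing the manual queue.join() / worker.close() dance
with ThreadPoolExecutor(max_workers=6) as pool:
    futures = [pool.submit(process, {"name": f"Task {i}"}) for i in range(1, 15)]

results = [f.result() for f in futures]
print(results[0])  # Task 1 done
```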

Community Discussions

Trending Discussions on Discovery
  • Could not locate MSBuild instance to register with OmniSharp
  • Unable to inject @Stateless EJB into CDI bean (multi-module) Jakarta EE 8
  • Enable use of images from the local library on Kubernetes
  • How to use scoped APIs with (GSI) Google Identity Services
  • Downloading files from public Google Drive in python: scoping issues?
  • Android Studio Disconnects From Physical Device
  • Winsock sendto returns error 10049 (WSAEADDRNOTAVAIL) for broadcast address after network adapter is disabled or physically disconnected
  • EFK system is built on Docker but fluentd can't start up
  • org.junit.platform.commons.JUnitException: TestEngine with ID 'junit-jupiter' failed to discover tests
  • How can I get notified when money has been sent to a particular Bitcoin address on a local regtest network?

QUESTION

Could not locate MSBuild instance to register with OmniSharp

Asked 2022-Apr-01 at 22:39

I have found many questions about this, but none have helped me. I am trying to write C# code, but OmniSharp autocomplete doesn't work, and I get this from the OmniSharp log:

OmniSharp server started.
    Path: c:\Users\GeorgV.216\.vscode\extensions\ms-dotnettools.csharp-1.24.1\.omnisharp\1.38.3-beta.31\OmniSharp.exe
    PID: 11536

[info]: OmniSharp.Stdio.Host
        Starting OmniSharp on Windows 6.2.9200.0 (x64)
[info]: OmniSharp.Services.DotNetCliService
        Checking the 'DOTNET_ROOT' environment variable to find a .NET SDK
[info]: OmniSharp.Services.DotNetCliService
        Using the 'dotnet' on the PATH.
[info]: OmniSharp.Services.DotNetCliService
        DotNetPath set to dotnet
[info]: OmniSharp.MSBuild.Discovery.MSBuildLocator
        Located 0 MSBuild instance(s)
Could not locate MSBuild instance to register with OmniSharp.

What could be a possible solution?

ANSWER

Answered 2022-Mar-26 at 02:22

Suddenly got this problem too; adding "omnisharp.useModernNet": true to settings.json fixed it for me.
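In VS Code, that setting goes into the user or workspace settings.json:

```json
{
  "omnisharp.useModernNet": true
}
```

With this enabled, OmniSharp uses the .NET SDK on your PATH instead of trying to locate a full MSBuild installation; restarting the OmniSharp server (or the editor) may be needed for it to take effect.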

Source https://stackoverflow.com/questions/71622214

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install Discovery

You can download it from GitHub, Maven.
You can use Discovery like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the Discovery component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, refer to maven.apache.org; for Gradle installation, refer to gradle.org.

Support

For new features, suggestions, and bugs, create an issue on GitHub. If you have questions, search for and ask them on Stack Overflow.


  • © 2022 Open Weaver Inc.