Explore all Microservice open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Microservice

mall

v1.0.1

istio

Istio 1.13.3

apollo

Apollo 2.0.0-RC1 Release

apollo

Apollo 1.9.0 Release

nacos

2.1.0-BETA (Apr 01, 2022)

Popular Libraries in Microservice

advanced-java

by doocs · Java · 57101 stars · CC-BY-SA-4.0

😮 Core Interview Questions & Answers For Experienced Java (Backend) Developers | A complete advanced guide for Java engineers, covering high concurrency, distributed systems, high availability, microservices, and massive-scale data processing.

mall

by macrozheng · Java · 52180 stars · Apache-2.0

mall is an e-commerce system comprising a storefront and an admin backend, built on Spring Boot and MyBatis and deployed with Docker containers. The storefront includes home portal, product recommendation, product search, product display, shopping cart, order flow, member center, customer service, and help center modules. The admin backend includes product management, order management, member management, promotion management, operations management, content management, statistical reports, finance management, permission management, and settings modules.

istio

by istio · Go · 30043 stars · Apache-2.0

Connect, secure, control, and observe services.

apollo

by apolloconfig · Java · 26563 stars · Apache-2.0

Apollo is a reliable configuration management system suitable for microservice configuration management scenarios.

spring-boot-examples

by ityouknow · Java · 26196 stars

About learning Spring Boot via examples: Spring Boot tutorials and technology-stack sample code for a quick and simple start.

apollo

by ctripcorp · Java · 25294 stars · Apache-2.0

Apollo is a reliable configuration management system suitable for microservice configuration management scenarios.

nacos

by alibaba · Java · 21883 stars · Apache-2.0

An easy-to-use dynamic service discovery, configuration, and service management platform for building cloud native applications.

spring-cloud-alibaba

by alibaba · Java · 21810 stars · Apache-2.0

Spring Cloud Alibaba provides a one-stop solution for application development for the distributed solutions of Alibaba middleware.

seata

by seata · Java · 21777 stars · Apache-2.0

Seata is an easy-to-use, high-performance, open source distributed transaction solution.

Trending New libraries in Microservice

Sa-Token

by dromara · Java · 8497 stars · Apache-2.0

Possibly the most feature-complete Java authentication and authorization framework. Currently integrated: login authentication, permission checks, distributed sessions, microservice gateway authentication, single sign-on, OAuth2.0, forced logout, Redis integration, front/back-end separation, remember-me mode, account impersonation, temporary identity switching, account banning, multi-account systems, annotation-based and route-interceptor-based authorization, flexible token generation, automatic renewal, mutually exclusive logins per device type, session management, password encryption, JWT integration, Spring integration, WebFlux integration, and more.

easegress

by megaease · Go · 4243 stars · Apache-2.0

A Cloud Native traffic orchestration system

kitex

by cloudwego · Go · 3958 stars · Apache-2.0

A high-performance and strong-extensibility Golang RPC framework that helps developers build microservices.

jupiter

by douyu · Go · 3691 stars · Apache-2.0

Jupiter is Douyu's open-source Golang microservice framework, oriented toward service governance.

sa-token

by dromara · Java · 3542 stars · Apache-2.0

Possibly the most feature-complete Java authentication and authorization framework. Currently integrated: login authentication, permission checks, distributed sessions, microservice gateway authentication, single sign-on, OAuth2.0, forced logout, Redis integration, front/back-end separation, remember-me mode, account impersonation, temporary identity switching, account banning, multi-account systems, annotation-based and route-interceptor-based authorization, flexible token generation, automatic renewal, mutually exclusive logins per device type, session management, password encryption, JWT integration, Spring integration, WebFlux integration, and more.

SpringBootVulExploit

by LandGrey · Java · 2694 stars

A collection of learning materials, exploitation methods, and techniques for Spring Boot related vulnerabilities; a black-box security assessment checklist.

erda

by erda-project · Go · 2294 stars · Apache-2.0

An enterprise-grade Cloud-Native application platform for Kubernetes.

PassJava-Platform

by Jackson0714 · Java · 1083 stars · GPL-3.0

An open-source Spring Cloud system for practicing interview questions. Review common interview questions in spare moments via a mini program to solidify Java fundamentals. The project teaches you how to build Spring Boot and Spring Cloud projects, using popular technologies such as Spring Boot, MyBatis, Redis, MySQL, MongoDB, RabbitMQ, and Elasticsearch, with Docker-based containerized deployment.

learning-note

by rbmonster · Java · 1081 stars

Java development interview notes (a personal summary of interviews and work experience).

Top Authors in Microservice

1. PacktPublishing · 63 Libraries · 2162 stars
2. piomin · 38 Libraries · 1948 stars
3. prooph · 26 Libraries · 2155 stars
4. spring-cloud · 25 Libraries · 16873 stars
5. OpenLiberty · 23 Libraries · 1078 stars
6. IBM · 22 Libraries · 1437 stars
7. oms-services · 20 Libraries · 186 stars
8. ShittySoft · 20 Libraries · 153 stars
9. thenativeweb · 19 Libraries · 2443 stars
10. moleculerjs · 16 Libraries · 5847 stars


Trending Kits in Microservice

Here are some well-known C# Microservice Libraries. Their use cases include Data Access, Business Logic, Security, Automation, Integration, and Monitoring.


C# Microservice Libraries are packages of code that let developers rapidly create, deploy, and manage microservices in the C# programming language. They provide a foundation of reusable code and components, such as authentication, logging, and messaging, along with tools and frameworks that assist in developing, deploying, and managing microservices.


Let us have a look at these libraries in detail below. 

eShopOnContainers 

  • Built-in support for distributed application architectures and stateful microservices. 
  • Fully container-based architecture. 
  • Support for cloud-native development. 

CAP 

  • Provides a unified API for building distributed, event-driven microservices. 
  • Provides a lightweight, cloud-native infrastructure for running microservices.  
  • Provides a complete set of tools and libraries for developing, testing, and deploying microservices. 

tye 

  • Allows developers to create and manage multiple services with a single command.  
  • Automatically detect and manage configuration changes across services.  
  • Allows developers to quickly iterate on and test their code with a single command. 

surging 

  • Supports Dependency Injection (DI), which makes it easier for developers to manage and maintain their code.  
  • Provides API gateways that allow for better control over API traffic.  
  • Provides service discovery, allowing developers to locate network services easily. 

coolstore-microservices 

  • Includes a comprehensive logging and reporting system. 
  • Supports a wide range of service discovery and monitoring tools. 
  • Provides a modern CI/CD pipeline for automated deployment of applications and services. 

run-aspnetcore-microservices 

  • End-to-end monitoring and logging of service calls and performance data. 
  • Intuitive and extensible service-oriented architecture (SOA) framework. 
  • Support for advanced message-based communication between services. 

microdot 

  • Built on the Actor Model, providing a robust framework for distributed, concurrent, fault-tolerant, and scalable applications. 
  • Built-in dependency injection system, allowing developers to quickly and easily inject dependencies into their services. 
  • Built-in support for load-balancing, meaning that services can be distributed across multiple nodes in the cluster. 

Micronetes 

  • Provides a simple way to package, deploy and manage microservices with Docker containers. 
  • Powerful integration layer that allows developers to connect microservices easily with each other.    
  • Supports service discovery, which allows services to be discovered and connected automatically. 

awesome-dotnet-tips 

  • Wide selection of tips and tricks for developing more efficient and maintainable C# microservices. 
  • Easy-to-use command line interface. 
  • Built-in support for the latest versions of popular C# frameworks. 

Convey 

  • Provides various features, such as distributed tracing, service discovery, and resilience. 
  • Offers an opinionated approach to microservices architecture. 
  • Provides a set of tools for testing and monitoring microservices 

Microservices break down complex applications into smaller, independent services. Microservice architecture addresses the difficulties of a monolithic architecture, such as limited scalability and a sprawling codebase. Services are typically divided by business domain, allowing developers to focus on a specific part of the application and iterate on it quickly. Individual services can be changed, deleted, or replaced without affecting the entire application, making maintenance simpler.

 

Java Microservices Libraries are a set of open-source libraries that empower developers to quickly and easily create microservices-based applications. They provide tools and frameworks for creating and managing microservices, including APIs, distributed systems, and cloud-native architectures. Each service performs a specific task, such as handling user authentication, providing data storage, or processing payments. They help developers easily create distributed applications without having to write complex code. They give us a way to quickly identify and debug any issues that may arise in a distributed system. By using microservices in Java, new technology and process adoption become easier. 

 

Java microservices often rely on a communication style called Remote Procedure Call (RPC) to facilitate communication between services. RPC allows the remote execution of procedures or functions without shared memory or database connections between services. This makes it a good fit for distributed systems: multiple services can communicate efficiently without being co-located, and new services can be integrated into an existing system without requiring changes to the current one.
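The RPC flow described above can be sketched with Python's standard-library XML-RPC modules. This is only an illustration of the pattern (a remote call over HTTP with no shared state), not any specific Java framework; the service name and tax rate are invented for the example:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def charge_with_tax(amount):
    # Pretend "payments" business logic: add 8% tax.
    return round(amount * 1.08, 2)

# The "payments" service exposes the procedure over HTTP.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(charge_with_tax, "charge_with_tax")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service calls it remotely -- no shared memory, no shared database.
client = ServerProxy(f"http://127.0.0.1:{port}")
total = client.charge_with_tax(100.0)
print(total)  # 108.0

server.shutdown()
```

Because the caller only depends on the procedure's name and arguments, a new service can start consuming `charge_with_tax` without any change to the serving process.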

 

Each Java Microservices Library has its own set of features and functionality. light-4j, piggymetrics, Activiti, microservices-spring-boot, mycollab, and microservice-istio are frameworks and reference projects that encourage code reuse. Micronaut is a full-stack framework for building JVM-based applications. Apache Camel is an open-source integration framework that provides components for routing messages. Quarkus is a Kubernetes-native Java stack designed for fast startup and a low memory footprint. Helidon and KumuluzEE are lightweight frameworks for creating microservices.

 

Check out the below list to find the best top 10 Java Microservices Libraries for your app development.

It is very difficult to maintain a large monolithic application, and releasing new features and bug fixes becomes slow. Java microservice libraries like apollo, nacos, and armeria can ease these problems. The microservice approach has been around for a while and is used by many organizations to improve efficiency, increase delivery speed, and enhance product quality. Apollo is an open source configuration management system: it centralizes configuration across environments and clusters and pushes configuration changes to running services in near real time. Nacos provides dynamic service discovery, configuration management, and service management for building cloud-native applications. Armeria is an asynchronous microservice framework for the JVM with support for HTTP/2, gRPC, Thrift, and REST. Some of the most popular among developers are:
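The core idea behind configuration systems like Apollo and Nacos — services pulling configuration from a central server so a change takes effect without a redeploy — can be sketched with the standard library alone. The endpoint, keys, and flag below are invented for illustration; real clients also long-poll, cache, and authenticate:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A stand-in "config server": serves the current config as JSON.
CONFIG = {"feature_x_enabled": False, "timeout_ms": 500}

class ConfigHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CONFIG).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ConfigHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def fetch_config():
    # What a microservice does on refresh: pull config from the central server.
    with urlopen(f"http://127.0.0.1:{port}/") as resp:
        return json.load(resp)

before = fetch_config()
CONFIG["feature_x_enabled"] = True   # an operator flips a flag centrally
after = fetch_config()
print(before["feature_x_enabled"], after["feature_x_enabled"])  # False True

server.shutdown()
```

The service picks up the new value on its next fetch, with no restart or redeploy of the consuming process.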

Python microservice libraries like falcon, nameko, and emissary are becoming popular. They offer benefits such as high-performance request handling, straightforward integration with databases, and easy construction of REST APIs, helping teams move away from monolithic systems toward more scalable, highly available ones that demand less time on infrastructure maintenance and administration. Falcon is an open source, minimalist Python framework for building fast web APIs and microservices; it runs on any WSGI or ASGI server and deliberately avoids magic and heavy dependencies in favor of predictable performance. Nameko is a framework for building microservices in Python: it provides RPC and event-driven messaging over AMQP out of the box, plus a dependency-injection mechanism for wiring services to external resources. A few of the most popular open source Python Microservice libraries for developers are:
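As a dependency-free illustration of the WSGI request/response pattern that frameworks like Falcon build on, here is a stdlib-only sketch of a one-endpoint service. The `/health` path and payload are invented for the example; a Falcon resource's `on_get()` would produce the same shape of response:

```python
import json
import threading
from urllib.request import urlopen
from wsgiref.simple_server import make_server

# A tiny WSGI "microservice" with a single JSON endpoint.
def app(environ, start_response):
    if environ["PATH_INFO"] == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

server = make_server("127.0.0.1", 0, app)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client (or another service) calls the endpoint over HTTP.
with urlopen(f"http://127.0.0.1:{port}/health") as resp:
    payload = json.load(resp)
print(payload)  # {'status': 'ok'}

server.shutdown()
```

A framework adds routing, validation, and middleware on top, but the contract stays the same: a path in, a status code and serialized body out.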

The Ruby Microservice libraries are sets of often-used modules for composing larger applications out of small, single-purpose programs, each designed to solve a different type of problem. Kontena is a developer-friendly platform for running and managing containerized applications, covering concerns such as image registries, orchestration, and deployment pipelines, with a simple UI and integrations with CI tools like Jenkins and GitLab CI. Stitches is Stitch Fix's Ruby library that provides the scaffolding for building HTTP-based microservices in Rails, including API versioning and key-based authentication. Expeditor is a toolkit for automating release workflows for dockerized applications and services. Popular open source Ruby Microservice libraries for developers include:

On the other hand, C++ microservice libraries like app-mash, edgelessrt, and libcluon are well suited to building distributed systems that span multiple machines. libcluon is a small, single-file, header-only C++ library for exchanging messages between distributed components; it is cross-platform and has been used on Linux, Windows, and macOS. edgelessrt is a C++ runtime for confidential computing that lets services run inside trusted execution environments (enclaves). There are several popular open source C++ microservice libraries available for developers:

PHP Microservice architecture is an alternative to the traditional monolithic web application: each service is completely independent of the others and owns its own state. Microservice libraries like swoft, hyperf, and flyimg make it easier to build PHP microservices by providing a common set of tools for developing scalable applications in PHP. Swoft is a coroutine-based PHP microservice framework built on Swoole; it offers annotation-driven configuration, AOP, RPC, and the components needed to build REST API services quickly. Hyperf is a high-performance, coroutine-first PHP framework, also built on Swoole, with a flexible dependency-injection container, middleware support, and gRPC server and client components. Flyimg is a microservice for resizing and cropping images on the fly. Many developers depend on the following open source PHP Microservice libraries:

Go microservices are small, single-purpose services that can be independently deployed and scaled, an approach often used in applications that need to scale well, such as production systems or web applications. Using established Go microservice libraries tends to give better performance and reliability than hand-rolled plumbing, and avoids the maintenance overhead of managing a monolithic application. Kratos is an open source Go framework for building microservices: it encourages an API-first workflow with Protobuf-defined APIs and provides transport (HTTP and gRPC), middleware, configuration, and observability components out of the box. Nomad is HashiCorp's workload orchestrator, a single lightweight binary that can schedule and run microservices, containerized or not, across a cluster. Popular open source Go microservice libraries include:

JavaScript microservice libraries such as single-spa, moleculer, and seneca are popular not only because they are easy to implement and maintain but also because they provide a high degree of flexibility: you can integrate microservice components into an existing application through their APIs, and build large-scale applications with very little effort while keeping the benefits of the JavaScript language. Moleculer is a fast, modern microservices framework for Node.js; it provides a service broker with a built-in service registry, load balancing, fault-tolerance features, and pluggable transporters for communication between nodes. Seneca is a lightweight microservices toolkit for Node.js that routes messages between services using pattern matching, which keeps transport details and service composition out of business logic. Some of the most widely used open source JavaScript microservice libraries among developers include:

Trending Discussions on Microservice

Exclude Logs from Datadog Ingestion

Custom Serilog sink with injection?

How to manage Google Cloud credentials for local development

using webclient to call the grapql mutation API in spring boot

Jdeps Module java.annotation not found

How to make a Spring Boot application quit on tomcat failure

Deadlock on insert/select

Rewrite host and port for outgoing request of a pod in an Istio Mesh

Checking list of conditions on API data

Traefik v2 reverse proxy without Docker

QUESTION

Exclude Logs from Datadog Ingestion

Asked 2022-Mar-19 at 22:38

I have a kubernetes cluster that's running datadog and some microservices. Each microservice makes healthchecks every 5 seconds to make sure the service is up and running. I want to exclude these healthcheck logs from being ingested into Datadog.

I think I need to use log_processing_rules and I've tried that but the healthcheck logs are still making it into the logs section of Datadog. My current Deployment looks like this:

apiVersion: apps/v1
kind: Deployment
[ ... SNIP ... ]
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
        version: "fac8fb13"
      annotations:
        rollme: "IO2ad"
        tags.datadoghq.com/env: development
        tags.datadoghq.com/version: "fac8fb13"
        tags.datadoghq.com/service: my-service
        tags.datadoghq.com/my-service.logs: |
          [{
            "source": my-service,
            "service": my-service,
            "log_processing_rules": [
              {
                "type": "exclude_at_match",
                "name": "exclude_healthcheck_logs",
                "pattern": "\"RequestPath\": \"\/health\""
              }
            ]
          }]

and the logs coming out of the kubernetes pod:

$ kubectl logs my-service-pod

{
  "@t": "2022-01-07T19:13:05.3134483Z",
  "@m": "Request finished HTTP/1.1 GET http://10.64.0.80:5000/health - - - 200 - text/plain 7.5992ms",
  "@i": "REDACTED",
  "ElapsedMilliseconds": 7.5992,
  "StatusCode": 200,
  "ContentType": "text/plain",
  "ContentLength": null,
  "Protocol": "HTTP/1.1",
  "Method": "GET",
  "Scheme": "http",
  "Host": "10.64.0.80:5000",
  "PathBase": "",
  "Path": "/health",
  "QueryString": "",
  "HostingRequestFinishedLog": "Request finished HTTP/1.1 GET http://10.64.0.80:5000/health - - - 200 - text/plain 7.5992ms",
  "EventId": {
    "Id": 2,
    "Name": "RequestFinished"
  },
  "SourceContext": "Microsoft.AspNetCore.Hosting.Diagnostics",
  "RequestId": "REDACTED",
  "RequestPath": "/health",
  "ConnectionId": "REDACTED",
  "dd_service": "my-service",
  "dd_version": "54aae2b5",
  "dd_env": "development",
  "dd_trace_id": "REDACTED",
  "dd_span_id": "REDACTED"
}

EDIT: Removed the 2nd element of the log_processing_rules array above, as I've tried with both 1 and 2 elements in the rules array.

EDIT2: I've also tried changing the log_processing_rules type to include_at_match in an attempt to figure this out:

"log_processing_rules": [
  {
    "type": "include_at_match",
    "name": "testing_include_at_match",
    "pattern": "somepath"
  }
]
70

and I'm still getting the health logs in Datadog (in theory I should not as /health is not part of the matching pattern)
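Two things can be checked locally. First, the exclusion regex does match the serialized log line, so the pattern itself is unlikely to be the problem. Second, the annotation body is not valid JSON: the values `my-service` for `source` and `service` are unquoted, so the agent may be discarding the whole configuration. A quick Python check (log line abbreviated, annotation reproduced from the question):

```python
import json
import re

# Abbreviated version of the pod's log line from the question.
log_line = '{"RequestPath": "/health", "StatusCode": 200}'

# The exclude_at_match pattern from the question: it matches the log line.
pattern = r'"RequestPath": "/health"'
assert re.search(pattern, log_line) is not None

# The annotation body, with the unquoted values exactly as in the question.
annotation = '''[{
  "source": my-service,
  "service": my-service,
  "log_processing_rules": [
    {"type": "exclude_at_match",
     "name": "exclude_healthcheck_logs",
     "pattern": "\\"RequestPath\\": \\"/health\\""}
  ]
}]'''

try:
    json.loads(annotation)
    print("annotation parses")
except json.JSONDecodeError:
    print("annotation is invalid JSON")  # this branch runs
```

Quoting the values (`"source": "my-service", "service": "my-service"`) makes the annotation parseable, which is worth trying alongside the pattern changes.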

ANSWER

Answered 2022-Jan-12 at 20:28

I think the problem is that you're defining multiple patterns; the docs state: "If you want to match one or more patterns you must define them in a single expression."

Try something like this and see what happens:

"log_processing_rules": [
  {
    "type": "exclude_at_match",
    "name": "exclude_healthcheck_logs",
    "pattern": "\/health|\"RequestPath\": \"\/health\""
  }
]

Source https://stackoverflow.com/questions/70687054

QUESTION

Custom Serilog sink with injection?

Asked 2022-Mar-08 at 10:41

I have created a simple Serilog sink project that looks like this:

using Newtonsoft.Json;
using Serilog.Core;
using Serilog.Events;

namespace MyApp.Cloud.Serilog.MQSink
{
    public class MessageQueueSink : ILogEventSink
    {
        private readonly IMQProducer _MQProducerService;

        public MessageQueueSink(IMQProducer mQProducerService)
        {
            _MQProducerService = mQProducerService;
        }

        public void Emit(LogEvent logEvent)
        {
            _MQProducerService.Produce<SendLog>(new SendLog() { LogEventJson = JsonConvert.SerializeObject(logEvent) });
        }
    }
}

The consuming microservice starts up like this:

var configurationBuilder = new ConfigurationBuilder().AddJsonFile("appsettings.json").Build();
var appSettings = configurationBuilder.Get<AppSettings>();

configurationBuilder = new ConfigurationBuilder().AddJsonFile("ExtendedSettings.json").Build();

Host.CreateDefaultBuilder(args)
    .UseMyAppCloudMQ(context => context.UseSettings(appSettings.MQSettings))
    .UseSerilog((hostingContext, loggerConfiguration) => loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration))
    .ConfigureServices((hostContext, services) =>
    {
        services
            .AddHostedService<ExtendedProgService>()
            .Configure<MQSettings>(configurationBuilder.GetSection("MQSettings"));
    })
    .Build().Run();

The Serilog part of appsettings.json looks like this:

"serilog": {
  "Using": [ "Serilog.Sinks.File", "Serilog.Sinks.Console", "MyApp.Cloud.Serilog.MQSink" ],
  "MinimumLevel": {
    "Default": "Debug",
    "Override": {
      "Microsoft": "Warning",
      "System": "Warning"
    }
  },
  "Enrich": [ "FromLogContext", "WithMachineName", "WithProcessId" ],
  "WriteTo": [
    {
      "Name": "MessageQueueSink",
      "Args": {}
    }
  ]
}

The MQSink project is added as a reference to the microservice project, and I can see that the MQSink DLL ends up in the bin folder.

The problem is that when executing a _logger.LogInformation(...) in the microservice, Emit is never triggered, yet if I add a console sink it outputs data. I also suspect that the injected MQ producer will not work properly.

How could this be solved?

EDIT :

Turned on the Serilog internal log and could see that the method MessageQueueSink could not be found. I did not find any way to get this working with appsettings.json, so I started looking at how to bind this in code.
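For reference, the internal log mentioned above is Serilog's SelfLog; a minimal sketch of enabling it (any TextWriter works in place of Console.Error):

```csharp
// Route Serilog's own diagnostic output (e.g. "Unable to find a method
// called MessageQueueSink") to stderr. Call this before building the logger.
Serilog.Debugging.SelfLog.Enable(Console.Error);
```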

To get it working, an extension had to be created:

public static class MySinkExtensions
{
    public static LoggerConfiguration MessageQueueSink(
              this Serilog.Configuration.LoggerSinkConfiguration loggerConfiguration,
              MyApp.Cloud.MQ.Interface.IMQProducer mQProducer = null)
    {
        return loggerConfiguration.Sink(new MyApp.Cloud.Serilog.MQSink.MessageQueueSink(mQProducer));
    }
}

This made it possible to add the custom sink like this:

Host.CreateDefaultBuilder(args)
    .UseMyAppCloudMQ(context => context.UseSettings(appSettings.MQSettings))
    .ConfigureServices((hostContext, services) =>
    {
        services
            .Configure<MQSettings>(configurationBuilder.GetSection("MQSettings"));
    })
    .UseSerilog((hostingContext, loggerConfiguration) => loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration).WriteTo.MessageQueueSink())
    .Build().Run();

The custom sink is loaded and Emit is triggered, but I still do not know how to inject the MQ producer into the sink. It would also be much better if I could do all the configuration of Serilog and the sink in the appsettings.json file.

ANSWER

Answered 2022-Feb-23 at 18:28

If you refer to the Provided Sinks list and examine the source code for some of them, you'll notice that the pattern is usually:

  1. Construct the sink configuration (usually taking values from IConfiguration, inline or a combination of both)
  2. Pass the configuration to the sink registration.

Then the sink implementation instantiates the required services to push logs to.

An alternate approach I could suggest is registering Serilog without any arguments (UseSerilog()) and then configuring the static Serilog.Log class using the built IServiceProvider:

var host = Host.CreateDefaultBuilder(args)
    // Register your services as usual
    .UseSerilog()
    .Build();

Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(host.Services.GetRequiredService<IConfiguration>())
    .WriteTo.MessageQueueSink(host.Services.GetRequiredService<IMQProducer>())
    .CreateLogger();

host.Run();

Source https://stackoverflow.com/questions/71145751

QUESTION

How to manage Google Cloud credentials for local development

Asked 2022-Feb-14 at 23:35

I searched a lot for how to authenticate/authorize Google's client libraries, and it seems no one agrees on how to do it.

Some people state that I should create a service account, create a key out of it, and give that key to each developer who wants to act as this service account. I hate this solution because it shares the service account's identity with multiple people.

Others mention that you simply log in with the Cloud SDK and ADC (Application Default Credentials) by doing:

$ gcloud auth application-default login

Then, libraries like google-cloud-storage will load credentials tied to my user from the ADC. It's better, but still not good for me, as this requires going to IAM and giving every developer (or a group) the permissions required for the application to run. Moreover, if a developer runs many applications locally for testing purposes (e.g. microservices), the list of required permissions will probably be very long. It will also be hard to understand, after some time, why we granted such permissions.

The last approach I encountered is service account impersonation. This avoids exposing private keys to developers, and lets us define the permissions required by an application, say A, once, associate them with a service account, and say:

Hey, let Julien act as the service account used for application A.

Here's a snippet of how to impersonate a principal:

from google.auth import impersonated_credentials
from google.auth import default

from google.cloud import storage

target_scopes = ['https://www.googleapis.com/auth/devstorage.read_only']

credentials, project = default(scopes=target_scopes)

final_credentials = impersonated_credentials.Credentials(
    source_credentials=credentials,
    target_principal="foo@bar-271618.iam.gserviceaccount.com",
    target_scopes=target_scopes
)

client = storage.Client(credentials=final_credentials)

print(next(client.list_buckets()))

If you want to try this yourself, you need to create the service account you want to impersonate (here foo@bar-271618.iam.gserviceaccount.com) and grant your user the Service Account Token Creator role on the service account's permissions tab.
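The same grant can be done from the CLI; a sketch, assuming your developer account is julien@example.com (hypothetical):

```shell
# Allow a developer to mint tokens for (i.e. impersonate) the service account.
gcloud iam service-accounts add-iam-policy-binding \
  foo@bar-271618.iam.gserviceaccount.com \
  --member="user:julien@example.com" \
  --role="roles/iam.serviceAccountTokenCreator"
```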

My only concern is that it would require me to wrap all Google Cloud client libraries I want to use with something that checks if I am running my app locally:

import os

from google.auth import impersonated_credentials
from google.auth import default

from google.cloud import storage

target_scopes = ['https://www.googleapis.com/auth/devstorage.read_only']

credentials, project = default(scopes=target_scopes)
if os.getenv("RUNNING_ENVIRONMENT") == "local":
    credentials = impersonated_credentials.Credentials(
        source_credentials=credentials,
        target_principal=os.environ["TARGET_PRINCIPAL"],
        target_scopes=target_scopes
    )

client = storage.Client(credentials=credentials)
print(next(client.list_buckets()))

Also, I have to define the scopes (I think these are the OAuth2 access scopes?) I am using, which is pretty annoying.

My question is: am I going in the right direction? Am I overthinking all of this? Is there any easier way to achieve this?

Here's some of the source I used:

Update 1

This topic is discussed here.

I've made a first proposition here to support this enhancement.

Update 2

The feature has been implemented! See here for details.

ANSWER

Answered 2021-Oct-02 at 14:00

You can use a new gcloud feature to impersonate with your local credentials like this:

gcloud auth application-default login --impersonate-service-account=<SA email>

It's a new feature. Being a Java and Golang developer, I checked and tested the Java client library, and it already supports this authentication mode. However, that's not yet the case in Go, so I submitted a pull request to add it to the Go client library.

I quickly checked in Python, and it seems implemented. Try one of the latest versions (released after August 3rd, 2021) and let me know!

Note: only a few are aware of this use case. I'm happy not to be alone in this situation :)

Source https://stackoverflow.com/questions/69412702

QUESTION

using webclient to call the grapql mutation API in spring boot

Asked 2022-Jan-24 at 12:18

I am stuck while calling the GraphQL mutation API in Spring Boot. Let me explain my scenario: I have two microservices, one being the AuditConsumeService, which consumes messages from ActiveMQ, and the other the GraphQL layer, which simply takes the data from the consume service and puts it into the database. Everything works well when I try to push data using the GraphQL playground or Postman. How do I push data from AuditConsumeService? In the AuditConsumeService I am trying to send the mutation as a string. The method responsible for sending it to the GraphQL layer is:

public Mono<String> sendLogsToGraphQL(String logs){
    return webClient
            .post()
            .uri("http://localhost:8080/logs/createEventLog")
            .bodyValue(logs)
            .retrieve()
            .bodyToMono(String.class);
}

NOTE: I tried to pass the data as an Object as well, but to no avail. The String logs is handed to it from ActiveMQ. The data which I am sending is:

{
    "hasError": false,
    "message": "Hello There",
    "sender": "Ali Ahmad",
    "payload": {
        "type": "String",
        "title": "Topoic",
        "description": "This is the demo description of the activemqq"
    },
    "serviceInfo": {
        "version": "v1",
        "date": "2021-05-18T08:44:17.8237608+05:00",
        "serverStatus": "UP",
        "serviceName": "IdentityService"
    }
}

The mutation looks like this:

mutation($eventLog: EventLogInput){
  createEventLog(eventLog: $eventLog){
    hasError
    message
    payload{
      title,
      description
    }
  }
}

The $eventLog variable has this JSON body:

{
  "eventLog": {
    "hasError": false,
    "message": "Hello There",
    "sender": "Ali Ahmad",
    "payload": {
        "type": "String",
        "title": "Topoic",
        "description": "This is the demo description of the activemqq"
    },
    "serviceInfo": {
        "version": "v1",
        "date": "2021-05-18T08:44:17.8237608+05:00",
        "serverStatus": "UP",
        "serviceName": "IdentityService"
    }
  }
}

EDIT: Following the answer below, I updated the consumer service as follows:

@Component
public class Consumer {
    @Autowired
    private AuditService auditService;

    private final String MUTATION_QUERY = "mutation($eventLog: EventLogInput){\n" +
            "createEventLog(eventLog: $eventLog){\n" +
            "hasError\n" +
            "}\n" +
            "}";

    @JmsListener(destination = "Audit.queue")
    public void consumeLogs(String logs) {
        Gson gson = new Gson();
        Object jsonObject = gson.fromJson(logs, Object.class);
        Map<String, Object> graphQlBody = new HashMap<>();
        graphQlBody.put("query", MUTATION_QUERY);
        graphQlBody.put("variables", "{eventLog: " + jsonObject + "}");
        auditService.sendLogsToGraphQL(graphQlBody);
    }
}

Now sendLogsToGraphQL becomes:

public Mono<String> sendLogsToGraphQL(Map<String, Object> logs) {
    log.info("Logs: {} ", logs);
    Mono<String> stringMono = webClient
            .post()
            .uri("http://localhost:8080/graphql")
            .bodyValue(BodyInserters.fromValue(logs))
            .retrieve()
            .bodyToMono(String.class);
    log.info("StringMono: {}", stringMono);
    return stringMono;
}

The data is not sent to the GraphQL layer at the specified URL.

ANSWER

Answered 2022-Jan-23 at 21:40

You have to send the query and the variables together in the POST request body, as shown here:

graphQlBody = { "query" : mutation_query, "variables" : { "eventLog" : event_log_json } }
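The key point is that "variables" must be a nested JSON object, not a pre-serialized string (the `"{eventLog: " + jsonObject + "}"` concatenation in the question produces a string). A minimal sketch of the envelope in Python, with hypothetical field values:

```python
import json

mutation_query = (
    "mutation($eventLog: EventLogInput) { "
    "createEventLog(eventLog: $eventLog) { hasError message } }"
)

# The event log is an actual object, not an embedded JSON string.
event_log = {
    "hasError": False,
    "message": "Hello There",
    "sender": "Ali Ahmad",
}

# Standard GraphQL-over-HTTP envelope: "variables" nests the object itself.
graphql_body = {"query": mutation_query, "variables": {"eventLog": event_log}}

payload = json.dumps(graphql_body)
print(json.loads(payload)["variables"]["eventLog"]["message"])  # Hello There
```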

And then in WebClient you can send the body in several ways:

public Mono<String> sendLogsToGraphQL(Map<String, Object> body){
    return webClient
            .post()
            .uri("http://localhost:8080/logs/createEventLog")
            // bodyValue takes the value itself; do not wrap it in BodyInserters.
            .bodyValue(body)
            .retrieve()
            .bodyToMono(String.class);
}

Here I just showed using a Map<String, Object> to form the GraphQL request body, but you can also create corresponding POJO classes with query and variables properties.
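The POJO variant could look like the sketch below; the class name is hypothetical, but the fields must serialize as "query" and "variables" (which Jackson, WebClient's default codec, does from the getters):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical request envelope; an instance would be passed to
// webClient.post().bodyValue(request) instead of the Map.
public class GraphQlRequest {
    private String query;
    private Map<String, Object> variables = new HashMap<>();

    public String getQuery() { return query; }
    public void setQuery(String query) { this.query = query; }

    public Map<String, Object> getVariables() { return variables; }
    public void setVariables(Map<String, Object> variables) { this.variables = variables; }
}
```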

Source https://stackoverflow.com/questions/70823774

QUESTION

Jdeps Module java.annotation not found

Asked 2022-Jan-20 at 22:48

I'm trying to create a minimal JRE for Spring Boot microservices using jdeps and jlink, but I'm getting the following error at the jdeps step:

Exception in thread "main" java.lang.module.FindException: Module java.annotation not found, required by org.apache.tomcat.embed.core
    at java.base/java.lang.module.Resolver.findFail(Resolver.java:893)
    at java.base/java.lang.module.Resolver.resolve(Resolver.java:192)
    at java.base/java.lang.module.Resolver.resolve(Resolver.java:141)
    at java.base/java.lang.module.Configuration.resolve(Configuration.java:421)
    at java.base/java.lang.module.Configuration.resolve(Configuration.java:255)
    at jdk.jdeps/com.sun.tools.jdeps.JdepsConfiguration$Builder.build(JdepsConfiguration.java:564)
    at jdk.jdeps/com.sun.tools.jdeps.JdepsTask.buildConfig(JdepsTask.java:603)
    at jdk.jdeps/com.sun.tools.jdeps.JdepsTask.run(JdepsTask.java:557)
    at jdk.jdeps/com.sun.tools.jdeps.JdepsTask.run(JdepsTask.java:533)
    at jdk.jdeps/com.sun.tools.jdeps.Main.main(Main.java:49)

I already tried the following commands with no effect

jdeps --ignore-missing-deps --multi-release 17 --module-path target/lib/* target/errorrr-*.jar
jdeps --multi-release 16 --module-path target/lib/* target/errorrr-*.jar
jdeps --ignore-missing-deps --multi-release 17 --class-path target/lib/* target/errorrr-*.jar

I have already tried it with Java versions 11, 16, and 17, and with different versions of Spring Boot.

All dependencies needed for the build are copied to the target/lib folder by the maven-dependency-plugin when I run mvn install.

After identifying the responsible dependency, I created a new project from scratch containing only that dependency to isolate the error, but it persisted.

I tried Gradle at first, but since the error remained I switched to Maven; that made no difference either.

When I add the dependency that provides the requested module, the error changes to

Exception in thread "main" java.lang.Error: java.util.concurrent.ExecutionException: com.sun.tools.jdeps.MultiReleaseException
        at jdk.jdeps/com.sun.tools.jdeps.DependencyFinder.waitForTasksCompleted(DependencyFinder.java:271)
        at jdk.jdeps/com.sun.tools.jdeps.DependencyFinder.parse(DependencyFinder.java:133)
        at jdk.jdeps/com.sun.tools.jdeps.DepsAnalyzer.run(DepsAnalyzer.java:129)
        at jdk.jdeps/com.sun.tools.jdeps.ModuleExportsAnalyzer.run(ModuleExportsAnalyzer.java:74)
        at jdk.jdeps/com.sun.tools.jdeps.JdepsTask$ListModuleDeps.run(JdepsTask.java:1047)
        at jdk.jdeps/com.sun.tools.jdeps.JdepsTask.run(JdepsTask.java:574)
        at jdk.jdeps/com.sun.tools.jdeps.JdepsTask.run(JdepsTask.java:533)
        at jdk.jdeps/com.sun.tools.jdeps.Main.main(Main.java:49)
Caused by: java.util.concurrent.ExecutionException: com.sun.tools.jdeps.MultiReleaseException
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at jdk.jdeps/com.sun.tools.jdeps.DependencyFinder.waitForTasksCompleted(DependencyFinder.java:267)
        ... 7 more
Caused by: com.sun.tools.jdeps.MultiReleaseException
        at jdk.jdeps/com.sun.tools.jdeps.VersionHelper.add(VersionHelper.java:62)
        at jdk.jdeps/com.sun.tools.jdeps.ClassFileReader$JarFileReader.readClassFile(ClassFileReader.java:360)
        at jdk.jdeps/com.sun.tools.jdeps.ClassFileReader$JarFileIterator.hasNext(ClassFileReader.java:402)
        at jdk.jdeps/com.sun.tools.jdeps.DependencyFinder.lambda$parse$5(DependencyFinder.java:179)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:833)

My pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <parent>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-parent</artifactId>
       <version>2.6.0</version>
       <relativePath/> <!-- lookup parent from repository -->
   </parent>
   <groupId>com.example</groupId>
   <artifactId>errorrr</artifactId>
   <version>0.0.1-SNAPSHOT</version>
   <name>errorrr</name>
   <description>Demo project for Spring Boot</description>
   <properties>
       <java.version>17</java.version>
   </properties>
   <dependencies>
       <dependency>
           <groupId>org.springframework.boot</groupId>
           <artifactId>spring-boot-starter-web</artifactId>
       </dependency>

       <dependency>
           <groupId>org.springframework.boot</groupId>
           <artifactId>spring-boot-starter</artifactId>
       </dependency>

       <dependency>
           <groupId>org.springframework.boot</groupId>
           <artifactId>spring-boot-starter-test</artifactId>
           <scope>test</scope>
       </dependency>

   </dependencies>

   <build>
       <plugins>
           <plugin>
               <groupId>org.springframework.boot</groupId>
               <artifactId>spring-boot-maven-plugin</artifactId>
           </plugin>
           <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-dependency-plugin</artifactId>
               <executions>
                   <execution>
                       <id>copy-dependencies</id>
                       <phase>package</phase>
                       <goals>
                           <goal>copy-dependencies</goal>
                       </goals>
                       <configuration>
                           <outputDirectory>${project.build.directory}/lib</outputDirectory>
                       </configuration>
                   </execution>
               </executions>
           </plugin>
       </plugins>
   </build>

</project>

If I don't use this dependency, I can complete the entire build process and end up with a 76 MB JRE.

ANSWER

Answered 2021-Dec-28 at 14:39

I have been struggling with a similar issue. In my Gradle Spring Boot project, I use the output of the following jdeps command to supply the modules to jlink in my Dockerfile (based on openjdk:17-alpine):

RUN jdeps \
    --ignore-missing-deps \
    -q \
    --multi-release 17 \
    --print-module-deps \
    --class-path build/lib/* \
    app.jar > deps.info

RUN jlink --verbose \
    --compress 2 \
    --strip-java-debug-attributes \
    --no-header-files \
    --no-man-pages \
    --output jre \
    --add-modules $(cat deps.info)
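For context, these two RUN steps typically sit inside a multi-stage build, so that the final image carries only the trimmed JRE and the jar. A sketch of such a Dockerfile follows; the file paths (build/libs/app.jar, build/lib) and image tags are assumptions for illustration, not taken from the original answer:

```dockerfile
# Sketch of a multi-stage build around the jdeps/jlink steps above.
# Paths and image tags are illustrative.
FROM openjdk:17-alpine AS build
WORKDIR /app
COPY build/libs/app.jar app.jar
COPY build/lib build/lib

# Compute the set of JDK modules the application actually uses.
RUN jdeps \
    --ignore-missing-deps \
    -q \
    --multi-release 17 \
    --print-module-deps \
    --class-path build/lib/* \
    app.jar > deps.info

# Assemble a minimal runtime image containing only those modules.
RUN jlink --verbose \
    --compress 2 \
    --strip-java-debug-attributes \
    --no-header-files \
    --no-man-pages \
    --output jre \
    --add-modules $(cat deps.info)

# Final image: custom JRE plus the application jar, no full JDK.
FROM alpine:3.15
COPY --from=build /app/jre /opt/jre
COPY --from=build /app/app.jar /opt/app.jar
ENTRYPOINT ["/opt/jre/bin/java", "-jar", "/opt/app.jar"]
```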

I think your Maven build is fine as long as you have all the required dependencies. But just in case, I modified my Gradle jar task to include the dependencies as follows:

jar {
    manifest {
        attributes "Main-Class": "com.demo.Application"
    }
    duplicatesStrategy = DuplicatesStrategy.INCLUDE
    from {
        configurations.default.collect { it.isDirectory() ? it : zipTree(it) }
    }
}

Source https://stackoverflow.com/questions/70105271

QUESTION

How to make a Spring Boot application quit on tomcat failure

Asked 2022-Jan-15 at 09:55

We have a bunch of microservices based on Spring Boot 2.5.4, also including spring-kafka:2.7.6 and spring-boot-actuator:2.5.4. All the services use Tomcat as the servlet container, have graceful shutdown enabled, and are containerized using Docker.
Due to a misconfiguration, yesterday one of these containers failed because it tried to bind a port already taken by another one.
The log states:

Stopping service [Tomcat]
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
***************************
APPLICATION FAILED TO START
***************************

Description:

Web server failed to start. Port 8080 was already in use.

However, the JVM keeps running because of the Kafka consumers/streams.

I need to tear everything down, or at least call System.exit(errorCode), to trigger the Docker restart policy. How can I achieve this? If possible, a configuration-based solution is better than one requiring development.
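The failure mode described above can be reproduced with plain Java: any live non-daemon thread (such as the Kafka consumer threads) keeps the JVM alive even after startup work fails, and an explicit System.exit ends the process regardless. A minimal sketch, not from the original post:

```java
// Demonstrates why the JVM stays alive after a failed startup, and why
// System.exit is needed to terminate it.
public class ExitDemo {
    public static void main(String[] args) {
        // A non-daemon thread, standing in for Kafka consumer threads,
        // keeps the JVM alive even after the main thread's work fails.
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000);
            } catch (InterruptedException ignored) {
            }
        });
        worker.start();

        // Without this call the JVM would wait up to 60 s for the worker;
        // System.exit terminates all threads immediately. In production a
        // non-zero code (used here as 0 only for the demo) would signal
        // Docker's restart policy.
        System.out.println("web server failed to start, exiting");
        System.exit(0);
    }
}
```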

I developed a minimal test application, made of just the @SpringBootApplication class and a KafkaConsumer class, to ensure the problem isn't related to our microservices. Same result.

POM file

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.5.4</version>
  <relativePath/> <!-- lookup parent from repository -->
</parent>
...
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
</dependency>

Kafka listener

@Component
public class KafkaConsumer {

  @KafkaListener(topics = "test", groupId = "test")
  public void process(String message) {

  }
}

application.yml

spring:
  kafka:
    bootstrap-servers: kafka:9092

Log file

2021-12-17 11:12:24.955  WARN 29067 --- [           main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'webServerStartStop'; nested exception is org.springframework.boot.web.server.PortInUseException: Port 8080 is already in use
2021-12-17 11:12:24.959  INFO 29067 --- [           main] o.apache.catalina.core.StandardService   : Stopping service [Tomcat]
2021-12-17 11:12:24.969  INFO 29067 --- [           main] ConditionEvaluationReportLoggingListener : 

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2021-12-17 11:12:24.978 ERROR 29067 --- [           main] o.s.b.d.LoggingFailureAnalysisReporter   : 

***************************
APPLICATION FAILED TO START
***************************

Description:

Web server failed to start. Port 8080 was already in use.

Action:

Identify and stop the process that's listening on port 8080 or configure this application to listen on another port.

2021-12-17 11:12:25.151  WARN 29067 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-test-1, groupId=test] Error while fetching metadata with correlation id 2 : {test=LEADER_NOT_AVAILABLE}
2021-12-17 11:12:25.154  INFO 29067 --- [ntainer#0-0-C-1] org.apache.kafka.clients.Metadata        : [Consumer clientId=consumer-test-1, groupId=test] Cluster ID: NwbnlV2vSdiYtDzgZ81TDQ
2021-12-17 11:12:25.156  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Discovered group coordinator kafka:9092 (id: 2147483636 rack: null)
2021-12-17 11:12:25.159  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] (Re-)joining group
2021-12-17 11:12:25.179  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] (Re-)joining group
2021-12-17 11:12:27.004  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Successfully joined group with generation Generation{generationId=2, memberId='consumer-test-1-c5924ab5-afc8-4720-a5d7-f8107ace3aad', protocol='range'}
2021-12-17 11:12:27.009  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Finished assignment for group at generation 2: {consumer-test-1-c5924ab5-afc8-4720-a5d7-f8107ace3aad=Assignment(partitions=[test-0])}
2021-12-17 11:12:27.021  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Successfully synced group in generation Generation{generationId=2, memberId='consumer-test-1-c5924ab5-afc8-4720-a5d7-f8107ace3aad', protocol='range'}
2021-12-17 11:12:27.022  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Notifying assignor about the new Assignment(partitions=[test-0])
2021-12-17 11:12:27.025  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Adding newly assigned partitions: test-0
2021-12-17 11:12:27.029  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Found no committed offset for partition test-0
2021-12-17 11:12:27.034  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Found no committed offset for partition test-0
2021-12-17 11:12:27.040  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-test-1, groupId=test] Resetting offset for partition test-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 11 rack: null)], epoch=0}}.
2021-12-17 11:12:27.045  INFO 29067 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : test: partitions assigned: [test-0]

ANSWER

Answered 2021-Dec-17 at 08:38

Since you have everything containerized, it's way simpler.

Just set up a small healthcheck endpoint with Spring Web that tells you whether the server is still running, something like:

@RestController
public class HealthcheckController {

  @GetMapping(&quot;/monitoring&quot;)
  public String getMonitoring() {
    return &quot;200: OK&quot;;
  }

}

and then refer to it in the HEALTHCHECK instruction of your Dockerfile. If the server stops, the container will be marked as unhealthy and restarted:

FROM ...

ENTRYPOINT ...
# HEALTHCHECK needs a command; this form assumes curl is available in the image
HEALTHCHECK CMD curl --fail http://localhost:8080/monitoring || exit 1

If you don't want to develop anything, you can use any other endpoint that you know should answer successfully as the HEALTHCHECK target, but I would recommend having one endpoint explicitly for that purpose.
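As a sketch of that "no custom code" route: Spring Boot Actuator ships a built-in health endpoint, exposed at /actuator/health by default once the starter is on the classpath (standard Actuator behavior; shown here only as an assumed alternative, not part of the original answer):

```xml
&lt;dependency&gt;
  &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
  &lt;artifactId&gt;spring-boot-starter-actuator&lt;/artifactId&gt;
&lt;/dependency&gt;
```

The Dockerfile HEALTHCHECK can then probe http://localhost:8080/actuator/health instead of a hand-written /monitoring endpoint.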

Source https://stackoverflow.com/questions/70378200

QUESTION

Deadlock on insert/select

Asked 2021-Dec-26 at 12:54

OK, I'm totally lost on a deadlock issue. I just don't know how to solve it.

I have these three tables (unimportant columns removed):

CREATE TABLE [dbo].[ManageServicesRequest]
(
    [ReferenceTransactionId]    INT                 NOT NULL,
    [OrderDate]                 DATETIMEOFFSET(7)   NOT NULL,
    [QueuePriority]             INT                 NOT NULL,
    [Queued]                    DATETIMEOFFSET(7)   NULL,
    CONSTRAINT [PK_ManageServicesRequest] PRIMARY KEY CLUSTERED ([ReferenceTransactionId])
)

CREATE TABLE [dbo].[ServiceChange]
(
    [ReferenceTransactionId]    INT                 NOT NULL,
    [ServiceId]                 VARCHAR(50)         NOT NULL,
    [ServiceStatus]             CHAR(1)             NOT NULL,
    [ValidFrom]                 DATETIMEOFFSET(7)   NOT NULL,
    CONSTRAINT [PK_ServiceChange] PRIMARY KEY CLUSTERED ([ReferenceTransactionId],[ServiceId]),
    CONSTRAINT [FK_ServiceChange_ManageServiceRequest] FOREIGN KEY ([ReferenceTransactionId]) REFERENCES [ManageServicesRequest]([ReferenceTransactionId]) ON DELETE CASCADE,
    INDEX [IDX_ServiceChange_ManageServiceRequestId] ([ReferenceTransactionId]),
    INDEX [IDX_ServiceChange_ServiceId] ([ServiceId])
)

CREATE TABLE [dbo].[ServiceChangeParameter]
(
    [ReferenceTransactionId]    INT                 NOT NULL,
    [ServiceId]                 VARCHAR(50)         NOT NULL,
    [ParamCode]                 VARCHAR(50)         NOT NULL,
    [ParamValue]                VARCHAR(50)         NOT NULL,
    [ParamValidFrom]            DATETIMEOFFSET(7)   NOT NULL,
    CONSTRAINT [PK_ServiceChangeParameter] PRIMARY KEY CLUSTERED ([ReferenceTransactionId],[ServiceId],[ParamCode]),
    CONSTRAINT [FK_ServiceChangeParameter_ServiceChange] FOREIGN KEY ([ReferenceTransactionId],[ServiceId]) REFERENCES [ServiceChange] ([ReferenceTransactionId],[ServiceId]) ON DELETE CASCADE,
    INDEX [IDX_ServiceChangeParameter_ManageServiceRequestId] ([ReferenceTransactionId]),
    INDEX [IDX_ServiceChangeParameter_ServiceId] ([ServiceId]),
    INDEX [IDX_ServiceChangeParameter_ParamCode] ([ParamCode])
)

And these two procedures:

CREATE PROCEDURE [dbo].[spCreateManageServicesRequest]
    @ReferenceTransactionId INT,
    @OrderDate DATETIMEOFFSET,
    @QueuePriority INT,
    @Services ServiceChangeUdt READONLY,
    @Parameters ServiceChangeParameterUdt READONLY
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
    /* CREATE A NEW SERVICE CHANGE REQUEST */

        /*  INSERT REQUEST  */
        INSERT INTO [dbo].[ManageServicesRequest]
            ([ReferenceTransactionId]
            ,[OrderDate]
            ,[QueuePriority]
            ,[Queued])
        VALUES
            (@ReferenceTransactionId
            ,@OrderDate
            ,@QueuePriority
            ,NULL)

        /*  INSERT SERVICES */
        INSERT INTO [dbo].[ServiceChange]
            ([ReferenceTransactionId]
            ,[ServiceId]
            ,[ServiceStatus]
            ,[ValidFrom])
        SELECT
             @ReferenceTransactionId AS [ReferenceTransactionId]
            ,[ServiceId]
            ,[ServiceStatus]
            ,[ValidFrom]
        FROM @Services AS [S]

        /*  INSERT PARAMS   */
        INSERT INTO [dbo].[ServiceChangeParameter]
            ([ReferenceTransactionId]
            ,[ServiceId]
            ,[ParamCode]
            ,[ParamValue]
            ,[ParamValidFrom])
        SELECT
            @ReferenceTransactionId AS [ReferenceTransactionId]
            ,[ServiceId]
            ,[ParamCode]
            ,[ParamValue]
            ,[ParamValidFrom]
        FROM @Parameters AS [P]

    END TRY
    BEGIN CATCH
        THROW
    END CATCH
END

CREATE PROCEDURE [dbo].[spGetManageServicesRequest]
    @ReferenceTransactionId INT
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        /* RETURN THE MANAGE SERVICES REQUEST BY ID */

        SELECT
            [MR].[ReferenceTransactionId],
            [MR].[OrderDate],
            [MR].[QueuePriority],
            [MR].[Queued],

            [SC].[ReferenceTransactionId],
            [SC].[ServiceId],
            [SC].[ServiceStatus],
            [SC].[ValidFrom],

            [SP].[ReferenceTransactionId],
            [SP].[ServiceId],
            [SP].[ParamCode],
            [SP].[ParamValue],
            [SP].[ParamValidFrom]

        FROM [dbo].[ManageServicesRequest] AS [MR]
        LEFT JOIN [dbo].[ServiceChange] AS [SC] ON [SC].[ReferenceTransactionId] = [MR].[ReferenceTransactionId]
        LEFT JOIN [dbo].[ServiceChangeParameter] AS [SP] ON [SP].[ReferenceTransactionId] = [SC].[ReferenceTransactionId] AND [SP].[ServiceId] = [SC].[ServiceId]
        WHERE [MR].[ReferenceTransactionId] = @ReferenceTransactionId

    END TRY
    BEGIN CATCH
        THROW
    END CATCH
END
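
For reference, the procedures are invoked roughly like this. The UDT definitions are not shown in the question, so the column lists below are inferred from the procedures' SELECTs, and all values are made up for illustration:

```sql
-- Hypothetical call; UDT column layouts inferred from the procedure bodies.
DECLARE @Now DATETIMEOFFSET = SYSDATETIMEOFFSET();

DECLARE @Services ServiceChangeUdt;
DECLARE @Parameters ServiceChangeParameterUdt;

INSERT INTO @Services ([ServiceId], [ServiceStatus], [ValidFrom])
VALUES ('SVC-1', 'A', @Now);

INSERT INTO @Parameters ([ServiceId], [ParamCode], [ParamValue], [ParamValidFrom])
VALUES ('SVC-1', 'SPEED', '100', @Now);

-- First transaction: create the request graph
EXEC [dbo].[spCreateManageServicesRequest]
    @ReferenceTransactionId = 1,
    @OrderDate = @Now,
    @QueuePriority = 5,
    @Services = @Services,
    @Parameters = @Parameters;

-- Second transaction: read it back
EXEC [dbo].[spGetManageServicesRequest] @ReferenceTransactionId = 1;
```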

Now these are used this way (it's a simplified C# method that creates a record and then posts the record to a microservice queue):

public async Task Consume(ConsumeContext&lt;CreateCommand&gt; context)
{
    using (var sql = sqlFactory.Cip)
    {
        /* SAVE REQUEST TO DATABASE */
        sql.StartTransaction(System.Data.IsolationLevel.Serializable); &lt;----- First transaction starts

        /* Create id */
        var transactionId = await GetNewId(context.Message.CorrelationId);

        /* Create manage services request */
        await sql.OrderingGateway.ManageServices.Create(transactionId, context.Message.ApiRequest.OrderDate, context.Message.ApiRequest.Priority, services);

        sql.Commit(); &lt;----- First transaction ends

        /// .... Some other stuff ...

        /* Fetch the same object you created in the first transaction */
        try
        {
            sql.StartTransaction(System.Data.IsolationLevel.Serializable);

            var request = await sql.OrderingGateway.ManageServices.Get(transactionId); &lt;----- HERE BE THE DEADLOCK

            request.Queued = DateTimeOffset.Now;
            await sql.OrderingGateway.ManageServices.Update(request);

            ... Here is a posting to a microservice queue ...

            sql.Commit();
        }
        catch (Exception)
        {
            sql.RollBack();
        }

        /// .... Some other stuff ....
    }
}

Now, my problem is: why are these two procedures getting deadlocked? The first and second transactions are never run in parallel for the same record.

Here is the deadlock detail:

<deadlock>
  <victim-list>
    <victimProcess id="process1dbfa86c4e8" />
  </victim-list>
  <process-list>
    <process id="process1dbfa86c4e8" taskpriority="0" logused="0" waitresource="KEY: 18:72057594046775296 (b42d8e559092)" waittime="2503" ownerId="33411557480" transactionname="user_transaction" lasttranstarted="2021-12-01T01:06:15.303" XDES="0x1ddd2df4420" lockMode="RangeS-S" schedulerid="20" kpid="23000" status="suspended" spid="55" sbid="2" ecid="0" priority="0" trancount="1" lastbatchstarted="2021-12-01T01:06:15.310" lastbatchcompleted="2021-12-01T01:06:15.300" lastattention="1900-01-01T00:00:00.300" clientapp="Core Microsoft SqlClient Data Provider" hostpid="11020" isolationlevel="serializable (4)" xactid="33411557480" currentdb="18" currentdbname="xxx" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
      <executionStack>
        <frame procname="xxx.dbo.spGetManageServicesRequest" line="10" stmtstart="356" stmtend="4256" sqlhandle="0x030012001374fc02f91433019aad000001000000000000000000000000000000000000000000000000000000"></frame>
      </executionStack>
    </process>
    <process id="process1dbfa1c1c28" taskpriority="0" logused="1232" waitresource="KEY: 18:72057594046971904 (ffffffffffff)" waittime="6275" ownerId="33411563398" transactionname="user_transaction" lasttranstarted="2021-12-01T01:06:16.450" XDES="0x3d4e842c420" lockMode="RangeI-N" schedulerid="31" kpid="36432" status="suspended" spid="419" sbid="2" ecid="0" priority="0" trancount="2" lastbatchstarted="2021-12-01T01:06:16.480" lastbatchcompleted="2021-12-01T01:06:16.463" lastattention="1900-01-01T00:00:00.463" clientapp="Core Microsoft SqlClient Data Provider" hostpid="11020" isolationlevel="serializable (4)" xactid="33411563398" currentdb="18" currentdbname="xxx" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
      <executionStack>
        <frame procname="xxx.dbo.spCreateManageServicesRequest" line="40" stmtstart="2592" stmtend="3226" sqlhandle="0x03001200f01ab84aeb1433019aad000001000000000000000000000000000000000000000000000000000000"></frame>
      </executionStack>
    </process>
  </process-list>
  <resource-list>
    <keylock hobtid="72057594046775296" dbid="18" objectname="xxx.dbo.ServiceChange" indexname="PK_ServiceChange" id="lock202ecfd0380" mode="X" associatedObjectId="72057594046775296">
      <owner-list>
        <owner id="process1dbfa1c1c28" mode="X" />
      </owner-list>
      <waiter-list>
        <waiter id="process1dbfa86c4e8" mode="RangeS-S" requestType="wait" />
      </waiter-list>
    </keylock>
    <keylock hobtid="72057594046971904" dbid="18" objectname="xxx.dbo.ServiceChangeParameter" indexname="PK_ServiceChangeParameter" id="lock27d3d371880" mode="RangeS-S" associatedObjectId="72057594046971904">
      <owner-list>
        <owner id="process1dbfa86c4e8" mode="RangeS-S" />
      </owner-list>
      <waiter-list>
        <waiter id="process1dbfa1c1c28" mode="RangeI-N" requestType="wait" />
      </waiter-list>
    </keylock>
  </resource-list>
</deadlock>
204

Why is this deadlock happening? How do I avoid it in the future?

Edit: Here is a plan for Get procedure: https://www.brentozar.com/pastetheplan/?id=B1UMMhaqF

Another Edit: After GSerg's comment, I changed the line number in the deadlock graph from 65 to 40, to account for columns I removed because they are not important to the question.

ANSWER

Answered 2021-Dec-26 at 12:54

You are better off avoiding the serializable isolation level. The way the serializable guarantee is provided is often deadlock prone.

If you can't alter your stored procs to use more targeted locking hints that guarantee the results you require at a lesser isolation level, then you can prevent this particular deadlock by ensuring that all locks are taken out on ServiceChange before any are taken out on ServiceChangeParameter.
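As a sketch of the "targeted locking hints" alternative (this is an illustration, not taken from the question's actual procs): running the read in the second transaction at READ COMMITTED with an `UPDLOCK` hint makes a concurrent writer block rather than form a lock cycle, because the reader takes update locks up front instead of serializable range locks:

```sql
-- Hypothetical variant of the read done by the second transaction,
-- at READ COMMITTED instead of SERIALIZABLE.
-- UPDLOCK: take U locks on the rows read, so the later UPDATE of the same
-- rows cannot deadlock against another reader of this row.
-- ROWLOCK: keep the lock footprint to the single request being processed.
SELECT [MR].[ReferenceTransactionId], [MR].[Queued]
FROM [dbo].[ManageServicesRequest] AS [MR] WITH (UPDLOCK, ROWLOCK)
WHERE [MR].[ReferenceTransactionId] = @ReferenceTransactionId;
```

Whether this gives you the guarantees you need depends on what the surrounding transaction actually requires; it trades range protection for row-level blocking.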

One way of doing this would be to introduce a table variable in spGetManageServicesRequest and materialize the results of

SELECT ...
FROM [dbo].[ManageServicesRequest] AS [MR]
  LEFT JOIN [dbo].[ServiceChange] AS [SC]  ON [SC].[ReferenceTransactionId] = [MR].[ReferenceTransactionId]

to the table variable.

Then join that against [dbo].[ServiceChangeParameter] to get your final results.
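Applied to the proc in the question, the two-phase rewrite might look roughly like this (a sketch only; the staging column list is abbreviated and the final projection should be adapted to what your data layer expects):

```sql
CREATE OR ALTER PROCEDURE [dbo].[spGetManageServicesRequest]
    @ReferenceTransactionId INT
AS
BEGIN
    SET NOCOUNT ON;

    /* Phase 1: acquire locks on ManageServicesRequest and ServiceChange
       first -- the same object order as spCreateManageServicesRequest. */
    DECLARE @Staging TABLE
    (
        [ReferenceTransactionId] INT               NOT NULL,
        [OrderDate]              DATETIMEOFFSET(7) NOT NULL,
        [QueuePriority]          INT               NOT NULL,
        [Queued]                 DATETIMEOFFSET(7) NULL,
        [ServiceId]              VARCHAR(50)       NULL,
        [ServiceStatus]          CHAR(1)           NULL,
        [ValidFrom]              DATETIMEOFFSET(7) NULL
    );

    INSERT INTO @Staging
    SELECT [MR].[ReferenceTransactionId], [MR].[OrderDate],
           [MR].[QueuePriority], [MR].[Queued],
           [SC].[ServiceId], [SC].[ServiceStatus], [SC].[ValidFrom]
    FROM [dbo].[ManageServicesRequest] AS [MR]
    LEFT JOIN [dbo].[ServiceChange] AS [SC]
        ON [SC].[ReferenceTransactionId] = [MR].[ReferenceTransactionId]
    WHERE [MR].[ReferenceTransactionId] = @ReferenceTransactionId;

    /* Phase 2: only now touch ServiceChangeParameter. */
    SELECT [S].*,
           [SP].[ParamCode], [SP].[ParamValue], [SP].[ParamValidFrom]
    FROM @Staging AS [S]
    LEFT JOIN [dbo].[ServiceChangeParameter] AS [SP]
        ON  [SP].[ReferenceTransactionId] = [S].[ReferenceTransactionId]
        AND [SP].[ServiceId] = [S].[ServiceId];
END
```

Reading from the table variable in phase 2 takes no base-table locks, so the only new locks acquired there are on ServiceChangeParameter, after the ServiceChange locks are already held.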

The phase separation introduced by the table variable ensures the SELECT acquires its locks in the same object order as the insert does, preventing deadlocks where the SELECT already holds a lock on ServiceChangeParameter while waiting to acquire one on ServiceChange (as in the deadlock graph here).

It may be instructive to look at the exact locks taken out by the SELECT running at serializable isolation level. These can be seen with extended events or undocumented trace flag 1200.
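For example, at session scope (the flag is undocumented, so treat this as a dev/test diagnostic only; the transaction id here is illustrative):

```sql
DBCC TRACEON (3604, 1200);   -- 3604 routes DBCC output to the client;
                             -- 1200 prints each lock request as it is made
EXEC [dbo].[spGetManageServicesRequest] @ReferenceTransactionId = 26410821;
DBCC TRACEOFF (3604, 1200);
```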

Currently your execution plan is as follows (see the paste-the-plan link in the question):

[execution plan image]

For the following example data

207INSERT INTO [dbo].[ManageServicesRequest] 
208VALUES (26410821, GETDATE(), 1, GETDATE()), 
209       (26410822, GETDATE(), 1, GETDATE()), 
210       (26410823, GETDATE(), 1, GETDATE());
211
212INSERT INTO [dbo].[ServiceChange] 
213VALUES (26410821, 'X', 'X', GETDATE()), 
214       (26410822, 'X', 'X', GETDATE()), 
215       (26410823, 'X', 'X', GETDATE());
216
217INSERT INTO [dbo].[ServiceChangeParameter]  
218VALUES (26410821, 'X', 'P1','P1', GETDATE()), 
219       (26410823, 'X', 'P1','P1', GETDATE());
220

The trace flag output (for WHERE [MR].[ReferenceTransactionId] = 26410822) is:

Process 51 acquiring IS lock on OBJECT: 7:1557580587:0  (class bit2000000 ref1) result: OK

Process 51 acquiring IS lock on OBJECT: 7:1509580416:0  (class bit2000000 ref1) result: OK

Process 51 acquiring IS lock on OBJECT: 7:1477580302:0  (class bit2000000 ref1) result: OK

Process 51 acquiring IS lock on PAGE: 7:1:600  (class bit2000000 ref0) result: OK

Process 51 acquiring S lock on KEY: 7:72057594044940288 (1b148afa48fb) (class bit2000000 ref0) result: OK

Process 51 acquiring IS lock on PAGE: 7:1:608  (class bit2000000 ref0) result: OK

Process 51 acquiring RangeS-S lock on KEY: 7:72057594045005824 (a69d56b089b6) (class bit2000000 ref0) result: OK

Process 51 acquiring IS lock on PAGE: 7:1:632  (class bit2000000 ref0) result: OK

Process 51 acquiring RangeS-S lock on KEY: 7:72057594045202432 (c37d1982c3c9) (class bit2000000 ref0) result: OK

Process 51 acquiring RangeS-S lock on KEY: 7:72057594045005824 (2ef5265f2b42) (class bit2000000 ref0) result: OK
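Output like the above can be captured with the (undocumented) lock-tracing trace flags 1200 and 3604; a sketch for a test system only, reusing the procedure from the question:

```sql
-- Trace flag 1200 prints every lock this session acquires;
-- trace flag 3604 redirects DBCC output to the client.
-- Both are undocumented and very verbose: do not enable in production.
DBCC TRACEON(3604, 1200);

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    EXEC [dbo].[spGetManageServicesRequest] @ReferenceTransactionId = 26410822;
ROLLBACK TRANSACTION;

DBCC TRACEOFF(3604, 1200);
```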

The order of locks taken is indicated in the image below. Range locks apply to the range of possible values from the given key value to the nearest key value below it (in key order, so above it in the image).

[Image: diagram of the index keys showing the order in which the locks are taken]

First, node 1 is called and takes an S lock on the row in ManageServicesRequest. Then node 2 is called and a RangeS-S lock is taken on a key in ServiceChange. The values from this row are then used to do the lookup in ServiceChangeParameter. In this case there are no matching rows for the predicate, but a RangeS-S lock is still taken out covering the range from the next highest key down to the preceding one (the range (26410821, 'X', 'P1') ... (26410823, 'X', 'P1') in this case).
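Concretely, with the sample rows inserted above, the key-range lock taken by the empty lookup can be pictured like this:

```sql
-- Keys present in PK_ServiceChangeParameter after the sample inserts:
--   (26410821, 'X', 'P1')
--   (26410823, 'X', 'P1')
--
-- A serializable lookup for ReferenceTransactionId = 26410822 finds no row,
-- so a RangeS-S lock is taken on the next key, (26410823, 'X', 'P1'),
-- protecting the whole gap between the two existing keys against inserts.
```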

Then node 2 is called again to see if there are any more rows. Even when there aren't, an additional RangeS-S lock is taken on the next row in ServiceChange.

In the case of your deadlock graph, the range being locked in ServiceChangeParameter is the range to infinity (denoted by ffffffffffff). This happens when the lookup is for a key value at or beyond the last key in the index.

An alternative to the table variable might also be to change the query as below.

SELECT ...
FROM [dbo].[ManageServicesRequest] AS [MR]
  LEFT JOIN [dbo].[ServiceChange] AS [SC] ON [SC].[ReferenceTransactionId] = [MR].[ReferenceTransactionId]
  LEFT HASH JOIN [dbo].[ServiceChangeParameter] AS [SP] ON [SP].[ReferenceTransactionId] = [MR].[ReferenceTransactionId] AND [SP].[ServiceId] = [SC].[ServiceId]
WHERE [MR].[ReferenceTransactionId] = @ReferenceTransactionId

The final predicate on [dbo].[ServiceChangeParameter] is changed to reference [MR].[ReferenceTransactionId] instead of [SC].[ReferenceTransactionId] and an explicit hash join hint is added.

This gives a plan like the one below, where all the locks on ServiceChange are taken during the hash table build phase, before any are taken on ServiceChangeParameter. Without changing the ReferenceTransactionId condition, the new plan had a scan rather than a seek on ServiceChangeParameter, which is why that change was made (it allows the optimiser to use the implied equality predicate on @ReferenceTransactionId).

[Image: execution plan with the hash join; ServiceChange is read in the build phase before ServiceChangeParameter]
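A possible alternative sketch is a query-level hint instead of the inline join hint. Note that OPTION (HASH JOIN) applies to every join in the statement (and, like the inline hint, constrains the optimiser), and whether it produces the same build-before-probe locking order would need verifying against the actual plan:

```sql
SELECT [MR].[ReferenceTransactionId], [SC].[ServiceId], [SP].[ParamCode] -- column list abbreviated
FROM [dbo].[ManageServicesRequest] AS [MR]
  LEFT JOIN [dbo].[ServiceChange] AS [SC] ON [SC].[ReferenceTransactionId] = [MR].[ReferenceTransactionId]
  LEFT JOIN [dbo].[ServiceChangeParameter] AS [SP] ON [SP].[ReferenceTransactionId] = [MR].[ReferenceTransactionId] AND [SP].[ServiceId] = [SC].[ServiceId]
WHERE [MR].[ReferenceTransactionId] = @ReferenceTransactionId
OPTION (HASH JOIN);
```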

Source https://stackoverflow.com/questions/70377745

QUESTION

Rewrite host and port for outgoing request of a pod in an Istio Mesh

Asked 2021-Nov-17 at 09:30

I have to get some existing microservices running. They are supplied as Docker images and talk to each other via configured hostnames and ports. I started using Istio to observe and configure the outgoing calls of each microservice. Now I am at the point where I need to rewrite/redirect the host and the port of a request that goes out of one container. How can I do that with Istio?

I will try to give a minimum example. There are two services, service-a and service-b.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b
spec:
  selector:
    matchLabels:
      run: service-b
  replicas: 1
  template:
    metadata:
      labels:
        run: service-b
    spec:
      containers:
        - name: service-b
          image: nginx
          ports:
            - containerPort: 80
              name: web
---
apiVersion: v1
kind: Service
metadata:
  name: service-b
  labels:
    run: service-b
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 80
      name: service-b
  selector:
    run: service-b
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
spec:
  selector:
    matchLabels:
      run: service-a
  replicas: 1
  template:
    metadata:
      labels:
        run: service-a
    spec:
      containers:
        - name: service-a
          image: nginx
          ports:
            - containerPort: 80
              name: web
---
apiVersion: v1
kind: Service
metadata:
  name: service-a
  labels:
    run: service-a
spec:
  ports:
    - port: 8081
      protocol: TCP
      targetPort: 80
      name: service-a
  selector:
    run: service-a

I can docker exec into service-a and successfully execute:

74root@service-a-d44f55d8c-8cp8m:/# curl -v service-b:8080
75
76&lt; HTTP/1.1 200 OK
77&lt; server: envoy
78
79

Now, to simulate my problem, I want to reach service-b by using another hostname and port. I want to configure Istio the way that this call will also work:

root@service-a-d44f55d8c-8cp8m:/# curl -v service-x:7777

Best regards, Christian

ANSWER

Answered 2021-Nov-16 at 10:56

There are two solutions, depending on whether you need Istio features.

If you don't plan to use any Istio features, this can be solved with native Kubernetes. If you do intend to use Istio features, it can be solved with an Istio virtual service. Below are both options:


1. Native kubernetes

service-x should point to the backend of the service-b deployment. Below is a Service whose selector targets the service-b pods:

apiVersion: v1
kind: Service
metadata:
  name: service-x
  labels:
    run: service-x
spec:
  ports:
    - port: 7777
      protocol: TCP
      targetPort: 80
      name: service-x
  selector:
    run: service-b

This way the request still goes through Istio, because the sidecar containers are injected.

# curl -vI service-b:8080

*   Trying xx.xx.xx.xx:8080...
* Connected to service-b (xx.xx.xx.xx) port 8080 (#0)
> Host: service-b:8080
< HTTP/1.1 200 OK
< server: envoy

and

# curl -vI service-x:7777

*   Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 200 OK
< server: envoy

2. Istio virtual service

In this example a virtual service is used. The service-x Service still needs to be created, but this time without any selector:

apiVersion: v1
kind: Service
metadata:
  name: service-x
  labels:
    run: service-x
spec:
  ports:
    - port: 7777
      protocol: TCP
      targetPort: 80
      name: service-x

Test it from another pod:

# curl -vI service-x:7777

*   Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 503 Service Unavailable
< server: envoy

A 503 error, which is expected. Now create a virtual service that routes requests to service-b on port 8080:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-x-to-b
spec:
  hosts:
  - service-x
  http:
  - route:
    - destination:
        host: service-b
        port:
          number: 8080

Testing from the pod:

# curl -vI service-x:7777

*   Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 200 OK
< server: envoy

It works as expected.



Source https://stackoverflow.com/questions/69901156

QUESTION

Checking list of conditions on API data

Asked 2021-Aug-31 at 00:23

I am using an API which sends data about products every second. On the other hand, I have a list of user-created conditions, and I want to check whether any incoming data matches any of the conditions; if so, I want to notify the user.

For example, a user condition might look like this: price < 30000 and productName = 'chairNumber2'

And the data would be something like this: {'data': [{'name': 'chair1', 'price': '20000', 'color': 'blue'}, {'name': 'chairNumber2', 'price': '45500', 'color': 'green'}, {'name': 'chairNumber2', 'price': '27000', 'color': 'blue'}]}

I am using a microservice architecture, and when a condition is validated I send a message over RabbitMQ to my notification service.

I have tried the naïve solution (every second, check every condition, and if any data meets a condition, pass the data on to my other service), but this takes too much RAM and time (the time complexity is O(n*m), n being the number of conditions and m the number of data items), so I am looking for a better approach.

ANSWER

Answered 2021-Aug-31 at 00:23

It's an interesting problem. I have to confess I don't really know how I would do it - it depends a lot on exactly how fast the processing needs to occur, and on a lot of other factors not mentioned - such as what constraints you have in terms of the technology stack, whether it is on-premise or in the cloud, and whether the solution must be coded by you/your team or you can buy some $$ tool. For future reference, for architecture questions especially, any context you can provide is really helpful - e.g. constraints.

I did think of Pub-Sub, which may offer patterns you can use, but you really just need a simple implementation that will work within your code base, AND very importantly you only have one consuming client, the RabbitMQ queue - it's not like you have X number of random clients wanting the data. So an off-the-shelf Pub-Sub solution might not be a good fit.

Assuming you want a "home-grown" solution, this is what has come to mind so far:

[architecture diagram]

("flow" connectors show data flow, which could be interpreted as a 'push'; whereas the other lines are UML "dependency" lines; e.g. the match engine depends on data held in the batch, but it's agnostic as to how that happens).

  • The external data source is where the data is coming from. I had not made any assumptions about how that works or what control you have over it.
  • Interface, all this does is take the raw data and put it into batches that can be processed later by the Match Engine. How the interface works depends on how you want to balance (a) the data coming in, and (b) what you know the match engine expects.
  • Batches are thrown into a batch queue. Its job is to ensure that no data is lost before it is processed, and that processing can be managed (order of batch processing, resilience, etc.).
  • Match engine, works fast on the assumption that the size of each batch is a manageable number of records/changes. Its job is to take changes, ask who's interested in them, and return the results to RabbitMQ. So its inputs are just the batches and the user & user matching rules (more on that later). How this actually works I'm not sure; worst case it iterates through each rule seeing who has a match - what you're doing now, but...

Key point: the queue would also allow you to scale-out the number of match engine instances - but I don't know what effect that has downstream on RabbitMQ and its downstream consumers (the order in which the updates would arrive, etc).

What's not shown: caching. The match engine needs to know what the matching rules are, and which users those rules relate to. The fastest way to do that look-up is probably in memory, not a database read (unless you can be smart about how that happens), which brings me to this addition:

[architecture diagram]

  • Data Source is wherever the user data, and user matching rules, are kept. I have assumed they are external to "Your Solution" but it doesn't matter.
  • Cache is something that holds the user matches (rules) & user data. Its sole job is to hold these in a way that is optimized for the Match Engine to work fast. You could logically say it was part of the match engine, or separate. How you approach this might be determined by whether or not you intend to scale-out the match engine.
  • Data Provider is simply the component whose job it is to fetch user & rule data and make it available for caching.

So, the match engine, cache and data provider could all be separate components, or logically parts of the one component / microservice.
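To make the match-engine-plus-cache idea concrete, here is a minimal Go sketch. It is only an illustration under assumptions not stated in the answer: the names (MatchEngine, Condition) and the simplified rule shape (exact product name plus a maximum price) are hypothetical. The point is that indexing conditions by product name means each incoming product only touches the rules that could possibly match it, instead of scanning all n rules for all m products.

```go
package main

import "fmt"

// Product is one item from the incoming API payload.
type Product struct {
	Name  string
	Price int
}

// Condition is a user-created rule, simplified here to
// "name equals X and price below Y".
type Condition struct {
	UserID   string
	Name     string
	MaxPrice int
}

// MatchEngine holds the cached rules, indexed by product name,
// so lookups are in memory rather than a per-tick database read.
type MatchEngine struct {
	byName map[string][]Condition
}

func NewMatchEngine(conds []Condition) *MatchEngine {
	idx := make(map[string][]Condition)
	for _, c := range conds {
		idx[c.Name] = append(idx[c.Name], c)
	}
	return &MatchEngine{byName: idx}
}

// Match returns the user IDs whose conditions are satisfied by p.
// Only the rules registered for p.Name are examined.
func (e *MatchEngine) Match(p Product) []string {
	var users []string
	for _, c := range e.byName[p.Name] {
		if p.Price < c.MaxPrice {
			users = append(users, c.UserID)
		}
	}
	return users
}

func main() {
	engine := NewMatchEngine([]Condition{
		{UserID: "u1", Name: "chairNumber2", MaxPrice: 30000},
	})
	batch := []Product{
		{Name: "chair1", Price: 20000},
		{Name: "chairNumber2", Price: 45500},
		{Name: "chairNumber2", Price: 27000},
	}
	for _, p := range batch {
		for _, u := range engine.Match(p) {
			// In the real system this would publish to RabbitMQ
			// for the notification service.
			fmt.Printf("notify %s: %s at %d\n", u, p.Name, p.Price)
		}
	}
}
```

Real conditions would of course need a richer rule representation than an equality-plus-threshold pair, but the same indexing idea applies to any rule with at least one equality term.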

Source https://stackoverflow.com/questions/68970178

QUESTION

Traefik v2 reverse proxy without Docker

Asked 2021-Jul-14 at 10:26

I have a dead simple Golang microservice (no Docker, just a plain binary) which returns a simple message on a GET request.

curl -XGET 'http://localhost:36001/api/operability/list'

{"message": "ping 123"}

Now I want to do a reverse proxy via Traefik v2, so I've made a configuration file, traefik.toml:

[global]
  checkNewVersion = false
  sendAnonymousUsage = false

[entryPoints]
    [entryPoints.web]
    address = ":8090"

    [entryPoints.traefik]
    address = ":8091"

[log]
    level = "DEBUG"
    filePath = "logs/traefik.log"
[accessLog]
    filePath = "logs/access.log"

[api]
    insecure = true
    dashboard = true

[providers]
  [providers.file]
    filename = "traefik.toml"

# dynamic conf
[http]
    [http.routers]
        [http.routers.my-router]
            rule = "Path(`/proxy`)"
            service = "my-service"
            entryPoints = ["web"]
    [http.services]
        [http.services.my-service.loadBalancer]
            [[http.services.my-service.loadBalancer.servers]]
                url = "http://localhost:36001"

Starting Traefik (I'm using binary distribution):

traefik --configFile=traefik.toml

Now the dashboard on port 8091 works like a charm, but I'm struggling with the reverse proxy request. I suppose it should look like this (based on my configuration file):

curl -XGET 'http://localhost:8090/proxy/api/operability/list'

But all I get it's just:

404 page not found

The question is: is there a mistake in the configuration file, or is it just a typo in the request?

edit: My configuration file is based on answers in this questions:

  1. Simple reverse proxy example with Traefik
  2. Traefik v2 as a reverse proxy without docker

edit #2: Traefik version info:

traefik version
Version:      2.4.9
Codename:     livarot
Go version:   go1.16.5
Built:        2021-06-21T16:17:58Z
OS/Arch:      windows/amd64

ANSWER

Answered 2021-Jul-14 at 10:26

I've managed to find the answer.

  1. I wasn't being smart when I decided that Traefik would take /proxy and simply redirect all requests to /api/*. The official docs (https://doc.traefik.io/traefik/routing/routers/) say (I'm quoting):

Use Path if your service listens on the exact path only. For instance, Path: /products would match /products but not /products/shoes.

Use a Prefix matcher if your service listens on a particular base path but also serves requests on sub-paths. For instance, PathPrefix: /products would match /products but also /products/shoes and /products/shirts. Since the path is forwarded as-is, your service is expected to listen on /products.

  2. I did not use any middleware for replacing a substring of the path.

Now, the answer as an example.

First of all, the code for the microservice, in main.go:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "{\"message\": \"ping 123\"}")
}

func main() {
    http.HandleFunc("/operability/list", handler)
    log.Fatal(http.ListenAndServe(":36001", nil))
}

Now, the configuration file for Traefik v2, in the config.toml file:

[global]
  checkNewVersion = false
  sendAnonymousUsage = false

[entryPoints]
    [entryPoints.web]
    address = ":36000"

    [entryPoints.traefik]
    address = ":8091"

[log]
    level = "DEBUG"
    filePath = "logs/traefik.log"

[accessLog]
    filePath = "logs/access.log"

[api]
    insecure = true
    dashboard = true

[providers]
  [providers.file]
    debugLogGeneratedTemplate = true
    # Point this same file for dynamic configuration
    filename = "config.toml"
    watch = true

[http]
    [http.middlewares]
        [http.middlewares.test-replacepathregex.replacePathRegex]
            # Middleware to replace "/proxy/" with "/api/"
            regex = "(?:^|\\W)proxy(?:$|\\W)"
            replacement = "/api/"

    [http.routers]
        [http.routers.my-router]
            # Handle all requests whose paths match "/proxy/*"
            rule = "PathPrefix(`/proxy/`)"
            service = "my-service"
            entryPoints = ["web"]
            # Apply the path-replacement middleware
            middlewares = ["test-replacepathregex"]

    [http.services]
        [http.services.my-service.loadBalancer]
            [[http.services.my-service.loadBalancer.servers]]
                url = "http://localhost:36001/"

Start the microservice:

go run main.go

Start Traefik:

traefik --configFile config.toml

Now check that the microservice works correctly:

curl -XGET 'http://localhost:36001/api/operability/list'

{"message": "ping 123"}

And check that Traefik v2 does its job as well:

curl -XGET 'http://localhost:36000/proxy/operability/list'

{"message": "ping 123"}

Source https://stackoverflow.com/questions/68111670

Community Discussions contain sources that include Stack Exchange Network
