Popular New Releases in Microservice
mall
v1.0.1
istio
Istio 1.13.3
apollo
Apollo 2.0.0-RC1 Release
apollo
Apollo 1.9.0 Release
nacos
2.1.0-BETA (Apr 01, 2022)
Popular Libraries in Microservice
by doocs java
57101 CC-BY-SA-4.0
😮 Core Interview Questions & Answers For Experienced Java (Backend) Developers | A complete primer on advanced topics for Internet-industry Java engineers: covers high concurrency, distributed systems, high availability, microservices, massive-scale data processing, and more
by macrozheng java
52180 Apache-2.0
mall is an e-commerce system comprising a storefront and an admin console, built on Spring Boot and MyBatis and deployed with Docker containers. The storefront includes home portal, product recommendation, product search, product display, shopping cart, order workflow, member center, customer service, and help center modules. The admin console includes product, order, member, promotion, operations, content, reporting, finance, permission, and settings management modules.
by istio go
30043 Apache-2.0
Connect, secure, control, and observe services.
by apolloconfig java
26563 Apache-2.0
Apollo is a reliable configuration management system suitable for microservice configuration management scenarios.
by ityouknow java
26196
About learning Spring Boot via examples. Spring Boot tutorials and technology-stack sample code for getting started quickly and easily.
by ctripcorp java
25294 Apache-2.0
Apollo is a reliable configuration management system suitable for microservice configuration management scenarios.
by alibaba java
21883 Apache-2.0
an easy-to-use dynamic service discovery, configuration and service management platform for building cloud native applications.
by alibaba java
21810 Apache-2.0
Spring Cloud Alibaba provides a one-stop solution for application development for the distributed solutions of Alibaba middleware.
by seata java
21777 Apache-2.0
:fire: Seata is an easy-to-use, high-performance, open source distributed transaction solution.
Trending New libraries in Microservice
by dromara java
8497 Apache-2.0
Possibly the most feature-complete Java permission and authentication framework ever. Already integrates login authentication, permission checks, distributed sessions, microservice gateway authentication, single sign-on, OAuth 2.0, forced logout, Redis integration, front-end/back-end separation, remember-me mode, account impersonation, temporary identity switching, account banning, multi-account systems, annotation-based and route-interceptor-based authorization, flexible token generation, automatic renewal, mutually exclusive logins per device type, session management, password encryption, JWT integration, Spring integration, WebFlux integration...
by megaease go
4243 Apache-2.0
A Cloud Native traffic orchestration system
by cloudwego go
3958 Apache-2.0
A high-performance and strong-extensibility Golang RPC framework that helps developers build microservices.
by douyu go
3691 Apache-2.0
Jupiter is Douyu's open-source Golang microservice framework for service governance
by dromara java
3542 Apache-2.0
Possibly the most feature-complete Java permission and authentication framework ever. Already integrates login authentication, permission checks, distributed sessions, microservice gateway authentication, single sign-on, OAuth 2.0, forced logout, Redis integration, front-end/back-end separation, remember-me mode, account impersonation, temporary identity switching, account banning, multi-account systems, annotation-based and route-interceptor-based authorization, flexible token generation, automatic renewal, mutually exclusive logins per device type, session management, password encryption, JWT integration, Spring integration, WebFlux integration...
by LandGrey java
2694
A collection of study materials on SpringBoot-related vulnerabilities, exploitation methods and techniques, and a black-box security assessment checklist
by erda-project go
2294 Apache-2.0
An enterprise-grade Cloud-Native application platform for Kubernetes.
by Jackson0714 java
1083 GPL-3.0
An open-source Spring Cloud system for interview question practice. Use spare moments to browse common interview questions in a mini-program and solidify your Java fundamentals. The project also teaches you how to set up Spring Boot and Spring Cloud projects. It adopts popular technologies such as Spring Boot, MyBatis, Redis, MySQL, MongoDB, RabbitMQ, and Elasticsearch, and is deployed with Docker containers.
by rbmonster java
1081
Java development interview question crib notes (a personal summary of interviews and work experience)
Top Authors in Microservice
1
63 Libraries
2162
2
38 Libraries
1948
3
26 Libraries
2155
4
25 Libraries
16873
5
23 Libraries
1078
6
22 Libraries
1437
7
20 Libraries
186
8
20 Libraries
153
9
19 Libraries
2443
10
16 Libraries
5847
Trending Kits in Microservice
Here are some of the famous C# Microservice Libraries. C# Microservice Libraries' use cases include Data Access, Business Logic, Security, Automation, Integration, and Monitoring.
C# Microservice Libraries are packages of code that allow developers to rapidly create, deploy, and manage microservices in the C# programming language. They provide a foundation of reusable code and components, such as authentication, logging, and messaging, that can be used to create and deploy microservices quickly. Additionally, they often provide tools and frameworks to assist in developing, deploying, and managing microservices.
Let us have a look at these libraries in detail below.
eShopOnContainers
- Built-in support for distributed application architectures and stateful microservices.
- Fully container-based architecture.
- Support for cloud-native development.
CAP
- Provides a unified API for building distributed, event-driven microservices.
- Provides a lightweight, cloud-native infrastructure for running microservices.
- Provides a complete set of tools and libraries for developing, testing, and deploying microservices.
tye
- Allows developers to create and manage multiple services with a single command.
- Automatically detect and manage configuration changes across services.
- Allows developers to quickly iterate on and test their code with a single command.
surging
- Supports Dependency Injection (DI), which makes it easier for developers to manage and maintain their code.
- Provides API gateways that allow for better control over API traffic.
- Provides service discovery, allowing developers to locate network services easily.
coolstore-microservices
- Includes a comprehensive logging and reporting system.
- Supports a wide range of service discovery and monitoring tools.
- Provides a modern CI/CD pipeline for automated deployment of applications and services.
run-aspnetcore-microservices
- End-to-end monitoring and logging of service calls and performance data.
- Intuitive and extensible service-oriented architecture (SOA) framework.
- Support for advanced message-based communication between services.
microdot
- Built on the Actor Model, providing a robust framework for distributed, concurrent, fault-tolerant, and scalable applications.
- Built-in dependency injection system, allowing developers to quickly and easily inject dependencies into their services.
- Built-in support for load-balancing, meaning that services can be distributed across multiple nodes in the cluster.
Micronetes
- Provides a simple way to package, deploy and manage microservices with Docker containers.
- Powerful integration layer that allows developers to connect microservices easily with each other.
- Supports service discovery, which allows services to be discovered and connected automatically.
awesome-dotnet-tips
- Wide selection of tips and tricks for developing more efficient and maintainable C# microservices.
- Easy-to-use command line interface.
- Built-in support for the latest versions of popular C# frameworks.
Convey
- Provides various features, such as distributed tracing, service discovery, and resilience.
- Offers an opinionated approach to microservices architecture.
- Provides a set of tools for testing and monitoring microservices.
Microservices break down complex applications into smaller, independent services. Microservice architecture addresses the difficulties of a monolithic architecture, such as limited scalability and a sprawling codebase. Services are typically divided by business domain, allowing developers to focus on a specific part of the application and iterate on it quickly. Individual services can be changed, deleted, or replaced without affecting the entire application, which makes maintenance simpler.
Java Microservices Libraries are a set of open-source libraries that empower developers to quickly and easily create microservices-based applications. They provide tools and frameworks for creating and managing microservices, including APIs, distributed systems, and cloud-native architectures. Each service performs a specific task, such as handling user authentication, providing data storage, or processing payments. They help developers easily create distributed applications without having to write complex code. They give us a way to quickly identify and debug any issues that may arise in a distributed system. By using microservices in Java, new technology and process adoption become easier.
Java microservices commonly communicate through Remote Procedure Calls (RPC). RPC lets one service invoke a procedure in another as if it were a local call, with no shared memory or database connection between the services. This makes it well suited to distributed systems: multiple services can communicate efficiently without being co-located, and new services can be integrated into an existing system without requiring changes to the current one.
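The core RPC idea, a client-side "stub" that intercepts method calls and forwards them to a remote implementation, can be sketched in plain Java with a dynamic proxy. This is only an illustration: the service and class names are hypothetical, and the "remote" dispatch is simulated in-process, where a real framework (gRPC, Dubbo, Armeria, etc.) would serialize the call over the network.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class RpcSketch {
    // The service contract shared by client and server.
    interface PaymentService {
        String charge(String account, int cents);
    }

    // Server-side implementation (in a real system, another process).
    static class PaymentServiceImpl implements PaymentService {
        public String charge(String account, int cents) {
            return "charged " + cents + " cents to " + account;
        }
    }

    // Client-side stub: intercepts calls instead of executing them locally.
    static PaymentService stub(PaymentService remote) {
        InvocationHandler handler = (proxy, method, callArgs) -> {
            // A real framework would serialize the method name and arguments,
            // send them over the wire, and deserialize the response here.
            return method.invoke(remote, callArgs);
        };
        return (PaymentService) Proxy.newProxyInstance(
                PaymentService.class.getClassLoader(),
                new Class<?>[] { PaymentService.class },
                handler);
    }

    public static void main(String[] args) {
        PaymentService client = stub(new PaymentServiceImpl());
        // The caller sees an ordinary method call; dispatch happens in the stub.
        System.out.println(client.charge("acct-42", 1999));
    }
}
```

The interface-plus-stub split is what lets RPC frameworks evolve services independently: as long as the contract is stable, either side can be replaced.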
Each Java microservices library has its own set of features and functionality. light-4j, piggymetrics, Activiti, microservices-spring-boot, mycollab, and microservice-istio provide reusable code and reference implementations. Micronaut is a full-stack framework for creating JVM-based applications. Apache Camel is an open-source integration framework that provides components for routing messages. Quarkus is a Kubernetes-native Java stack designed to make applications more efficient. Helidon and KumuluzEE are lightweight frameworks for building microservices.
Check out the below list to find the best top 10 Java Microservices Libraries for your app development.
It is very difficult to maintain a large monolithic application, and equally difficult to release new features and bug fixes for one. Java microservice libraries like apollo, nacos, and armeria can ease these problems, and the microservice approach has been used by many organizations to improve efficiency, speed of delivery, and product quality. Apollo is a reliable, open-source configuration management system: it centralizes configuration across environments and clusters and pushes changes to services in real time. Nacos is an easy-to-use platform for dynamic service discovery, configuration, and service management, aimed at building cloud-native applications. Armeria is an asynchronous microservice framework for the JVM with first-class support for HTTP/2, gRPC, and Thrift. Some of the most popular among developers are:
Python microservice libraries like falcon, nameko, emissary, etc. are becoming popular. They bring benefits such as high-performance request handling, straightforward database integration, and clean REST APIs, and they help teams move away from monolithic systems toward scalable, highly available ones that demand less time on infrastructure maintenance and administration. Falcon is a minimalist, high-performance open-source Python framework for building REST APIs and microservice backends; it adds very little overhead per request and runs on standard WSGI and ASGI servers. Nameko is a Python framework for building microservices that provides RPC and event (publish-subscribe) messaging over AMQP, along with simple dependency injection for service resources. Emissary (emissary-ingress) is a Kubernetes-native API gateway built on the Envoy proxy. A few of the most popular open-source Python microservice libraries for developers are:
Ruby microservice libraries are sets of frequently used modules for composing larger applications out of small, single-purpose services, each designed to solve a different type of problem. Kontena was an open-source container and DevOps platform that helped teams manage the delivery pipeline for containerized applications and integrated with existing CI tools. Stitches, from Stitch Fix, adds conventions for building simple, secure API microservices on Rails, such as authentication and API versioning. Expeditor is tooling for automating release and deployment pipelines. Popular open-source Ruby microservice libraries for developers include:
On the other hand, C++ microservice libraries like app-mash, edgelessrt, and libcluon are well suited to building distributed systems: they are designed to run across multiple machines and target modern C++ standards, so no additional framework is required. libcluon is a small, single-header C++ library for data exchange in distributed software; it is tested on Linux, Windows, and macOS and is used in automotive and robotics projects. edgelessrt, from Edgeless Systems, is a runtime and SDK for building confidential-computing applications that run inside trusted execution environments such as Intel SGX enclaves. There are several popular open-source C++ microservice libraries available for developers:
PHP microservices are an alternative to the traditional monolithic web application architecture: each service is completely independent of the others and owns its own state. Microservice libraries like swoft, hyperf, and flyimg make it easier to build PHP microservices by providing a common toolset for developing scalable applications. Swoft is a PHP coroutine framework built on the Swoole extension for developing high-performance services such as REST APIs; it offers annotation-based configuration, AOP, middleware support, and connection pooling. Hyperf is another high-performance, coroutine-based PHP framework built on Swoole, with components for HTTP services, gRPC, and asynchronous workloads. Flyimg is a microservice for on-the-fly image resizing and cropping. Many developers depend on the following open-source PHP microservice libraries:
Go microservices are small, single-purpose services that can be independently deployed and scaled, an approach often used in applications that need to scale well, such as production systems and web applications. A big advantage of using established Go microservice libraries is that you inherit performance and reliability work you would otherwise have to do yourself, while avoiding the maintenance overhead of a monolithic application. Kratos is an open-source Go microservice framework, originally from Bilibili, that provides the standard building blocks: HTTP and gRPC transports, middleware, configuration, and service registration and discovery. Nomad, from HashiCorp, is a flexible workload orchestrator that schedules and manages both containerized and non-containerized applications across clusters. Popular open-source Go microservice libraries include:
JavaScript microservice libraries such as single-spa, moleculer, and seneca are popular not only because they are easy to implement and maintain but also because they provide a high degree of flexibility: you can integrate microservice components into an existing application through their APIs and build large-scale systems with little effort. Moleculer is a fast Node.js microservices framework with built-in service discovery, load balancing, and fault tolerance; it needs little configuration or setup yet scales to powerful service architectures. Seneca is a lightweight Node.js microservices toolkit based on message patterns: services communicate by exchanging messages matched on their properties, which keeps components decoupled and simplifies building, deploying, and managing apps. single-spa is a framework for micro-frontends that lets multiple JavaScript frameworks coexist in one single-page application. Some of the most widely used open-source JavaScript microservice libraries among developers include:
Trending Discussions on Microservice
Exclude Logs from Datadog Ingestion
Custom Serilog sink with injection?
How to manage Google Cloud credentials for local development
using webclient to call the grapql mutation API in spring boot
Jdeps Module java.annotation not found
How to make a Spring Boot application quit on tomcat failure
Deadlock on insert/select
Rewrite host and port for outgoing request of a pod in an Istio Mesh
Checking list of conditions on API data
Traefik v2 reverse proxy without Docker
QUESTION
Exclude Logs from Datadog Ingestion
Asked 2022-Mar-19 at 22:38
I have a Kubernetes cluster that's running Datadog and some microservices. Each microservice makes health checks every 5 seconds to make sure the service is up and running. I want to exclude these health-check logs from being ingested into Datadog.
I think I need to use log_processing_rules, and I've tried that, but the health-check logs are still making it into the Logs section of Datadog. My current Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
[ ... SNIP ... ]
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
        version: "fac8fb13"
      annotations:
        rollme: "IO2ad"
        tags.datadoghq.com/env: development
        tags.datadoghq.com/version: "fac8fb13"
        tags.datadoghq.com/service: my-service
        tags.datadoghq.com/my-service.logs: |
          [{
            "source": my-service,
            "service": my-service,
            "log_processing_rules": [
              {
                "type": "exclude_at_match",
                "name": "exclude_healthcheck_logs",
                "pattern": "\"RequestPath\": \"\/health\""
              }
            ]
          }]
and the logs coming out of the kubernetes pod:
$ kubectl logs my-service-pod

{
  "@t": "2022-01-07T19:13:05.3134483Z",
  "@m": "Request finished HTTP/1.1 GET http://10.64.0.80:5000/health - - - 200 - text/plain 7.5992ms",
  "@i": "REDACTED",
  "ElapsedMilliseconds": 7.5992,
  "StatusCode": 200,
  "ContentType": "text/plain",
  "ContentLength": null,
  "Protocol": "HTTP/1.1",
  "Method": "GET",
  "Scheme": "http",
  "Host": "10.64.0.80:5000",
  "PathBase": "",
  "Path": "/health",
  "QueryString": "",
  "HostingRequestFinishedLog": "Request finished HTTP/1.1 GET http://10.64.0.80:5000/health - - - 200 - text/plain 7.5992ms",
  "EventId": {
    "Id": 2,
    "Name": "RequestFinished"
  },
  "SourceContext": "Microsoft.AspNetCore.Hosting.Diagnostics",
  "RequestId": "REDACTED",
  "RequestPath": "/health",
  "ConnectionId": "REDACTED",
  "dd_service": "my-service",
  "dd_version": "54aae2b5",
  "dd_env": "development",
  "dd_trace_id": "REDACTED",
  "dd_span_id": "REDACTED"
}
EDIT: Removed the 2nd element of the log_processing_rules array above, as I've tried with both 1 and 2 elements in the rules array.
EDIT2: I've also tried changing the log_processing_rules type to include_at_match in an attempt to figure this out:
"log_processing_rules": [
  {
    "type": "include_at_match",
    "name": "testing_include_at_match",
    "pattern": "somepath"
  }
]
and I'm still getting the health logs in Datadog (in theory I should not, as /health is not part of the matching pattern).
ANSWER
Answered 2022-Jan-12 at 20:28
I think the problem is that you're defining multiple patterns; the docs state: "If you want to match one or more patterns you must define them in a single expression."
Try something like this and see what happens:
"log_processing_rules": [
  {
    "type": "exclude_at_match",
    "name": "exclude_healthcheck_logs",
    "pattern": "\/health|\"RequestPath\": \"\/health\""
  }
]
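Before redeploying, it can help to sanity-check that a combined single-expression pattern like the one above actually distinguishes health-check lines from normal traffic. A small Java sketch (the `\/` JSON escapes are written as plain `/` in source; note Datadog's Agent uses Go's regexp engine, so this only checks the pattern's intent, and the sample log lines are abbreviated, hypothetical versions of the pod output):

```java
import java.util.regex.Pattern;

public class ExcludePatternCheck {
    public static void main(String[] args) {
        // Combined single-expression pattern: either alternative matching
        // causes the whole line to match (and thus be excluded).
        Pattern exclude = Pattern.compile("/health|\"RequestPath\": \"/health\"");

        // Abbreviated stand-ins for the real pod log lines.
        String healthLog = "{\"RequestPath\": \"/health\", \"StatusCode\": 200}";
        String orderLog  = "{\"RequestPath\": \"/api/orders\", \"StatusCode\": 200}";

        // A health-check line matches, so it would be excluded from ingestion.
        System.out.println(exclude.matcher(healthLog).find());
        // A normal request line does not match, so it would still be ingested.
        System.out.println(exclude.matcher(orderLog).find());
    }
}
```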
QUESTION
Custom Serilog sink with injection?
Asked 2022-Mar-08 at 10:41
I have created a simple Serilog sink project that looks like this:
using MyApp.Cloud.MQ.Interface;
using Newtonsoft.Json;
using Serilog.Core;
using Serilog.Events;

namespace MyApp.Cloud.Serilog.MQSink
{
    public class MessageQueueSink : ILogEventSink
    {
        private readonly IMQProducer _MQProducerService;

        public MessageQueueSink(IMQProducer mQProducerService)
        {
            _MQProducerService = mQProducerService;
        }

        public void Emit(LogEvent logEvent)
        {
            _MQProducerService.Produce<SendLog>(new SendLog() { LogEventJson = JsonConvert.SerializeObject(logEvent) });
        }
    }
}
The consuming microservice starts up like this:
var configurationBuilder = new ConfigurationBuilder().AddJsonFile("appsettings.json").Build();
var appSettings = configurationBuilder.Get<AppSettings>();

configurationBuilder = new ConfigurationBuilder().AddJsonFile("ExtendedSettings.json").Build();

Host.CreateDefaultBuilder(args)
    .UseMyAppCloudMQ(context => context.UseSettings(appSettings.MQSettings))
    .UseSerilog((hostingContext, loggerConfiguration) => loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration))
    .ConfigureServices((hostContext, services) =>
    {
        services
            .AddHostedService<ExtendedProgService>()
            .Configure<MQSettings>(configurationBuilder.GetSection("MQSettings"));
    })
    .Build().Run();
The Serilog part of appsettings.json looks like this:
"serilog": {
  "Using": [ "Serilog.Sinks.File", "Serilog.Sinks.Console", "MyApp.Cloud.Serilog.MQSink" ],
  "MinimumLevel": {
    "Default": "Debug",
    "Override": {
      "Microsoft": "Warning",
      "System": "Warning"
    }
  },
  "Enrich": [ "FromLogContext", "WithMachineName", "WithProcessId" ],
  "WriteTo": [
    {
      "Name": "MessageQueueSink",
      "Args": {}
    }
  ]
}
The MQSink project is added as a reference to the microservice project, and I can see that the MQSink DLL ends up in the bin folder.
The problem is that when the microservice executes _logger.LogInformation(...), Emit is never triggered, whereas a Console sink added to the same configuration does output data. I also suspect that the injected MQ will not work properly.
How can this be solved?
EDIT:
I turned on the Serilog internal log and could see that the method MessageQueueSink could not be found. I did not find any way to get this working with appsettings.json, so I started looking at how to bind it in code instead.
To get it working, an extension had to be created:
public static class MySinkExtensions
{
    public static LoggerConfiguration MessageQueueSink(
        this Serilog.Configuration.LoggerSinkConfiguration loggerConfiguration,
        MyApp.Cloud.MQ.Interface.IMQProducer mQProducer = null)
    {
        return loggerConfiguration.Sink(new MyApp.Cloud.Serilog.MQSink.MessageQueueSink(mQProducer));
    }
}
This made it possible to add the custom sink like this:
Host.CreateDefaultBuilder(args)
    .UseMyAppCloudMQ(context => context.UseSettings(appSettings.MQSettings))
    .ConfigureServices((hostContext, services) =>
    {
        services
            .Configure<MQSettings>(configurationBuilder.GetSection("MQSettings"));
    })
    .UseSerilog((hostingContext, loggerConfiguration) => loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration).WriteTo.MessageQueueSink())
    .Build().Run();
The custom sink is loaded and Emit is triggered, but I still do not know how to inject the MQ producer into the sink. It would also be much better if I could do all the configuration of Serilog and the sink in the appsettings.json file.
ANSWER
Answered 2022-Feb-23 at 18:28 If you refer to the Provided Sinks list and examine the source code for some of them, you'll notice that the pattern is usually:
- Construct the sink configuration (taking values from IConfiguration, inline, or a combination of both).
- Pass the configuration to the sink registration.
Then the sink implementation instantiates the required services to push logs to.
An alternate approach I could suggest is registering Serilog without any arguments (UseSerilog()) and then configuring the static Serilog.Log class using the built IServiceProvider:
var host = Host.CreateDefaultBuilder(args)
    // Register your services as usual
    .UseSerilog()
    .Build();

Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(host.Services.GetRequiredService<IConfiguration>())
    .WriteTo.MessageQueueSink(host.Services.GetRequiredService<IMQProducer>())
    .CreateLogger();

host.Run();
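The same idea — build the service container first, then hand the resolved dependency to the logging pipeline — can be sketched language-neutrally. Here is a minimal Python analogue using the standard logging module; MessageQueueHandler mirrors the custom Serilog sink, and FakeProducer is a purely illustrative stand-in for the real IMQProducer:

```python
import json
import logging

class MessageQueueHandler(logging.Handler):
    # Analogue of the custom Serilog sink: the producer is injected
    # through the constructor instead of being resolved inside the sink.
    def __init__(self, producer):
        super().__init__()
        self.producer = producer

    def emit(self, record):
        # Serialize the log event and push it to the queue producer.
        self.producer.produce(json.dumps({"message": record.getMessage()}))

class FakeProducer:
    # Stand-in for IMQProducer; collects payloads instead of publishing them.
    def __init__(self):
        self.sent = []

    def produce(self, payload):
        self.sent.append(payload)

producer = FakeProducer()          # would come from the DI container
logger = logging.getLogger("mq-demo")
logger.addHandler(MessageQueueHandler(producer))
logger.warning("hello queue")
print(producer.sent[0])            # → {"message": "hello queue"}
```

The point mirrors the answer above: construct the producer where dependency injection can see it, then pass the finished instance into the sink/handler, rather than having the logging configuration instantiate it.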
QUESTION
How to manage Google Cloud credentials for local development
Asked 2022-Feb-14 at 23:35I searched a lot how to authenticate/authorize Google's client libraries and it seems no one agrees how to do it.
Some people states that I should create a service account, create a key out from it and give that key to each developer that wants to act as this service account. I hate this solution because it leaks the identity of the service account to multiple person.
Others mentioned that you simply log in with the Cloud SDK and ADC (Application Default Credentials) by doing:

$ gcloud auth application-default login

Then, libraries like google-cloud-storage will load credentials tied to my user from the ADC.
It's better, but still not good for me, as this requires going to IAM and giving every developer (or a group) the permissions required for the application to run. Moreover, if a developer runs many applications locally for testing purposes (e.g. microservices), the list of required permissions will probably be very long, and after some time it will be hard to understand why those permissions were granted.
The last approach I encountered is service account impersonation. This avoids exposing private keys to developers, and lets us define the permissions required by an application, say application A, once, associate them with a service account, and say:
Hey, let Julien act as the service account used for application A.
Here's a snippet of how to impersonate a principal:
from google.auth import impersonated_credentials
from google.auth import default

from google.cloud import storage

target_scopes = ['https://www.googleapis.com/auth/devstorage.read_only']

credentials, project = default(scopes=target_scopes)

final_credentials = impersonated_credentials.Credentials(
    source_credentials=credentials,
    target_principal="foo@bar-271618.iam.gserviceaccount.com",
    target_scopes=target_scopes
)

client = storage.Client(credentials=final_credentials)

print(next(client.list_buckets()))
If you want to try this yourself, you need to create the service account you want to impersonate (here foo@bar-271618.iam.gserviceaccount.com) and grant your user the Service Account Token Creator role on the service account's permissions tab.
My only concern is that it would require me to wrap all Google Cloud client libraries I want to use with something that checks if I am running my app locally:
import os

from google.auth import impersonated_credentials
from google.auth import default

from google.cloud import storage

target_scopes = ['https://www.googleapis.com/auth/devstorage.read_only']

credentials, project = default(scopes=target_scopes)
if os.getenv("RUNNING_ENVIRONMENT") == "local":
    credentials = impersonated_credentials.Credentials(
        source_credentials=credentials,
        target_principal=os.environ["TARGET_PRINCIPAL"],
        target_scopes=target_scopes
    )

client = storage.Client(credentials=credentials)
print(next(client.list_buckets()))
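One way to keep that wrapping manageable is to centralize the environment check in a single helper that every client wrapper calls; a minimal sketch, assuming the illustrative variable names RUNNING_ENVIRONMENT and TARGET_PRINCIPAL from the snippet above:

```python
import os

def target_principal_for_env():
    # Return the service account to impersonate when running locally,
    # or None when the ambient credentials (e.g. on GCP) should be used.
    # RUNNING_ENVIRONMENT / TARGET_PRINCIPAL are illustrative names.
    if os.getenv("RUNNING_ENVIRONMENT") == "local":
        return os.environ["TARGET_PRINCIPAL"]
    return None

os.environ["RUNNING_ENVIRONMENT"] = "local"
os.environ["TARGET_PRINCIPAL"] = "foo@bar-271618.iam.gserviceaccount.com"
print(target_principal_for_env())  # → foo@bar-271618.iam.gserviceaccount.com
```

Each wrapper then only has to ask for the principal and build impersonated credentials when one is returned, instead of repeating the check everywhere.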
Also, I have to define the scopes (I think these are the OAuth2 access scopes?) I am using, which is pretty annoying.
My question is: am I going in the right direction? Am I overthinking all of this? Is there an easier way to achieve it?
Here's some of the source I used:
- https://readthedocs.org/projects/google-auth/downloads/pdf/latest/
- https://cloud.google.com/iam/docs/creating-short-lived-service-account-credentials
- https://cloud.google.com/docs/authentication/production
- https://youtu.be/IYGkbDXDR9I?t=822
This topic is discussed here.
I've made a first proposition here to support this enhancement.
Update 2: The feature has been implemented! See here for details.
ANSWER
Answered 2021-Oct-02 at 14:00 You can use a new gcloud feature and impersonate your local credentials like this:
gcloud auth application-default login --impersonate-service-account=<SA email>
It's a new feature. Being a Java and Golang developer, I checked and tested the Java client library, and it already supports this authentication mode. However, that's not yet the case in Go, so I submitted a pull request to add it to the Go client library.
I quickly checked in Python, and it seems implemented. Try one of the latest versions (released after August 3rd, 2021) and let me know!
Note: few people are aware of this use case. I'm happy not to be alone here :)
QUESTION
Using WebClient to call the GraphQL mutation API in Spring Boot
Asked 2022-Jan-24 at 12:18I am stuck while calling the graphQL mutation API in spring boot. Let me explain my scenario, I have two microservice one is the AuditConsumeService which consume the message from the activeMQ, and the other is GraphQL layer which simply takes the data from the consume service and put it inside the database. Everything well when i try to push data using graphql playground or postman. How do I push data from AuditConsumeService. In the AuditConsumeService I am trying to send mutation API as a string. the method which is responsible to send that to graphQL layer is
public Mono<String> sendLogsToGraphQL(String logs){
    return webClient
        .post()
        .uri("http://localhost:8080/logs/createEventLog")
        .bodyValue(logs)
        .retrieve()
        .bodyToMono(String.class);
}
NOTE: I tried to pass the data as an Object as well, but to no avail.
The String logs is delivered to it by ActiveMQ. The data I am sending is:
{
  "hasError": false,
  "message": "Hello There",
  "sender": "Ali Ahmad",
  "payload": {
    "type": "String",
    "title": "Topoic",
    "description": "This is the demo description of the activemqq"
  },
  "serviceInfo": {
    "version": "v1",
    "date": "2021-05-18T08:44:17.8237608+05:00",
    "serverStatus": "UP",
    "serviceName": "IdentityService"
  }
}
The mutation looks like:
mutation($eventLog: EventLogInput){
  createEventLog(eventLog: $eventLog){
    hasError
    message
    payload{
      title,
      description
    }
  }
}
The $eventLog variable has the JSON body:
{
  "eventLog": {
    "hasError": false,
    "message": "Hello There",
    "sender": "Ali Ahmad",
    "payload": {
      "type": "String",
      "title": "Topoic",
      "description": "This is the demo description of the activemqq"
    },
    "serviceInfo": {
      "version": "v1",
      "date": "2021-05-18T08:44:17.8237608+05:00",
      "serverStatus": "UP",
      "serviceName": "IdentityService"
    }
  }
}
EDIT: Following the answer below, I updated the consumer service as:
@Component
public class Consumer {
    @Autowired
    private AuditService auditService;

    private final String MUTATION_QUERY = "mutation($eventLog: EventLogInput){\n" +
            "createEventLog(eventLog: $eventLog){\n" +
            "hasError\n" +
            "}\n" +
            "}";

    @JmsListener(destination = "Audit.queue")
    public void consumeLogs(String logs) {
        Gson gson = new Gson();
        Object jsonObject = gson.fromJson(logs, Object.class);
        Map<String, Object> graphQlBody = new HashMap<>();
        graphQlBody.put("query", MUTATION_QUERY);
        graphQlBody.put("variables", "{eventLog: " + jsonObject + "}");
        auditService.sendLogsToGraphQL(graphQlBody);
    }
}
Now the sendLogsToGraphQL method becomes:
public Mono<String> sendLogsToGraphQL(Map<String, Object> logs) {
    log.info("Logs: {} ", logs);
    Mono<String> stringMono = webClient
        .post()
        .uri("http://localhost:8080/graphql")
        .bodyValue(BodyInserters.fromValue(logs))
        .retrieve()
        .bodyToMono(String.class);
    log.info("StringMono: {}", stringMono);
    return stringMono;
}
The data is still not being sent to the GraphQL layer at the specified URL.
ANSWER
Answered 2022-Jan-23 at 21:40 You have to send the query and the variables in the body of the POST request, as shown here:
graphQlBody = { "query" : mutation_query, "variables" : { "eventLog" : event_log_json } }
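To make the wire format concrete: a GraphQL POST body is a single JSON object with query and variables keys, and variables must itself be a JSON object, not a pre-serialized string (which is the mistake in the consumer shown in the question's EDIT). A Python sketch with illustrative values mirroring the question:

```python
import json

# Illustrative mutation and event log, mirroring the question above.
mutation_query = (
    "mutation($eventLog: EventLogInput){\n"
    "createEventLog(eventLog: $eventLog){\n"
    "hasError\n"
    "}\n"
    "}"
)
event_log = {"hasError": False, "message": "Hello There", "sender": "Ali Ahmad"}

# "variables" is a nested JSON object, not a concatenated string.
graphql_body = {"query": mutation_query, "variables": {"eventLog": event_log}}

payload = json.dumps(graphql_body)  # what actually goes over the wire
print(sorted(json.loads(payload).keys()))  # → ['query', 'variables']
```

Serializing the whole map once (as a JSON framework or WebClient's codecs would) keeps the eventLog object intact, whereas string concatenation produces invalid variables that the GraphQL server rejects.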
Then, with WebClient, you can send the body in several ways:
public Mono<String> sendLogsToGraphQL(Map<String, Object> body){
    return webClient
        .post()
        .uri("http://localhost:8080/logs/createEventLog")
        .bodyValue(body)
        .retrieve()
        .bodyToMono(String.class);
}
Here I just showed a Map<String, Object> forming the GraphQL request body, but you can also create corresponding POJO classes with query and variables properties.
QUESTION
Jdeps Module java.annotation not found
Asked 2022-Jan-20 at 22:48I'm trying to create a minimal jre for Spring Boot microservices using jdeps and jlink, but I'm getting the following error when I get to the using jdeps part
Exception in thread "main" java.lang.module.FindException: Module java.annotation not found, required by org.apache.tomcat.embed.core
	at java.base/java.lang.module.Resolver.findFail(Resolver.java:893)
	at java.base/java.lang.module.Resolver.resolve(Resolver.java:192)
	at java.base/java.lang.module.Resolver.resolve(Resolver.java:141)
	at java.base/java.lang.module.Configuration.resolve(Configuration.java:421)
	at java.base/java.lang.module.Configuration.resolve(Configuration.java:255)
	at jdk.jdeps/com.sun.tools.jdeps.JdepsConfiguration$Builder.build(JdepsConfiguration.java:564)
	at jdk.jdeps/com.sun.tools.jdeps.JdepsTask.buildConfig(JdepsTask.java:603)
	at jdk.jdeps/com.sun.tools.jdeps.JdepsTask.run(JdepsTask.java:557)
	at jdk.jdeps/com.sun.tools.jdeps.JdepsTask.run(JdepsTask.java:533)
	at jdk.jdeps/com.sun.tools.jdeps.Main.main(Main.java:49)
I already tried the following commands, with no effect:
jdeps --ignore-missing-deps --multi-release 17 --module-path target/lib/* target/errorrr-*.jar
jdeps --multi-release 16 --module-path target/lib/* target/errorrr-*.jar
jdeps --ignore-missing-deps --multi-release 17 --class-path target/lib/* target/errorrr-*.jar
I already tried Java versions 11, 16, and 17 and different versions of Spring Boot.
All dependencies needed for the build are copied to the target/lib folder by the maven-dependency-plugin when I run mvn install.
After identifying the responsible dependency, I created a new project from scratch with only that dependency to isolate the error, but it remained.
I tried Gradle at first, but as the error remained I switched to Maven, with no change either.
When I add the dependency that is being requested, the error changes to:
#13 1.753 Exception in thread "main" java.lang.Error: java.util.concurrent.ExecutionException: com.sun.tools.jdeps.MultiReleaseException
#13 1.753 	at jdk.jdeps/com.sun.tools.jdeps.DependencyFinder.waitForTasksCompleted(DependencyFinder.java:271)
#13 1.753 	at jdk.jdeps/com.sun.tools.jdeps.DependencyFinder.parse(DependencyFinder.java:133)
#13 1.753 	at jdk.jdeps/com.sun.tools.jdeps.DepsAnalyzer.run(DepsAnalyzer.java:129)
#13 1.753 	at jdk.jdeps/com.sun.tools.jdeps.ModuleExportsAnalyzer.run(ModuleExportsAnalyzer.java:74)
#13 1.753 	at jdk.jdeps/com.sun.tools.jdeps.JdepsTask$ListModuleDeps.run(JdepsTask.java:1047)
#13 1.753 	at jdk.jdeps/com.sun.tools.jdeps.JdepsTask.run(JdepsTask.java:574)
#13 1.753 	at jdk.jdeps/com.sun.tools.jdeps.JdepsTask.run(JdepsTask.java:533)
#13 1.753 	at jdk.jdeps/com.sun.tools.jdeps.Main.main(Main.java:49)
#13 1.753 Caused by: java.util.concurrent.ExecutionException: com.sun.tools.jdeps.MultiReleaseException
#13 1.753 	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
#13 1.753 	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
#13 1.753 	at jdk.jdeps/com.sun.tools.jdeps.DependencyFinder.waitForTasksCompleted(DependencyFinder.java:267)
#13 1.754 	... 7 more
#13 1.754 Caused by: com.sun.tools.jdeps.MultiReleaseException
#13 1.754 	at jdk.jdeps/com.sun.tools.jdeps.VersionHelper.add(VersionHelper.java:62)
#13 1.754 	at jdk.jdeps/com.sun.tools.jdeps.ClassFileReader$JarFileReader.readClassFile(ClassFileReader.java:360)
#13 1.754 	at jdk.jdeps/com.sun.tools.jdeps.ClassFileReader$JarFileIterator.hasNext(ClassFileReader.java:402)
#13 1.754 	at jdk.jdeps/com.sun.tools.jdeps.DependencyFinder.lambda$parse$5(DependencyFinder.java:179)
#13 1.754 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
#13 1.754 	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
#13 1.754 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
#13 1.754 	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
#13 1.754 	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
#13 1.754 	at java.base/java.lang.Thread.run(Thread.java:833)
My pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.6.0</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>errorrr</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>errorrr</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>17</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <id>copy-dependencies</id>
                        <phase>package</phase>
                        <goals>
                            <goal>copy-dependencies</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${project.build.directory}/lib</outputDirectory>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
If I don't use this dependency, I can run all the build steps and at the end I have a 76 MB JRE.
ANSWER
Answered 2021-Dec-28 at 14:39
I have been struggling with a similar issue in my Gradle Spring Boot project.
I am using the output of the following jdeps command to add modules to jlink in my Dockerfile (base image openjdk:17-alpine):
RUN jdeps \
    --ignore-missing-deps \
    -q \
    --multi-release 17 \
    --print-module-deps \
    --class-path build/lib/* \
    app.jar > deps.info

RUN jlink --verbose \
    --compress 2 \
    --strip-java-debug-attributes \
    --no-header-files \
    --no-man-pages \
    --output jre \
    --add-modules $(cat deps.info)
I think your Maven build is fine as long as it includes all the required dependencies. But just in case, I modified my Gradle jar task to include the dependencies as follows:
jar {
    manifest {
        attributes "Main-Class": "com.demo.Application"
    }
    duplicatesStrategy = DuplicatesStrategy.INCLUDE
    from {
        configurations.default.collect { it.isDirectory() ? it : zipTree(it) }
    }
}
QUESTION
How to make a Spring Boot application quit on Tomcat failure
Asked 2022-Jan-15 at 09:55
We have a bunch of microservices based on Spring Boot 2.5.4, also including spring-kafka:2.7.6 and spring-boot-actuator:2.5.4. All the services use Tomcat as the servlet container and have graceful shutdown enabled. These microservices are containerized using Docker.
Due to a misconfiguration, yesterday one of these containers failed to start because it tried to bind a port already taken by another one.
Log states:
Stopping service [Tomcat]
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
***************************
APPLICATION FAILED TO START
***************************

Description:

Web server failed to start. Port 8080 was already in use.
However, the JVM keeps running because of the Kafka consumers/streams. I need to tear everything down, or at least call System.exit(error-code), to trigger the Docker restart policy. How can I achieve this? If possible, a configuration-based solution is preferable to one requiring development.
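For context, Docker's restart policy only ever sees the process exit status; a tiny sketch of the mechanism (plain Java; the class and method names are made up for illustration, this is not Spring API):

```java
public class FailFast {

    // Map a fatal startup failure to a process exit status.
    // Docker's `--restart on-failure` policy reacts to exactly this:
    // status 0 means "done", anything non-zero means "restart me".
    static int exitCodeFor(Throwable failure) {
        return failure == null ? 0 : 1;
    }

    public static void main(String[] args) {
        Throwable startupFailure =
                new IllegalStateException("Port 8080 was already in use");
        System.out.println(exitCodeFor(startupFailure)); // prints 1
        // A real application would call System.exit(exitCodeFor(failure)).
    }
}
```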
I developed a minimal test application, made only of the SpringBootApplication class and a KafkaConsumer, to verify the problem isn't related to our microservices. Same result.
POM file
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.4</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
...
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
Kafka listener
@Component
public class KafkaConsumer {

    @KafkaListener(topics = "test", groupId = "test")
    public void process(String message) {

    }
}
application.yml
spring:
  kafka:
    bootstrap-servers: kafka:9092
Log file
2021-12-17 11:12:24.955 WARN 29067 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'webServerStartStop'; nested exception is org.springframework.boot.web.server.PortInUseException: Port 8080 is already in use
2021-12-17 11:12:24.959 INFO 29067 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2021-12-17 11:12:24.969 INFO 29067 --- [ main] ConditionEvaluationReportLoggingListener :

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2021-12-17 11:12:24.978 ERROR 29067 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :

***************************
APPLICATION FAILED TO START
***************************

Description:

Web server failed to start. Port 8080 was already in use.

Action:

Identify and stop the process that's listening on port 8080 or configure this application to listen on another port.

2021-12-17 11:12:25.151 WARN 29067 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-test-1, groupId=test] Error while fetching metadata with correlation id 2 : {test=LEADER_NOT_AVAILABLE}
2021-12-17 11:12:25.154 INFO 29067 --- [ntainer#0-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-test-1, groupId=test] Cluster ID: NwbnlV2vSdiYtDzgZ81TDQ
2021-12-17 11:12:25.156 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-test-1, groupId=test] Discovered group coordinator kafka:9092 (id: 2147483636 rack: null)
2021-12-17 11:12:25.159 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-test-1, groupId=test] (Re-)joining group
2021-12-17 11:12:25.179 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-test-1, groupId=test] (Re-)joining group
2021-12-17 11:12:27.004 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-test-1, groupId=test] Successfully joined group with generation Generation{generationId=2, memberId='consumer-test-1-c5924ab5-afc8-4720-a5d7-f8107ace3aad', protocol='range'}
2021-12-17 11:12:27.009 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-test-1, groupId=test] Finished assignment for group at generation 2: {consumer-test-1-c5924ab5-afc8-4720-a5d7-f8107ace3aad=Assignment(partitions=[test-0])}
2021-12-17 11:12:27.021 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-test-1, groupId=test] Successfully synced group in generation Generation{generationId=2, memberId='consumer-test-1-c5924ab5-afc8-4720-a5d7-f8107ace3aad', protocol='range'}
2021-12-17 11:12:27.022 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-test-1, groupId=test] Notifying assignor about the new Assignment(partitions=[test-0])
2021-12-17 11:12:27.025 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-test-1, groupId=test] Adding newly assigned partitions: test-0
2021-12-17 11:12:27.029 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-test-1, groupId=test] Found no committed offset for partition test-0
2021-12-17 11:12:27.034 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-test-1, groupId=test] Found no committed offset for partition test-0
2021-12-17 11:12:27.040 INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-test-1, groupId=test] Resetting offset for partition test-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 11 rack: null)], epoch=0}}.
2021-12-17 11:12:27.045 INFO 29067 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : test: partitions assigned: [test-0]
ANSWER
Answered 2021-Dec-17 at 08:38
Since you have everything containerized, it's way simpler.
Just set up a small health-check endpoint with Spring Web that shows whether the server is still running, something like:
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthcheckController {

    @GetMapping("/monitoring")
    public String getMonitoring() {
        return "200: OK";
    }

}
and then refer to it in the HEALTHCHECK part of your Dockerfile. If the server stops, the container will be marked as unhealthy and restarted:
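A minimal sketch of such a check (the probe command, interval, timeout, and retry count are assumptions to adapt to your image; the path matches the /monitoring endpoint):

```dockerfile
# Probe the health endpoint; a failing probe marks the container unhealthy.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD wget -q -O /dev/null http://localhost:8080/monitoring || exit 1
```

Note that whether an unhealthy container is actually restarted depends on the runtime: plain `docker run` only records the status, while orchestrators such as Swarm or Kubernetes act on it.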
FROM ...

ENTRYPOINT ...
# HEALTHCHECK needs a command to run; this assumes curl is available in the image
HEALTHCHECK CMD curl --fail http://localhost:8080/monitoring || exit 1
If you don't want to develop anything, you can use any other endpoint that you know should answer successfully as the HEALTHCHECK, but I would recommend having one endpoint explicitly for that purpose.
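As a sketch, the HEALTHCHECK instruction also accepts tuning options (the interval, timeout, and retry values below are illustrative assumptions, and curl must exist in the image):

```dockerfile
# Probe the hypothetical /monitoring endpoint every 30s; allow 60s for startup
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
  CMD curl --fail http://localhost:8080/monitoring || exit 1
```

The `--start-period` grace window matters for Spring Boot services, which can take tens of seconds to start before the endpoint responds.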
QUESTION
Deadlock on insert/select
Asked 2021-Dec-26 at 12:54
Ok, I'm totally lost on this deadlock issue. I just don't know how to solve it.
I have these three tables (I have removed the columns that are not important):
CREATE TABLE [dbo].[ManageServicesRequest]
(
    [ReferenceTransactionId] INT NOT NULL,
    [OrderDate] DATETIMEOFFSET(7) NOT NULL,
    [QueuePriority] INT NOT NULL,
    [Queued] DATETIMEOFFSET(7) NULL,
    CONSTRAINT [PK_ManageServicesRequest] PRIMARY KEY CLUSTERED ([ReferenceTransactionId])
)

CREATE TABLE [dbo].[ServiceChange]
(
    [ReferenceTransactionId] INT NOT NULL,
    [ServiceId] VARCHAR(50) NOT NULL,
    [ServiceStatus] CHAR(1) NOT NULL,
    [ValidFrom] DATETIMEOFFSET(7) NOT NULL,
    CONSTRAINT [PK_ServiceChange] PRIMARY KEY CLUSTERED ([ReferenceTransactionId],[ServiceId]),
    CONSTRAINT [FK_ServiceChange_ManageServiceRequest] FOREIGN KEY ([ReferenceTransactionId]) REFERENCES [ManageServicesRequest]([ReferenceTransactionId]) ON DELETE CASCADE,
    INDEX [IDX_ServiceChange_ManageServiceRequestId] ([ReferenceTransactionId]),
    INDEX [IDX_ServiceChange_ServiceId] ([ServiceId])
)

CREATE TABLE [dbo].[ServiceChangeParameter]
(
    [ReferenceTransactionId] INT NOT NULL,
    [ServiceId] VARCHAR(50) NOT NULL,
    [ParamCode] VARCHAR(50) NOT NULL,
    [ParamValue] VARCHAR(50) NOT NULL,
    [ParamValidFrom] DATETIMEOFFSET(7) NOT NULL,
    CONSTRAINT [PK_ServiceChangeParameter] PRIMARY KEY CLUSTERED ([ReferenceTransactionId],[ServiceId],[ParamCode]),
    CONSTRAINT [FK_ServiceChangeParameter_ServiceChange] FOREIGN KEY ([ReferenceTransactionId],[ServiceId]) REFERENCES [ServiceChange] ([ReferenceTransactionId],[ServiceId]) ON DELETE CASCADE,
    INDEX [IDX_ServiceChangeParameter_ManageServiceRequestId] ([ReferenceTransactionId]),
    INDEX [IDX_ServiceChangeParameter_ServiceId] ([ServiceId]),
    INDEX [IDX_ServiceChangeParameter_ParamCode] ([ParamCode])
)
And these two procedures:
CREATE PROCEDURE [dbo].[spCreateManageServicesRequest]
    @ReferenceTransactionId INT,
    @OrderDate DATETIMEOFFSET,
    @QueuePriority INT,
    @Services ServiceChangeUdt READONLY,
    @Parameters ServiceChangeParameterUdt READONLY
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        /* CREATE A NEW SERVICE CHANGE REQUEST */

        /* INSERT REQUEST */
        INSERT INTO [dbo].[ManageServicesRequest]
            ([ReferenceTransactionId]
            ,[OrderDate]
            ,[QueuePriority]
            ,[Queued])
        VALUES
            (@ReferenceTransactionId
            ,@OrderDate
            ,@QueuePriority
            ,NULL)

        /* INSERT SERVICES */
        INSERT INTO [dbo].[ServiceChange]
            ([ReferenceTransactionId]
            ,[ServiceId]
            ,[ServiceStatus]
            ,[ValidFrom])
        SELECT
            @ReferenceTransactionId AS [ReferenceTransactionId]
            ,[ServiceId]
            ,[ServiceStatus]
            ,[ValidFrom]
        FROM @Services AS [S]

        /* INSERT PARAMS */
        INSERT INTO [dbo].[ServiceChangeParameter]
            ([ReferenceTransactionId]
            ,[ServiceId]
            ,[ParamCode]
            ,[ParamValue]
            ,[ParamValidFrom])
        SELECT
            @ReferenceTransactionId AS [ReferenceTransactionId]
            ,[ServiceId]
            ,[ParamCode]
            ,[ParamValue]
            ,[ParamValidFrom]
        FROM @Parameters AS [P]

    END TRY
    BEGIN CATCH
        THROW
    END CATCH
END

CREATE PROCEDURE [dbo].[spGetManageServicesRequest]
    @ReferenceTransactionId INT
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        /* RETURN THE MANAGE SERVICES REQUEST BY ID */

        SELECT
            [MR].[ReferenceTransactionId],
            [MR].[OrderDate],
            [MR].[QueuePriority],
            [MR].[Queued],

            [SC].[ReferenceTransactionId],
            [SC].[ServiceId],
            [SC].[ServiceStatus],
            [SC].[ValidFrom],

            [SP].[ReferenceTransactionId],
            [SP].[ServiceId],
            [SP].[ParamCode],
            [SP].[ParamValue],
            [SP].[ParamValidFrom]

        FROM [dbo].[ManageServicesRequest] AS [MR]
            LEFT JOIN [dbo].[ServiceChange] AS [SC] ON [SC].[ReferenceTransactionId] = [MR].[ReferenceTransactionId]
            LEFT JOIN [dbo].[ServiceChangeParameter] AS [SP] ON [SP].[ReferenceTransactionId] = [SC].[ReferenceTransactionId] AND [SP].[ServiceId] = [SC].[ServiceId]
        WHERE [MR].[ReferenceTransactionId] = @ReferenceTransactionId

    END TRY
    BEGIN CATCH
        THROW
    END CATCH
END
Now these are used as follows (a simplified C# method that creates a record and then posts it to a microservice queue):
public async Task Consume(ConsumeContext<CreateCommand> context)
{
    using (var sql = sqlFactory.Cip)
    {
        /* SAVE REQUEST TO DATABASE */
        sql.StartTransaction(System.Data.IsolationLevel.Serializable); // <----- First transaction starts

        /* Create id */
        var transactionId = await GetNewId(context.Message.CorrelationId);

        /* Create manage services request */
        await sql.OrderingGateway.ManageServices.Create(transactionId, context.Message.ApiRequest.OrderDate, context.Message.ApiRequest.Priority, services);

        sql.Commit(); // <----- First transaction ends


        /// .... Some other stuff ...

        /* Fetch the same object you created in the first transaction */
        try
        {
            sql.StartTransaction(System.Data.IsolationLevel.Serializable);

            var request = await sql.OrderingGateway.ManageServices.Get(transactionId); // <----- HERE BE THE DEADLOCK

            request.Queued = DateTimeOffset.Now;
            await sql.OrderingGateway.ManageServices.Update(request);

            // ... Here is a posting to a microservice queue ...

            sql.Commit();
        }
        catch (Exception)
        {
            sql.RollBack();
        }

        /// .... Some other stuff ....
    }
}
Now, my problem is: why are these two procedures deadlocking? The first and the second transaction are never run in parallel for the same record.
Here is the deadlock detail:
<deadlock>
 <victim-list>
  <victimProcess id="process1dbfa86c4e8" />
 </victim-list>
 <process-list>
  <process id="process1dbfa86c4e8" taskpriority="0" logused="0" waitresource="KEY: 18:72057594046775296 (b42d8e559092)" waittime="2503" ownerId="33411557480" transactionname="user_transaction" lasttranstarted="2021-12-01T01:06:15.303" XDES="0x1ddd2df4420" lockMode="RangeS-S" schedulerid="20" kpid="23000" status="suspended" spid="55" sbid="2" ecid="0" priority="0" trancount="1" lastbatchstarted="2021-12-01T01:06:15.310" lastbatchcompleted="2021-12-01T01:06:15.300" lastattention="1900-01-01T00:00:00.300" clientapp="Core Microsoft SqlClient Data Provider" hostpid="11020" isolationlevel="serializable (4)" xactid="33411557480" currentdb="18" currentdbname="xxx" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
   <executionStack>
    <frame procname="xxx.dbo.spGetManageServicesRequest" line="10" stmtstart="356" stmtend="4256" sqlhandle="0x030012001374fc02f91433019aad000001000000000000000000000000000000000000000000000000000000"></frame>
   </executionStack>
  </process>
  <process id="process1dbfa1c1c28" taskpriority="0" logused="1232" waitresource="KEY: 18:72057594046971904 (ffffffffffff)" waittime="6275" ownerId="33411563398" transactionname="user_transaction" lasttranstarted="2021-12-01T01:06:16.450" XDES="0x3d4e842c420" lockMode="RangeI-N" schedulerid="31" kpid="36432" status="suspended" spid="419" sbid="2" ecid="0" priority="0" trancount="2" lastbatchstarted="2021-12-01T01:06:16.480" lastbatchcompleted="2021-12-01T01:06:16.463" lastattention="1900-01-01T00:00:00.463" clientapp="Core Microsoft SqlClient Data Provider" hostpid="11020" isolationlevel="serializable (4)" xactid="33411563398" currentdb="18" currentdbname="xxx" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
   <executionStack>
    <frame procname="xxx.dbo.spCreateManageServicesRequest" line="40" stmtstart="2592" stmtend="3226" sqlhandle="0x03001200f01ab84aeb1433019aad000001000000000000000000000000000000000000000000000000000000"></frame>
   </executionStack>
  </process>
 </process-list>
 <resource-list>
  <keylock hobtid="72057594046775296" dbid="18" objectname="xxx.dbo.ServiceChange" indexname="PK_ServiceChange" id="lock202ecfd0380" mode="X" associatedObjectId="72057594046775296">
   <owner-list>
    <owner id="process1dbfa1c1c28" mode="X" />
   </owner-list>
   <waiter-list>
    <waiter id="process1dbfa86c4e8" mode="RangeS-S" requestType="wait" />
   </waiter-list>
  </keylock>
  <keylock hobtid="72057594046971904" dbid="18" objectname="xxx.dbo.ServiceChangeParameter" indexname="PK_ServiceChangeParameter" id="lock27d3d371880" mode="RangeS-S" associatedObjectId="72057594046971904">
   <owner-list>
    <owner id="process1dbfa86c4e8" mode="RangeS-S" />
   </owner-list>
   <waiter-list>
    <waiter id="process1dbfa1c1c28" mode="RangeI-N" requestType="wait" />
   </waiter-list>
  </keylock>
 </resource-list>
</deadlock>
Why is this deadlock happening? How do I avoid it in the future?
Edit: Here is a plan for Get procedure: https://www.brentozar.com/pastetheplan/?id=B1UMMhaqF
Another edit: after GSerg's comment, I changed the line number in the deadlock graph from 65 to 40, to account for the removed columns that are not important to the question.
ANSWER
Answered 2021-Dec-26 at 12:54
You are better off avoiding the serializable isolation level. The way the serializable guarantee is provided is often deadlock-prone.
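One common way to move readers off serializable (my illustration, not part of the original answer, and the database name is a placeholder) is to enable row versioning, so that transactions running at READ COMMITTED read consistent row versions instead of taking the range locks serializable needs:

```sql
-- Switching requires no other active connections (or use WITH ROLLBACK IMMEDIATE).
-- The application must also stop requesting IsolationLevel.Serializable for this to help.
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON;
```

Whether this is acceptable depends on whether the code actually needs the serializable guarantees, which only the authors can judge.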
If you can't alter your stored procedures to use more targeted locking hints that guarantee the results you require at a lesser isolation level, then you can prevent this particular deadlock scenario by ensuring that all locks are taken out on ServiceChange first, before any are taken out on ServiceChangeParameter.
One way of doing this would be to introduce a table variable in spGetManageServicesRequest
and materialize the results of
204SELECT ...
205FROM [dbo].[ManageServicesRequest] AS [MR]
206 LEFT JOIN [dbo].[ServiceChange] AS [SC] ON [SC].[ReferenceTransactionId] = [MR].[ReferenceTransactionId]
207
to the table variable. Then join that against [dbo].[ServiceChangeParameter] to get your final results.
The phase separation introduced by the table variable ensures that the SELECT statement acquires its locks in the same object order as the insert does, and so prevents deadlocks where the SELECT statement already holds a lock on ServiceChangeParameter and is waiting to acquire one on ServiceChange (as in the deadlock graph here).
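A minimal sketch of that rewrite, assuming the procedure signature from the question (the @Stage name and column list are illustrative, derived from the table DDL):

```sql
/* Sketch only: split the read into two phases with a table variable so
   locks are taken in the same object order as the insert. */
DECLARE @Stage TABLE
(
    [ReferenceTransactionId] INT NOT NULL,
    [OrderDate]              DATETIMEOFFSET(7) NOT NULL,
    [QueuePriority]          INT NOT NULL,
    [Queued]                 DATETIMEOFFSET(7) NULL,
    [ServiceId]              VARCHAR(50) NULL,  -- NULLable: the LEFT JOIN may find no rows
    [ServiceStatus]          CHAR(1) NULL,
    [ValidFrom]              DATETIMEOFFSET(7) NULL
);

/* Phase 1: lock ManageServicesRequest, then ServiceChange */
INSERT INTO @Stage
SELECT [MR].[ReferenceTransactionId], [MR].[OrderDate], [MR].[QueuePriority], [MR].[Queued],
       [SC].[ServiceId], [SC].[ServiceStatus], [SC].[ValidFrom]
FROM [dbo].[ManageServicesRequest] AS [MR]
    LEFT JOIN [dbo].[ServiceChange] AS [SC]
        ON [SC].[ReferenceTransactionId] = [MR].[ReferenceTransactionId]
WHERE [MR].[ReferenceTransactionId] = @ReferenceTransactionId;

/* Phase 2: only now touch ServiceChangeParameter */
SELECT [S].*,
       [SP].[ParamCode], [SP].[ParamValue], [SP].[ParamValidFrom]
FROM @Stage AS [S]
    LEFT JOIN [dbo].[ServiceChangeParameter] AS [SP]
        ON  [SP].[ReferenceTransactionId] = [S].[ReferenceTransactionId]
        AND [SP].[ServiceId] = [S].[ServiceId];
```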
It may be instructive to look at the exact locks taken out by the SELECT running at the serializable isolation level. These can be seen with Extended Events or the undocumented trace flag 1200.
Currently your execution plan is as below.
For the following example data
INSERT INTO [dbo].[ManageServicesRequest]
VALUES (26410821, GETDATE(), 1, GETDATE()),
       (26410822, GETDATE(), 1, GETDATE()),
       (26410823, GETDATE(), 1, GETDATE());

INSERT INTO [dbo].[ServiceChange]
VALUES (26410821, 'X', 'X', GETDATE()),
       (26410822, 'X', 'X', GETDATE()),
       (26410823, 'X', 'X', GETDATE());

INSERT INTO [dbo].[ServiceChangeParameter]
VALUES (26410821, 'X', 'P1','P1', GETDATE()),
       (26410823, 'X', 'P1','P1', GETDATE());
The trace flag output (for WHERE [MR].[ReferenceTransactionId] = 26410822) is:
Process 51 acquiring IS lock on OBJECT: 7:1557580587:0 (class bit2000000 ref1) result: OK
Process 51 acquiring IS lock on OBJECT: 7:1509580416:0 (class bit2000000 ref1) result: OK
Process 51 acquiring IS lock on OBJECT: 7:1477580302:0 (class bit2000000 ref1) result: OK
Process 51 acquiring IS lock on PAGE: 7:1:600 (class bit2000000 ref0) result: OK
Process 51 acquiring S lock on KEY: 7:72057594044940288 (1b148afa48fb) (class bit2000000 ref0) result: OK
Process 51 acquiring IS lock on PAGE: 7:1:608 (class bit2000000 ref0) result: OK
Process 51 acquiring RangeS-S lock on KEY: 7:72057594045005824 (a69d56b089b6) (class bit2000000 ref0) result: OK
Process 51 acquiring IS lock on PAGE: 7:1:632 (class bit2000000 ref0) result: OK
Process 51 acquiring RangeS-S lock on KEY: 7:72057594045202432 (c37d1982c3c9) (class bit2000000 ref0) result: OK
Process 51 acquiring RangeS-S lock on KEY: 7:72057594045005824 (2ef5265f2b42) (class bit2000000 ref0) result: OK
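For reference, output like the above can be captured for the current session with something along these lines (both trace flags are undocumented; use them only for ad-hoc investigation, never in production):

```sql
DBCC TRACEON (3604);   -- route trace output to the client
DBCC TRACEON (1200);   -- report each lock acquisition

-- run the SELECT at the serializable isolation level here

DBCC TRACEOFF (1200);
DBCC TRACEOFF (3604);
```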
The order of locks taken is indicated in the image below. Range locks apply to the range of possible values from the given key value to the nearest key value below it (in key order - so above it in the image!).
First node 1 is called and it takes an S lock on the row in ManageServicesRequest, then node 2 is called and a RangeS-S lock is taken on a key in ServiceChange. The values from this row are then used to do the lookup in ServiceChangeParameter - in this case there are no matching rows for the predicate, but a RangeS-S lock is still taken out covering the range from the next highest key to the preceding one (the range (26410821, 'X', 'P1') ... (26410823, 'X', 'P1') in this case).
Then node 2 is called again to see if there are any more rows. Even when there aren't, an additional RangeS-S lock is taken on the next row in ServiceChange.
In the case of your deadlock graph it seems that the range being locked in ServiceChangeParameter is the range to infinity (denoted by ffffffffffff) - this will happen here when it does a lookup for a key value at or beyond the last key in the index.
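Another way to see these range locks for yourself (a sketch, using the example data above) is to hold the serializable transaction open and inspect sys.dm_tran_locks before rolling back:

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

EXEC [dbo].[spGetManageServicesRequest] @ReferenceTransactionId = 26410822;

-- KEY resources with request_mode = 'RangeS-S' are the ranges discussed above
SELECT [resource_type], [resource_description], [request_mode], [request_status]
FROM sys.dm_tran_locks
WHERE [request_session_id] = @@SPID;

ROLLBACK TRANSACTION;
```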
An alternative to the table variable might also be to change the query as below.
SELECT ...
FROM [dbo].[ManageServicesRequest] AS [MR]
    LEFT JOIN [dbo].[ServiceChange] AS [SC] ON [SC].[ReferenceTransactionId] = [MR].[ReferenceTransactionId]
    LEFT HASH JOIN [dbo].[ServiceChangeParameter] AS [SP] ON [SP].[ReferenceTransactionId] = [MR].[ReferenceTransactionId] AND [SP].[ServiceId] = [SC].[ServiceId]
WHERE [MR].[ReferenceTransactionId] = @ReferenceTransactionId
The final predicate on [dbo].[ServiceChangeParameter] is changed to reference [MR].[ReferenceTransactionId] instead of [SC].[ReferenceTransactionId], and an explicit hash join hint is added.
This gives a plan like the one below, where all of the locks on ServiceChange are taken during the hash table build phase, before any are taken on ServiceChangeParameter. Without the change to the ReferenceTransactionId condition the new plan had a scan, rather than a seek, on ServiceChangeParameter, which is why that change was made (it allows the optimiser to use the implied equality predicate on @ReferenceTransactionId).
QUESTION
Rewrite host and port for outgoing request of a pod in an Istio Mesh
Asked 2021-Nov-17 at 09:30I have to get existing microservices running. They are given as Docker images and talk to each other via configured hostnames and ports. I started using Istio to view and configure the outgoing calls of each microservice. Now I am at the point where I need to rewrite/redirect the host and port of a request that goes out of one container. How can I do that with Istio?
I will try to give a minimal example. There are two services, service-a and service-b.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b
spec:
  selector:
    matchLabels:
      run: service-b
  replicas: 1
  template:
    metadata:
      labels:
        run: service-b
    spec:
      containers:
      - name: service-b
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
apiVersion: v1
kind: Service
metadata:
  name: service-b
  labels:
    run: service-b
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
    name: service-b
  selector:
    run: service-b
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
spec:
  selector:
    matchLabels:
      run: service-a
  replicas: 1
  template:
    metadata:
      labels:
        run: service-a
    spec:
      containers:
      - name: service-a
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
apiVersion: v1
kind: Service
metadata:
  name: service-a
  labels:
    run: service-a
spec:
  ports:
  - port: 8081
    protocol: TCP
    targetPort: 80
    name: service-a
  selector:
    run: service-a
I can docker exec into service-a and successfully execute:
root@service-a-d44f55d8c-8cp8m:/# curl -v service-b:8080

< HTTP/1.1 200 OK
< server: envoy
Now, to simulate my problem, I want to reach service-b by using another hostname and port. I want to configure Istio so that this call will also work:
root@service-a-d44f55d8c-8cp8m:/# curl -v service-x:7777
Best regards, Christian
ANSWER
Answered 2021-Nov-16 at 10:56There are two solutions, depending on whether Istio features are needed. If no Istio features are planned, this can be solved using native Kubernetes alone. If some Istio features are intended to be used, it can be solved with an Istio virtual service. Below are the two options:
1. Native Kubernetes
service-x should point to the backend of the service-b deployment. Below is a Service whose selector points to the service-b deployment:
apiVersion: v1
kind: Service
metadata:
  name: service-x
  labels:
    run: service-x
spec:
  ports:
  - port: 7777
    protocol: TCP
    targetPort: 80
    name: service-x
  selector:
    run: service-b
This way the request will still go through Istio anyway, because sidecar containers are injected.
# curl -vI service-b:8080

* Trying xx.xx.xx.xx:8080...
* Connected to service-b (xx.xx.xx.xx) port 8080 (#0)
> Host: service-b:8080
< HTTP/1.1 200 OK
< server: envoy
and
# curl -vI service-x:7777

* Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 200 OK
< server: envoy
2. Istio virtual service
In this example a virtual service is used. The Service service-x still needs to be created, but this time we don't specify any selector:
apiVersion: v1
kind: Service
metadata:
  name: service-x
  labels:
    run: service-x
spec:
  ports:
  - port: 7777
    protocol: TCP
    targetPort: 80
    name: service-x
Test it from another pod:
# curl -vI service-x:7777

* Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 503 Service Unavailable
< server: envoy
This returns a 503 error, which is expected since the Service has no endpoints yet. Now create a virtual service which will route requests to service-b on port 8080:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-x-to-b
spec:
  hosts:
  - service-x
  http:
  - route:
    - destination:
        host: service-b
        port:
          number: 8080
Testing from the pod:
# curl -vI service-x:7777

* Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 200 OK
< server: envoy
QUESTION
Checking list of conditions on API data
Asked 2021-Aug-31 at 00:23I am using an API which sends some data about products every second. On the other hand, I have a list of user-created conditions, and I want to check whether any incoming data matches any of the conditions. If so, I want to notify the user.
For example, a user condition may look like this: price < 30000 and productName = 'chairNumber2'
and the data would be something like this:
{'data':[{'name':'chair1','price':'20000','color':'blue'},{'name':'chairNumber2','price':'45500','color':'green'},{'name':'chairNumber2','price':'27000','color':'blue'}]}
I am using a microservice architecture, and when a condition is validated I send a message over RabbitMQ to my notification service.
I have tried the naïve solution (every second, check every condition, and if any data meets a condition, pass that data on to my other service), but this takes a lot of RAM and time (the time complexity is n*m, n being the number of conditions and m the number of data items), so I am looking for a better approach.
ANSWER
Answered 2021-Aug-31 at 00:23It's an interesting problem. I have to confess I don't really know how I would do it - it depends a lot on exactly how fast the processing needs to occur, and on a lot of other factors not mentioned - such as: what constraints do you have in terms of your technology stack, is it on-premise or in the cloud, must the solution be coded by you/your team or can you buy some $$ tool? For future reference, for architecture questions especially, any context you can provide is really helpful - e.g. constraints.
I did think of Pub-Sub, which may offer patterns you can use, but you really just need a simple implementation that will work within your code base, AND very importantly you only have one consuming client, the RabbitMQ queue - it's not like you have X number of random clients wanting the data. So an off-the-shelf Pub-Sub solution might not be a good fit.
Assuming you want a "home-grown" solution, this is what has come to mind so far:
("flow" connectors show data flow, which could be interpreted as a 'push'; whereas the other lines are UML "dependency" lines; e.g. the match engine depends on data held in the batch, but it's agnostic as to how that happens).
- The external data source is where the data is coming from. I had not made any assumptions about how that works or what control you have over it.
- Interface, all this does is take the raw data and put it into batches that can be processed later by the Match Engine. How the interface works depends on how you want to balance (a) the data coming in, and (b) what you know the match engine expects.
- Batches are thrown into a batch queue. Its job is to ensure that no data is lost before it's processed, and that processing can be managed (order of batch processing, resilience, etc.).
- Match engine, works fast on the assumption that the size of each batch is a manageable number of records/changes. Its job is to take changes, ask who's interested in them, and return the results to RabbitMQ. So its inputs are just the batches and the user & user matching rules (more on that later). How this actually works I'm not sure; worst case it iterates through each rule seeing who has a match - what you're doing now, but...
Key point: the queue would also allow you to scale out the number of match engine instances - but I don't know what effect that has downstream on RabbitMQ and its downstream consumers (the order in which the updates would arrive, etc).
What's not shown: caching. The match engine needs to know what the matching rules are, and which users those rules relate to. The fastest way to do that look-up is probably in memory, not a database read (unless you can be smart about how that happens), which brings me to this addition:
- Data Source is wherever the user data, and user matching rules, are kept. I have assumed they are external to "Your Solution" but it doesn't matter.
- Cache is something that holds the user matches (rules) & user data. Its sole job is to hold these in a way that is optimized for the Match Engine to work fast. You could logically say it is part of the match engine, or separate. How you approach this might be determined by whether or not you intend to scale out the match engine.
- Data Provider is simply the component whose job it is to fetch user & rule data and make it available for caching.
So, the Rule engine, cache and data provider could all be separate components, or logically parts of the one component / microservice.
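The match engine's look-up can be sketched in Go. This is only a minimal illustration under assumptions of mine, not the author's design: the names (Rule, Product, MatchEngine) are hypothetical, and it supports just one rule shape (name equality plus a price ceiling). It does show, though, how indexing rules by product name turns the n*m scan into a per-record look-up:

```go
package main

import "fmt"

// Rule is a hypothetical user-created condition: match a product by
// name with a price below a threshold.
type Rule struct {
	UserID      int
	ProductName string
	MaxPrice    int
}

// Product is one record from an incoming data batch.
type Product struct {
	Name  string
	Price int
}

// MatchEngine indexes rules by product name, so each incoming record
// only checks the rules that could possibly match it, instead of all
// n rules (the naive n*m scan from the question).
type MatchEngine struct {
	rulesByName map[string][]Rule
}

func NewMatchEngine(rules []Rule) *MatchEngine {
	idx := make(map[string][]Rule)
	for _, r := range rules {
		idx[r.ProductName] = append(idx[r.ProductName], r)
	}
	return &MatchEngine{rulesByName: idx}
}

// Match returns the user IDs whose rules are satisfied by the batch.
// In the architecture above, these would be forwarded to RabbitMQ.
func (e *MatchEngine) Match(batch []Product) []int {
	var hits []int
	for _, p := range batch {
		for _, r := range e.rulesByName[p.Name] {
			if p.Price < r.MaxPrice {
				hits = append(hits, r.UserID)
			}
		}
	}
	return hits
}

func main() {
	engine := NewMatchEngine([]Rule{
		{UserID: 1, ProductName: "chairNumber2", MaxPrice: 30000},
	})
	batch := []Product{
		{Name: "chair1", Price: 20000},
		{Name: "chairNumber2", Price: 45500},
		{Name: "chairNumber2", Price: 27000},
	}
	fmt.Println(engine.Match(batch)) // prints "[1]": only the 27000 chairNumber2 matches
}
```

More complex rule shapes would need a richer index (or a proper rules engine), but the same idea applies: pre-organize the rules so a batch record only touches candidate rules.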
QUESTION
Traefik v2 reverse proxy without Docker
Asked 2021-Jul-14 at 10:26I have a dead simple Golang microservice (no Docker, just a simple binary) which returns a simple message on a GET request.
curl -XGET 'http://localhost:36001/api/operability/list'

{"message": "ping 123"}
Now I want to set up a reverse proxy via Traefik v2, so I've made the configuration file "traefik.toml":
[global]
  checkNewVersion = false
  sendAnonymousUsage = false

[entryPoints]
  [entryPoints.web]
    address = ":8090"

  [entryPoints.traefik]
    address = ":8091"

[log]
  level = "DEBUG"
  filePath = "logs/traefik.log"
[accessLog]
  filePath = "logs/access.log"

[api]
  insecure = true
  dashboard = true

[providers]
  [providers.file]
    filename = "traefik.toml"

# dynamic conf
[http]
  [http.routers]
    [http.routers.my-router]
      rule = "Path(`/proxy`)"
      service = "my-service"
      entryPoints = ["web"]
  [http.services]
    [http.services.my-service.loadBalancer]
      [[http.services.my-service.loadBalancer.servers]]
        url = "http://localhost:36001"
Starting Traefik (I'm using the binary distribution):
traefik --configFile=traefik.toml
Now the dashboard on port 8091 works like a charm, but I struggle with the reverse proxy request. I suppose it should look like this (based on my configuration file):
curl -XGET 'http://localhost:8090/proxy/api/operability/list'
But all I get is:
404 page not found
The question is: is there a mistake in the configuration file, or is it just a request typo?
edit: My configuration file is based on answers to these questions:
edit #2: Traefik version info:
traefik version
Version:      2.4.9
Codename:     livarot
Go version:   go1.16.5
Built:        2021-06-21T16:17:58Z
OS/Arch:      windows/amd64
ANSWER
Answered 2021-Jul-14 at 10:26I've managed to find the answer.
- I wrongly assumed that Traefik would take /proxy and simply redirect all requests to /api/*. The official docs (https://doc.traefik.io/traefik/routing/routers/) say the following (quoting):
Use Path if your service listens on the exact path only. For instance, Path: /products would match /products but not /products/shoes.
Use a Prefix matcher if your service listens on a particular base path but also serves requests on sub-paths. For instance, PathPrefix: /products would match /products but also /products/shoes and /products/shirts. Since the path is forwarded as-is, your service is expected to listen on /products.
- I did not use any middleware for replacing a substring of the path.
Now the answer, as an example.
First of all: the code for the microservice, in the main.go file:
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "{\"message\": \"ping 123\"}")
}

func main() {
	http.HandleFunc("/operability/list", handler)
	log.Fatal(http.ListenAndServe(":36001", nil))
}
Now, the configuration file for Traefik v2, in the config.toml file:
[global]
  checkNewVersion = false
  sendAnonymousUsage = false

[entryPoints]
  [entryPoints.web]
    address = ":36000"

  [entryPoints.traefik]
    address = ":8091"

[log]
  level = "DEBUG"
  filePath = "logs/traefik.log"
[accessLog]
  filePath = "logs/access.log"

[api]
  insecure = true
  dashboard = true

[providers]
  [providers.file]
    debugLogGeneratedTemplate = true
    # Point this same file for dynamic configuration
    filename = "config.toml"
    watch = true

[http]
  [http.middlewares]
    [http.middlewares.test-replacepathregex.replacePathRegex]
      # We need middleware to replace all "/proxy/" with "/api/"
      regex = "(?:^|\\W)proxy(?:$|\\W)"
      replacement = "/api/"

  [http.routers]
    [http.routers.my-router]
      # We need to handle all requests with paths defined as "/proxy/*"
      rule = "PathPrefix(`/proxy/`)"
      service = "my-service"
      entryPoints = ["web"]
      # Use the defined middleware for path replacement
      middlewares = ["test-replacepathregex"]

  [http.services]
    [http.services.my-service.loadBalancer]
      [[http.services.my-service.loadBalancer.servers]]
        url = "http://localhost:36001/"
Start the microservice:
go run main.go
Start Traefik:
traefik --configFile config.toml
Now check that the microservice works correctly:
curl -XGET 'http://localhost:36001/api/operability/list'
{"message": "ping 123"}
And check that Traefik v2 does its job as well:
curl -XGET 'http://localhost:36000/proxy/operability/list'
{"message": "ping 123"}
Community Discussions contain sources that include Stack Exchange Network