micro-service | a simple microservice framework | Microservice library
kandi X-RAY | micro-service Summary
micro-service: a micro service framework / a simple Node.js microservice framework.
Community Discussions
Trending Discussions on micro-service
QUESTION
I'm working on a Golang micro-service which uses Java-based Cucumber tests for BDDs.
There is a date variable inside the schema and it is defined as:
...ANSWER
Answered 2021-Jun-11 at 11:49

The Go code you provided will not affect how the Time instance is serialized, since you are parsing it back into a Time after serializing it to a string.
If you have control over how your date fields are serialized, you can apply the following format, which should be aligned with what you provided to Jackson's ObjectMapper:
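The actual snippet is not included in this excerpt. As a sketch, assuming the usual ISO-8601 layout with millisecond precision and a zone offset (the exact pattern in the original answer may differ), java.time can produce the same string shape on the Java side:

```java
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class DateFormatSketch {
    public static void main(String[] args) {
        // Hypothetical pattern: ISO-8601 with millisecond precision and a
        // zone offset, a common Jackson-compatible choice.
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSXXX");
        OffsetDateTime t = OffsetDateTime.of(2021, 6, 11, 11, 49, 0, 0, ZoneOffset.UTC);
        System.out.println(fmt.format(t)); // prints 2021-06-11T11:49:00.000Z
    }
}
```

The same pattern string can then be handed to Jackson's date formatting on the service that produces the JSON, so both sides agree on the wire format.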
QUESTION
We are developing a micro-service system that uses ActiveMQ Artemis as the communication method between services. Since the requirements ask us to be able to stop the listeners at runtime, we cannot use the @JmsListener provided by spring-artemis. After digging through the internet and finding out that Spring uses a MessageListenerContainer behind the scenes, we came up with the idea of maintaining a list of MessageListenerContainer instances ourselves.
ANSWER
Answered 2021-Jun-01 at 20:15

By default the broker will auto-create addresses and queues as required when a message is sent or a consumer is created by the core JMS client. These resources will also be auto-deleted by default when they're no longer needed (i.e. when a queue has no consumers and messages, or when an address no longer has any queues bound to it). This is controlled by these settings in broker.xml, which are discussed in the documentation:
auto-create-queues
auto-delete-queues
auto-create-addresses
auto-delete-addresses
To be clear, auto-deletion should not cause any message loss by default as queues should only be deleted when they have 0 consumers and 0 messages. However, you can always set auto-deletion to false
to be 100% safe.
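Concretely, the four settings above live under an address-setting element in broker.xml. A minimal sketch, assuming the match pattern "#" (i.e. all addresses) for illustration:

```xml
<!-- broker.xml sketch: disable auto-deletion while keeping auto-creation -->
<address-settings>
   <address-setting match="#">
      <auto-create-queues>true</auto-create-queues>
      <auto-delete-queues>false</auto-delete-queues>
      <auto-create-addresses>true</auto-create-addresses>
      <auto-delete-addresses>false</auto-delete-addresses>
   </address-setting>
</address-settings>
```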
Queues representing durable JMS topic subscriptions won't be deleted as they are meant to stay and gather messages while the consumer is offline. In other words, a durable topic subscription will remain if the client using the subscription is shutdown without first explicitly removing the subscription. That's the whole point of durable subscriptions - they are durable. Any client can use a durable topic subscription if it connects with the same client ID and uses the same subscription name. However, unless the durable subscription is a "shared" durable subscription then only one client at a time can be connected to it. Shared durable topic subscriptions were added in JMS 2.0.
QUESTION
I've been implementing feature file tests for a microservice in Golang using Godog.
There are 54 steps in my feature file and I generated the step definitions for all of them.
When I run the tests using the go test command, the first 22 scenarios pass and the 23rd is declared as Undefined, even though its definition is present.
My Console output after any test:
...ANSWER
Answered 2021-May-25 at 06:51

It turns out Godog doesn't trim the steps from the feature files.
So, in my feature files, the steps for the above lines included:
When reading resource of id "ID1"_____(blank space)
The regex pattern therefore wasn't able to map the step definitions to these lines.
As soon as I removed the empty spaces, it worked great.
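To illustrate (the step text here is hypothetical), the difference is invisible in most editors but fatal to the regex match:

```gherkin
# Broken: a trailing space after the closing quote prevents the match.
When reading resource of id "ID1" 
# Fixed: no trailing whitespace.
When reading resource of id "ID1"
```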
QUESTION
I work on a micro-service project using Spring Boot WebFlux, and here are some of the services:
- baseInfoService
- notificationService
- accountService
- orderService
- performService
I'm implementing a service in orderService which would have this flow:
...ANSWER
Answered 2021-May-23 at 12:49

First, if you use Reactor you shouldn't call blocking APIs, as you did in the save method. When you use WebFlux, you have a small number of threads, and if you block those threads your application performance will be very poor. I suggest using a reactive database driver instead.
1. You shouldn't use a plain object in the controller, because you would have to block the thread to get the object itself. In Reactor, you must not call blocking operations. I also suggest using BlockHound if you are not sure about what is blocking; it will throw an exception during tests if blocking methods are called.
2. In a reactive stream, you have to use the reactive operators, like map, flatMap, etc., to do operations on your objects.
For example, suppose you want to go through a list of objects, load some data from the web for each, and save those into the database. (Note that here I'm going to use a mock DB and web service, but you can change those into real services; the essence of the example is the processor. Also, I use Kotlin here, which is similar to Java.)
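The answer's Kotlin/Reactor code is not included in this excerpt. Project Reactor is not part of the JDK, so as a stdlib-only analogy of the same idea (map transforms each element, flatMap chains the next asynchronous stage), here is a plain-Java CompletableFuture sketch; loadFromWeb and saveToDb are hypothetical stand-ins for the web call and the reactive save:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class PipelineSketch {
    // Hypothetical stand-in for the remote web call.
    static CompletableFuture<String> loadFromWeb(String id) {
        return CompletableFuture.supplyAsync(() -> id + ":payload");
    }
    // Hypothetical stand-in for the non-blocking database save.
    static CompletableFuture<String> saveToDb(String enriched) {
        return CompletableFuture.supplyAsync(() -> "saved " + enriched);
    }

    public static void main(String[] args) {
        List<String> ids = List.of("a", "b", "c");
        // map: start one async pipeline per element;
        // thenCompose (Reactor's flatMap analogue): chain the next async stage
        // without blocking any thread per element.
        List<CompletableFuture<String>> results = ids.stream()
                .map(id -> loadFromWeb(id).thenCompose(PipelineSketch::saveToDb))
                .collect(Collectors.toList());
        // Wait once for the whole batch, not per element.
        CompletableFuture.allOf(results.toArray(new CompletableFuture[0])).join();
        results.forEach(f -> System.out.println(f.join()));
    }
}
```

In real WebFlux code the same shape is expressed as Flux.fromIterable(ids).flatMap(...), with the framework subscribing instead of the final join().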
QUESTION
I have upgraded some micro-services that talk to each other from Spring Boot 1.5.3 to 2.3.5. Now when my micro-service A calls micro-service B, the call fails with the following status on the network tab of Chrome's developer tools: (blocked:mixed-content)
I am not sure what changed to cause this error.
In browser's console I get the below error:
...ANSWER
Answered 2021-Apr-21 at 07:11

Understood the issue and found the solution.
It looks like the security hooks used in Spring Boot 1 are deprecated in Spring Boot 2, so in my micro-service B the below config in the properties file wasn't working after the upgrade.
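The answer cuts off before showing the config. One plausible candidate, assuming the service sits behind a TLS-terminating proxy (an assumption about this particular setup), is the forwarded-header property: Spring Boot 1.x's server.use-forward-headers was superseded in Spring Boot 2.2+ by server.forward-headers-strategy, so without the new property the service generates http:// URLs and the browser blocks them as mixed content:

```properties
# Spring Boot 1.x style (no longer effective after the upgrade):
server.use-forward-headers=true

# Spring Boot 2.2+ replacement, so X-Forwarded-Proto is honoured again:
server.forward-headers-strategy=framework
```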
QUESTION
We have noticed excessive logging from the TrackingEventProcessor
class when scaling up the micro-service to 2 replicas:
Our Axon setup:
- Axon version 3.4.3
- Spring Boot version 2.1.6.RELEASE
- an in-house Couchbase implementation as the TokenStore, i.e. CouchBaseTokenStore
- PostgreSQL v11.7 as the event store
- segment count for each tracking event processor = 1
- tracking event processors are configured forSingleThreadedProcessing
We are seeing the following messages a lot:
...ANSWER
Answered 2021-Apr-15 at 19:54

I managed to fix the problem with the help of Allard (see comments on the question). The fix was to also persist the token after it has been claimed in the fetch() method. We also started making use of the replace() method supplied by the Couchbase SDK instead of the upsert() method, to better harness CAS (Compare-and-Swap) optimistic concurrency:
QUESTION
I am new to the concept of messaging brokers such as RabbitMQ and wanted to learn some best practices.
RabbitMQ seems to be a great way to facilitate asynchronous communication between micro-services, however, I have a beginners question that I could not find an answer to anywhere else.
When would one NOT use a message broker such as RabbitMQ in a micro-services architecture?
As an example:
Let's say I have two services. Service A and Service B (auth service)
The client makes a request to service A which in turn must communicate with service B (auth service) to authenticate the user and authorize the request. (using Basic Auth)
...ANSWER
Answered 2021-Apr-09 at 12:19

What you are describing is essentially HTTP. HTTP is synchronous, which means you have to wait for a response. The solution to this is AMQP, as you mention; with AMQP you don't necessarily need to wait (you can configure it).
It's not necessarily a bad idea, but what most microservices depend on is something called eventual consistency. As this would be a quite long answer with a lot of ifs, I would suggest taking a look into Microservices Architecture, in particular the part about HTTP vs. AMQP, since this is mostly a question of synchronous vs. asynchronous communication. It goes into great detail about different approaches to microservices design, listing pros and cons for your specific question and others.
For example, in your case the auth would happen at the API gateway, as it's not considered best practice to leave the microservices open to all client applications.
QUESTION
data "azurerm_api_management_api" "example" {
api_name = "my-api"
api_management_name = "example-apim"
resource_group_name = "search-service"
}
resource "azurerm_api_management_api_policy" "example" {
api_name = data.azurerm_api_management_api.example.name
api_management_name = data.azurerm_api_management_api.example.api_management_name
resource_group_name = data.azurerm_api_management_api.example.resource_group_name
xml_content = <<XML
XML
}
...ANSWER
Answered 2021-Apr-05 at 18:53

Found a way: there is something called azurerm_api_management_api_operation_policy.
The operation id is something you can get from the api-spec file; it uniquely identifies individual API operations.
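A sketch of that resource, reusing the data source from the question; the operation_id value and the policy body are hypothetical placeholders, and the real operation_id must match an operation defined in the API spec:

```hcl
resource "azurerm_api_management_api_operation_policy" "example" {
  api_name            = data.azurerm_api_management_api.example.name
  api_management_name = data.azurerm_api_management_api.example.api_management_name
  resource_group_name = data.azurerm_api_management_api.example.resource_group_name
  operation_id        = "get-search-results" # hypothetical; taken from the api-spec file

  xml_content = <<XML
<policies>
  <inbound>
    <base />
  </inbound>
</policies>
XML
}
```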
QUESTION
I am using a Java micro-service architecture in my application and generating separate log files for each micro-service.
I am using the ELK stack approach to visualize the logs in Kibana, but the problem is that the fields I'm getting from Elasticsearch are only the ones related to server logs. Some example fields are @timestamp, @version, @path, @version.keyword, @host.
I want to customize these fields by adding fields like customerId, txn-Id, and mobile number so that we can analyze the data easily.
I'm using org.apache.logging.log4j2 to write the logs. Can I set the above fields (customerId, txn-Id, mobile) in the log files? Would Elasticsearch then store these fields along with the default fields above, and would the custom fields be available in a Kibana dashboard? Is this possible?
...ANSWER
Answered 2021-Mar-30 at 09:27

It's definitely possible to do that. I've not done it with the log4j2 stack (I have with slf4j/logback), but the basic approach is:
- set those fields in the Mapped Diagnostic Context (I'm fairly sure log4j2 supports that)
- use a log appender which logs to logstash-structured JSON
- configure filebeat to ship the JSON logs
- if filebeat is shipping to logstash, you'll need to configure logstash to pass those preformatted JSON logs directly to elasticsearch
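As a sketch of the first two steps, assuming log4j2's JsonLayout (the file path and appender name are hypothetical): setting properties="true" includes the ThreadContext (MDC) map in each JSON event, so values added with ThreadContext.put("customerId", ...) in the application code land in the JSON that filebeat ships:

```xml
<!-- log4j2.xml sketch; file path and appender name are assumptions -->
<Configuration>
  <Appenders>
    <File name="JsonFile" fileName="logs/order-service.json">
      <!-- properties="true" adds the ThreadContext (MDC) map, i.e. any
           customerId / txn-Id / mobile values set via ThreadContext.put() -->
      <JsonLayout compact="true" eventEol="true" properties="true"/>
    </File>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="JsonFile"/>
    </Root>
  </Loggers>
</Configuration>
```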
QUESTION
I'm new to Spring Boot and have a problem where I need to consume 2 remote REST services and merge the results. I would need some insight on the right approach. I have something like this:
...ANSWER
Answered 2021-Mar-30 at 08:50

I assume the following from the information you provide:
- You have two data types (Java classes). They should be merged together into one Java class
- You have to load this data from different sources
- None of the classes is leading
I can provide you some example code. The code is based on the previous assumptions. This will give you an idea; it's not a simple copy-and-paste solution.
At first, create a class with all the fields you want to include in the result:
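The answer's code itself is not included in this excerpt. As a sketch under the assumptions above (all class and field names are hypothetical), the combined class and a merge step might look like:

```java
// Hypothetical source types standing in for the two remote-service responses.
class CustomerInfo {
    final String name;
    CustomerInfo(String name) { this.name = name; }
}

class AccountInfo {
    final int balance;
    AccountInfo(int balance) { this.balance = balance; }
}

// Result class containing all fields to include in the merged output.
class CustomerAccount {
    final String name;
    final int balance;
    CustomerAccount(String name, int balance) {
        this.name = name;
        this.balance = balance;
    }
    // Merge one object of each source type into the combined result.
    static CustomerAccount merge(CustomerInfo c, AccountInfo a) {
        return new CustomerAccount(c.name, a.balance);
    }
}

public class MergeSketch {
    public static void main(String[] args) {
        CustomerAccount m = CustomerAccount.merge(new CustomerInfo("Ada"), new AccountInfo(42));
        System.out.println(m.name + " " + m.balance); // prints Ada 42
    }
}
```

In the real service, each source object would come from its own REST call; since neither class is leading, the static merge method keeps the combination logic in the result type rather than in either source type.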
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported