kandi X-RAY | Micro-Services Summary
Micro-Services Examples and Code Snippets
@Override
public boolean shouldFilter() {
    return RequestContext.getCurrentContext().getRequest().getRequestURI().endsWith(Swagger2Controller.DEFAULT_URL);
}
Community Discussions
Trending Discussions on Micro-Services
QUESTION
I am new to the concept of messaging brokers such as RabbitMQ and wanted to learn some best practices.
RabbitMQ seems to be a great way to facilitate asynchronous communication between micro-services, however, I have a beginners question that I could not find an answer to anywhere else.
When would one NOT use a message broker such as RabbitMQ in a micro-services architecture?
As an example:
Let's say I have two services. Service A and Service B (auth service)
The client makes a request to service A which in turn must communicate with service B (auth service) to authenticate the user and authorize the request. (using Basic Auth)
...ANSWER
Answered 2021-Apr-09 at 12:19 Well, actually, what you are describing is mostly plain HTTP.
HTTP is synchronous, which means that you have to wait for a response. The solution to this is AMQP, as you mention: with AMQP you don't necessarily need to wait (you can configure it).
It's not necessarily a bad idea, but what most microservices rely on is something called eventual consistency. As this would be a quite long answer with a lot of ifs, I would suggest taking a look into Microservices Architecture.
For example, there is a part about HTTP vs AMQP, since this is mostly a question about synchronous vs asynchronous communication. It goes into great detail about different approaches to microservices design, listing pros and cons for your specific question and others.
For example, in your case the auth would happen at the API gateway, as it is not considered best practice to leave the microservices open to all the client applications.
QUESTION
data "azurerm_api_management_api" "example" {
  api_name            = "my-api"
  api_management_name = "example-apim"
  resource_group_name = "search-service"
}

resource "azurerm_api_management_api_policy" "example" {
  api_name            = data.azurerm_api_management_api.example.name
  api_management_name = data.azurerm_api_management_api.example.api_management_name
  resource_group_name = data.azurerm_api_management_api.example.resource_group_name
  xml_content         = <<XML
XML
}
...ANSWER
Answered 2021-Apr-05 at 18:53 Found a way: there is a resource called azurerm_api_management_api_operation_policy.
The operation id is something you can get from the API spec file; it uniquely identifies the individual API operations.
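As a sketch, the operation-level policy resource might look like this. The operation_id value and the policy XML below are illustrative assumptions, not taken from the question; the data source references reuse the block above.

```hcl
resource "azurerm_api_management_api_operation_policy" "example" {
  api_name            = data.azurerm_api_management_api.example.name
  api_management_name = data.azurerm_api_management_api.example.api_management_name
  resource_group_name = data.azurerm_api_management_api.example.resource_group_name
  operation_id        = "acctest-operation" # hypothetical; look this up in your API spec file
  xml_content         = <<XML
<policies>
  <inbound>
    <find-and-replace from="xyz" to="abc" />
  </inbound>
</policies>
XML
}
```

Unlike azurerm_api_management_api_policy, which applies to the whole API, this resource scopes the policy to a single operation.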
QUESTION
I'm new to Spring Boot and have a problem where I need to consume 2 remote REST services and merge the results. I would need some insight on the right approach. I have something like this:
...ANSWER
Answered 2021-Mar-30 at 08:50 I assume the following from the information you provide:
- You have two data types (Java classes) that should be merged into one Java class
- You have to load this data from different sources
- Neither of the classes is the leading one
I can provide you some example code based on the previous assumptions. This will give you an idea; it's not a simple copy-and-paste solution.
First, create a class with all the fields you want to include in the result:
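A minimal sketch of that result class, assuming the two remote services return a user profile and an account summary (all class and field names here are hypothetical, not from the question):

```java
// DTO returned by the first remote service (hypothetical).
class UserInfo {
    final String name;
    UserInfo(String name) { this.name = name; }
}

// DTO returned by the second remote service (hypothetical).
class AccountInfo {
    final int balance;
    AccountInfo(int balance) { this.balance = balance; }
}

// The merged result class: one field set combining both sources.
class UserAccountView {
    final String name;
    final int balance;
    UserAccountView(UserInfo user, AccountInfo account) {
        this.name = user.name;
        this.balance = account.balance;
    }
}
```

In a reactive Spring Boot app you could then fetch both DTOs with WebClient and combine them with Mono.zip, mapping the tuple into the merged class once both responses arrive.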
QUESTION
I have configured micro-services infrastructure on AWS ECS.
I would like to know how we can configure notifications for each successful/failure deployment.
I wish to receive the successful notification in the following scenario.
- When the task definition is updated, the old container should be stopped and the new container should be up and running.
- If the new task definition fails and we still have the old container running, then we should receive a failure notification.
Please let me know what flexible options we have in AWS, keeping cost in mind.
...ANSWER
Answered 2021-Mar-09 at 10:26 Amazon ECS has recently introduced a new feature dubbed "deployment circuit breaker" that covers that exact scenario. I would suggest you go through this blog post, which has some insights into how it works, including the event notifications.
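The circuit breaker is enabled per service in the deployment configuration; a minimal sketch of the relevant JSON fragment for a CreateService/UpdateService call looks like this (everything else about the service definition is assumed):

```json
{
  "deploymentConfiguration": {
    "deploymentCircuitBreaker": {
      "enable": true,
      "rollback": true
    }
  }
}
```

With rollback set to true, a deployment that trips the breaker is automatically rolled back to the last completed deployment, and ECS emits deployment state-change events you can route to notifications.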
QUESTION
In modern systems, especially micro-services, connection pooling for HTTP clients is quite often deployed. But with QUIC + TLS 1.3, it seems to me that connection pooling would be useless, as QUIC supports 0-RTT.
Is any QUIC expert available to share more on this topic?
...ANSWER
Answered 2021-Feb-21 at 21:41It's still valid, because:
- a 0-RTT request is more computationally expensive on both the client and the server side than just reusing the connection, since all private-key crypto operations and certificate checks still apply
- 0-RTT requests can introduce security issues due to providing a chance for replay attacks (see https://tools.ietf.org/html/draft-ietf-quic-tls-34#section-9.2). Without using the 0-RTT feature a QUIC handshake still requires 1-RTT.
However, since QUIC already supports multiplexing multiple requests as streams on a single connection, the client should not be required to keep a full pool of connections around. A single connection is typically sufficient, as long as the server advertises support for a high enough number of streams.
QUESTION
I have three micro-services
- gateway (Spring Cloud Gateway)
- security
- insurance
All requests are made to the gateway service on port 8080, which redirects them to the specific service.
...ANSWER
Answered 2021-Jan-24 at 00:20Add the following property on the Auth Server:
QUESTION
We are using Kafka as the messaging system between micro-services. We have a Kafka consumer listening to a particular topic and then publishing the data into another topic, to be picked up by a Kafka connector that is responsible for publishing it into some data storage.
We are using Apache Avro as serialization mechanism.
We need to enable the DLQ to add the fault tolerance to the Kafka Consumer and the Kafka Connector.
Any message could move to DLQ due to multiple reasons:
- Bad format
- Bad Data
- Throttling due to a high volume of messages, so some messages could move to the DLQ
- Publish to Data store failed due to connectivity.
For the 3rd and 4th points above, we would like to retry the message again from the DLQ.
What is the best practice for this? Please advise.
...ANSWER
Answered 2021-Jan-18 at 08:13Only push to DLQ records that cause non-retryable errors, that is: point 1 (bad format) and point 2 (bad data) in your example. For the format of the DLQ records, a good approach is to:
- push to DLQ the exact same kafka record value and key as the original one, do not wrap it inside any kind of envelope. This makes it much easier to reprocess with other tools during troubleshooting (e.g. with a new version of a deserializer or so).
- add a bunch of Kafka headers to communicate metadata about the error; a few typical examples would be:
- original topic name, partition, offset and Kafka timestamp of this record
- exception or error message
- name and version of the application that failed to process that record
- time of the error
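The header list above can be sketched as key/value metadata attached to the DLQ record. The header names below are illustrative, not any standard; in a real producer these entries would become Kafka record headers, but a plain Map keeps the sketch dependency-free:

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// Build the error-metadata headers for a DLQ record.
// In a real producer these would be attached as Kafka record headers
// (org.apache.kafka.common.header.Header) on the ProducerRecord.
class DlqHeaders {
    static Map<String, String> build(String topic, int partition, long offset,
                                     long recordTimestamp, Exception error,
                                     String appName, String appVersion) {
        Map<String, String> h = new HashMap<>();
        h.put("dlq.original.topic", topic);                          // where the record came from
        h.put("dlq.original.partition", Integer.toString(partition));
        h.put("dlq.original.offset", Long.toString(offset));
        h.put("dlq.original.timestamp", Long.toString(recordTimestamp));
        h.put("dlq.error.message", String.valueOf(error.getMessage()));
        h.put("dlq.app.name", appName);                              // which app failed to process it
        h.put("dlq.app.version", appVersion);
        h.put("dlq.error.time", Instant.now().toString());           // when the failure happened
        return h;
    }
}
```

The record key and value themselves are forwarded unchanged, as the answer recommends; only the headers carry the troubleshooting context.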
Typically I use one single DLQ topic per service or application (not one per inbound topic, not a shared one across services). That tends to keep things independent and manageable.
Oh, and you probably want to put some monitoring and alert on the inbound traffic to the DLQ topic ;)
Point 3 (high volume) should, IMHO, be dealt with by some sort of auto-scaling, not with a DLQ. Try to always over-estimate (a bit) the number of partitions of the input topic, since the maximum number of instances of your service you can start is limited by that. A too-high number of messages is not going to overload your service, since Kafka consumers explicitly poll for more messages when they decide to, so they never ask for more than the app can process. If there is a peak of messages, they will simply keep piling up in the upstream Kafka topic.
Point 4 (connectivity) should be retried directly from the source topic, without any DLQ involved, since the error is transient. Dropping the message to a DLQ and picking up the next one is not going to solve any issue since, well, the connectivity issue will still be present and the next message would likely be dropped as well. Reading, or not reading, a record from Kafka does not make it go away, so a record stored there is easy to read again later. You can program your service to move forward to the next inbound record only if it successfully writes a resulting record to the outbound topic (see Kafka transactions: reading a topic actually involves a write operation, since the new consumer offsets need to be persisted, so you can tell your program to persist the new offsets and the output records as part of the same atomic transaction).
Kafka is more like a storage system (with just 2 operations: sequential reads and sequential writes) than a messaging queue; it's good at persistence, data replication, throughput, scale... (...and hype ;) ). It tends to be really good for representing data as a sequence of events, as in "event sourcing". If the needs of this microservice setup are mostly asynchronous point-to-point messaging, and if most scenarios would rather favor super low latency and choose to drop messages rather than reprocess old ones (as the 4 points listed seem to suggest), maybe a lossy in-memory queuing system like Redis queues is more appropriate?
QUESTION
I'm utilizing the new HTTP Client within the Laravel 8 framework, which I use to call my APIs for certain micro-services.
How can I obtain the server IP of each sender when I receive a response? All of my clients are client websites hosted on various servers online.
Here's a sample function:
...ANSWER
Answered 2021-Jan-15 at 11:26 Laravel has a function in its API:
QUESTION
We are using Keycloak in a multi-tenant micro-services application. We have planned to use one realm per tenant.
Also, there is a single endpoint where all user requests (from all tenants) are authenticated with the JWT bearer token flow.
Is it possible to create one application client in Keycloak and share it among all realms?
Or do we have to create a client (with the same name) for each realm?
ANSWER
Answered 2021-Jan-06 at 08:26 Is it possible to create one application client in Keycloak and share it among all realms?
Out of the box this is not possible: just like users, clients are defined at the realm level and, consequently, cannot be shared among realms.
QUESTION
In a microservices architecture, I'd like to detect when a service goes down and generate an alert based on a threshold. I was wondering whether to use, for each client-side microservice, a circuit breaker that sends information when the circuit switches, and to create an alert related to the 'down' state of the target. But I don't know if it's a good pattern. Moreover, I have two concerns: the first is to monitor microservices and aggregate their data to generate an alert if the threshold is reached, and the second is to do the same thing with third-party (external) services used by microservices through a gateway. In your opinion, what's the best way to monitor micro-services? Thanks
...ANSWER
Answered 2021-Jan-02 at 18:36A circuit breaker should be used when you need to allow a failing service to recover, instead of continuously hitting it with requests, despite the fact that it cannot serve them. This is a nice article that you can check for more details about how to use this pattern.
So if your service is likely to recover in a short amount of time, you can use a circuit breaker.
Otherwise, which I think is the case here (because you want an alert to be fired when the service is down), I would focus more on the reasons why that microservice would fail, and I would try to minimize the chance of occurrence.
If you use Kubernetes, you can monitor the microservices using Prometheus. Here is a nice article that can get you started. Prometheus scrapes the Kubernetes API and exposes metrics related to pods. You can create an alert based on those.
To monitor external services, you can use Prometheus for this too. If the external service is able to expose metrics via Prometheus, then you just plug it and you're done. Otherwise, you have to write some code to check the health of that service and expose a metric based on that.
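As a sketch, a Prometheus alerting rule for the "service down" case might look like the fragment below; the job label, group name, and thresholds are assumptions to adapt to your setup:

```yaml
groups:
  - name: microservice-alerts
    rules:
      - alert: ServiceDown
        expr: up{job="my-service"} == 0   # 'up' is the built-in scrape-health metric
        for: 1m                           # require the condition to hold before firing
        labels:
          severity: critical
        annotations:
          summary: "Service {{ $labels.job }} is down"
```

The `for` clause avoids alerting on a single missed scrape; Alertmanager then handles routing the firing alert to email, Slack, PagerDuty, etc.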
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported