cqrs | Infrastructure for creating CQRS applications | Microservice library
kandi X-RAY | cqrs Summary
Infrastructure for creating CQRS applications.
Top functions reviewed by kandi - BETA
- Pop a command from the queue.
- Resolve a service.
- Unsubscribe an event handler.
- Lazily load a command handler.
- Deserialize command data.
- Handle the given query.
- Publish a command.
- Register a query subscriber.
- Create a not-found handler.
- Register a command handler.
cqrs Key Features
cqrs Examples and Code Snippets
use GpsLab\Component\Command\Command;

class RenameArticleCommand implements Command
{
    public $article_id;
    public $new_name = '';
}
use GpsLab\Component\Command\Command;
use Doctrine\ORM\EntityManagerInterface;

class RenameArticleHandler
{
    private $em;

    public function __construct(EntityManagerInterface $em)
    {
        $this->em = $em;
    }

    public function handleRenameArticle(RenameArticleCommand $command)
    {
        // Illustrative body: the Article entity and its rename() method
        // are assumptions, not part of the library itself.
        $article = $this->em->getRepository(Article::class)->find($command->article_id);
        $article->rename($command->new_name);
    }
}
use GpsLab\Component\Query\Query;

class ArticleByIdentityQuery implements Query
{
    public $article_id;
}
use GpsLab\Component\Query\Query;
use Doctrine\ORM\EntityManagerInterface;

class ArticleByIdentityHandler
{
    private $em;

    public function __construct(EntityManagerInterface $em)
    {
        $this->em = $em;
    }

    public function handleArticleByIdentity(ArticleByIdentityQuery $query)
    {
        // Illustrative body: fetch the article by its identity.
        return $this->em->getRepository(Article::class)->find($query->article_id);
    }
}
Community Discussions
Trending Discussions on cqrs
QUESTION
Can we do CQRS without Axon Server in a Spring Boot application, and what are the alternative frameworks to Axon for Spring Boot? Also, what is the difference between the Axon community edition and the enterprise edition, and how does that affect horizontally scaling the application? Thanks.
...ANSWER
Answered 2021-Jun-13 at 22:55

CQRS being an architectural pattern, you can most definitely do CQRS in vanilla Spring Boot. It might require you to break up the read side and the write side into separately deployed services and manually arrange to keep them eventually consistent.
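A minimal, framework-agnostic sketch of that split (Python used for illustration; all names here are hypothetical): the write side validates and records changes, and a separate read side is updated from the published events, which in production would travel through a broker rather than a direct callback.

```python
# Hypothetical sketch: CQRS without any framework.
# The write model and read model are separate objects that could live
# in separately deployed services, kept eventually consistent via events.

class WriteModel:
    def __init__(self, publish):
        self.names = {}          # just enough state to validate commands
        self.publish = publish   # callback delivering events to readers

    def rename_article(self, article_id, new_name):
        if not new_name:
            raise ValueError("name must not be empty")
        self.names[article_id] = new_name
        self.publish({"type": "ArticleRenamed", "id": article_id, "name": new_name})

class ReadModel:
    def __init__(self):
        self.titles_by_id = {}   # denormalized view optimized for queries

    def apply(self, event):
        if event["type"] == "ArticleRenamed":
            self.titles_by_id[event["id"]] = event["name"]

read = ReadModel()
write = WriteModel(publish=read.apply)   # in production: a message broker
write.rename_article(1, "CQRS in vanilla Spring Boot")
print(read.titles_by_id[1])              # -> CQRS in vanilla Spring Boot
```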
QUESTION
When reading about CQRS it is often mentioned that the write model should not depend on any read model (assuming there is one write model and up to N read models). This makes a lot of sense, especially since read models usually only become eventually consistent with the write model. Also, we should be able to change or replace read models without breaking the write model.
However, read models might contain valuable information that is aggregated across many entities of the write model. These aggregations might even contain non-trivial business rules. One can easily imagine a business policy that evaluates a piece of information that a read model possesses, and in reaction to that changes one or many entities via the write model. But where should this policy be located/implemented? Isn't this critical business logic that tightly couples information coming from one particular read model with the write model?
When I want to implement said policy without coupling the write model to the read model, I can imagine the following strategy: Include a materialized view in the write model that gets updated synchronously whenever a relevant part of the involved entities changes (when using DDD, this could be done via domain events). However, this denormalizes the write model, and is effectively a special read model embedded in the write model itself.
I can imagine that DDD purists would say that such a policy should not exist, because it represents a business invariant/rule that encompasses multiple entities (a.k.a. aggregates). I could probably agree in theory, but in practice, I often encounter such requirements anyway.
Finally, my question is simply: How do you deal with requirements that change data in reaction to certain conditions whose evaluation requires a read model?
...ANSWER
Answered 2021-Jun-07 at 01:20

First, any write model which validates commands is a read model (because at some point validating a command requires a read), albeit one that is optimized for the purpose of validating commands. So I'm not sure where you're seeing that a write model shouldn't depend on a read model.
Second, a domain event is implicitly a command to the consumers of the event: "process/consider/incorporate this event", in which case a write model processor can subscribe to the events arising from a different write model: from the perspective of the subscribing write model, these are just commands.
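That second point can be made concrete with a tiny adapter (an illustrative Python sketch; the models and event names are hypothetical): the subscribing write model wraps each incoming event from the other model and routes it through its ordinary command handling.

```python
# Illustrative: a write model that subscribes to another model's events
# and treats each one as a command ("incorporate this event").

class BillingModel:
    def __init__(self):
        self.invoices = []

    def handle(self, command):
        # Normal command processing; this "command" originated as a
        # domain event published by a different write model.
        if command["type"] == "OrderPlaced":
            self.invoices.append(("invoice", command["order_id"]))

billing = BillingModel()

def on_event(event):
    # From billing's perspective, the subscription delivers commands.
    billing.handle(event)

on_event({"type": "OrderPlaced", "order_id": 42})
print(billing.invoices)   # -> [('invoice', 42)]
```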
QUESTION
Before posting this, I referred to many sites and learning platforms, but I saw the same pattern of developing CQRS with event sourcing everywhere. Also, to have proper events you need to follow the DDD pattern. I have the questions below.
- Can we keep the read and write DBs in sync just by publishing an event from the write model, consuming it at the read model with an event handler, and updating the read database?
- Why are event sourcing and replay of events needed if my requirement is only to see the latest data?
- I can manage auditing of data as and when events reach the event handler.
- I can version messages based on timestamp in case of a race condition.
- By doing steps 1, 3 and 4, am I still following the CQRS pattern?
FYI, I am using .NET Core 3.1, AWS Lambda, MassTransit as a message bus and SQS as a transport.
Thanks in advance.
...ANSWER
Answered 2021-May-11 at 14:30

As soon as you have separate data models for reading and writing, you're following CQRS. Event sourcing is not strictly required.
Note that accomplishing 1 in an application in a way which preserves the expected eventual consistency of the read side with the write side is rather difficult. You'll need to ensure that you publish the event if and only if the update of the write DB succeeded (i.e. there's never a case where you publish and don't update nor is there ever a case where you update but don't publish: if either of those could happen, you cannot guarantee eventual consistency). For instance, if your application does the update and if that succeeds, publishes the event, what happens if the process crashes (or you get network partitioned from the DB, or your lambda exceeds its time limit...) between the update and publishing?
The 2 best ways to ensure eventual consistency are to
- update the write side DB by subscribing to the published event stream
- use change data capture on the write side DB to generate events to publish
The first is at least very close to event sourcing (one could argue either way: I'd say that it depends on the extent to which your design considers the published event stream the source of truth). In the second, remember that you've basically lost all the contextual domain knowledge around what happened in that event: you're only seeing what changed in the DB's representation of the model.
Event sourcing and CQRS mutually improve each other (you can event source without doing CQRS, too, though it's only in certain applications that ES without CQRS is practical); event sourcing tends to let you keep the domain front-and-center and since it's append-only, it's the most optimized write model you can have.
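The append-only write model described here reduces to an event fold, sketched below in Python with hypothetical event names: current state is never stored directly; it is rebuilt by replaying the stream.

```python
# Event sourcing in miniature: state = fold(apply, events).
# The event list is append-only; replaying it yields the current state.
events = [
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": 30},
    {"type": "Deposited", "amount": 5},
]

def apply(balance, event):
    if event["type"] == "Deposited":
        return balance + event["amount"]
    if event["type"] == "Withdrawn":
        return balance - event["amount"]
    return balance

def replay(events):
    balance = 0
    for e in events:
        balance = apply(balance, e)
    return balance

print(replay(events))   # -> 75
```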
QUESTION
I'm scaffolding my backend application and I want to use CQRS and RabbitMQ with it (I'm pretty new to RabbitMQ). For that, I have specified different vhosts for my prod and dev environments, but I'm not sure how to use exchanges and queues for the command, event and query buses.
Should I use just one exchange, named for example CQRS and three different queues for commands, queries and events?
Or maybe should I use three different exchanges (named query_bus, command_bus and event_bus) and inside each one map one queue to every possible command query and event using routing keys?
Thanks!
...ANSWER
Answered 2021-May-11 at 09:11

You should have separate queues for different content (commands, queries...).
That way it's easier to see if the command or query side is lagging/slow by examining the length of each queue. The queue lengths also give you nice charts for your dashboard.
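One way to realize that advice is a naming convention, sketched here in Python (the exchange and queue names are made up, not RabbitMQ defaults): every message is routed by its kind, so each bus gets its own exchange and queue and their depths can be monitored independently.

```python
# Hypothetical topology: one exchange + queue per message kind, so the
# command, query and event backlogs are visible separately.

ROUTES = {
    "command": ("command_bus", "commands"),
    "query":   ("query_bus",   "queries"),
    "event":   ("event_bus",   "events"),
}

def route(message):
    """Map a message to (exchange, queue, routing_key)."""
    exchange, queue = ROUTES[message["kind"]]
    routing_key = message["name"]   # e.g. "RenameArticle"
    return exchange, queue, routing_key

print(route({"kind": "command", "name": "RenameArticle"}))
# -> ('command_bus', 'commands', 'RenameArticle')
```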
QUESTION
I'm having an odd case while thinking about a solution for my problem.
A quick recap: I'm using an event store with CQRS, and I have two aggregates called 'Group' and 'User'.
Basically a User defines some characteristics like his region, age, and a couple of interests.
He then can choose to 'match' with a Group that is in the same region, around the same age and same interests.
Now here's the case: the 'matchmaking' part should happen completely on the backend, it can be a long running process, but for the client it's just 1 call to the endpoint and the end result should be him matching with a group.
So for this case, I have to query the groups which have the same region and the same age slice; the interests don't really matter in my query. I now have a list of groups, and the matchmaker is going to give each group a rating based on the common interests between the group and the user. The group with the best rating will be joined.
So again, using CQRS and ES, and my problem is that this case seems a mix between queries and a command, and mixing queries into a match command seems to go against the purpose of CQRS.
Querying multiple groups and filtering them against my write side, the event store, also is a bad idea as the aggregates have to be rebuilt and loaded in memory before being able to filter them out.
So I'm kind of stuck here. Something is telling me that a long running process / saga could be an answer to my problem, but I don't see how I would avoid mixing queries and commands in my saga, as a saga is basically a chain of commands/events.
How do I tackle this specific case ? No real code is needed, a conceptual solution to get me going is perfect.
...ANSWER
Answered 2021-May-08 at 15:19

Hi, this is actually a case where CQRS can shine.
Creating a dedicated matching model seems to be ideal for this case to allow answering what might be a rather non-trivial query in other forms.
So,
- Create a dedicated (possibly ephemeral, possibly checkpointed/persisted) query model as a derived store.
- Upon request, run a query to get the top matches.
- Based on the results of the query, send a command to update the event store with the new links.
The query model will not need to manage commands and could be updated on a push basis from the event store. This keeps it rather simple to build and keep up to date, and it can further be optimized to only have the data needed for this particular query.
An in-memory graph might do well.
-Chris
p.s.
On the command side: the commands here would each only update a single aggregate instance.
Further using the write ahead pattern would allow for not needing any sort of process manager or "saga."
e.g.
For each new membership: one command to add the new membership to the user stream, then one command to the group to add the new member information. A simple audit process can then scan for incomplete membership assignments, both on startup/recovery and as a periodic data quality check.
-Chris
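The dedicated matching model suggested in this answer could look like the following Python sketch (the fields and data are hypothetical): a derived store holds only region, age and interests per group; the query filters and rates candidates, and the winning group id is then fed to an ordinary command.

```python
# Derived query model for matchmaking: holds only the data the match
# query needs, updated on a push basis from the event store.

groups = {
    "g1": {"region": "EU", "age": 25, "interests": {"chess", "running"}},
    "g2": {"region": "EU", "age": 24, "interests": {"chess", "cooking", "movies"}},
    "g3": {"region": "US", "age": 25, "interests": {"chess"}},
}

def best_match(user):
    # Filter on region and age slice, as in the question.
    candidates = [
        (gid, g) for gid, g in groups.items()
        if g["region"] == user["region"] and abs(g["age"] - user["age"]) <= 2
    ]
    # Rate each candidate by common interests; the highest rating wins.
    def rating(item):
        return len(item[1]["interests"] & user["interests"])
    return max(candidates, key=rating)[0] if candidates else None

user = {"region": "EU", "age": 25, "interests": {"chess", "movies"}}
print(best_match(user))   # -> g2 (two shared interests beats g1's one)
```

The result of `best_match` would then be the single aggregate id targeted by the join command, keeping the command side untouched by the query.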
QUESTION
I'm in the process of learning about Event Sourcing and CQRS in distributed systems and I'm having some trouble trying to work out when is the best time to perform validation ... before, or after, the event has been stored? I've done a heap of searching and reading on the subject but I just can't seem to find an answer/suggestion that addresses this question.
For example (simple example), if I have a Web API request to withdraw some money from a bank account, I might perform the following validation:
- Does the bank account exist?
- Does the bank account have enough funds to withdraw?
When the request comes in, do I save the event before performing the above validation (and risk storing invalid events) or after the validation (and risk something going wrong part-way through the process, like the service going down, and not storing the event at all)? In the case of CQRS, is the event stored before the Command is executed or as part of the Command (in the Command handler)?
I can appreciate some validation would be performed before even making the request (e.g. valid amount to withdraw) but there might be a situation where some validation can't be done before making the request.
This also leads to working out how I can return an error (e.g. Bank Account is not valid) in the response of the Web API call?
My understanding of this subject may be all wrong, but as I mentioned before, I'm just learning this subject and I'm hoping someone either has an answer, or can point me to some posts/articles, that will help my understanding.
...ANSWER
Answered 2021-Apr-28 at 02:08

Events are statements of fact and cannot be changed. They represent something that actually happened.
You could introduce validation on a command before it results in a series of events.
Since you mentioned a bank account, many times a bank will not restrict you from overdrawing your account. They just add a new fact that represents an overdraft fee as a result of the withdrawal. This scenario involves a reaction to a withdrawal event, not validation before the event occurs.
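Both options in this answer can be sketched side by side (Python, with a hypothetical account model): the command handler first rebuilds state from past events, then either rejects the command before any event is stored, or accepts the withdrawal as a fact and reacts with an overdraft-fee event.

```python
# Validate the *command*; events, once appended, are immutable facts.

def balance(events):
    return sum(e["amount"] if e["type"] == "Deposited" else -e["amount"]
               for e in events)

def handle_withdraw(events, amount, allow_overdraft=False):
    current = balance(events)           # state rebuilt from past events
    if not allow_overdraft and amount > current:
        raise ValueError("insufficient funds")   # rejected before storing
    new_events = [{"type": "Withdrawn", "amount": amount}]
    if amount > current:
        # The bank's alternative: accept the fact, react with a fee.
        new_events.append({"type": "FeeCharged", "amount": 35})
    return events + new_events

history = [{"type": "Deposited", "amount": 50}]
print(balance(handle_withdraw(history, 80, allow_overdraft=True)))  # -> -65
```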
QUESTION
I am coding a new application using a CQRS+ES architecture with EventStoreDB. In my app, I have the following streams:
- user-1
- user-2
- user-3
- ...
Each stream contains all events regarding a given user.
I am now creating a projection called user-account, which consists of basic data regarding my user's account (like first name, email, and others).
What is the optimal way to design that projection?
Should I have a single projection for each user, creating projections called:
- user-account-1
- user-account-2
- user-account-3
- ...
Or a single projection for all user accounts, it being a key-value record (that may store millions of keys in the future)?
...ANSWER
Answered 2021-Apr-06 at 12:19

You can go with one stream per user. Projections are like dimensions. A user can exist in different "dimensions" (CDC naming) and have a different shape in each.
Read https://www.eventstore.com/blog/the-cost-of-creating-a-stream
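A single user-account projection over all the user-N streams reduces to a key-value fold, sketched here in Python (the event shapes are hypothetical): one read-model record per user id, updated as events arrive from any stream.

```python
# One projection for all accounts: a key-value store keyed by user id,
# folded from events across the user-N streams.

accounts = {}   # user_id -> account read model

def project(stream, event):
    user_id = stream.split("-", 1)[1]          # "user-2" -> "2"
    acc = accounts.setdefault(user_id, {})
    if event["type"] == "UserRegistered":
        acc["email"] = event["email"]
    elif event["type"] == "NameChanged":
        acc["first_name"] = event["first_name"]

project("user-1", {"type": "UserRegistered", "email": "a@example.com"})
project("user-1", {"type": "NameChanged", "first_name": "Ada"})
project("user-2", {"type": "UserRegistered", "email": "b@example.com"})
print(accounts["1"])   # -> {'email': 'a@example.com', 'first_name': 'Ada'}
```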
QUESTION
I'm making a custom enumeration class along the lines of the Microsoft recommendation but struggling to make a version that supports flag-style enums.
The problem occurs when trying to bitwise or together two instances to create a new instance that doesn't exist.
...ANSWER
Answered 2021-Apr-22 at 21:51

The problem is always going to be that you cannot construct an instance of Colors from inside the abstract base class natively; i.e., you can constrain a generic to have new(), but not to have a specific constructor like new(int, string).
So one option is to define (and redefine for each instance of your enumeration) the operator inside the concrete class itself.
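That suggestion translates directly (Python is used here for illustration, though the original question is about C#; all names are hypothetical): the bitwise-or operator lives on the concrete enumeration class, where the constructor is known, and synthesizes combined instances that were never declared as members.

```python
# Smart-enum with flag semantics: the | operator is defined on the
# concrete class, which knows its own constructor, so it can build
# combined instances that don't exist as declared members.

class Enumeration:
    def __init__(self, value, name):
        self.value = value
        self.name = name

class Colors(Enumeration):
    def __or__(self, other):
        # The concrete class can call its own (value, name) constructor,
        # which the abstract base could not do generically.
        return Colors(self.value | other.value, f"{self.name}|{other.name}")

Colors.RED = Colors(1, "Red")
Colors.BLUE = Colors(2, "Blue")

purple = Colors.RED | Colors.BLUE
print(purple.value, purple.name)   # -> 3 Red|Blue
```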
QUESTION
As I have been able to verify, in MassTransit with Azure Service Bus, each type of object consumed by a "Consumer" generates a Topic for that type regardless of whether it is only consumed in a specific "receive endpoint" (queue). When sending a message of this type with the "Send()" method, the message is sent directly to the "receive endpoint" (queue) without going through the topic. If this same message is published with the "Publish()" method, it is published in the Topic, and is forwarded to the receive endpoint (queue) from the corresponding subscriber.
My application uses a CQRS pattern where the messages are divided into commands and events. Commands use the send-receive pattern and are therefore always dispatched in MassTransit with the "Send()" method. The events, however, are based on the publish-subscribe pattern, and therefore are always dispatched in MassTransit with the "Publish()" method. As a result, a large number of topics are created on the bus that are never used (one for each type of command), since the messages belonging to these topics are sent directly to the receiver's queue.
For all these reasons, the question I ask is whether it is possible to configure MassTransit so that it does not automatically create the topics of some types of messages consumed because they will only be sent using the "Send()" method? Does this make sense in MassTransit or is it not possible/recommended? Thank you!
Regards
Edited 16/04/2021
After doing some testing, I edit this topic to clarify that the intention is to configure MassTransit so that it does not automatically create the topics of some types of messages consumed, all of them received on the same receive endpoint. That is, the intention is to configure (dynamically if possible, through the type of object) which types of messages consumed create a topic and which do not in the same receive endpoint. Let's imagine that we have a receive endpoint (a queue) associated with a service, and this service is capable of consuming both commands and events, since the commands are only dispatched through Send(), it is not necessary to create the topic for them, however the events that are dispatched via Publish(), they need their topic (and their subscribers) to exist in order to deliver the message and be consumed.
Thanks in advance
...ANSWER
Answered 2021-Apr-22 at 21:24

Yes, for a receive endpoint hosting a consumer that will only receive sent messages, you can specify ConfigureConsumeTopology = false for that receive endpoint. You can do that via a ConsumerDefinition, or when configuring the receive endpoint directly.
It is also possible to disable topology configuration per message type using an attribute on the message contract.
QUESTION
I am following a basic tutorial for a project with CQRS and EventSourcing. In the source code of the full tutorial project there is only one configuration class to configure Axon framework for the project, the code is as follows:
...ANSWER
Answered 2021-Mar-12 at 20:57

The com.mongodb.client.MongoClient class is the new API and is probably not compatible with the version of Axon Framework that you are using.
To make it work you may need to downgrade your Mongo dependency (spring-boot-starter-data-mongodb) and use the legacy API, com.mongodb.MongoClient.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported