event-store | Event Sourcing Library for PHP | Microservice library

by malocher | PHP | Version: Current | License: No License

kandi X-RAY | event-store Summary

event-store is a PHP library typically used in Architecture, Microservice, Symfony, Kafka applications. event-store has no bugs, it has no vulnerabilities and it has low support. You can download it from GitHub.

The event sourcing components do not require a CQRS system to be used; however, both systems can be linked very easily.

Support

              event-store has a low active ecosystem.
It has 19 stars, 2 forks and 6 watchers.
              It had no major release in the last 6 months.
              event-store has no issues reported. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of event-store is current.

Quality

              event-store has 0 bugs and 0 code smells.

Security

              event-store has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              event-store code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              event-store does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

              event-store releases are not available. You will need to build from source code and install.
              Installation instructions, examples and code snippets are available.
              event-store saves you 760 person hours of effort in developing the same functionality from scratch.
              It has 1751 lines of code, 179 functions and 48 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed event-store and discovered the functions below as its top functions. This is intended to give you an instant insight into the functionality event-store implements, and to help you decide if it suits your requirements.
• Save an event
• Update an event
• Return the event dispatcher
• Create the schema for the given streams
• Add the commands
• Configure the console command
• Construct a managed event source object from an object history
• Execute the command
• Get the event source
• Set the source id

            event-store Key Features

            No Key Features are available at this moment for event-store.

            event-store Examples and Code Snippets

            No Code Snippets are available at this moment for event-store.

            Community Discussions

            QUESTION

            How to get Axon event-identifier from the event-store
            Asked 2021-May-25 at 18:33

            Just a short question here...

Using Axon, we know that AggregateLifecycle#apply(Object) does the event sourcing for us and, under the hood, persists our event into our event store.

With regard to that, how do we get the event identifier (not the aggregate identifier) once we call that particular apply method?

            Thanks

            ...

            ANSWER

            Answered 2021-May-25 at 18:33

Based on your other answer, let me suggest a way to follow.

The MessageIdentifier as used by AxonFramework (AF) is nothing more than a UUID generated for each Message you create.

Since you only need to reuse that info, you can pretty much get it from the Message while handling it. To make things easier for you, Axon provides a MessageIdentifierParameterResolver, meaning you can simply use it in any @MessageHandler of yours (of course, I am assuming you are using Spring as well).

            Example:
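The concrete example is not included in this excerpt. A minimal sketch of what such a handler could look like (the event type and handler class are hypothetical; @MessageIdentifier is the annotation resolved by Axon's MessageIdentifierParameterResolver):

    import org.axonframework.eventhandling.EventHandler;
    import org.axonframework.messaging.annotation.MessageIdentifier;

    public class AccountProjection {

        // Axon resolves the @MessageIdentifier parameter to the identifier of the
        // event message currently being handled (not the aggregate identifier).
        @EventHandler
        public void on(AccountCreatedEvent event, @MessageIdentifier String eventIdentifier) {
            // Store or log the event identifier alongside your projection data.
            System.out.println("Handling event " + eventIdentifier);
        }
    }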

            Source https://stackoverflow.com/questions/67680810

            QUESTION

            Nodejs ts: event-sourcing and cqrs, event bus
            Asked 2021-Mar-24 at 13:47

Hello, I have a command bus and a query bus, which are basically key/value pairs mapping the name of a command or query to its handler, and then I execute the command, which should publish my event. But I have some doubts about how I could build my event bus. Is the command bus part of an event bus? How could I build an event bus with the handlers?

            command-bus:

            ...

            ANSWER

            Answered 2021-Mar-24 at 13:47

            I see there's some confusion between the various Buses and the Event Store. Before attempting to implement an Event Bus, you need to answer one important question that lies at the foundation of any Event Sourcing implementation:

            • How to preserve the Event Store as the Single Source of Truth?

            That is, your Event Store contains the complete state of the domain. This also means that the consumers of the Event Bus (whatever it ends up being - a message queue, a streaming platform, Redis, etc.) should only get the events that are persisted. Therefore, the goals become:

            • Only deliver events on the Bus that are persisted to the Store (so if you get an error writing to the Store, or maybe a Concurrency Exception, do not deliver via bus!)
            • Deliver all events to all interested consumers, without losing any events

            These two goals intuitively translate to "I want atomic commit between the Event Store and the Event Bus". This is simplest to achieve when they're the same thing!

            So, instead of thinking about how to connect an "Event Bus" to command handlers and send events back and forth, think about how to retrieve already persisted events from the Event Store and subscribe to that. This also removes any dependency between command handlers and event subscribers - they live on different sides of the Event Store (writer vs. reader), and could be in different processes, on different machines.
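To make the "subscribe to persisted events" idea concrete, here is a minimal sketch (in Java for illustration, although the question uses Node.js/TypeScript; every name here is hypothetical): subscribers poll the store from their last checkpoint, so they can only ever see events that were successfully persisted.

    import java.util.List;
    import java.util.function.Consumer;

    // Persisted events are totally ordered by a store-assigned position.
    record StoredEvent(long position, String type, String payload) {}

    interface EventStoreReader {
        List<StoredEvent> readFrom(long fromPosition, int batchSize);
    }

    class EventStoreSubscription {
        private final EventStoreReader store;
        private final Consumer<StoredEvent> subscriber;
        private long lastPosition; // checkpoint of the last event handled

        EventStoreSubscription(EventStoreReader store, Consumer<StoredEvent> subscriber, long startFrom) {
            this.store = store;
            this.subscriber = subscriber;
            this.lastPosition = startFrom;
        }

        // Pump events from the store to the subscriber. Because we read from the
        // store itself, an event that failed to persist can never be delivered.
        void pump() {
            for (StoredEvent e : store.readFrom(lastPosition + 1, 100)) {
                subscriber.accept(e);
                lastPosition = e.position(); // advance only after handling
            }
        }
    }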

            Source https://stackoverflow.com/questions/66700452

            QUESTION

            Writing to a topic from a Processor in a Spring Cloud Streams Kafka Stream application
            Asked 2020-May-02 at 21:43

I am using the Processor API to do some low-level processing into a state store. The point is, I also need to write into a topic after storing into the store. How can this be done in a Spring Cloud Stream Kafka Streams application?

            ...

            ANSWER

            Answered 2020-May-02 at 21:43

You can't. The process() method is a terminal operation that does not allow you to emit data downstream. Instead, you can use transform(), which is basically the same as process() but allows you to emit data downstream; or, depending on your app, transformValues(), flatTransform(), etc.

Using transform() you get a KStream back, which you can write into a topic.
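A compact sketch of that approach (the store and topic names are invented; the state store must also be registered with the topology builder under the same name):

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Transformer;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.state.KeyValueStore;

    // given: KStream<String, String> input = builder.stream("input-topic");
    KStream<String, String> output = input.transform(
        () -> new Transformer<String, String, KeyValue<String, String>>() {
            private KeyValueStore<String, String> store;

            @Override
            public void init(ProcessorContext context) {
                store = (KeyValueStore<String, String>) context.getStateStore("my-state-store");
            }

            @Override
            public KeyValue<String, String> transform(String key, String value) {
                store.put(key, value);            // low-level write into the state store
                return KeyValue.pair(key, value); // unlike process(), emit downstream
            }

            @Override
            public void close() {}
        },
        "my-state-store");

    output.to("output-topic"); // the records emitted by transform() land in a topic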

            Source https://stackoverflow.com/questions/61558446

            QUESTION

            How to replay in a deterministic way in CQRS / event-sourcing?
            Asked 2020-Feb-04 at 11:41

            In CQRS / ES based systems, you store events in an event-store. These events refer to an aggregate, and they have an order with respect to the aggregate they belong to. Furthermore, aggregates are consistency / transactional boundaries, which means that any transactional guarantees are only given on a per-aggregate level.

Now, suppose I have a read model which consumes events from multiple aggregates (which is perfectly fine, AFAIK). To be able to replay the read model in a deterministic way, the events need some kind of global ordering across aggregates – otherwise you wouldn't know whether to replay events for aggregate A before or after the ones for B, or how to intermix them.

The simplest solution to achieve this is to use a timestamp on the events, but typically timestamps are not fine-grained enough (or, to put it another way, not all databases are created equal). Another option is to use a global sequence, but this is bad performance-wise and hinders scaling.

            How do you solve this issue? Or is my basic assumption, that replays of read models should be deterministic, wrong?

            ...

            ANSWER

            Answered 2020-Feb-04 at 09:29

            How do you solve this issue?

This is a known issue, and of course neither simple timestamps, nor a global sequence, nor other naïve methods will help.
Use a vector clock with a weak timestamp to enumerate your events, and a vector cursor to read them. That guarantees a stable, deterministic order when intermixing events between aggregates. It works even if each thread has a clock synchronization gap, which is the regular case for database clusters, because perfect timestamp synchronization is impossible.
It also automatically makes it possible to seamlessly mix reading events from the event store and from an event bus later, and it excludes any database locks between events of different aggregates.

Algorithm draft:
1) Determine the real quantity of simultaneous transactions in your database, e.g. the maximum number of workers in the cluster.
Since every event is written in exactly one transaction on one thread, you can determine its unique id as the tuple (thread number, thread counter), where the thread counter is the number of transactions processed on the current thread.
Calculate the event's weak timestamp as MAX(thread timestamp, aggregate timestamp), where the aggregate timestamp is the timestamp of the last event for the current aggregate.

2) Prepare a vector cursor for reading events across the thread-number boundary. Read events from each thread sequentially until the timestamp gap exceeds the allowed value. The allowed weak-timestamp gap is a trade-off between event reading performance and preserving the native event order.
The minimal value is the cluster's thread synchronization time delta, so events arrive in their native inter-aggregate order. The maximum value is infinity, in which case events are split up by aggregate. When using an RDBMS like Postgres, the value can be determined automatically via a smart SQL query.

You can see a reference implementation for the PostgreSQL database for saving and loading events. Saving performance is about 10,000 events per second on a 4 GB RAM RDS Postgres cluster.
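A small sketch of the id and weak-timestamp assignment from step 1 (Java for illustration; the reference implementation mentioned above is SQL-based, and all names here are invented):

    class EventIdAssigner {
        private final int threadNo;       // which writer thread/worker this is
        private long threadCounter = 0;   // transactions processed on this thread
        private long threadTimestamp = 0; // last clock value seen by this thread

        EventIdAssigner(int threadNo) { this.threadNo = threadNo; }

        // Unique event id: the tuple (thread number, thread counter).
        // No global sequence is involved, so writers never contend.
        long[] nextId() {
            threadCounter++;
            return new long[] { threadNo, threadCounter };
        }

        // Weak timestamp: MAX(thread timestamp, aggregate timestamp). It never
        // drops below the aggregate's last event, so per-aggregate order survives
        // clock drift between cluster nodes.
        long weakTimestamp(long aggregateTimestamp) {
            threadTimestamp = Math.max(System.currentTimeMillis(), threadTimestamp);
            return Math.max(threadTimestamp, aggregateTimestamp);
        }
    }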

            Source https://stackoverflow.com/questions/60050722

            QUESTION

            How to display intermediate results in a windowed streaming-etl?
            Asked 2020-Jan-24 at 07:50

We currently do real-time aggregation of data in an event-store. The idea is to visualize transaction data for multiple time ranges (monthly, weekly, daily, hourly) and for multiple nominal keys. We regularly have late data, so we need to account for that. Furthermore, the requirement is to display "running" results, that is, the value of the current window even before it is complete.

Currently we are using Kafka and Apache Storm (specifically Trident, i.e. microbatches) to do this. Our architecture roughly looks like this [architecture diagram omitted]: we use MongoDB as a key-value store to persist the state and then make it accessible (read-only) to a microservice that returns the current value it was queried for. There are multiple problems with that design:

1. The code is really high-maintenance
2. It is really hard to guarantee exactly-once processing in this manner
3. Updating the state after every aggregation obviously has performance implications, but it is sufficiently fast

We got the impression that better frameworks, such as Apache Flink or Kafka Streams, have become available since we started this project (especially from a maintenance standpoint – Storm tends to be really verbose). Trying these out, it seemed like writing to a database, especially MongoDB, is not state of the art anymore. The standard use case I saw is state being persisted internally in RocksDB or in memory and then written back to Kafka once a window is complete.

Unfortunately, this makes it quite difficult to display intermediate results, and because the state is persisted internally we would need the allowed lateness of events to be on the order of months or years. Is there a better solution for these requirements than hijacking the state of the real-time stream? Personally, I feel like this is a standard requirement, but I couldn't find a standard solution for it.

            ...

            ANSWER

            Answered 2020-Jan-22 at 12:04

            You could study Konstantin Knauf's Queryable Billing Demo as an example of how to approach some of the issues involved. The central, relevant ideas used there are:

            1. Trigger the windows after every event, so that their results are being continuously updated
            2. Make the results queryable (using Flink's queryable state API)

            This was the subject of a Flink Forward conference talk. Video is available.

            Rather than making the results queryable, you could instead stream out the window updates to a dashboard or database.

            Also, note that you can cascade windows, meaning that the results of the hourly windows could be the input to the daily windows, etc.
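A rough Flink sketch of that cascading idea (the Total type and field names are invented): the hourly results become the input of the daily window, so raw events are aggregated only once.

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    // Assumed: Total is a POJO with a public key field and a merge() method that
    // combines two partial totals, and events is an existing DataStream<Total>
    // with event-time timestamps and watermarks already assigned.
    DataStream<Total> hourly = events
            .keyBy(t -> t.key)
            .window(TumblingEventTimeWindows.of(Time.hours(1)))
            .reduce((a, b) -> a.merge(b));

    // Hourly window results carry the window-end timestamp, so they can be
    // re-windowed directly into daily totals.
    DataStream<Total> daily = hourly
            .keyBy(t -> t.key)
            .window(TumblingEventTimeWindows.of(Time.days(1)))
            .reduce((a, b) -> a.merge(b));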

            Source https://stackoverflow.com/questions/59857255

            QUESTION

            How do I load a CSV file into a Db2 Event Store remotely using a Db2 client?
            Asked 2019-Oct-06 at 23:41

I see in the documentation for Db2 Event Store that a CSV file can be loaded when the file is already within the system: https://www.ibm.com/support/knowledgecenter/en/SSGNPV_2.0.0/local/loadcsv.html. I also found that you can connect to a Db2 Event Store database using the standard Db2 client, as in "How do I connect to an IBM Db2 Event Store instance from a remote Db2 instance?". What I am trying to do now is load a CSV file over that connection. Is it possible to load it remotely?

            ...

            ANSWER

            Answered 2019-Oct-02 at 20:09

Other answers mentioned the connection and loading using traditional Db2. I have to add some more details that are required specifically for Db2 Event Store.

Assume we are using a Db2 client container, which can be found on Docker Hub under the tag ibmcom/db2. Basically, we have to go through the following steps:

1/ Establish a remote connection from the Db2 client container to the remote Db2 Event Store database.

2/ Use db2 CLP commands to load the CSV file with Db2's external table load feature, which loads the CSV file from the Db2 client container into the remote Event Store database.

Step 1: Run the following commands, or run them in a script. Note that the commands need to be run as the db2 user in the Db2 client container; the db2 user name is typically db2inst1.
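The commands themselves are not included in this excerpt. As an illustrative sketch only (host, port, user, database, and table names are placeholders, and the external-table options, in particular REMOTESOURCE, should be verified against the Db2 documentation):

    db2 catalog tcpip node esnode remote <eventstore-host> server <port>
    db2 catalog database eventdb at node esnode
    db2 connect to eventdb user <user> using <password>
    db2 "INSERT INTO mytable SELECT * FROM EXTERNAL '/path/on/client/data.csv' USING (DELIMITER ',' REMOTESOURCE YES)"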

            Source https://stackoverflow.com/questions/57993828

            QUESTION

            Flow 'unexpected token <' for jsx code in IDE
            Asked 2019-May-07 at 20:06

For some reason my IDE is printing out "Unexpected token <. Remember, adjacent JSX elements must be wrapped in an enclosing parent tag" for the following React code. I don't understand why it's printing that error, since the component it's referring to is wrapped in an enclosing parent tag.

            ...

            ANSWER

            Answered 2019-May-07 at 20:06

I had a very similar issue. First, delete (rename) the .babelrc file you are using (remove it from wherever you set it).

If Storybook can't find that file, it will use its own settings. This worked for me and proved that it was that file that caused the issue.

If this is the same for you, then create a new .babelrc file and place it in the storybook folder. Storybook will now use this one, and your project can continue to use the existing one.

The tricky part is finding the config setting in your existing .babelrc file that is breaking Storybook – for me it was react-hot-loader/babel, but you don't have that listed.

My file ended up with only @babel/plugin-proposal-class-properties and @babel/plugin-proposal-object-rest-spread for plugins.
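For reference, the resulting .babelrc would then look something like this (a sketch; the exact plugin set depends on your project):

    {
      "plugins": [
        "@babel/plugin-proposal-class-properties",
        "@babel/plugin-proposal-object-rest-spread"
      ]
    }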

            Source https://stackoverflow.com/questions/50422822

            QUESTION

Symfony, prooph, event sourcing: Error 42S02. Maybe the event streams table is not set up?
            Asked 2019-Mar-22 at 10:56

Good day/night,
I'm really new to prooph event sourcing and am trying to understand how it works with Symfony.

Take a look at this project:
https://github.com/prooph/proophessor-do-symfony
What should I do with the DB in the beginning?

I run the command php bin/console event-store:event-stream:create but get the error message:

            ...

            ANSWER

            Answered 2019-Mar-22 at 10:55

Actually, I had not created the stream. I just followed this documentation, and it's okay now:

            https://github.com/prooph/proophessor-do/blob/master/docs/installation/manual.md

            Source https://stackoverflow.com/questions/55290636

            QUESTION

            understanding Lagoms persistent read side
            Asked 2019-Jan-14 at 10:28

I read through the Lagom documentation and have already written a few small services that interact with each other. But because this is my first foray into CQRS, I still have a few conceptual issues about the persistent read side that I don't really understand.

For instance, I have a user-service that keeps a list of users (as aggregates) and their profile data like email addresses, names, addresses, etc.

The questions I have now are:

• If I want to retrieve a user's profile given a certain email address, should I query the read side for the user's id, and then query the event-store using this id for the profile data? Or should the read side already keep all profile information?

• If the read side has all the information, what is the reason for the event-store? If it's truly write-only, it's not really useful, is it?

• Should I design my system so that I can use the event-store as much as possible, or should I have a read side for everything? What are the scalability implications?

• If the user model changes (for instance, the profile now includes a description) and I use a read side that contains all profile data, how do I update this read side in Lagom to also contain this description?

• Following that question, should I keep different read-side tables for different fields of the profile instead of one table containing the whole profile?

• If a different service needs access to the data, should it always ask the user-service, or should it keep its own read side as needed? In the case of the latter, doesn't that violate the CQRS principle that the service that owns the data should be the only one reading and writing that data?

As you can see, this whole concept hasn't really "clicked" yet, and I am thankful for answers and/or some pointers.

            ...

            ANSWER

            Answered 2019-Jan-14 at 10:28

If I want to retrieve a user's profile given a certain email address, should I query the read side for the user's id, and then query the event-store using this id for the profile data? Or should the read side already keep all profile information?

You should use a specially designed ReadModel for searching profiles by email address. You should query the Event-store only to rehydrate the Aggregates, and you rehydrate the Aggregates only to send them commands, not queries. In CQRS an Aggregate may not be queried.

If the read side has all the information, what is the reason for the event-store? If it's truly write-only, it's not really useful, is it?

The Event-store is the source of truth for the write side (the Aggregates). It is used to rehydrate the Aggregates (they rebuild their internal, private state based on the previously emitted events) before they process commands, and to persist the new events. So the Event-store is append-only, but it is also used to read the event stream (the events emitted by an Aggregate instance). The Event-store also ensures that an Aggregate instance (identified by a type and an ID) processes only one command at a time.

If the user model changes (for instance, the profile now includes a description) and I use a read side that contains all profile data, how do I update this read side in Lagom to also contain this description?

I don't use any framework other than my own, but I guess you rewrite the projection (to use the newly added field on the events) and rebuild the ReadModel.

Following that question, should I keep different read-side tables for different fields of the profile instead of one table containing the whole profile?

You should have a separate ReadModel (with its own table or tables) for each use case. The ReadModel should be blazing fast, which means it should be as small as possible, with only the fields needed for that particular use case. This is very important; it is one of the main benefits of using CQRS.

If a different service needs access to the data, should it always ask the user-service, or should it keep its own read side as needed? In the case of the latter, doesn't that violate the CQRS principle that the service that owns the data should be the only one reading and writing that data?

This depends on you, the architect. It is preferred that each ReadModel owns its data, that is, it should subscribe to the right events and should not depend on other ReadModels. But this leads to a lot of code duplication. In my experience I've seen a desire to have some canonical ReadModels that own some data but can also share it on demand. For this, in CQRS, there is also the term query. Just like commands and events, queries can travel through your system, but only from ReadModel to ReadModel.

Queries should not be sent during a client's request. They should be sent only in the background, as an asynchronous synchronization mechanism. This is an important aspect that influences the resilience and responsiveness of your system.

I've also used live queries, which are pushed from the authoritative ReadModels to the subscribed ReadModels in real time, when the answer changes.

In the case of the latter, doesn't that violate the CQRS principle that the service that owns the data should be the only one reading and writing that data?

No, it does not. CQRS does not specify how the R (read side) is updated, only that the R should not process commands and the C should not be queried.
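To make the ReadModel idea concrete, here is a minimal framework-agnostic sketch (Java, not Lagom's actual API; all names are hypothetical) of a projection dedicated to exactly one use case, finding a profile by email address:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Event emitted by the write side (the Aggregate).
    record UserRegistered(String userId, String email, String name) {}

    // A ReadModel for one query only: "find the profile for this email address".
    // It is deliberately small: just the fields this use case needs.
    class ProfilesByEmail {
        private final Map<String, String> nameByEmail = new ConcurrentHashMap<>();

        // Invoked by the read side's event subscription, never by command handlers.
        void on(UserRegistered event) {
            nameByEmail.put(event.email(), event.name());
        }

        String findNameByEmail(String email) {
            return nameByEmail.get(email);
        }
    }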

            Source https://stackoverflow.com/questions/54142905

            QUESTION

            CQRS Event-sourcing and own database per microservice
            Asked 2019-Jan-10 at 11:59

I have some questions about event sourcing and CQRS in a microservices architecture. I understand that after a command is sent, some microservice executes it and emits an event. The event-store subscribes to it and saves it inside its database. Also, some ReadModel, based on this event, generates and saves optimized data inside a read database.

My first question is: can a microservice have its own database and store data inside it too? Or, in the event-sourcing approach, do microservices not have their own databases, with everything stored only inside the event store?

My second question is: when I execute a command in a microservice and need some data for validation purposes, do I need to call the ReadModel, or what? Assuming microservices haven't got their own databases, do I have no choice?

            ...

            ANSWER

            Answered 2019-Jan-10 at 11:59

Can a microservice have its own database and store data inside it too?

Definitely, a microservice can have its own database. But let's use terms from ES/CQRS. A database can represent the Event Store (an append-only log of immutable events) or a Read Model (some database used to answer queries, which is populated by processing events).

So, a microservice can have its own Read Model, populated from the events of other microservices.

Or a microservice can process commands and save events to a shared Event Store.

Or a microservice can process commands and save events to its own Event Store.

The choice is yours, and it depends on the degree of separation you want to achieve among your microservices.

I would put all events that are usually consumed together into the same Event Store, which means I should be able to query for these events and get a single ordered stream as a result.

When I execute a command in a microservice and need some data for validation purposes, do I need to call the ReadModel, or what?

A command is executed by an Aggregate, which has its own state. This state is built by processing all events for this aggregate, and it is this state that should be used to validate a command.

You cannot (and should not) talk to Read Models in the command handler, primarily because those read models are not consistent with the aggregate state. The aggregate state is consistent.

You can query a Read Model before sending a command (to make sure it can be sent). But in the command handler you need to rely on the aggregate state only.

There is a famous case: registering a user with the requirement of a unique name. As a primary validation, your UI code can query the read model and tell the user that the entered name is taken. If the name is not taken, the UI lets the user issue the command. I'm assuming your Aggregate root is the user.

But when processing this command ({id:123, type:CREATE_USER, name:somename}) you cannot check that "somename" is taken, because the aggregate state for user 123 does not contain a list of taken names. You could query some AllUsernames read model, but it can be milliseconds old, and some other user could have taken "somename" already. So in this scenario, you will find the duplication while adding names to the read model. At that point you can perform some compensating action – usually issuing a command to suspend the user with the duplicated name and asking him to re-register or change his name somehow.

It may seem strange, but if you have a truly distributed system with several replicas of the user list, you'll have the same problem. So why not just embrace the fact that data is never fully consistent, and deal with it?
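A compact sketch of that validation split (Java; all names are hypothetical): the UI pre-checks the read model, while the aggregate validates only against its own, consistent state.

    record CreateUser(String id, String name) {}
    record UserCreated(String id, String name) {}

    class UserAggregate {
        private boolean created = false; // rebuilt from this aggregate's own events

        void handle(CreateUser cmd) {
            // Valid check: this aggregate's own state is consistent.
            if (created) {
                throw new IllegalStateException("User " + cmd.id() + " already exists");
            }
            // Not possible here: checking that cmd.name() is globally unique.
            // This aggregate's state knows nothing about other users; duplicates
            // are caught while updating the read model and then compensated.
            apply(new UserCreated(cmd.id(), cmd.name()));
        }

        private void apply(UserCreated event) {
            created = true; // in a real system: persist the event, then update state
        }
    }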

            Source https://stackoverflow.com/questions/54125607

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install event-store

Installation of malocher/event-store uses Composer. For Composer documentation, please refer to [getcomposer.org](http://getcomposer.org/). Add the following requirement to your composer.json. More information coming soon.
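The snippet itself is not included here; based on the package name, it would presumably look like the following (the version constraint is an assumption, so check Packagist or the repository for the actual one):

    {
        "require": {
            "malocher/event-store": "dev-master"
        }
    }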

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check for answers and ask on Stack Overflow.
CLONE

• HTTPS: https://github.com/malocher/event-store.git

• GitHub CLI: gh repo clone malocher/event-store

• SSH: git@github.com:malocher/event-store.git
