common-kafka | Common utilities for Apache Kafka | Pub Sub library

 by cerner | Java | Version: 3.0 | License: Apache-2.0

kandi X-RAY | common-kafka Summary

common-kafka is a Java library typically used in Messaging, Pub Sub, and Kafka applications. It has no reported bugs or vulnerabilities, a build file available, a permissive license, and low support activity. You can download it from GitHub or Maven.

This repository contains common Kafka code supporting Cerner's cloud-based solutions. For Maven, add the dependency shown below.
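
The dependency block itself is not reproduced on this page. The sketch below assumes the coordinates com.cerner.common.kafka:common-kafka at the 3.0 version listed above; verify both against the project's README before use.

    <!-- Assumed coordinates; confirm groupId/artifactId against the project README. -->
    <dependency>
        <groupId>com.cerner.common.kafka</groupId>
        <artifactId>common-kafka</artifactId>
        <version>3.0</version>
    </dependency>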

            kandi-support Support

              common-kafka has a low active ecosystem.
              It has 32 stars, 16 forks, and 11 watchers.
              There was 1 major release in the last 12 months.
              There are 0 open issues and 11 closed issues. On average, issues are closed in 26 days. There are no open pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of common-kafka is 3.0.

            kandi-Quality Quality

              common-kafka has 0 bugs and 0 code smells.

            kandi-Security Security

              common-kafka has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              common-kafka code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              common-kafka is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              common-kafka releases are not published on GitHub, so you will need to build from source code and install it, or use the deployable package available in Maven.
              A build file is available, so you can build the component from source.
              Installation instructions are not available. Examples and code snippets are available.

            Top functions reviewed by kandi - BETA

            kandi has reviewed common-kafka and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality common-kafka implements and to help you decide whether it suits your requirements. A usage sketch follows the list.
            • Returns the next record
            • Checks if any of the partitions has been paused
            • Returns the next record for this partition
            • Polls from the consumer group
            • Returns the offset that should be used for a given topic partition
            • Gets the earliest offset for the topic partition
            • Gets the offset for the topic partition
            • Gets a Kafka message producer instance
            • Retrieves the producer concurrency from the properties
            • Creates a consumer
            • Adds the given records to the Kafka partition
            • Initialize the configuration
            • Gets a map of all partitions assigned to a consumer group
            • Returns the replication factor for a given topic
            • Creates ZooKeeper connection
            • Updates the configuration of a specific topic
            • Removes the partition if it was paused
            • Gets the committed offsets for the given topics
            • Resets offsets to the given offsets
            • Initialize the Kafka producer
            • Closes all producers
            • Adds partitions to a specific topic
            • Delete a topic
            • Assigns partitions to each consumer
            • Gets the properties of a Kafka topic
            • Returns the offsets for the given topics
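
            The functions above belong to common-kafka's own consumer, producer, and admin utilities; their exact signatures are not shown on this page. As a rough illustration of the same kinds of operations (polling, pausing partitions, committing offsets), here is a minimal sketch using the standard kafka-clients API rather than common-kafka's classes. The broker address, group id, and topic name are placeholders.

            // Sketch only: poll, pause/resume, and commit with the standard kafka-clients API.
            import java.time.Duration;
            import java.util.Collections;
            import java.util.Properties;
            import org.apache.kafka.clients.consumer.ConsumerConfig;
            import org.apache.kafka.clients.consumer.ConsumerRecord;
            import org.apache.kafka.clients.consumer.ConsumerRecords;
            import org.apache.kafka.clients.consumer.KafkaConsumer;
            import org.apache.kafka.common.serialization.StringDeserializer;

            public class ConsumerSketch {
                public static void main(String[] args) {
                    Properties props = new Properties();
                    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
                    props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
                    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
                    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
                    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

                    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                        consumer.subscribe(Collections.singletonList("example-topic"));

                        // Poll the consumer group and iterate over the returned records.
                        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                        for (ConsumerRecord<String, String> record : records) {
                            System.out.printf("partition=%d offset=%d value=%s%n",
                                    record.partition(), record.offset(), record.value());
                        }

                        // Pause all currently assigned partitions, then resume them.
                        consumer.pause(consumer.assignment());
                        consumer.resume(consumer.assignment());

                        // Commit the offsets of the records returned by the last poll.
                        consumer.commitSync();
                    }
                }
            }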

            common-kafka Key Features

            No Key Features are available at this moment for common-kafka.

            common-kafka Examples and Code Snippets

            No Code Snippets are available at this moment for common-kafka.

            Community Discussions

            Trending Discussions on common-kafka

            QUESTION

            Design Kafka consumers and producers for scalability
            Asked 2021-Jan-23 at 12:45

            I want to design a solution for sending different kinds of e-mails to several providers. Here is the general overview.

            I have several upstream providers (Sendgrid, Zoho, Mailgun, etc.). They will be used to send e-mails. For example:

            • E-mail for Register new user
            • E-mail for Remove user
            • E-mail for Space Quota limit

            (in general around 6 types of e-mails)

            Every type of e-mail should be generated by a producer, converted into a serialized Java object, and sent to the appropriate Kafka consumer integrated with the upstream provider.

            The question is how to design Kafka for maximum performance and scalability?

            • The first solution I can think of so far is to have a topic for every type of e-mail message and every gateway (6 x 4 = 24 topics). In the future I'm expecting to add more types of messages and gateways; maybe it will reach 600 topics. This will mean a lot of Java source code to maintain and a lot of topics to manage. Another downside is that the Kafka logs will be huge.

            • The second solution would be to use one topic for each consumer (integration gateway). But in this case, how can I send a different serialized Java object for each type of message I want to send?

            Is there a better way to design this setup so that it is easier to scale and more robust for future integrations?

            You can see here how I send messages between producers and consumers: org.apache.kafka.common.KafkaException: class SaleRequestFactory is not an instance of org.apache.kafka.common.serialization.Serializer
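
            That exception usually means the producer's value.serializer property was set to the payload class itself rather than to a class implementing org.apache.kafka.common.serialization.Serializer. A minimal sketch of such a serializer for any Serializable payload (the class name is illustrative, not taken from the question):

            // Sketch only: a Serializer for any java.io.Serializable payload.
            // Kafka 2.x+ provides default configure()/close(), so only serialize() is needed.
            import java.io.ByteArrayOutputStream;
            import java.io.IOException;
            import java.io.ObjectOutputStream;
            import java.io.Serializable;
            import org.apache.kafka.common.errors.SerializationException;
            import org.apache.kafka.common.serialization.Serializer;

            public class JavaObjectSerializer<T extends Serializable> implements Serializer<T> {
                @Override
                public byte[] serialize(String topic, T data) {
                    if (data == null) {
                        return null;
                    }
                    try (ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                         ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                        out.writeObject(data);
                        out.flush();
                        return bytes.toByteArray();
                    } catch (IOException e) {
                        throw new SerializationException("Failed to serialize object for topic " + topic, e);
                    }
                }
            }

            The producer configuration would then reference this serializer class, for example props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JavaObjectSerializer.class.getName()), rather than the payload class.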

            EDIT:

            1. Order matters because the communication will be asynchronous. Producers will wait for returned status messages.
            2. It's not important to keep the data of each gateway on a different topic.
            3. What kind of isolation do you want? I want to isolate the messages/topics completely from each other in order to prevent mistakes in the future when I need to add more gateways or types of messages.

            Is it important to you to keep the data of each gateway on a different topic? - No, I just want to isolate the data.

            If you would go with a single topic per gateway, do you care about the overhead it creates on the client side? - Reading unnecessary messages, writing more logic, a hybrid serializer, etc.

            I have no idea here. My main concern is to make the system easy to extend with new features.

            ...

            ANSWER

            Answered 2021-Jan-21 at 16:10

            I think that one topic per event-type would indeed be too much for the operational overhead you mentioned.

            Option 2, I think, would be the right way - one topic per integration gateway, with dedicated consumers. The advantages are:

            • You isolate the workload at the topic level (many messages on integration gateway A will not impact the consumers for gateway B)
            • You can scale the consumers based on the topic workload

            The producers will serialize the message according to the requirements of the gateway and publish it on that gateway's topic. The consumers will just read the messages and push them to the gateway. A minimal sketch of this approach follows.
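
            A minimal sketch of that approach with the standard kafka-clients producer: one topic per gateway, with the e-mail type carried in a record header so a single topic can hold several message types. The broker address, topic, header name, and payload are illustrative placeholders, not taken from the answer.

            // Sketch only: publish a serialized payload to a per-gateway topic,
            // with the e-mail type carried in a record header.
            import java.nio.charset.StandardCharsets;
            import java.util.Properties;
            import org.apache.kafka.clients.producer.KafkaProducer;
            import org.apache.kafka.clients.producer.ProducerConfig;
            import org.apache.kafka.clients.producer.ProducerRecord;
            import org.apache.kafka.common.serialization.ByteArraySerializer;
            import org.apache.kafka.common.serialization.StringSerializer;

            public class GatewayPublisher {
                public static void main(String[] args) {
                    Properties props = new Properties();
                    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
                    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
                    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

                    try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
                        byte[] payload = "register-user e-mail payload".getBytes(StandardCharsets.UTF_8);

                        // One topic per gateway; the header lets one consumer handle several message types.
                        ProducerRecord<String, byte[]> record =
                                new ProducerRecord<>("emails-sendgrid", "user-123", payload);
                        record.headers().add("email-type", "REGISTER_USER".getBytes(StandardCharsets.UTF_8));

                        producer.send(record);
                        producer.flush();
                    }
                }
            }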

            Source https://stackoverflow.com/questions/65811681

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install common-kafka

            You can download it from GitHub, Maven.
            You can use common-kafka like any standard Java library. Please include the jar files in your classpath. You can also use any IDE, and you can run and debug the common-kafka component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.
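
            For Gradle, a comparable dependency declaration is sketched below, using the same assumed coordinates as the Maven example earlier on this page (confirm against the project README).

            // Assumed coordinates; confirm against the project README.
            dependencies {
                implementation 'com.cerner.common.kafka:common-kafka:3.0'
            }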

            Support

            You are welcome to contribute to Common-Kafka. Read our Contribution guidelines.

            CLONE

          • HTTPS: https://github.com/cerner/common-kafka.git
          • GitHub CLI: gh repo clone cerner/common-kafka
          • SSH: git@github.com:cerner/common-kafka.git



            Consider Popular Pub Sub Libraries

          • EventBus by greenrobot
          • kafka by apache
          • celery by celery
          • rocketmq by apache
          • pulsar by apache

            Try Top Libraries by cerner

          • terra-core (JavaScript)
          • kaiju (Ruby)
          • fhir.cerner.com (Ruby)
          • smart-on-fhir-tutorial (JavaScript)
          • bunsen (Java)