kandi X-RAY | common-kafka Summary
This repository contains common Kafka code supporting Cerner's cloud-based solutions. For Maven, add the corresponding dependency.
Top functions reviewed by kandi - BETA
- Returns the next record
- Checks if any of the partitions has been paused
- Returns the next record for this partition
- Polls from the consumer group
- Returns the offset that should be used for a given topic partition
- Gets the earliest offset for the topic partition
- Gets the offset for the topic partition
- Gets a Kafka message producer instance
- Retrieves the producer concurrency from the properties
- Creates a consumer
- Adds the given records to the Kafka partition
- Initialize the configuration
- Gets a map of all partitions assigned to a consumer group
- Returns the replication factor for a given topic
- Creates ZooKeeper connection
- Updates the configuration of a specific topic
- Removes the partition if it was paused
- Gets the committed offsets for the given topic partitions
- Resets offsets to the given offsets
- Initialize the Kafka producer
- Closes all producers
- Adds partitions to a specific topic
- Delete a topic
- Assigns partitions to each consumer
- Gets the properties of a Kafka topic
- Returns the offsets for the given topics
common-kafka Key Features
common-kafka Examples and Code Snippets
Trending Discussions on common-kafka
I want to design a solution for sending different kinds of e-mails through several providers. The general overview:
I have several upstream providers (Sendgrid, Zoho, Mailgun, etc.) that will be used to send e-mails. For example:
- E-mail for Register new user
- E-mail for Remove user
- E-mail for Space Quota limit
(in general around 6 types of e-mails)
Every type of e-mail is built by a producer, serialized as a Java object, and sent to the appropriate Kafka consumer, which is integrated with the upstream provider.
The question is how to design Kafka for maximum performance and scalability.
The first solution I can think of is to have a topic for every type of e-mail message and every gateway (6 x 4 = 24 topics). In the future I expect to add more types of messages and gateways, so it could reach 600 topics. That would mean a lot of Java source code to maintain and a lot of topics to manage. Another downside is that the Kafka logs would be huge.
The second solution is to use one topic per consumer (integration gateway). But in that case, how can I send differently serialized Java objects on the same topic, depending on the type of message?
Is there a better way to design this setup so that it is easier to scale and robust for future integrations?
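One common pattern for the single-topic case asked about above is to tag each record with its message type, so a single consumer can dispatch on the tag rather than on the Java class. A minimal sketch, assuming a hypothetical EmailMessage type and a hand-rolled byte layout (in real Kafka code the tag would often travel in a record header instead):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

// Sketch of type-tagged serialization: each record carries a short type tag
// followed by the payload, so one topic can hold several message types.
public class TypeTaggedCodec {

    // Hypothetical message type, standing in for RegisterUserEmail,
    // RemoveUserEmail, SpaceQuotaEmail, etc.
    public record EmailMessage(String type, String recipient, String body) {}

    // Encode as: [tag length][tag bytes][recipient][body]
    public static byte[] encode(EmailMessage msg) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream data = new DataOutputStream(out);
        byte[] tag = msg.type().getBytes(StandardCharsets.UTF_8);
        data.writeInt(tag.length);
        data.write(tag);
        data.writeUTF(msg.recipient());
        data.writeUTF(msg.body());
        return out.toByteArray();
    }

    public static EmailMessage decode(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        byte[] tag = new byte[in.readInt()];
        in.readFully(tag);
        return new EmailMessage(new String(tag, StandardCharsets.UTF_8),
                in.readUTF(), in.readUTF());
    }

    public static void main(String[] args) throws IOException {
        EmailMessage msg = new EmailMessage("REGISTER_USER", "user@example.com", "Welcome!");
        EmailMessage roundTrip = decode(encode(msg));
        // The consumer dispatches on the tag instead of the Java class.
        System.out.println(roundTrip.type() + " -> " + roundTrip.recipient());
    }
}
```

The tag names and the wire layout here are illustrative assumptions, not part of common-kafka.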
You can see here how I send messages between producers and consumers; I get: org.apache.kafka.common.KafkaException: class SaleRequestFactory is not an instance of org.apache.kafka.common.serialization.Serializer
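That exception means the class configured as the value serializer does not implement Kafka's Serializer interface. A minimal sketch of the fix follows; since kafka-clients is not assumed on the classpath here, a local stand-in interface with the same serialize(topic, data) shape as org.apache.kafka.common.serialization.Serializer is used, and the payload type is illustrative:

```java
import java.io.*;

// Stand-in for org.apache.kafka.common.serialization.Serializer<T>; the real
// interface from kafka-clients has this same serialize(topic, data) shape.
interface Serializer<T> {
    byte[] serialize(String topic, T data);
}

// The exception in the question is thrown when the configured class does NOT
// implement Serializer. The fix is a serializer class like this one, which
// turns any Serializable payload into bytes via JDK serialization.
public class ObjectSerializer<T extends Serializable> implements Serializer<T> {
    @Override
    public byte[] serialize(String topic, T data) {
        try (ByteArrayOutputStream bytes = new ByteArrayOutputStream();
             ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(data);
            out.flush();
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new IllegalStateException("Failed to serialize " + data, e);
        }
    }

    public static void main(String[] args) {
        // Hypothetical payload standing in for the questioner's sale request.
        byte[] encoded = new ObjectSerializer<String>().serialize("emails", "hello");
        System.out.println(encoded.length > 0);
    }
}
```

With kafka-clients on the classpath, the class would implement the real interface and be registered via the producer's value.serializer configuration property.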
- Order matters because the communication will be asynchronous. Producers will wait for returned messages for status.
- It's not important to keep the data of each gateway on a different topic.
- What kind of isolation do you want? I want to isolate the messages/topics completely from each other, to prevent mistakes in the future when I add more gateways or types of messages.
Is it important to you to keep the data of each gateway on a different topic? No, I just want to isolate the data.
If you go with a single topic per gateway, do you care about the overhead it creates on the client side? Reading unnecessary messages, writing more logic, a hybrid serializer, etc.
I have no idea here. My main concern is to make the system easy to extend with new features.
Answered 2021-Jan-21 at 16:10
I think that one topic per event-type would indeed be too much for the operational overhead you mentioned.
Option 2, I think, is the right way: one topic per integration gateway, with dedicated consumers. The advantages are:
- You isolate the workload at the topic level (many messages on integration gateway A will not impact the consumers for gateway B)
- You can scale the consumers based on the topic workload
The producers will serialize the message according to the requirements of the gateway and publish it on that gateway's topic. The consumers will just read the messages and push them to the provider.
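The topic-per-gateway layout above can be sketched as a small router that maps each gateway to its topic name; the gateway enum and the topic-naming convention here are assumptions for illustration, not part of common-kafka:

```java
import java.util.Map;

// Sketch of the answer's layout: one Kafka topic per integration gateway.
// Producers pick the topic from the target gateway; dedicated consumers
// subscribe to exactly one gateway topic each, so load on one gateway's
// topic never impacts another gateway's consumers.
public class GatewayTopics {

    // Hypothetical gateways from the question.
    public enum Gateway { SENDGRID, ZOHO, MAILGUN }

    private static final Map<Gateway, String> TOPICS = Map.of(
            Gateway.SENDGRID, "emails-sendgrid",
            Gateway.ZOHO, "emails-zoho",
            Gateway.MAILGUN, "emails-mailgun");

    // A producer would use this to choose the record's destination topic.
    public static String topicFor(Gateway gateway) {
        return TOPICS.get(gateway);
    }

    public static void main(String[] args) {
        System.out.println(topicFor(Gateway.SENDGRID));
    }
}
```

Scaling then happens per topic: add consumers (and partitions) to the one gateway topic that is under load, leaving the others untouched.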
No vulnerabilities reported
You can use common-kafka like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the common-kafka component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, refer to maven.apache.org; for Gradle installation, refer to gradle.org.