kafka-streams-in-action | Source code for the Kafka Streams in Action Book | Pub Sub library
kandi X-RAY | kafka-streams-in-action Summary
Welcome to the source code for Kafka Streams in Action. Here you'll find directions for running the example code from the book. If any of the examples fail to produce output, make sure you have created the topics they need. For those running on Mac/*nix, the create-topic.sh file in the bin directory creates all required topics ahead of time. If you have any issues with the examples, you can post a question in the Manning Authors forum.

The examples in Chapter 9 are more involved and require some extra steps to run. The first example we'll go over is the Kafka Connect and Kafka Streams integration.
Top functions reviewed by kandi - BETA
- Main method for testing
- Generate the stock transactions with a key function
- Generates a list of public companies
- Main method for testing
- Start the server
- Gets the properties
- Entry point for testing
- Generate a beer purchase
- Generates a list of beer purchases
- Creates a unique hash code
- Builds the topology
- Processes a beer purchase
- Main method
- Returns null if values are equal
- Transforms a key-value pair into the key-value store
- Cogroup events
- Main method to insert data in DB
- Main entry point for testing
- Fetches data from state store
- Main entry point
- Fetches session from session store
- Starts a Kafka producer
- Creates a hash code for this person
- Calculates the correlated purchase based on another purchase
- Compares two Purchase objects
- Main method to start the producer
Community Discussions
Trending Discussions on kafka-streams-in-action
QUESTION
I came across this phrase at https://niqdev.github.io/devops/kafka/ and https://livebook.manning.com/book/kafka-streams-in-action/chapter-2/109 (Kafka Streams in Action):
The controller broker is responsible for setting up leader/follower relationships for all partitions of a topic. If a Kafka node dies or is unresponsive (to ZooKeeper heartbeats), all of its assigned partitions (both leader and follower) are reassigned by the controller broker.
I think the assignment of follower partitions to other brokers is not correct, as the partitions won't heal themselves unless the broker comes back. I know this ONLY happens for the leader replica: if the broker holding the leader replica goes down, one of the brokers holding a follower will become the leader. But I don't think "reassignment" of followers will happen automatically unless reassignment is initiated manually. Please add your inputs.
...ANSWER
Answered 2020-Jul-23 at 19:30

The terminology might be a little off indeed, but the idea still applies. Followers are not necessarily assigned to other brokers, but they do need to change the endpoint to which they send fetch requests. The follower's job is to stay in sync with the leader, and if the leader role has been assigned to a new broker because the old one failed, then the followers need to send their fetch requests to the newly elected leader. I think that is what reassignment means in the context that you shared.
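One way to observe this in practice is to inspect a topic's partition leadership with the Kafka AdminClient before and after stopping a broker: the leader node changes after failover while the replica list stays put. The following is a minimal sketch, assuming a broker at localhost:9092 and the book's transactions topic; both names are illustrative.

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class PartitionLeaderInspector {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed local broker address; adjust for your cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singletonList("transactions"))
                                         .all().get().get("transactions");
            for (TopicPartitionInfo p : desc.partitions()) {
                // leader(): the broker currently serving this partition;
                // replicas(): every broker hosting a copy; isr(): replicas in sync
                System.out.printf("partition %d leader=%s replicas=%s isr=%s%n",
                        p.partition(), p.leader(), p.replicas(), p.isr());
            }
        }
    }
}
```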
QUESTION
I am trying out the sample Kafka Streams code from Chapter 4 of the book Kafka Streams in Action. I pretty much copied the code from GitHub: https://github.com/bbejeck/kafka-streams-in-action/blob/master/src/main/java/bbejeck/chapter_4/ZMartKafkaStreamsAddStateApp.java This is an example using a StateStore. When I run the code as is, no data flows through the topology. I verified that mock data is being generated, as I can see the offsets in the input topic, transactions, go up. However, nothing reaches the output topics, and nothing is printed to the console.

However, when I comment out lines 81-88 (https://github.com/bbejeck/kafka-streams-in-action/blob/master/src/main/java/bbejeck/chapter_4/ZMartKafkaStreamsAddStateApp.java#L81-L88), basically avoiding the creation of the through() processor node, the code works. I see data being generated to the patterns topic, and output appears in the console.

I am using Kafka broker and client version 2.4. I would appreciate any help or pointers for debugging the issue.
Thank you, Ahmed.
...ANSWER
Answered 2020-Feb-23 at 08:26

It is well documented that you need to create the intermediate topic that you use via through() manually and upfront, before you start your application. Intermediate topics, like input and output topics, are not managed by Kafka Streams; it is the user's responsibility to manage them.
Cf: https://docs.confluent.io/current/streams/developer-guide/manage-topics.html
Btw: there is work in progress to add a new repartition() operator that allows you to repartition via a topic that will be managed by Kafka Streams (cf. https://cwiki.apache.org/confluence/display/KAFKA/KIP-221%3A+Enhance+DSL+with+Connecting+Topic+Creation+and+Repartition+Hint).
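For example, the intermediate topic can be created programmatically before the Streams application starts. The following is a minimal sketch using the Kafka AdminClient; the topic name customer_transactions, partition count, and replication factor are placeholders and should match what your through() call and deployment expect.

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateIntermediateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed local broker address; adjust for your cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Placeholder name, partition count, and replication factor; the
            // partition count must line up with the keying your topology expects
            NewTopic intermediate = new NewTopic("customer_transactions", 1, (short) 1);
            admin.createTopics(Collections.singletonList(intermediate)).all().get();
        }
    }
}
```

The repository's bin/create-topic.sh script serves the same purpose from the command line.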
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install kafka-streams-in-action
You can use kafka-streams-in-action like any standard Java library. Please include the jar files in your classpath. You can also use any IDE, and you can run and debug the kafka-streams-in-action component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.
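If you use Maven, the Kafka Streams library itself can be declared as a dependency rather than managing jars by hand. A declaration like the following would go in your pom.xml; the version shown is illustrative and should match your broker/client version.

```xml
<!-- Illustrative Maven coordinates for the Kafka Streams library -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>2.4.0</version>
</dependency>
```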