scala-kafka | Quick up and running using Scala for Apache Kafka | Pub Sub library
kandi X-RAY | scala-kafka Summary
Quick up and running using Scala for Apache Kafka. Use Vagrant to get up and running.

1) Install Vagrant
2) Install VirtualBox

In the main kafka folder:

1) vagrant up
2) ./gradlew test

Once this is done:
* Zookeeper will be running on 192.168.86.5
* Broker 1 on 192.168.86.10
* All the tests in src/test/scala/* should pass

If you want, you can log in to the machines using vagrant ssh, but you don't need to. You can access the brokers and Zookeeper by their IPs from your local machine without having to go into the VM:

bin/kafka-console-producer.sh --broker-list 192.168.86.10:9092 --topic <topic>
bin/kafka-console-consumer.sh --zookeeper 192.168.86.5:2181 --topic <topic> --from-beginning
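As a quick smoke test from code rather than the console scripts, something like the following should work against the Vagrant broker above. This is a minimal sketch using the standard Apache Kafka Java client from Scala (not this library's own wrappers); the topic name and key/value are placeholder assumptions.

import java.util.Properties

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object SmokeTestProducer extends App {
  // Broker started by the Vagrant setup above; the topic name is a placeholder.
  val props = new Properties()
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.86.10:9092")
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

  val producer = new KafkaProducer[String, String](props)
  try {
    // Send one record and block until the broker acknowledges it.
    producer.send(new ProducerRecord[String, String]("test-topic", "key", "hello from scala")).get()
  } finally producer.close()
}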
scala-kafka Key Features
scala-kafka Examples and Code Snippets
Community Discussions
Trending Discussions on scala-kafka
QUESTION
TLDR:
- Is committing a produced message's offset as consumed (even if it wasn't) the expected behavior for auto-commit enabled Kafka clients? (for applications that consume from and produce to the same topic)
Detailed explanation:
I have a simple Scala application with an Akka actor that consumes messages from a Kafka topic and produces the message back to the same topic if any exception occurs during message processing.
...ANSWER
Answered 2022-Jan-31 at 17:58
As far as Kafka is concerned, the message is consumed as soon as Alpakka Kafka reads it from Kafka. This is before the actor inside of Alpakka Kafka has emitted it to a downstream consumer for application-level processing.
Kafka auto-commit (enable.auto.commit = true) will thus result in the offset being committed before the message has been sent to your actor.

The Kafka docs on offset management do (as of this writing) refer to enable.auto.commit as having an at-least-once semantic, but as noted in my first paragraph, this is an at-least-once delivery semantic, not an at-least-once processing semantic. The latter is an application-level concern, and accomplishing that requires delaying the offset commit until processing has completed.
The Alpakka Kafka docs have an involved discussion about at-least-once processing: in this case, at-least-once processing will likely entail introducing manual offset committing and replacing mapAsyncUnordered with mapAsync (since mapAsyncUnordered in conjunction with manual offset committing means that your application can only guarantee that a message from Kafka gets processed at-least-zero times); a sketch of that combination follows the list below.
In Alpakka Kafka, a broad taxonomy of message processing guarantees:
- hard at-most-once: Consumer.atMostOnceSource - commit after every message before processing
- soft at-most-once: enable.auto.commit = true - "soft" because the commits are actually batched for increased throughput, so this is really "at-most-once, except when it's at-least-once"
- hard at-least-once: manual commit only after all processing has been verified to succeed
- soft at-least-once: manual commit after some processing has been completed (i.e. "at-least-once, except when it's at-most-once")
- exactly-once: not possible in general, but if your processing has the means to dedupe and thus make duplicates idempotent, you can have effectively-once
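As a rough illustration of the "hard at-least-once" shape described above, here is a minimal sketch using Alpakka Kafka's committable source with mapAsync and a committer sink. The bootstrap servers, group id, topic name, and process function are placeholder assumptions, not taken from the question.

import scala.concurrent.Future

import akka.actor.ActorSystem
import akka.kafka.scaladsl.{Committer, Consumer}
import akka.kafka.{CommitterSettings, ConsumerSettings, Subscriptions}
import akka.stream.scaladsl.Keep
import org.apache.kafka.common.serialization.StringDeserializer

object AtLeastOnceSketch extends App {
  implicit val system: ActorSystem = ActorSystem("at-least-once-sketch")
  import system.dispatcher

  // Placeholder for application-level work; the returned Future fails if processing fails,
  // which keeps the offset from being committed.
  def process(value: String): Future[Unit] = Future(println(s"processed: $value"))

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092") // assumed broker address
      .withGroupId("example-group")           // assumed consumer group

  val streamControl =
    Consumer
      .committableSource(consumerSettings, Subscriptions.topics("example-topic")) // assumed topic
      .mapAsync(parallelism = 4) { msg =>
        // Ordered mapAsync: the offset is only handed to the committer after processing succeeds.
        process(msg.record.value()).map(_ => msg.committableOffset)
      }
      .toMat(Committer.sink(CommitterSettings(system)))(Keep.both)
      .mapMaterializedValue(Consumer.DrainingControl.apply)
      .run()
}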
QUESTION
I am trying to test a streaming pipeline with testcontainers as an integration test, but I don't know how to get the bootstrapServers, at least in the latest testcontainers version, or how to create a specific topic there. How can I use 'containerDef' to extract the bootstrapServers and add a topic?
...ANSWER
Answered 2021-Oct-07 at 15:22
The only problem here is that you are explicitly casting that KafkaContainer.Def to ContainerDef. The type of container provided by withContainers, Container, is decided by the path-dependent type in the provided ContainerDef.
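As a rough sketch of what that looks like in practice, using testcontainers-scala's scalatest integration and the plain Kafka AdminClient (the spec name, topic name, and partition/replication settings are illustrative assumptions):

import java.util.{Collections, Properties}

import com.dimafeng.testcontainers.KafkaContainer
import com.dimafeng.testcontainers.scalatest.TestContainerForAll
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}
import org.scalatest.flatspec.AnyFlatSpec

class KafkaPipelineSpec extends AnyFlatSpec with TestContainerForAll {

  // Keep the concrete KafkaContainer.Def type here; widening it to ContainerDef loses
  // the path-dependent Container type that withContainers hands back.
  override val containerDef: KafkaContainer.Def = KafkaContainer.Def()

  "the pipeline" should "have its topic available" in withContainers { kafka =>
    // Because the container is typed as KafkaContainer, bootstrapServers is in scope.
    val props = new Properties()
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.bootstrapServers)

    // Create the topic the test needs with the plain Kafka AdminClient (assumed settings).
    val admin = AdminClient.create(props)
    try admin.createTopics(Collections.singletonList(new NewTopic("it-topic", 1, 1.toShort))).all().get()
    finally admin.close()
  }
}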
QUESTION
I'm trying to add Kafka to my sbt build, but when I click on "import changes" I get an error:
[error] stack trace is suppressed; run 'last update' for the full output
[error] stack trace is suppressed; run 'last ssExtractDependencies' for the full output
[error] (update) sbt.librarymanagement.ResolveException: Error downloading net.cakesolutions:scala-kafka-client_2.13:2.3.1
[error]   Not found
[error]   Not found
[error]   not found: C:\Users\macca\.ivy2\local\net.cakesolutions\scala-kafka-client_2.13\2.3.1\ivys\ivy.xml
[error]   not found: https://repo1.maven.org/maven2/net/cakesolutions/scala-kafka-client_2.13/2.3.1/scala-kafka-client_2.13-2.3.1.pom
[error] (ssExtractDependencies) sbt.librarymanagement.ResolveException: Error downloading net.cakesolutions:scala-kafka-client_2.13:2.3.1
[error]   Not found
[error]   Not found
[error]   not found: C:\Users\macca\.ivy2\local\net.cakesolutions\scala-kafka-client_2.13\2.3.1\ivys\ivy.xml
[error]   not found: https://repo1.maven.org/maven2/net/cakesolutions/scala-kafka-client_2.13/2.3.1/scala-kafka-client_2.13-2.3.1.pom
[error] Total time: 1 s, completed 19:56:34 26/04/2020
[info] shutting down sbt server
build.sbt:
...ANSWER
Answered 2020-Apr-26 at 19:46
Per the GitHub page for scala-kafka-client, you'll need to add a Bintray resolver to your build.sbt:
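A sketch of the relevant build.sbt lines, assuming the coordinates from the error above (note that Bintray has since been shut down, so this resolver may no longer be reachable today):

// build.sbt
resolvers += Resolver.bintrayRepo("cakesolutions", "maven")

libraryDependencies += "net.cakesolutions" %% "scala-kafka-client" % "2.3.1"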
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install scala-kafka
Support