Alpakka | Akka Streams Connectors
kandi X-RAY | Alpakka Summary
This project provides a home to Akka Streams connectors to various technologies, protocols or libraries.
Community Discussions
Trending Discussions on Alpakka
QUESTION
I'm attempting to use the default Kafka config settings, but I'm unsure how ${akka.kafka.consumer} is set. Reading https://doc.akka.io/docs/alpakka-kafka/current/consumer.html#config-inheritance I've defined the following.
In application.conf I define:
...

ANSWER

Answered 2022-Mar-01 at 12:47

As you've noticed, that section of the docs is "config inheritance". It shows how you can define your normal configuration in one section and then extend/replace that configuration in another section. The sample they show has an akka { kafka.consumer } section at the top (click the "source" button on the example to see this section). Then, because of the naming and inheritance features of HOCON, your own section can just inherit from that section using ${akka.kafka.consumer}. There's no need to actually use that inheritance to configure Alpakka; that is just a best practice. If you are just trying to use the default config, it's, well, already the default.

For example, if you are just trying to define the bootstrap server, you don't have to use inheritance to do that as they do in that example. You can just specify akka.kafka.consumer.kafka-clients.bootstrap.servers = "yourbootstrap:9092" directly. The inheritance is just so that you can specify the shared settings in one place.

If you are just trying to learn how to configure Alpakka, look at the section immediately above this one, on Settings. I find it most helpful to look at the reference config, which you can see by clicking on the reference.conf tab next to "Important consumer settings".
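For illustration, a minimal sketch of picking up the akka.kafka.consumer defaults and overriding only the bootstrap servers in code; the system name, group id and server address below are placeholders, not taken from the question:

```scala
import akka.actor.ActorSystem
import akka.kafka.ConsumerSettings
import org.apache.kafka.common.serialization.StringDeserializer

implicit val system: ActorSystem = ActorSystem("example")

// ConsumerSettings(system, ...) reads the akka.kafka.consumer section (reference.conf defaults
// merged with anything you override in application.conf); no HOCON inheritance is needed.
val consumerSettings: ConsumerSettings[String, String] =
  ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("yourbootstrap:9092")
    .withGroupId("example-group")
```

ConsumerSettings also has an overload that accepts an explicit Config, which is what the config-inheritance example in the docs passes its derived section to.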
QUESTION
I'm trying to understand the best way to implement the following (simplified) scenario with Akka Streams and Alpakka:
- The frontend opens a websocket connection with the backend.
- The backend should wait for an initialization message with some parameters (for example bootstrapServers, topicName and a transformationMethod that is a string parameter).
- Once this information is in place, the backend can start the Alpakka consumer to consume from topic topicName on bootstrapServers, applying some transformation to the data based on transformationMethod and pushing the results into the websocket.
- Periodically, the frontend can send messages through the websocket that change the transformationMethod field, so that the transformation algorithm applied to the messages consumed from Kafka can change dynamically, based on the value of transformationMethod provided over the websocket.

I don't understand if it's possible to achieve this with Akka Streams inside a Graph, especially the dynamic part, both for the initialization of the Alpakka consumer and for the dynamic changing of the transformationMethod parameter.

Example:
The frontend establishes the connection, and after 10 seconds it sends the following through the socket:
...

ANSWER

Answered 2022-Feb-03 at 17:08

I would probably tend to implement this by spawning an actor for each websocket, prematerializing a Source which will receive messages from the actor (probably using ActorSource.actorRefWithBackpressure), building a Sink (likely using ActorSink.actorRefWithBackpressure) which adapts incoming websocket messages into control-plane messages (initialization, including the ActorRef associated with the prematerialized source, and transformation changes) and sends them to the actor, and then tying them together using handleMessagesWithSinkSource on WebSocketUpgrade.

The actor you're spawning would, on receipt of the initialization message, start a stream which feeds messages to it from Kafka. Some backpressure can be fed back to Kafka by having that stream feed messages via an ask protocol which waits for an ack; in order to keep the stream alive, the actor would need to ack within a certain period of time regardless of what the downstream did, so there's a decision to be made around having the actor buffer messages or drop them.
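A rough sketch of that wiring, under several assumptions: the control-plane protocol (Control, Init, SetTransform), parseControl and the per-connection actor behaviour are hypothetical and not shown in full, and the incoming side uses a plain Sink.foreach rather than ActorSink.actorRefWithBackpressure to keep the example short:

```scala
import akka.actor.typed.ActorRef
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.Materializer
import akka.stream.scaladsl.{Sink, Source}

// Hypothetical control-plane protocol for the per-connection actor.
sealed trait Control
final case class Init(out: ActorRef[Message], bootstrapServers: String,
                      topicName: String, transformationMethod: String) extends Control
final case class SetTransform(transformationMethod: String) extends Control

// Hypothetical: decode a websocket text frame into a control message.
def parseControl(text: String, out: ActorRef[Message]): Control = ???

def wsRoute(spawnConnectionActor: () => ActorRef[Control],
            outboundSource: Source[Message, ActorRef[Message]])
           (implicit mat: Materializer): Route =
  path("ws") {
    extractWebSocketUpgrade { upgrade =>
      val controlActor = spawnConnectionActor()
      // Prematerialize the outbound source so the ActorRef that feeds the websocket
      // can be handed to the connection actor as part of the Init message.
      val (outRef, outSource) = outboundSource.preMaterialize()
      val inSink: Sink[Message, _] = Sink.foreach[Message] {
        case TextMessage.Strict(text) => controlActor ! parseControl(text, outRef)
        case _                        => // other frame types ignored in this sketch
      }
      complete(upgrade.handleMessagesWithSinkSource(inSink, outSource))
    }
  }
```

Inside the connection actor, Init would start the Alpakka Kafka consumer and push transformed records to the outRef it was handed, while SetTransform would swap the transformation it applies; those parts are only described in prose above.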
QUESTION
TLDR:
- Is committing a produced message's offset as consumed (even if it wasn't) expected behavior for auto-commit-enabled Kafka clients? (for applications that consume and produce the same topic)
Detailed explanation:
I have a simple Scala application that has an Akka actor which consumes messages from a Kafka topic and produces the message to the same topic if any exception occurs during message processing.
...

ANSWER

Answered 2022-Jan-31 at 17:58

As far as Kafka is concerned, the message is consumed as soon as Alpakka Kafka reads it from Kafka. This is before the actor inside of Alpakka Kafka has emitted it to a downstream consumer for application-level processing. Kafka auto-commit (enable.auto.commit = true) will thus result in the offset being committed before the message has been sent to your actor.

The Kafka docs on offset management do (as of this writing) refer to enable.auto.commit as having an at-least-once semantic, but as noted in my first paragraph, this is an at-least-once delivery semantic, not an at-least-once processing semantic. The latter is an application-level concern, and accomplishing that requires delaying the offset commit until processing has completed.

The Alpakka Kafka docs have an involved discussion about at-least-once processing: in this case, at-least-once processing will likely entail introducing manual offset committing and replacing mapAsyncUnordered with mapAsync (since mapAsyncUnordered in conjunction with manual offset committing means that your application can only guarantee that a message from Kafka gets processed at-least-zero times). A sketch of that pattern follows the taxonomy below.
In Alpakka Kafka, a broad taxonomy of message processing guarantees:
- hard at-most-once: Consumer.atMostOnceSource (commit after every message, before processing)
- soft at-most-once: enable.auto.commit = true ("soft" because the commits are actually batched for increased throughput, so this is really "at-most-once, except when it's at-least-once")
- hard at-least-once: manual commit only after all processing has been verified to succeed
- soft at-least-once: manual commit after some processing has been completed (i.e. "at-least-once, except when it's at-most-once")
- exactly-once: not possible in general, but if your processing has the means to dedupe and thus make duplicates idempotent, you can have effectively-once
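A minimal sketch of the hard at-least-once variant (manual commits, in-order mapAsync); the topic, group id and process function are placeholders, not from the question:

```scala
import scala.concurrent.Future
import akka.Done
import akka.actor.ActorSystem
import akka.kafka.{CommitterSettings, ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.{Committer, Consumer}
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer

implicit val system: ActorSystem = ActorSystem("at-least-once")
import system.dispatcher

// Placeholder for whatever your application does with a record.
def process(record: ConsumerRecord[String, String]): Future[Done] = Future.successful(Done)

val consumerSettings =
  ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("example-group")

Consumer
  .committableSource(consumerSettings, Subscriptions.topics("example-topic"))
  .mapAsync(parallelism = 4) { msg =>
    // Only after processing succeeds is the offset passed on to be committed.
    process(msg.record).map(_ => msg.committableOffset)
  }
  .runWith(Committer.sink(CommitterSettings(system)))
```

Committer.sink batches commits according to CommitterSettings, so offsets are only committed after process has completed for the corresponding records.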
QUESTION
I am trying to write a piece of code which does the following:
- Reads a large csv file from a remote source like S3.
- Processes the file record by record.
- Sends a notification to the user.
- Writes the output to a remote location.
Sample record in input csv:
...

ANSWER

Answered 2021-Nov-24 at 03:36

The output of notify is a PushResult, but the input of writeOutput is ByteString. Once you change that, it will compile. In case you need a ByteString, get it from the OutputRecord.

BTW, in the sample code that you have provided, a similar error exists in readCSV and process.
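The question's code is elided above, so the following is only a hypothetical sketch of how the stage types have to line up; the stage names mirror the question, but the shapes of OutputRecord and PushResult are assumed (and notify is renamed notifyUser to avoid clashing with Object.notify):

```scala
import akka.NotUsed
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.util.ByteString

// Assumed shapes, not taken from the question's actual definitions.
final case class OutputRecord(data: ByteString)
final case class PushResult(ok: Boolean)

val readCSV: Source[ByteString, NotUsed]                 = ???
val process: Flow[ByteString, OutputRecord, NotUsed]     = ???
val notifyUser: Flow[OutputRecord, PushResult, NotUsed]  = ???
val writeOutput: Sink[ByteString, NotUsed]               = ???

// Each stage's output type must match the next stage's input type. Since writeOutput
// wants ByteString, recover it from the OutputRecord rather than from the PushResult,
// and run the notification as a side branch.
val graph =
  readCSV
    .via(process)
    .alsoTo(Flow[OutputRecord].via(notifyUser).to(Sink.ignore))
    .map(_.data)
    .to(writeOutput)
```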
QUESTION
I'm creating an akka-stream using Alpakka and the Slick module, but I'm stuck on a type mismatch problem.
One branch is about getting the total number of invoices in their table:
...

ANSWER

Answered 2021-Oct-22 at 19:27

Would something like this work for you?
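The answer's original snippet is elided here; purely as an illustration of the Alpakka Slick API it refers to, a source that emits the total number of invoices might look like the sketch below, where the "slick-h2" config name and the invoices table are assumptions:

```scala
import akka.NotUsed
import akka.stream.alpakka.slick.scaladsl.{Slick, SlickSession}
import akka.stream.scaladsl.Source

// "slick-h2" is a placeholder for whatever Slick profile/db config block you actually use.
implicit val session: SlickSession = SlickSession.forConfig("slick-h2")
import session.profile.api._

// Emits a single element: the row count of the (assumed) invoices table.
val invoiceCount: Source[Int, NotUsed] =
  Slick.source(sql"SELECT COUNT(*) FROM invoices".as[Int])
```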
QUESTION
Is there a way to read only X number of messages in a particular time period (say, 1 minute) using the Akka Kafka Stream Consumer (https://doc.akka.io/docs/alpakka-kafka/0.15/consumer.html) via some configuration? I need to handle a situation where there is a bombardment of messages from the producer at a particular time, which can impact the consumer.
...

ANSWER

Answered 2021-Sep-27 at 19:58

The throttle stage in Akka Streams can be used to limit the rate at which elements are passed to the next stage in a stream (using the Scala API):
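The answer's snippet is elided above; a sketch of what such a throttled consumer could look like, with the topic, settings and the 100-per-minute limit chosen only as placeholders:

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.StringDeserializer

implicit val system: ActorSystem = ActorSystem("throttled-consumer")

val consumerSettings =
  ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("example-group")

Consumer
  .plainSource(consumerSettings, Subscriptions.topics("example-topic"))
  .throttle(elements = 100, per = 1.minute)   // at most 100 records flow downstream per minute
  .runWith(Sink.foreach(record => println(record.value)))
```

Because throttle backpressures the source rather than dropping records, the consumer simply polls Kafka more slowly during a burst.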
QUESTION
All,
I am developing an application which uses the Alpakka Spring Boot integration to read data from Kafka. I have most of the code ready; the only place I am stuck is how to initialize a continuously running stream, as this is going to be a backend application and won't have any API to be called from.
...

ANSWER

Answered 2021-Sep-01 at 23:58

As far as I know, Alpakka's Spring integration is basically designed around exposing Akka Streams via a Spring HTTP controller. So I'm not sure what purpose bringing Spring into this serves, since there's quite an impedance mismatch between the way an Akka application will tend to like to work and the way a Spring application will tend to like to work.

Assuming you're talking about using Alpakka Kafka, the most idiomatic thing to do would be to just start a stream fed by an Alpakka Kafka Source in your main method, and it will run until killed or until it fails. You may want to use a RestartSource around the consumer and business logic to ensure that in the event of failure the stream restarts (note that one should generally expect messages for which the offset commit hadn't happened to be processed again, as Kafka in typical cases can only guarantee at-least-once processing).
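A sketch of such a restart wrapper; the backoff values, topic name and businessLogic function are all placeholders rather than anything from the question:

```scala
import scala.concurrent.Future
import scala.concurrent.duration._
import akka.Done
import akka.actor.ActorSystem
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.RestartSettings
import akka.stream.scaladsl.{RestartSource, Sink}
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer

object Main extends App {
  implicit val system: ActorSystem = ActorSystem("backend")

  def businessLogic(record: ConsumerRecord[String, String]): Future[Done] =
    Future.successful(Done) // placeholder

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092")
      .withGroupId("backend-group")

  val restartSettings =
    RestartSettings(minBackoff = 3.seconds, maxBackoff = 30.seconds, randomFactor = 0.2)

  // The stream is (re)started from main and keeps running until the JVM is stopped.
  RestartSource
    .onFailuresWithBackoff(restartSettings) { () =>
      Consumer
        .plainSource(consumerSettings, Subscriptions.topics("example-topic"))
        .mapAsync(1)(businessLogic)
    }
    .runWith(Sink.ignore)
}
```

As noted above, records whose offsets had not yet been committed at the time of a failure may be processed again after the restart.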
QUESTION
I have 2 consumers running with the same group-id, reading from a topic with 3 partitions and parsing messages with KafkaAvroDeserializer. The consumer has these settings:
...

ANSWER

Answered 2021-Aug-26 at 12:53

When you restart the Kafka source after a failure, that results in a new consumer being created; eventually the consumer in the failed source is declared dead by Kafka, triggering a rebalance. In that rebalance, there are no external guarantees of which consumer in the group will be assigned which partition. This would explain why your other consumer in the group reads that partition.

The issue here with a poison message derailing consumption is a major reason I've developed a preference to treat keys and values from Kafka as blobs, by using the ByteArrayDeserializer and doing the deserialization myself in the stream. That gives me the ability to record (e.g. by logging; producing the message to a dead-letter topic for later inspection can also work) that there was a malformed message in the topic and move on by committing the offset. Either in Scala is particularly good for moving the malformed message directly to the committer.
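A sketch of that pattern; the decode function, event type, topic and settings are assumed for illustration and are not from the question:

```scala
import scala.concurrent.Future
import akka.Done
import akka.actor.ActorSystem
import akka.kafka.{CommitterSettings, ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.{Committer, Consumer}
import org.apache.kafka.common.serialization.ByteArrayDeserializer

implicit val system: ActorSystem = ActorSystem("byte-array-consumer")
import system.dispatcher

final case class MyEvent(payload: String)                         // assumed event type
def decode(bytes: Array[Byte]): Either[Throwable, MyEvent] = ???  // your own deserialization
def handle(event: MyEvent): Future[Done] = ???                    // application processing

val rawSettings =
  ConsumerSettings(system, new ByteArrayDeserializer, new ByteArrayDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("example-group")

Consumer
  .committableSource(rawSettings, Subscriptions.topics("example-topic"))
  .mapAsync(1) { msg =>
    decode(msg.record.value) match {
      case Right(event) =>
        handle(event).map(_ => msg.committableOffset)
      case Left(err) =>
        // Malformed record: log it (or send it to a dead-letter topic) and still commit,
        // so the poison message doesn't block the partition.
        system.log.warning("Skipping malformed record: {}", err.getMessage)
        Future.successful(msg.committableOffset)
    }
  }
  .runWith(Committer.sink(CommitterSettings(system)))
```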
QUESTION
I'm trying to read a parquet file from S3 using Akka Streams, following the official doc, but I am getting this error:
java.io.IOException: No FileSystem for scheme: s3a
This is the code that triggered that exception. I would highly appreciate any clue/example of how I should do it correctly.
ANSWER
Answered 2021-Aug-19 at 08:01

You are most likely missing the hadoop-aws lib on your classpath.
Have a look here: https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html
This SO answer also gives some more details on how to set up credentials for access to S3: How do I configure S3 access for org.apache.parquet.avro.AvroParquetReader?
Once you have AvroParquetReader correctly initialized, you can create an Akka Streams Source out of it as per the Alpakka Avro Parquet doc (https://doc.akka.io/docs/alpakka/current/avroparquet.html#source-initiation).
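A sketch along those lines, roughly following the Alpakka avroparquet docs; the bucket/key and the s3a setting are placeholders, and hadoop-aws (with its matching AWS SDK) is assumed to be on the classpath:

```scala
import akka.NotUsed
import akka.stream.alpakka.avroparquet.scaladsl.AvroParquetSource
import akka.stream.scaladsl.Source
import org.apache.avro.generic.GenericRecord
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.avro.{AvroParquetReader, AvroReadSupport}
import org.apache.parquet.hadoop.ParquetReader
import org.apache.parquet.hadoop.util.HadoopInputFile

val conf = new Configuration()
// Makes the s3a:// scheme resolvable; requires hadoop-aws on the classpath.
conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
conf.setBoolean(AvroReadSupport.AVRO_COMPATIBILITY, true)

val reader: ParquetReader[GenericRecord] =
  AvroParquetReader
    .builder[GenericRecord](HadoopInputFile.fromPath(new Path("s3a://my-bucket/data.parquet"), conf))
    .withConf(conf)
    .build()

val records: Source[GenericRecord, NotUsed] = AvroParquetSource(reader)
```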
QUESTION
How can we consume SSE in the Scala Play framework? Most of the resources that I could find were about making an SSE source. I want to reliably listen to SSE events from other services (with autoconnect). The most relevant article was https://doc.akka.io/docs/alpakka/current/sse.html. I implemented this, but it does not seem to work (code below). Also the event that I am su
...

ANSWER

Answered 2021-Aug-02 at 18:30

The issue in your code is that the events: Future would only complete when the stream (eventSource) completes. I'm not familiar with SSE, but the stream likely never completes in your case, as it's always listening for new events.
You can learn more in the Akka Streams documentation.
Depending on what you want to do with the events, you could just map on the stream, like:
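The answer's snippet is elided above; as a rough illustration of the Alpakka SSE connector it refers to, handling each event as it arrives (rather than awaiting stream completion) might look like the following, with the URI and handleEvent being placeholders:

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.Uri
import akka.http.scaladsl.model.sse.ServerSentEvent
import akka.stream.alpakka.sse.scaladsl.EventSource
import akka.stream.scaladsl.Sink

implicit val system: ActorSystem = ActorSystem("sse-client")

def handleEvent(event: ServerSentEvent): Unit = println(event.data)  // placeholder

// EventSource reconnects on its own, resending the Last-Event-ID header as needed.
EventSource(Uri("http://other-service/events"), Http().singleRequest(_), None, 1.second)
  .map { event => handleEvent(event); event }  // handle events as they arrive
  .runWith(Sink.ignore)                        // the stream itself never completes
```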
Community Discussions, Code Snippets contain sources that include Stack Exchange Network