confluent-kafka-dotnet | Confluent's Apache Kafka .NET client | Pub Sub library
kandi X-RAY | confluent-kafka-dotnet Summary
Confluent's Apache Kafka .NET client
Community Discussions
Trending Discussions on confluent-kafka-dotnet
QUESTION
I have been working with the Kafka .NET client; everything is working, but right now the timestamp is set automatically, and what I need is to be able to send my own timestamp.
...ANSWER
Answered 2021-Nov-02 at 13:43
The Message class also has a Timestamp field.
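For example, a minimal sketch (the broker address and topic name are placeholders, and it assumes the single-DateTime Timestamp constructor, which marks the message as CreateTime):

    using System;
    using Confluent.Kafka;

    var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

    using var producer = new ProducerBuilder<Null, string>(config).Build();

    var message = new Message<Null, string>
    {
        Value = "hello",
        // Supply an explicit timestamp instead of letting the client set one.
        Timestamp = new Timestamp(DateTime.UtcNow.AddMinutes(-5))
    };

    await producer.ProduceAsync("my-topic", message);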
QUESTION
I am using Confluent Kafka .NET to create a consumer for a partitioned topic.
As Confluent Kafka .NET does not support consuming in batches, I built a function that consumes messages until the batch size is reached. The idea of this function is to build batches with messages from the same partition only; that is why I stop building the batch once I consume a result from a different partition and return however many messages I was able to consume up to that point.
Goal or Objective: I want to be able to process the messages I returned in the batch, and commit the offsets for those messages only. i.e:
    Partition | Offset | Stored in Batch
        0     |   0    | Yes
        0     |   1    | Yes
        2     |   0    | No

From the table above, I would like to process both messages I got from partition 0. The message from partition 2 would be ignored and (hopefully) picked up later in another call to ConsumeBatch.
To commit, I simply call the synchronous Commit function, passing the offset of the latest message I processed as a parameter. In this case I would pass the offset of the second message of the batch shown in the table above (partition 0, offset 1).
ISSUE:
The problem is that, for some reason, when I build a batch like the one shown above, the messages I decide not to process because of validations are ignored forever, i.e. message 0 of partition 2 will never be picked up by the consumer again.
As you can see in the consumer configuration below, I have set both EnableAutoCommit and EnableAutoOffsetStore to false. I thought this would be enough for the consumer to not do anything with the offsets and to pick up ignored messages in another Consume call, but it isn't. The offset somehow advances to the latest consumed message for each partition, regardless of my configuration.
Can anybody shed some light on what I am missing here to achieve the desired behavior, if it is possible?
Simplified version of the function to build the batch:
...ANSWER
Answered 2021-Sep-11 at 00:11
The key was to use the Seek function to reset the partition's offset to a specific position, so that the ignored message could be picked up again as part of another batch. In the same function above:
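The original snippet was not captured here; a minimal sketch of the idea, assuming "consumer" is the IConsumer instance and "result" is the ConsumeResult from the unwanted partition:

    // Rewind the partition to the ignored message so a later Consume
    // call returns it again as part of another batch.
    consumer.Seek(result.TopicPartitionOffset);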
QUESTION
I am fairly new to Kafka and Polly. I am seeking advice on how to implement failure resiliency when using the Admin Client of the Confluent Kafka .NET client. I am using the Admin Client to create a topic if it does not already exist during startup of a Blazor Server web app.
Initially, I am trying to use Polly to implement a simple wait-and-retry policy, listed below. I expect this to retry a create-topic operation for a configurable number of attempts, with a short configurable delay between retries. If all retry attempts are exhausted, a fatal error is signalled and the application gracefully exits.
Wait and Retry Policy
...ANSWER
Answered 2021-Mar-25 at 11:53
After filing an issue at the Confluent Kafka GitHub repository, it looks as though the problem described in this question is due to a confirmed bug in the Confluent Kafka .NET library. A workaround is suggested by the library's author there. Essentially, until the bug is fixed, a new AdminClient instance has to be created for each retry attempt.
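A minimal sketch of that workaround (retry count, delay, broker address, and topic name are placeholders):

    using System;
    using Confluent.Kafka;
    using Confluent.Kafka.Admin;
    using Polly;

    var retryPolicy = Policy
        .Handle<KafkaException>()
        .WaitAndRetryAsync(retryCount: 3, _ => TimeSpan.FromSeconds(2));

    await retryPolicy.ExecuteAsync(async () =>
    {
        // Build a fresh AdminClient inside every attempt; reusing one
        // instance across retries is what triggers the confirmed bug.
        using var adminClient = new AdminClientBuilder(
            new AdminClientConfig { BootstrapServers = "localhost:9092" }).Build();

        await adminClient.CreateTopicsAsync(new[]
        {
            new TopicSpecification { Name = "my-topic", NumPartitions = 1, ReplicationFactor = 1 }
        });
    });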
QUESTION
So, I have a situation where I am supposed to match the pattern of the topics that I am subscribing to. The structure of my topics is three-part, "part1.part2.part3", e.g. DbServerName.Domain.DbTableName.
Now according to this post https://github.com/confluentinc/confluent-kafka-dotnet/issues/245 if I prefix my topic name with a "^" it should work.
So consumer.Subscribe("^") works fine (it gives all the topics), and consumer.Subscribe("^DbServerName.public.DbTableName") also works fine.
But if I want to match my topics against just DbTableName, irrespective of whatever DbServerName and domain might be, it doesn't work. So consumer.Subscribe("^.^.tableName") doesn't work, and consumer.Subscribe("^tablename") also doesn't work.
Any suggestion on how to achieve this functionality would be much appreciated. Cheers!
...ANSWER
Answered 2020-Aug-13 at 15:55
Based on the description, ^ is a feature toggle (the leading caret is what switches Subscribe into regex matching), so the pattern would be a single anchored regular expression rather than per-segment carets.
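Something along these lines, as an illustration (the exact pattern was not captured in the original answer):

    // Hypothetical pattern: with the leading "^" enabling regex subscription,
    // match any three-part topic whose last segment is tableName.
    consumer.Subscribe(@"^.*\..*\.tableName$");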
QUESTION
I am using confluent-kafka-dotnet (Confluent.Kafka) to produce messages for Kafka.
I would like to manually manage which partitions messages go to in order to preserve order for certain groups of messages.
How is it possible to get a list of TopicPartitions for a Kafka topic?
I have looked at this solution in Java, but I don't understand how to achieve the same functionality with Confluent.Kafka.
Alternatively, it would be acceptable to send messages with keys, because the same keys are guaranteed to be on the same partitions. But again, I could not find a way to create a new Message with any key type other than Null. So, an example of sending a message with a non-null key would be helpful.
ANSWER
Answered 2020-Jul-14 at 10:12
I have worked out that the reason I could not create messages with a non-null key was because I had specified my Producer with a Null key type.
Thank you to mjwills for asking me for a minimal reproducible example, which prompted me to work this out.
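For reference, a minimal sketch of a producer declared with a string key (broker address and topic are placeholders); messages with the same key then go to the same partition by default:

    using Confluent.Kafka;

    var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

    // The key type is fixed by the producer's generic parameters, so a
    // producer declared with <Null, string> can only send null keys.
    using var producer = new ProducerBuilder<string, string>(config).Build();

    await producer.ProduceAsync("my-topic", new Message<string, string>
    {
        Key = "order-42",   // same key -> same partition
        Value = "payload"
    });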
QUESTION
I have read some how-tos regarding Kafka and how it works with topics, producers, consumers, consumer groups, etc., but it is not clear what you have to do to guarantee no lost messages and to ensure that consumers in consumer groups only read uncommitted messages. All the examples are of the simplest kind and give no guidance.
Scenario:
Let's say I have a topic TopicA with 4 partitions, P1-P4, and 2 consumers, C1 and C2, that belong to consumer group CG1... What do I have to do when coding/setting up C1 and C2 so that no messages will ever be lost, i.e. if C1 or C2 crashes/restarts, they should resume reading unread (uncommitted) messages from P1-P4 in the order they arrived in Kafka? Do I have to configure C1 and C2 to know about P1-P4, or is this handled under the hood, for example by confluent-kafka-dotnet?
Thanks!
...ANSWER
Answered 2020-Jul-10 at 12:26
When C1 or C2 crashes (or is restarted), the surviving consumer continues reading the partitions of the dead consumer; when that consumer comes back online, the partitions are rebalanced across the consumers again.
If your two (or N, or all!) consumers crash (or are restarted), when they come back online they continue reading from the last point where they left off, without losing or repeating messages.
https://www.oreilly.com/library/view/kafka-the-definitive/9781491936153/ch04.html
Consumers in a consumer group share ownership of the partitions in the topics they subscribe to. When we add a new consumer to the group, it starts consuming messages from partitions previously consumed by another consumer. The same thing happens when a consumer shuts down or crashes; it leaves the group, and the partitions it used to consume will be consumed by one of the remaining consumers. Reassignment of partitions to consumers also happen when the topics the consumer group is consuming are modified (e.g., if an administrator adds new partitions).
https://medium.com/@jhansireddy007/how-to-parallelise-kafka-consumers-59c8b0bbc37a
Q: What if consumer-B goes down? A: Kafka will rebalance and assign all four partitions to consumer-A.
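As a rough sketch of the usual at-least-once setup (group id, topic, and broker address are placeholders): subscribe to the topic, let the consumer group handle partition assignment, and commit offsets only after processing, so a crash before the commit means redelivery rather than loss.

    using System;
    using System.Threading;
    using Confluent.Kafka;

    var config = new ConsumerConfig
    {
        BootstrapServers = "localhost:9092",
        GroupId = "CG1",
        AutoOffsetReset = AutoOffsetReset.Earliest,
        EnableAutoCommit = false   // commit manually, only after processing
    };

    using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
    consumer.Subscribe("TopicA");   // P1-P4 are assigned by the group, not by you

    while (true)   // sketch: loop until the app shuts down
    {
        var result = consumer.Consume(CancellationToken.None);
        Console.WriteLine(result.Message.Value);   // stand-in for real processing
        consumer.Commit(result);   // crash before this line => redelivery, not loss
    }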
QUESTION
I'm using Docker to run Kafka and other services from https://github.com/confluentinc/cp-all-in-one, with the Confluent NuGet packages for Kafka, Avro, and Schema Registry in my test project.
Sending JSON messages has caused me no problems so far, but I'm struggling with sending Avro-serialized messages.
I saw the https://github.com/confluentinc/confluent-kafka-dotnet/tree/master/examples/AvroSpecific example and tried to do it the same way, but eventually I get an exception like the one below:
Local: Value serialization error
    at Confluent.Kafka.Producer`2.d__52.MoveNext()
    at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
    at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
    at Kafka_producer.KafkaService.d__10.MoveNext() in C:\Users\lu95eb\source\repos\Kafka_playground\Kafka producer\KafkaService.cs:line 126

with inner exception

Object reference not set to an instance of an object.
    at Confluent.SchemaRegistry.Serdes.SpecificSerializerImpl`1..ctor(ISchemaRegistryClient schemaRegistryClient, Boolean autoRegisterSchema, Int32 initialBufferSize)
    at Confluent.SchemaRegistry.Serdes.AvroSerializer`1.d__6.MoveNext()
    at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
    at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)
    at Confluent.Kafka.Producer`2.d__52.MoveNext()
Here's my SpecificRecord class
...ANSWER
Answered 2020-Jul-10 at 10:27
If anybody is curious about the solution (I can't imagine how someone could be ;)), I wrote a 'custom' Avro serializer and deserializer, and it works like a charm.
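For anyone curious anyway, a rough sketch of the hand-rolled approach (simplified: it writes plain Avro binary without the Schema Registry framing of magic byte plus schema id that the stock AvroSerializer prepends, so it is only compatible with a matching custom deserializer):

    using System.IO;
    using Avro.IO;
    using Avro.Specific;
    using Confluent.Kafka;

    // Plugged in via ProducerBuilder<...>.SetValueSerializer(new CustomAvroSerializer<T>()).
    public class CustomAvroSerializer<T> : ISerializer<T> where T : ISpecificRecord
    {
        public byte[] Serialize(T data, SerializationContext context)
        {
            using var stream = new MemoryStream();
            // Write the record as Avro binary using its own embedded schema.
            var writer = new SpecificDatumWriter<T>(data.Schema);
            writer.Write(data, new BinaryEncoder(stream));
            return stream.ToArray();
        }
    }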
QUESTION
I just started using Kafka and hit the following rookie error:
...ANSWER
Answered 2020-Apr-15 at 17:23
I suggest checking out the working examples/ dir in that repo to see working code that you can copy into your own projects.
Assuming you want to use Avro, you would use schemagen to create your class, not manually write it. Then you must always set some ValueSerializer in Kafka clients in order to send data.
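For the Avro route, a minimal wiring sketch (the registry URL, topic, and the generated MyRecord class are placeholders for whatever your schema produces):

    using Confluent.Kafka;
    using Confluent.SchemaRegistry;
    using Confluent.SchemaRegistry.Serdes;

    var registry = new CachedSchemaRegistryClient(
        new SchemaRegistryConfig { Url = "http://localhost:8081" });

    // MyRecord stands in for a generated ISpecificRecord class.
    using var producer = new ProducerBuilder<string, MyRecord>(
            new ProducerConfig { BootstrapServers = "localhost:9092" })
        .SetValueSerializer(new AvroSerializer<MyRecord>(registry))   // the value serializer
        .Build();

    await producer.ProduceAsync("my-topic", new Message<string, MyRecord>
    {
        Key = "id-1",
        Value = new MyRecord()   // hypothetical generated type
    });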
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install confluent-kafka-dotnet
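The client is typically installed from NuGet (the package for this repo is Confluent.Kafka):

    dotnet add package Confluent.Kafka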