confluent-kafka-dotnet | Confluent's Apache Kafka .NET client | Pub Sub library

by confluentinc | C# | Version: v2.1.1 | License: Apache-2.0

kandi X-RAY | confluent-kafka-dotnet Summary

confluent-kafka-dotnet is a C# library typically used in Messaging, Pub Sub, Kafka applications. confluent-kafka-dotnet has no bugs, it has no vulnerabilities, it has a Permissive License and it has medium support. You can download it from GitHub.

Confluent's Apache Kafka .NET client

            kandi-support Support

              confluent-kafka-dotnet has a medium active ecosystem.
              It has 2553 star(s) with 794 fork(s). There are 395 watchers for this library.
              There were 1 major release(s) in the last 12 months.
There are 426 open issues and 1019 have been closed. On average, issues are closed in 96 days. There are 53 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
The latest version of confluent-kafka-dotnet is v2.1.1.

            kandi-Quality Quality

              confluent-kafka-dotnet has 0 bugs and 0 code smells.

            kandi-Security Security

              confluent-kafka-dotnet has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              confluent-kafka-dotnet code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              confluent-kafka-dotnet is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              confluent-kafka-dotnet releases are available to install and integrate.
              Installation instructions are not available. Examples and code snippets are available.
              confluent-kafka-dotnet saves you 107 person hours of effort in developing the same functionality from scratch.
              It has 303 lines of code, 0 functions and 370 files.
              It has low code complexity. Code complexity directly impacts maintainability of the code.


            confluent-kafka-dotnet Key Features

            No Key Features are available at this moment for confluent-kafka-dotnet.

            confluent-kafka-dotnet Examples and Code Snippets

            No Code Snippets are available at this moment for confluent-kafka-dotnet.

            Community Discussions

            QUESTION

            Kafka .NET Client custom timestamp
            Asked 2021-Nov-02 at 13:43

I have been working with the Kafka .NET client. Everything is working, but right now the timestamp is set automatically, and what I need is to be able to send my own timestamp.

            ...

            ANSWER

            Answered 2021-Nov-02 at 13:43
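A hedged sketch of one way to send a custom timestamp with Confluent.Kafka, by setting the message's Timestamp field explicitly (broker address, topic name, and the timestamp value below are placeholders):

```csharp
using System;
using Confluent.Kafka;

var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

using var producer = new ProducerBuilder<Null, string>(config).Build();

var message = new Message<Null, string>
{
    Value = "hello",
    // Supply an explicit CreateTime timestamp instead of letting
    // the client assign one automatically.
    Timestamp = new Timestamp(new DateTime(2021, 11, 2, 13, 43, 0, DateTimeKind.Utc))
};

await producer.ProduceAsync("my-topic", message);
```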

            QUESTION

            Kafka: Consume partition with manual batching - Messages are being skipped
            Asked 2021-Sep-11 at 00:11

            I am using Confluent Kafka .NET to create a consumer for a partitioned topic.

            As Confluent Kafka .NET does not support consuming in batches, I built a function that consumes messages until the batch size is reached. The idea of this function is to build batches with messages from the same partition only, that is why I stop building the batch once I consume a result that has a different partition and return whatever number of messages I was able to consume up to that point.

            Goal or Objective: I want to be able to process the messages I returned in the batch, and commit the offsets for those messages only. i.e:

Message Consumed From Partition | Offset | Stored in Batch
0                               | 0      | Yes
0                               | 1      | Yes
2                               | 0      | No

            From the table above I would like to process both messages I got from partition 0. Message from partition 2 would be ignored and (hopefully) PICKED UP LATER in another call to ConsumeBatch.

            To commit I simply call the synchronous Commit function passing the offset of the latest message I processed as parameter. In this case I would pass the offset of the second message of the batch shown in the table above (Partition 0 - Offset 1).

            ISSUE:

            The problem is that for some reason, when I build a batch like the one shown above, the messages I decide not to process because of validations are being ignored forever. i.e: Message 0 of partition 2 will never be picked up by the consumer again.

            As you can see in the consumer configuration below, I have set both EnableAutoCommit and EnableAutoOffsetStore as false. I think this would be enough for the consumer to not do anything with the offsets and be able to pick up ignored messages in another Consume call, but it isn't. The offset is somehow increasing up to the latest consumed message for each partition, regardless of my configuration.

Can anybody shed some light on what I am missing here to achieve the desired behavior, if possible?

            Simplified version of the function to build the batch:

            ...

            ANSWER

            Answered 2021-Sep-11 at 00:11

            The key was to use the Seek function to reset the partition's offset to a specific position so that the ignored message could be picked up again as part of another batch.

            In the same function above:
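A hedged sketch of that idea (the original function is not shown above, so the names and batching details here are illustrative, not the asker's actual code):

```csharp
using System;
using System.Collections.Generic;
using Confluent.Kafka;

// Build a batch of messages from a single partition. If a message from a
// different partition is consumed, Seek rewinds that partition so the
// message is redelivered in a later batch instead of being skipped.
static List<ConsumeResult<string, string>> ConsumeBatch(
    IConsumer<string, string> consumer, int batchSize, TimeSpan timeout)
{
    var batch = new List<ConsumeResult<string, string>>();
    Partition? batchPartition = null;

    while (batch.Count < batchSize)
    {
        var result = consumer.Consume(timeout);
        if (result == null)
            break;  // nothing available right now

        if (batchPartition == null)
            batchPartition = result.Partition;  // first message fixes the partition

        if (!result.Partition.Equals(batchPartition.Value))
        {
            // Rewind so this message is picked up again later.
            consumer.Seek(new TopicPartitionOffset(result.TopicPartition, result.Offset));
            break;
        }

        batch.Add(result);
    }

    return batch;
}
```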

            Source https://stackoverflow.com/questions/69125483

            QUESTION

            How To Implement A Wait Retry Transient Fault Handling Policy?
            Asked 2021-Mar-25 at 11:53

I am fairly new to Kafka and Polly. I am seeking advice on how to implement failure resiliency when using the Admin Client with the Confluent Kafka .NET client. I am using the Admin Client to create a topic if it does not already exist during startup of a Blazor Server Web App.

            Initially, I am trying to use polly to implement a simple wait and retry policy, listed below. I am expecting this to retry a create topic operation for a configurable number of attempts. Between each retry attempt there is a short configurable wait delay. If all retry attempts have been exhausted then a fatal error is signalled and the application gracefully exits.

            Wait and Retry Policy

            ...

            ANSWER

            Answered 2021-Mar-25 at 11:53

            After filing an issue at the Confluent Kafka GitHub repository it looks as though the problem described in this question is due to a confirmed bug in the Confluent Kafka .NET library.

A workaround is suggested by the library's author here.

            Essentially until the bug is fixed, a new AdminClient instance has to be created for each retry attempt.
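A hedged sketch of that workaround with Polly (the retry count, delay, broker address, and topic settings are illustrative):

```csharp
using System;
using Confluent.Kafka;
using Confluent.Kafka.Admin;
using Polly;

var retryPolicy = Policy
    .Handle<KafkaException>()
    .WaitAndRetryAsync(retryCount: 5, _ => TimeSpan.FromSeconds(2));

await retryPolicy.ExecuteAsync(async () =>
{
    // Workaround: build a fresh AdminClient for every attempt rather
    // than reusing a single instance across retries.
    using var admin = new AdminClientBuilder(
        new AdminClientConfig { BootstrapServers = "localhost:9092" }).Build();

    await admin.CreateTopicsAsync(new[]
    {
        new TopicSpecification { Name = "my-topic", NumPartitions = 1, ReplicationFactor = 1 }
    });
});
```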

            Source https://stackoverflow.com/questions/65509865

            QUESTION

            Pattern Matching in Confluent.net
            Asked 2020-Aug-13 at 15:55

So, I have a situation where I am supposed to match the pattern of the topics that I am subscribing to. The structure of my topics is three-part, "part1.part2.part3", e.g. DbServerName.Domain.DbTableName.
Now, according to this post https://github.com/confluentinc/confluent-kafka-dotnet/issues/245, if I prefix my topic name with a "^" it should work.

So
consumer.Subscribe("^") works fine -- gives all the topics
consumer.Subscribe("^DbServerName.public.DbTableName") also works fine.

But if I want to match my topics against just DbTableName, irrespective of whatever DbServerName and domain might be, it doesn't work:

consumer.Subscribe("^.^.tableName") doesn't work, and consumer.Subscribe("^tablename") doesn't work either.

Any suggestion on how to achieve this functionality would be much appreciated. Cheers!

            ...

            ANSWER

            Answered 2020-Aug-13 at 15:55

            Based on the description, ^ is a feature toggle, so the pattern would be

            Source https://stackoverflow.com/questions/63375999
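In other words, once the subscription string starts with "^", the remainder is treated as a regular expression, so ordinary regex syntax expresses the wildcard segments. A hedged sketch (broker address, group id, and table name are placeholders, and the exact pattern is an assumption based on the toggle interpretation):

```csharp
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "example-group"
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();

// The leading "^" switches Subscribe into regex mode; the rest is an
// ordinary regex: any server name, any domain, a fixed table name.
consumer.Subscribe(@"^[^.]+\.[^.]+\.DbTableName$");
```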

            QUESTION

            How can I get a list of all `PartitionTopic`s for a topic in Confluent.Kafka?
            Asked 2020-Jul-14 at 11:36

            I am using confluent-kafka-dotnet (Confluent.Kafka) to produce messages for Kafka.

            I would like to manually manage which partitions messages go to in order to preserve order for certain groups of messages.

            How is it possible to get a list of PartitionTopic for a Kafka topic?

            I have looked at this solution in Java. But I don't understand how to achieve the same functionality with Confluent.Kafka.

            Alternatively, it would be acceptable to send messages with keys, because the same keys are guaranteed to be on the same partitions. But again, I could not find a way to create a new Message with any key type other than Null. So, an example of sending a message with a non-null key would be helpful.

            ...

            ANSWER

            Answered 2020-Jul-14 at 10:12

            I have worked out that the reason I could not create messages with a non-null key was because I had specified my Producer with .

            Thank you to mjwills for asking me for minimal reproducible example, which prompted me to work this out.

            Source https://stackoverflow.com/questions/62892202
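For the keyed-message part of the question, the key type is fixed when the producer is built. A hedged sketch of producing with a non-null string key (broker, topic, and key values are placeholders):

```csharp
using Confluent.Kafka;

var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

// Declaring the producer with <string, string> (rather than <Null, string>)
// allows messages to carry a key.
using var producer = new ProducerBuilder<string, string>(config).Build();

// Messages with the same key hash to the same partition, preserving order.
await producer.ProduceAsync("my-topic",
    new Message<string, string> { Key = "order-42", Value = "created" });
```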

            QUESTION

            Kafka concept for high availability at-least-one
            Asked 2020-Jul-10 at 12:35

I have read some how-tos regarding Kafka and how it works with topics, producers, consumers, consumer groups, etc., but it's not clear what you have to do to achieve no lost messages and to ensure that consumers in consumer groups only read uncommitted messages. All the examples are of the simplest kind and give no guidance.

            Scenario:

Let's say I have a TopicA with 4 partitions P1-P4. I have 2 consumers C1 and C2 that belong to consumer group CG1... What do I have to do when coding/setting up C1 and C2 so that no messages will ever be lost, i.e. if C1 or C2 crashes/restarts they should start reading unread (uncommitted) messages from P1-P4 in the order they arrived in Kafka. Do I have to configure C1 and C2 to know about P1-P4, or is this done under the hood using, for example, confluent-kafka-dotnet?

            Thanks!

            ...

            ANSWER

            Answered 2020-Jul-10 at 12:26

When C1 or C2 crashes (or is restarted), the surviving consumer continues reading the partitions of the dead consumer; when that consumer comes back online, the partitions are rebalanced across the consumers again.

If all of your consumers (two, or N) crash or are restarted, when they come back online they continue reading from the last committed point where they left off, without losing or repeating messages.

            https://www.oreilly.com/library/view/kafka-the-definitive/9781491936153/ch04.html

            Consumers in a consumer group share ownership of the partitions in the topics they subscribe to. When we add a new consumer to the group, it starts consuming messages from partitions previously consumed by another consumer. The same thing happens when a consumer shuts down or crashes; it leaves the group, and the partitions it used to consume will be consumed by one of the remaining consumers. Reassignment of partitions to consumers also happen when the topics the consumer group is consuming are modified (e.g., if an administrator adds new partitions).

            https://medium.com/@jhansireddy007/how-to-parallelise-kafka-consumers-59c8b0bbc37a

            Q. What if consumer-B goes down? A. Kafka will do rebalancing and it would assign all the four partitions to consumer-A.

            Source https://stackoverflow.com/questions/62833811
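The at-least-once behavior the question asks about can be sketched as follows (hedged; broker, group, topic, and the Handle helper are placeholders): disable auto-commit, and commit each offset only after the message has been processed.

```csharp
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "CG1",
    EnableAutoCommit = false,                   // commit manually
    AutoOffsetReset = AutoOffsetReset.Earliest  // start at the beginning if no committed offset
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("TopicA");

while (true)
{
    var result = consumer.Consume();

    Handle(result.Message.Value);   // process first...

    // ...then commit: a crash before this line means redelivery
    // after rebalancing, not message loss.
    consumer.Commit(result);
}

static void Handle(string value) { /* application logic */ }
```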

            QUESTION

            C# confluent kafka problem with avro serialization
            Asked 2020-Jul-10 at 10:27

            I'm using docker to run kafka and other services from https://github.com/confluentinc/cp-all-in-one with confluent nuget packages for kafka, avro and schemaRegistry in my test project.

            If it goes to sending json messages I have no problem till now, but I'm struggling with sending avro serialized messages.

            I saw https://github.com/confluentinc/confluent-kafka-dotnet/tree/master/examples/AvroSpecific example and I tried to do it the same way but eventually I get an exception like below:

Local: Value serialization error
  at Confluent.Kafka.Producer`2.d__52.MoveNext()
  at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
  at Kafka_producer.KafkaService.d__10.MoveNext() in C:\Users\lu95eb\source\repos\Kafka_playground\Kafka producer\KafkaService.cs:line 126

            with inner exception

Object reference not set to an instance of an object.
  at Confluent.SchemaRegistry.Serdes.SpecificSerializerImpl`1..ctor(ISchemaRegistryClient schemaRegistryClient, Boolean autoRegisterSchema, Int32 initialBufferSize)
  at Confluent.SchemaRegistry.Serdes.AvroSerializer`1.d__6.MoveNext()
  at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)
  at Confluent.Kafka.Producer`2.d__52.MoveNext()

            Here's my SpecificRecord class

            ...

            ANSWER

            Answered 2020-Jul-10 at 10:27

If anybody is curious about the solution (I can't imagine how someone could be ;)), then: I wrote a 'custom' avro serializer and deserializer, and it works like a charm.

            Source https://stackoverflow.com/questions/62570757
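As a hedged sketch of what a hand-rolled serializer/deserializer pair looks like in Confluent.Kafka (JSON is used here purely for brevity; the answer's author wrote Avro logic, and all type names below are placeholders):

```csharp
using System;
using System.Text.Json;
using Confluent.Kafka;

// A minimal custom value serializer/deserializer pair.
public class CustomJsonSerializer<T> : ISerializer<T>
{
    public byte[] Serialize(T data, SerializationContext context)
        => JsonSerializer.SerializeToUtf8Bytes(data);
}

public class CustomJsonDeserializer<T> : IDeserializer<T>
{
    public T Deserialize(ReadOnlySpan<byte> data, bool isNull, SerializationContext context)
        => JsonSerializer.Deserialize<T>(data)!;
}

// Wiring it up (MyType is a placeholder):
// new ProducerBuilder<Null, MyType>(config)
//     .SetValueSerializer(new CustomJsonSerializer<MyType>())
//     .Build();
```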

            QUESTION

            Kafka Producer Error: ' Value serializer not specified and there is no default serializer defined for type ...'
            Asked 2020-Apr-15 at 17:23

            I just started using Kafka and hit the following rookie error:

            ...

            ANSWER

            Answered 2020-Apr-15 at 17:23

            I suggest checking out the working examples/ dir in that repo to see working code that you can copy into your own projects.

            Assuming you want to use Avro

You would use schemagen to create your class, not manually write it.

Then you must always add some ValueSerializer in Kafka clients in order to send data.

            Source https://stackoverflow.com/questions/61213311
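A hedged sketch of wiring an Avro value serializer into a producer, in the spirit of the repo's AvroSpecific example (broker and registry addresses are placeholders, and MyRecord stands for a generated specific-record class):

```csharp
using Confluent.Kafka;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;

var schemaRegistry = new CachedSchemaRegistryClient(
    new SchemaRegistryConfig { Url = "http://localhost:8081" });

// Register an explicit value serializer so the client knows how to
// turn MyRecord instances into bytes.
using var producer = new ProducerBuilder<string, MyRecord>(
        new ProducerConfig { BootstrapServers = "localhost:9092" })
    .SetValueSerializer(new AvroSerializer<MyRecord>(schemaRegistry))
    .Build();

await producer.ProduceAsync("my-topic",
    new Message<string, MyRecord> { Key = "key-1", Value = new MyRecord() });
```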

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install confluent-kafka-dotnet

            You can download it from GitHub.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask questions on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/confluentinc/confluent-kafka-dotnet.git

          • CLI

            gh repo clone confluentinc/confluent-kafka-dotnet

          • sshUrl

            git@github.com:confluentinc/confluent-kafka-dotnet.git

