kafka-operator | Oh no! Yet another Kafka operator for Kubernetes | Pub Sub library

 by banzaicloud | Go | Version: v0.17.0 | License: Apache-2.0

kandi X-RAY | kafka-operator Summary

kafka-operator is a Go library typically used in Messaging, Pub Sub, and Kafka applications. kafka-operator has no bugs, no reported vulnerabilities, a Permissive License, and low support. You can download it from GitHub.

Apache Kafka is an open-source distributed streaming platform. We took a different approach to what's out there - we believe for a good reason - please read on to understand more about our design motivations and some of the scenarios that drove us to create the Banzai Cloud Kafka operator. The Banzai Cloud Kafka operator is a core part of Banzai Cloud Supertubes, which helps you create production-ready Kafka clusters on Kubernetes, with scaling, rebalancing, and alert-based self-healing. While the Kafka operator itself is an open-source project, the Banzai Cloud Supertubes product extends its functionality with commercial features (for example, built-in monitoring and multiple ways of disaster recovery). Read a detailed comparison of Supertubes and the Kafka operator.

            Support

              kafka-operator has a low active ecosystem.
              It has 522 stars, 134 forks, and 26 watchers.
              It had no major release in the last 12 months.
              There are 37 open issues and 141 closed issues. On average, issues are closed in 65 days. There are 3 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of kafka-operator is v0.17.0.

            Quality

              kafka-operator has 0 bugs and 0 code smells.

            Security

              kafka-operator has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              kafka-operator code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              kafka-operator is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              kafka-operator releases are available to install and integrate.
              Installation instructions are available. Examples and code snippets are not available.


            kafka-operator Key Features

            No Key Features are available at this moment for kafka-operator.

            kafka-operator Examples and Code Snippets

            No Code Snippets are available at this moment for kafka-operator.

            Community Discussions

            QUESTION

            Kubernetes MirrorMaker2 Cannot Load Secret
            Asked 2022-Mar-19 at 20:27

            I have a weird issue that no one can pinpoint. To make sure it was not an Azure Kubernetes issue, I also spun up minikube to test locally, and I am getting the same error. The one thing in common is Strimzi 0.28 for MirrorMaker2.

            You can read the entire thread in case it might help; we are stuck at a dead end. The full discussion is on GitHub under the strimzi project.

            I moved the question here because I didn't want to spam that thread, as a gentleman by the name of scholzj had already helped and given some great advice. But nothing seems to work.

            Here is what I have done.

            Create The Secret

            Replaced actual data with placeholders for posting purposes.

            ...

            ANSWER

            Answered 2022-Mar-19 at 20:27

            The issue was using cat <<EOF to pipe the configuration in.

            I think it's because of the $ in the username; EH needs this as the actual username for the connection. Once I put the content between the cat <<EOF ... EOF markers into a file instead, it ran from the CLI without changing anything.

            It worked.
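
            For illustration, a minimal sketch of the quoting pitfall (the secret name and username value here are hypothetical, not the poster's actual data):

                kubectl create -f - <<'EOF'
                # Quoting the delimiter ('EOF') stops the shell from expanding $ inside the heredoc;
                # with an unquoted EOF, $ConnectionString would be substituted before kubectl saw it.
                apiVersion: v1
                kind: Secret
                metadata:
                  name: eventhubs-secret        # hypothetical name
                type: Opaque
                stringData:
                  username: $ConnectionString   # the literal $ must reach the connection intact
                EOF

            Writing the manifest to a file first, as the poster did, avoids the shell expansion for the same reason.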

            Source https://stackoverflow.com/questions/71505835

            QUESTION

            Prometheus install using helm - prometheus and alertmanager pods Terminating in a loop
            Asked 2022-Jan-19 at 10:46

            Hello all - I have Prometheus installed using Helm.

            ...

            ANSWER

            Answered 2022-Jan-19 at 10:46

            You can check for another instance of Prometheus running on the same cluster:
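
            The command itself is elided above; one hypothetical way to look for a second instance (the exact resource kinds depend on how Prometheus was installed):

                kubectl get pods --all-namespaces | grep -i prometheus
                # with the Prometheus Operator, instances are custom resources:
                kubectl get prometheus --all-namespaces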

            Source https://stackoverflow.com/questions/70749197

            QUESTION

            Prometheus on GKE to monitor Strimzi Kafka - how to get the Prometheus Pod IP
            Asked 2022-Jan-16 at 20:56

            I'm trying to deploy Prometheus on GKE to monitor an existing Strimzi Kafka GKE cluster, and am facing issues. (ref - https://strimzi.io/docs/operators/latest/deploying.html#proc-metrics-deploying-prometheus-operator-str)

            Here is what has been done:

            Here are the changes:

            ...

            ANSWER

            Answered 2022-Jan-05 at 01:46

            Assuming that the Prometheus pods did start, their collective hostnames would be found via service discovery, e.g. prometheus.monitoring.svc.cluster.local.

            https://cloud.google.com/kubernetes-engine/docs/concepts/service-discovery

            You might also be interested in exposing Prometheus itself.
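
            For a quick look without exposing it publicly, port-forwarding is one option (the service name and namespace below are assumptions based on common Prometheus Operator defaults):

                kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090
                # Prometheus is then reachable at http://localhost:9090 on the local machine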

            Source https://stackoverflow.com/questions/70586260

            QUESTION

            External access to Kafka using Strimzi
            Asked 2021-Oct-28 at 15:45

            I'm attempting to provide bi-directional external access to Kafka using Strimzi by following this guide: Red Hat Developer - Kafka in Kubernetes

            My YAML, taken from the Strimzi examples on GitHub, is as follows:

            ...

            ANSWER

            Answered 2021-Oct-28 at 15:45

            Strimzi just created the Kubernetes Service of type LoadBalancer. It is up to your Kubernetes cluster to provision the load balancer and set its external address, which Strimzi can then use. When the external address is listed as pending, it means the load balancer has not (yet) been created. In some public clouds that can take a few minutes, so it might just be a matter of waiting for it. But keep in mind that load balancers are not supported in all environments, and when they are not supported, you cannot really use them, so you need to double-check whether your environment supports them. Typically, the different clouds support load balancers, while some local or bare-metal environments might not (but it really depends).

            I'm also not really sure why you configured the advertised host and port:
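
            A quick way to check whether the balancer was actually provisioned (the service name follows Strimzi's usual <cluster>-kafka-external-bootstrap pattern, but treat it as an assumption):

                kubectl get svc my-cluster-kafka-external-bootstrap
                # the EXTERNAL-IP column stays <pending> until the environment provisions the load balancer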

            Source https://stackoverflow.com/questions/69752073

            QUESTION

            Strimzi kafka exporter kafka_consumergroup_members metric
            Asked 2021-Jul-20 at 14:31

            I deployed a Kafka cluster with consumer and producer clients on Kubernetes using the Strimzi operator. I used the following Strimzi deployment file: https://github.com/strimzi/strimzi-kafka-operator/blob/main/examples/metrics/kafka-metrics.yaml

            I am using Kafka Exporter to monitor consumer-related metrics (messages in/out per second per topic, lag by consumer group, offsets, etc.). However, I am interested in configuring Prometheus to scrape the kafka_exporter metric "kafka_consumergroup_members" for later display in Grafana. What additional configuration should I add to the Strimzi Prometheus configuration file (https://github.com/strimzi/strimzi-kafka-operator/blob/main/examples/metrics/prometheus-install/prometheus-rules.yaml) or any other deployment file (e.g., https://github.com/strimzi/strimzi-kafka-operator/blob/main/examples/metrics/kafka-metrics.yaml) so that "kafka_consumergroup_members" is scraped?

            ...

            ANSWER

            Answered 2021-Jul-20 at 14:31

            The Kafka Exporter is a separate tool which provides additional metrics not provided by Kafka itself. It is not configurable in terms of which metrics it offers - you can only limit which topics / consumer groups it shows metrics for.

            So all metrics supported by Kafka Exporter are published on its metrics endpoint, and when Prometheus scrapes it, it should scrape all of them. So if you have the other Kafka Exporter metrics in your Prometheus, you should already have this one as well (you actually need to have some active consumer groups for it to show up).
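
            A hypothetical spot-check that the metric is actually exposed (the service name and port follow common Strimzi defaults but are assumptions):

                kubectl -n kafka port-forward svc/my-cluster-kafka-exporter 9404:9404 &
                curl -s http://localhost:9404/metrics | grep kafka_consumergroup_members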

            Source https://stackoverflow.com/questions/68455790

            QUESTION

            JsonMappingException while loading yaml Kafka configuration via Fabric8io kubernetes-client
            Asked 2021-Mar-19 at 06:59

            I have a problem using the fabric8io kubernetes-client.

            What I want: to create a Kafka cluster with the Strimzi operator in Kubernetes. If I do all the steps from the Strimzi quickstart guide with the CLI and kubectl, everything works.

            But when I load the YAML resources from Java code with the kubernetes-client:5.2.1 library, an exception occurs:

            ...

            ANSWER

            Answered 2021-Mar-19 at 06:59

            I'm from the Fabric8 team. Kafka is a Custom Resource, which means its model is not registered in KubernetesClient; this is why you're facing the No resource type found for:kafka.strimzi.io/v1#Kafka error from KubernetesClient. KubernetesClient provides two methods for dealing with Custom Resources:

            1. Typeless API - use CustomResources as raw HashMaps
            2. Typed API - provide POJOs for CustomResource types

            I'll provide examples of using both APIs to load your Kafka yaml fragment.

            Typeless API:

            For the Typeless API you would need to provide a CustomResourceDefinitionContext, an object with details of the CustomResource group, version, kind, plural, etc. Here is how it would look: KafkaLoadTypeless.java
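
            As a rough sketch of that approach (the calls below follow the kubernetes-client 5.x raw custom-resource API as I understand it; the file name and namespace are assumptions):

                import io.fabric8.kubernetes.client.DefaultKubernetesClient;
                import io.fabric8.kubernetes.client.KubernetesClient;
                import io.fabric8.kubernetes.client.dsl.base.CustomResourceDefinitionContext;

                import java.io.FileInputStream;
                import java.util.Map;

                public class KafkaLoadTypelessSketch {
                    public static void main(String[] args) throws Exception {
                        // Describe the Kafka custom resource so the client can build the API paths
                        CustomResourceDefinitionContext context = new CustomResourceDefinitionContext.Builder()
                            .withGroup("kafka.strimzi.io")
                            .withVersion("v1beta2")      // use the version your Strimzi CRD serves
                            .withKind("Kafka")
                            .withPlural("kafkas")
                            .withScope("Namespaced")
                            .build();

                        try (KubernetesClient client = new DefaultKubernetesClient()) {
                            // Load the YAML as a raw map (no typed model needed) and create it
                            Map<String, Object> kafka = client.customResource(context)
                                .load(new FileInputStream("kafka-persistent-single.yaml")); // hypothetical file
                            client.customResource(context).createOrReplace("kafka", kafka); // "kafka" = namespace
                        }
                    }
                }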

            Source https://stackoverflow.com/questions/66690692

            QUESTION

            Does Kafka work with load balancers using reverse proxies?
            Asked 2020-Sep-20 at 09:34

            Recently I found some articles/projects using reverse proxy load balancers in front of Kafka (ie. https://github.com/banzaicloud/kafka-operator).

            Until today, I thought I understood Kafka's principles and architecture well, because I thought that Kafka uses client-side balancing and consistent hashing, so every client knows which partition is mastered on which broker and communicates with the appropriate broker directly. Kafka even has a protocol for notifying producers and consumers about repartitioning and other topology changes, so I really thought that Kafka is supposed to be used without any load balancer. This was even proven to me by receiving exceptions about producing to an invalid broker/partition when we experienced issues during operations.

            So what's the meaning of those proxies and load balancers in front of Kafka?

            ...

            ANSWER

            Answered 2020-Sep-20 at 09:34

            You are not wrong! All the things that you've mentioned in there are correct.

            If we are talking about a "classic" load balancer, you would need to meet the following two conditions in order to use it with Kafka:

            1. The load balancer operates at the TCP level (you can't use L6 or L7 load balancers with Kafka)
            2. One load balancer per Kafka broker (just as you've mentioned, clients connect directly to the broker they have business with)

            The articles that you mention are probably somehow related to the Envoy Kafka filter (including Banzai).

            I don't know the internal details, but I think I can give a summary. The main challenge with dynamic routing is that the Kafka protocol is a TCP-level protocol, so we don't have access to metadata the way we would with a higher-level protocol (e.g. HTTP) in order to properly route the communication.

            Envoy developed a Kafka filter which allows just that. When a client connects to the proxy, the proxy can decode the Kafka protocol, and it knows: "ok, so you want to connect to broker x, let me do that for you".
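
            For a sense of what that looks like, here is a hypothetical Envoy listener fragment using the Kafka broker filter (field names follow Envoy's v3 API; the cluster name is an assumption):

                filter_chains:
                  - filters:
                      # decodes Kafka protocol messages before proxying
                      - name: envoy.filters.network.kafka_broker
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.filters.network.kafka_broker.v3.KafkaBroker
                          stat_prefix: kafka
                      # the actual TCP forwarding to a single upstream broker
                      - name: envoy.filters.network.tcp_proxy
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
                          stat_prefix: kafka_tcp
                          cluster: kafka_broker_0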

            Source https://stackoverflow.com/questions/63975910

            QUESTION

            Helm Chart: How do I install dependencies first?
            Asked 2020-Feb-19 at 08:26

            I've been developing a prototype chart that depends on some custom resource definitions that are defined in one of the child charts.

            To be more specific, I'm trying to create the resources defined in the strimzi-kafka-operator chart within my Helm chart, and would like the dependency to be explicitly installed first. I followed the Helm documentation and added the following to my Chart.yaml:

            ...

            ANSWER

            Answered 2020-Feb-19 at 08:26

            Regarding CRDs: the fact that Helm by default won't manage those is a feature, not a bug. It will still install them if not present, but it won't modify or delete existing CRDs. The previous version of Helm (v2) does, but (speaking from experience) that can get you into all sorts of trouble if you're not careful. Quoting from the link you referenced:

            There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger for unintentional data loss. [...] One of the distinct disadvantages of the crd-install method used in Helm 2 was the inability to properly validate charts due to changing API availability (a CRD is actually adding another available API to your Kubernetes cluster). If a chart installed a CRD, helm no longer had a valid set of API versions to work against. [...] With the new crds method of CRD installation, we now ensure that Helm has completely valid information about the current state of the cluster.

            The idea here is that Helm should operate only at the level of release data (adding/removing deployments, storage, etc.); but with CRDs, you're actually modifying an extension to the Kubernetes API itself, potentially inadvertently breaking other releases that use the same definitions. Consider being on a team that has a "library" of CRDs shared between several charts, and wanting to uninstall one: formerly, with v2, Helm would happily let you modify or even delete those at will, with no checks on if or how they were used in other releases. Changes to CRDs are changes to your control plane / core API, and should be treated as such - you're modifying global resources.

            In short: with v3, Helm positions itself more as a "developer" tool to define, template, and manage releases; CRDs, however, are meant to be managed independently, e.g. by a "cluster administrator". At the end of the day, it's a win for all sides, since developers can set up and tear down deployments at will, with confidence that it's not going to break functionality elsewhere... and whoever's on call won't have to deal with alerts if/when you accidentally delete or modify a CRD and break things in production :)
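
            For reference, a dependency entry in Chart.yaml looks roughly like this (the chart name, version, and repository below are illustrative, not the poster's elided values):

                # Chart.yaml (Helm v3)
                dependencies:
                  - name: strimzi-kafka-operator
                    version: "0.19.0"            # illustrative version
                    repository: "https://strimzi.io/charts/"

            Running helm dependency update then vendors the chart into charts/; with Helm v3, any CRDs shipped in the dependency's crds/ directory are installed before the rest of the release's resources.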

            See also the extensive discussion here for more context behind this decision.

            Hope this helps!

            Source https://stackoverflow.com/questions/60283240

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install kafka-operator

            For detailed installation instructions, see the Banzai Cloud Documentation Page.
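
            A typical Helm-based install would look something like the following (the repository URL and chart name are assumptions; confirm them against the documentation page above):

                helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
                helm repo update
                helm install kafka-operator banzaicloud-stable/kafka-operator --namespace kafka --create-namespace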

            Support

            The documentation of the Kafka operator project is available at the Banzai Cloud Documentation Page.


            Consider Popular Pub Sub Libraries

            EventBus by greenrobot
            kafka by apache
            celery by celery
            rocketmq by apache
            pulsar by apache

            Try Top Libraries by banzaicloud

            bank-vaults by banzaicloud (Go)
            pipeline by banzaicloud (Go)
            logging-operator by banzaicloud (Go)
            koperator by banzaicloud (Go)
            istio-operator by banzaicloud (Go)