kafka-operator | Oh no! Yet another Kafka operator for Kubernetes | Pub Sub library
kandi X-RAY | kafka-operator Summary
Apache Kafka is an open-source distributed streaming platform. We took a different approach from what's already out there - we believe for good reason - so please read on to understand our design motivations and some of the scenarios that drove us to create the Banzai Cloud Kafka operator. The Banzai Cloud Kafka operator is a core part of Banzai Cloud Supertubes, which helps you create production-ready Kafka clusters on Kubernetes, with scaling, rebalancing, and alert-based self-healing. While the Kafka operator itself is an open-source project, the Banzai Cloud Supertubes product extends the functionality of the Kafka operator with commercial features (for example, built-in monitoring and multiple ways of disaster recovery). Read a detailed comparison of Supertubes and the Kafka operator.
Community Discussions
Trending Discussions on kafka-operator
QUESTION
I have a weird issue that no one can pinpoint. To make sure it was not an Azure Kubernetes issue, I also spun up minikube to test locally, and I am getting the same error. The one thing in common is Strimzi 0.28 for MirrorMaker2.
You can read the entire thread in case it might help; we are stuck at a dead end. The full discussion is on GitHub under the strimzi project.
I moved the question here as I didn't want to spam that thread; a gentleman by the name of scholzj helped and gave some great advice, but nothing seems to work.
Here is what I have done.
Create The Secret
Replaced actual data with placeholders for posting purposes.
...ANSWER
Answered 2022-Mar-19 at 20:27
The issue was using cat <<EOF. I think it's because of the $ in the username; EH needs this as the actual username for the connection. Once I made the above into a file between the cat <<EOF markers, it ran from the CLI without changing anything.
It worked.
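For illustration, a minimal sketch of that pattern: Azure Event Hubs' Kafka endpoint uses the literal string $ConnectionString as the username, so quoting the heredoc delimiter keeps the shell from expanding it. The secret name and keys below are hypothetical, not taken from the thread:

```bash
# Quoted 'EOF' prevents shell expansion, so $ConnectionString reaches
# Kubernetes literally, which is what Event Hubs expects as the username.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: eventhubs-credentials   # hypothetical name
type: Opaque
stringData:
  username: $ConnectionString
  password: Endpoint=sb://...   # your Event Hubs connection string
EOF
```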
QUESTION
Hello All - I have Prometheus installed using Helm.
...ANSWER
Answered 2022-Jan-19 at 10:46
You can check for another instance of Prometheus running on the same cluster:
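A hedged sketch of how such a check might look (the label selector assumes the common Helm chart labels, which may differ in your setup):

```bash
# List Prometheus pods and services across all namespaces to spot
# a second, conflicting instance.
kubectl get pods --all-namespaces -l app.kubernetes.io/name=prometheus
kubectl get svc --all-namespaces | grep -i prometheus
```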
QUESTION
I'm trying to deploy Prometheus on GKE to monitor an existing Strimzi Kafka GKE cluster, and am facing issues. (ref - https://strimzi.io/docs/operators/latest/deploying.html#proc-metrics-deploying-prometheus-operator-str)
Here is what has been done:
- created a namespace, monitoring, while Kafka is deployed in the namespace kafka
- modified kafka-deployment.yaml to include metricsConfig and KafkaExporter as specified in https://github.com/strimzi/strimzi-kafka-operator/tree/0.26.0/examples/metrics/kafka-metrics.yaml
Here are the changes:
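For reference, a sketch of the kind of change the question describes, modeled on the linked kafka-metrics.yaml example (field names follow that file; this is not the asker's exact diff):

```yaml
spec:
  kafka:
    # Expose broker metrics via the JMX Prometheus exporter,
    # reading the rules from a ConfigMap.
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics
          key: kafka-metrics-config.yml
  # Deploy Kafka Exporter for consumer-group and topic metrics.
  kafkaExporter:
    topicRegex: ".*"
    groupRegex: ".*"
```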
...ANSWER
Answered 2022-Jan-05 at 01:46
Assuming that the Prometheus pods did start, their collective hostnames would be found via service discovery, e.g. prometheus.monitoring.svc.cluster.local
https://cloud.google.com/kubernetes-engine/docs/concepts/service-discovery
You might also be interested in exposing Prometheus itself.
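One simple way to expose it for a quick look, assuming the service name and namespace from the answer above (verify yours with kubectl get svc -n monitoring):

```bash
# Forward the Prometheus UI to your workstation without a LoadBalancer.
kubectl -n monitoring port-forward svc/prometheus 9090:9090
# Then open http://localhost:9090 in a browser.
```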
QUESTION
I'm attempting to provide bi-directional external access to Kafka using Strimzi by following this guide: Red Hat Developer - Kafka in Kubernetes
My YAML, taken from the Strimzi examples on GitHub, is as follows:
...ANSWER
Answered 2021-Oct-28 at 15:45
Strimzi just creates the Kubernetes Service of type LoadBalancer. It is up to your Kubernetes cluster to provision the load balancer and set its external address, which Strimzi can then use. When the external address is listed as pending, it means the load balancer is not (yet) created. In some public clouds that can take a few minutes, so it might just be a matter of waiting. But keep in mind that load balancers are not supported in all environments, and when they are not supported, you cannot really use them. So you really need to double-check whether your environment supports them or not. Typically, the different clouds support load balancers, while some local or bare-metal environments might not (but it really depends).
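A quick way to watch for this, assuming the cluster runs in a namespace called kafka:

```bash
# While EXTERNAL-IP shows <pending>, the cloud has not yet
# provisioned the load balancer for the external listener.
kubectl get svc -n kafka -w
```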
I'm also not really sure why you configured the advertised host and port:
QUESTION
I deployed a Kafka cluster with consumer and producer clients on Kubernetes using the Strimzi operator. I used the following Strimzi deployment file: https://github.com/strimzi/strimzi-kafka-operator/blob/main/examples/metrics/kafka-metrics.yaml
I am using Kafka Exporter to monitor consumer-related metrics (messages in/out per second per topic, lag by consumer group, offsets, etc.). However, I am interested in configuring Prometheus to scrape the kafka_exporter metric "kafka_consumergroup_members" for later display on Grafana. What additional configuration should I add to the Strimzi Prometheus configuration file (https://github.com/strimzi/strimzi-kafka-operator/blob/main/examples/metrics/prometheus-install/prometheus-rules.yaml) or any other deployment file (e.g., https://github.com/strimzi/strimzi-kafka-operator/blob/main/examples/metrics/kafka-metrics.yaml) so that "kafka_consumergroup_members" from the kafka_exporter metrics is scraped?
ANSWER
Answered 2021-Jul-20 at 14:31
The Kafka Exporter is a separate tool which provides additional metrics not provided by Kafka itself. It is not configurable in which metrics it offers - you can only limit the topics / consumer groups for which it shows metrics.
So all metrics supported by Kafka Exporter are published on its metrics endpoint, and when Prometheus scrapes it, it should scrape all of them. So if you have the other Kafka Exporter metrics in your Prometheus, you should already have this one as well (you actually need to have some active consumer groups for it to show up).
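A hedged way to confirm the metric is already there, assuming Prometheus is port-forwarded to localhost:9090:

```bash
# Query Prometheus for the consumer-group membership metric; an empty
# result usually just means there are no active consumer groups right now.
curl -s 'http://localhost:9090/api/v1/query?query=kafka_consumergroup_members'
```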
QUESTION
I have a problem using the fabric8io kubernetes-client.
What I want: to create a Kafka cluster with the Strimzi operator in Kubernetes. If I do all the steps from the Strimzi quickstart guide with the CLI and kubectl, it's all good.
But when I load the YAML resources from Java code with the kubernetes-client:5.2.1 library, an exception occurs:
ANSWER
Answered 2021-Mar-19 at 06:59
I'm from the Fabric8 team. Kafka is a Custom Resource, which means its model is not registered in KubernetesClient; this is why you're facing the No resource type found for:kafka.strimzi.io/v1#Kafka error from KubernetesClient. KubernetesClient provides two methods for dealing with Custom Resources:
- Typeless API - use CustomResources as raw HashMaps
- Typed API - provide POJOs for CustomResource types
I'll provide examples of using both APIs to load your Kafka yaml fragment.
Typeless API:
For the Typeless API you need to provide a CustomResourceDefinitionContext, an object with the details of the CustomResource group, version, kind, plural, etc. Here is how it could look:
KafkaLoadTypeless.java
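The original snippet is not reproduced here; below is a sketch of what a Typeless API loader along these lines could look like with kubernetes-client 5.x. The CRD version (v1beta2 vs. v1beta1) depends on your Strimzi release, and the YAML file name and namespace are illustrative:

```java
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.base.CustomResourceDefinitionContext;

import java.io.FileInputStream;
import java.util.Map;

public class KafkaLoadTypeless {
    public static void main(String[] args) throws Exception {
        // Describe the Kafka custom resource so the client can build the
        // right REST paths without a registered model class.
        CustomResourceDefinitionContext context = new CustomResourceDefinitionContext.Builder()
                .withGroup("kafka.strimzi.io")
                .withVersion("v1beta2") // may be v1beta1 on older Strimzi
                .withScope("Namespaced")
                .withPlural("kafkas")
                .build();

        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // Load the YAML as a raw map and create it in the cluster.
            Map<String, Object> kafka = client.customResource(context)
                    .load(new FileInputStream("kafka.yaml"));
            client.customResource(context).create("default", kafka);
        }
    }
}
```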
QUESTION
Recently I found some articles/projects using reverse proxy load balancers in front of Kafka (e.g. https://github.com/banzaicloud/kafka-operator).
Until today, I thought I understood Kafka's principles and architecture well, because I thought that Kafka uses client-side balancing and consistent hashing, so every client knows which partition is mastered on which broker and communicates with the appropriate broker directly. Kafka even has a protocol for notifying producers and consumers about repartitioning and other topology changes, so I really thought that Kafka is supposed to be used without any load balancer. This was even proven to me by receiving exceptions about producing to an invalid broker/partition when we experienced issues during operations.
So what's the meaning of those proxies and load balancers in front of Kafka?
...ANSWER
Answered 2020-Sep-20 at 09:34
You are not wrong! All the things you've mentioned there are correct.
If we are talking about a "classic" load balancer, you would need to meet the following 2 conditions in order to use it with Kafka:
- load balancers are at the TCP level (you can't use L6 or L7 load balancers with Kafka)
- one load balancer per Kafka broker (just as you've mentioned, clients connect directly to the broker they have business with)
The articles that you are mentioning are probably somehow related to the Envoy Kafka filter (including Banzai's).
I don't know the internal details, but I think I can make a summary. The main challenge with dynamic routing is that the Kafka protocol is a TCP-level protocol, so we don't have access to the metadata we would have with a higher-level protocol (e.g. HTTP) to properly route the communication.
Envoy developed a Kafka filter which allows just that. When a client connects to the proxy, the proxy can decode the Kafka protocol and knows "ok, so you want to connect to broker x, let me do that for you".
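To make that concrete, here is a minimal, hedged sketch of an Envoy listener chaining the Kafka broker filter (which decodes the protocol, mainly for observability) with a plain TCP proxy; addresses, ports, and names are illustrative:

```yaml
static_resources:
  listeners:
  - name: kafka_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 19092 }
    filter_chains:
    - filters:
      # Decodes Kafka requests/responses flowing through the proxy.
      - name: envoy.filters.network.kafka_broker
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.kafka_broker.v3.KafkaBroker
          stat_prefix: kafka
      # Forwards the raw bytes to the actual broker.
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: kafka_tcp
          cluster: local_kafka
  clusters:
  - name: local_kafka
    connect_timeout: 2s
    type: STRICT_DNS
    load_assignment:
      cluster_name: local_kafka
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: my-kafka-broker, port_value: 9092 }
```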
QUESTION
I've been developing a prototype chart that depends on some custom resource definitions that are defined in one of the child charts.
To be more specific, I'm trying to create the resources defined in the strimzi-kafka-operator within my Helm chart and would like the dependency to be explicitly installed first. I followed the Helm documentation and added the following to my Chart.yaml:
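The exact snippet isn't preserved here, but a Helm v3 dependency of this shape is what the question describes (the chart name and version are illustrative; the repository URL is Strimzi's published chart repo):

```yaml
# Chart.yaml of the prototype chart
apiVersion: v2
name: my-prototype-chart   # illustrative name
version: 0.1.0
dependencies:
  - name: strimzi-kafka-operator
    version: "0.26.0"        # illustrative version
    repository: "https://strimzi.io/charts/"
```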
...ANSWER
Answered 2020-Feb-19 at 08:26
Regarding CRDs: the fact that Helm by default won't manage those is a feature, not a bug. It will still install them if not present, but it won't modify or delete existing CRDs. The previous version of Helm (v2) did, but (speaking from experience) that can get you into all sorts of trouble if you're not careful. Quoting from the link you referenced:
There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger for unintentional data loss. [...] One of the distinct disadvantages of the crd-install method used in Helm 2 was the inability to properly validate charts due to changing API availability (a CRD is actually adding another available API to your Kubernetes cluster). If a chart installed a CRD, helm no longer had a valid set of API versions to work against. [...] With the new crds method of CRD installation, we now ensure that Helm has completely valid information about the current state of the cluster.
The idea here is that Helm should operate only at the level of release data (adding/removing deployments, storage, etc.); but with CRDs, you're actually modifying an extension to the Kubernetes API itself, potentially inadvertently breaking other releases that use the same definitions. Consider if you're on a team that has a "library" of CRDs shared between several charts, and you want to uninstall one: with v2, Helm would happily let you modify or even delete those at will, with no checks on whether or how they were used in other releases. Changes to CRDs are changes to your control plane / core API, and should be treated as such: you're modifying global resources.
In short: with v3, Helm positions itself more as a "developer" tool to define, template, and manage releases; CRDs, however, are meant to be managed independently, e.g. by a cluster administrator. At the end of the day, it's a win for all sides, since developers can set up and tear down deployments at will, with confidence that they're not going to break functionality elsewhere... and whoever's on call won't have to deal with alerts if and when you accidentally delete or modify a CRD and break things in production :)
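In practice this means CRDs shipped in a chart's crds/ directory are installed once and then left alone; if you want Helm to leave them out entirely, there is an opt-out flag (the release and repo names below are illustrative):

```bash
# Helm 3 installs crds/ contents on first install only; it never
# upgrades or deletes them. --skip-crds skips even that first install.
helm repo add strimzi https://strimzi.io/charts/
helm install my-kafka strimzi/strimzi-kafka-operator --skip-crds
```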
See also the extensive discussion here for more context behind this decision.
Hope this helps!
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported