Kafdrop | Kafka UI and Monitoring Tool | Pub Sub library
kandi X-RAY | Kafdrop Summary
Kafdrop is a UI for monitoring Apache Kafka clusters. The tool displays information such as brokers, topics, and partitions, and even lets you view messages. It is a lightweight application that runs on Spring Boot and requires very little configuration.
Top functions reviewed by kandi - BETA
- Provides a view of messages in a topic
- Apply common configuration properties
- Gets a list of messages from a topic
- Method to get the deserializer
- Loads the properties from an ini file
- Obtains the properties for a given section
- Create a consumer
- Gets or creates a consumer topic
- Creates a cluster summary from a topic object
- Populate a summary with partition data
- Parse a ZK topic object
- Get information about a Kafka cluster
- Updates the controller
- Compares this version to another version
- Get jmx port from environment
- Returns the coordinator for the given channel
- Parse partition metadata
- Read a consumer registration from the registry
- Get all partition offsets for a topic
- Apply CORS headers
- Returns a string representation of a version identifier
- Creates a broker instance
- Displays information about the cluster
- Validates this Version object
- Create messageVO from a consumer record
- Merge two ClusterSummary objects
Community Discussions
Trending Discussions on Kafdrop
QUESTION
I am implementing username/password authentication in Kafka. With PLAINTEXT it works as expected, but when I implement SASL_PLAINTEXT I can't connect.
This is my docker-compose:
...ANSWER
Answered 2021-Jun-04 at 08:50
Remove this line from the configuration:
KAFKA_ZOOKEEPER_PROTOCOL SASL_PLAINTEXT
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
user_kafkauser="kafkapassword";
};
Notice user_kafkauser: the PlainLoginModule declares each allowed user as a user_<username>="<password>" entry.
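For context, a broker service wired for SASL_PLAINTEXT usually needs the listener protocol map, the SASL mechanism, and a pointer to the JAAS file. The sketch below is a minimal illustration assuming the confluentinc/cp-kafka image and a JAAS file mounted at /etc/kafka/kafka_jaas.conf; it is not the asker's actual compose file.

  kafka:
    image: confluentinc/cp-kafka
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SASL_PLAINTEXT:SASL_PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://kafka:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      # Point the JVM at the JAAS file that declares user_kafkauser
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf"
    volumes:
      - ./kafka_jaas.conf:/etc/kafka/kafka_jaas.conf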
QUESTION
I'm running Kafka in Docker and I have a .NET application that I want to use to consume messages. I've followed the following tutorials with no luck:
https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
Connect to Kafka running in Docker
Interact with kafka docker container from outside of docker host
On my consumer application I get the following error if I try to connect directly to the container's IP:
ANSWER
Answered 2021-May-10 at 14:13
If you are running your consuming .NET app outside of Docker, you should try to connect to localhost:9092. The kafka hostname is only valid inside Docker.
You'll find an example of how to run Kafka in Docker and consume the messages from it using a .NET Core app here.
You could compare the docker-compose.yml from that example with yours.
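As a rough illustration of what such a compose file sets up (a sketch with assumed listener names and ports, not the linked example itself), the broker usually advertises one listener for other containers and one for the host:

  kafka:
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
      # Containers resolve the "kafka" hostname; the host connects via localhost:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,EXTERNAL://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL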
The linked page also shows how the .NET Core app sets up the consumer.
QUESTION
Shell scripting to print output with a comma delimiter instead of a tab delimiter when listing Docker services
...ANSWER
Answered 2021-Mar-30 at 09:29
Don't use awk; use the built-in format options of docker ps instead.
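For example, a Go-template format string prints comma-separated columns directly (the columns chosen here are illustrative):

  docker service ls --format '{{.Name}},{{.Image}},{{.Replicas}}'
  # or, for containers rather than swarm services:
  docker ps --format '{{.Names}},{{.Image}},{{.Status}}'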
QUESTION
I am trying to run the kafka-connect-datagen connector inside a kafka-connect container, with my Kafka residing in AWS MSK, using: https://github.com/confluentinc/kafka-connect-datagen/blob/master/Dockerfile-confluenthub.
I am using Kafdrop as a web UI for the Kafka broker (MSK). I don't see kafka-connect-datagen generating any test messages. Is there any other configuration I need to do besides installing the kafka-connect-datagen connector?
Also, how can I check inside the confluentinc/kafka-connect image which topics are created and whether messages are consumed or not?
Dockerfile looks like :
...ANSWER
Answered 2021-Mar-27 at 20:57
I just added RUN confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:0.4.0 to the Dockerfile. Nothing else. No error logs.
That alone doesn't run the connector; it only makes it available to the Connect API. Notice the curl example in the docs: https://github.com/confluentinc/kafka-connect-datagen#run-connector-in-docker-compose
So expose port 8083, make the request to add the connector, and make sure to add all the relevant environment variables when running the container.
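A sketch of that request (the connector name, topic, and host are placeholders; the connector class and quickstart option come from the datagen documentation):

  curl -X POST http://localhost:8083/connectors \
    -H "Content-Type: application/json" \
    -d '{
      "name": "datagen-example",
      "config": {
        "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
        "kafka.topic": "example-topic",
        "quickstart": "users",
        "tasks.max": "1"
      }
    }'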
QUESTION
I set up Kafka and Zookeeper on my local machine and I would like to use Kafdrop as the UI. I tried running it with the docker command below:
...ANSWER
Answered 2020-Jul-23 at 06:09
Kafka is not HTTP-based. You do not need a URL scheme (such as http://) to connect to Kafka, and angle brackets should not be used.
You also cannot use localhost, as that refers to the Kafdrop container, not Kafka.
I suggest you use Docker Compose with Kafdrop and Kafka.
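A minimal sketch of such a compose file (image tags, ports, and the broker service name are assumptions; KAFKA_BROKERCONNECT is Kafdrop's documented broker setting):

  services:
    kafka:
      image: confluentinc/cp-kafka
      # ... listeners configured so other containers can reach kafka:29092
    kafdrop:
      image: obsidiandynamics/kafdrop
      ports:
        - "9000:9000"
      environment:
        KAFKA_BROKERCONNECT: kafka:29092
      depends_on:
        - kafka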
QUESTION
I set up a Kafka cluster using Bitnami Kafka and Zookeeper and I wanted to view this cluster, or at least one broker, using Kafdrop. I used Docker Compose to build all the components. I initially followed this tutorial and then added the Kafdrop config to the docker-compose.yml.
...ANSWER
Answered 2020-Aug-26 at 15:05
Your second way is the right way; the same goes for the KAFKA_CFG_ADVERTISED_LISTENERS vars, which I'm not sure are necessary. You just need to make sure to use the right ports. This should work fine:
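The answer's snippet was truncated in this excerpt; a plausible minimal configuration along those lines (Bitnami variable names, assumed service names and ports, not the answer's original file) would be:

  kafka:
    image: bitnami/kafka
    environment:
      KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CFG_LISTENERS: PLAINTEXT://:9092
      KAFKA_CFG_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      ALLOW_PLAINTEXT_LISTENER: "yes"
  kafdrop:
    image: obsidiandynamics/kafdrop
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: kafka:9092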
QUESTION
ANSWER
Answered 2020-May-06 at 19:11
It looks like a problem with the new version of Kafdrop. I got the same with 3.25.0. Rolling back to 3.23.0 helped; it displays my messages.
QUESTION
I am having issues trying to deserialize a Kafka message to a POJO using Spring Kafka. I want to use both the key and value parts of the message to construct the POJO.
The Kafka message key is a string.
The Kafka message value is JSON.
I've tried doing just the value portion of the message by following the tutorials at codenotfound.com and baeldung.com, except that I also want to have the key in the POJO, and the Java application isn't the one generating the messages.
How do I get the Java application to appropriately deserialize a Kafka message into a POJO?
For example:
...ANSWER
Answered 2020-Mar-05 at 23:34
Welcome to StackOverflow!
By default, Spring Kafka uses a String deserializer when consuming messages. In your case it looks like you want to deserialize a JSON message, so the first step is to register JsonDeserializer.class as the value deserializer. That takes care of the values but still doesn't solve the key, which you also want.
In Kafka the key and value serializers are not combined, so I don't think there's an easy way to also get the key while deserializing. The easiest options you have are probably:
- Make the key part of your JSON object so it is automatically deserialized by the JsonDeserializer.
- On the consumer side, receive a ConsumerRecord instead of the object itself; it returns the key and value already deserialized, so you can simply add the key to the deserialized object using a setter.
I hope that helps to clarify. I took a quick look at your example on GitHub and opened a PR. So, to fix it using the approach of having the key as part of the message payload (check the PR in your repo):
Add the key to the data object as a property, and for your consumer:
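A minimal sketch of the ConsumerRecord approach (DataObject, its setter, and the topic name are hypothetical placeholders; the JsonDeserializer is assumed to be registered as the value deserializer):

  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.springframework.kafka.annotation.KafkaListener;
  import org.springframework.stereotype.Component;

  @Component
  public class ExampleConsumer {

      // Assumes consumer properties along these lines:
      //   spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
      //   spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
      //   spring.kafka.consumer.properties.spring.json.value.default.type=com.example.DataObject
      @KafkaListener(topics = "example-topic")
      public void listen(ConsumerRecord<String, DataObject> record) {
          DataObject value = record.value(); // value already deserialized from JSON
          value.setKey(record.key());        // copy the String key into the POJO
      }
  }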
QUESTION
Using the kafka-console-producer, I can post messages to topic acl as the user write.
Using the kafka-console-consumer, I cannot read messages from topic acl as the user read.
However, I can log in and all the ACLs are correct; when I use a wrong password it complains, so SASL_SSL and the ACLs work. In kafka-authorizer.log, after enabling DEBUG mode:
ANSWER
Answered 2019-Oct-13 at 12:33
In case someone stumbles upon this one:
I enabled DEBUG logging in the file /etc/kafka/tools-log4j.properties (CentOS). When starting the consumer, it then showed a lot of info, including a message about the group leader not being available.
It turned out that I had started my 3-broker cluster with a wrong default setting in the server.properties file. After reinstalling the servers and changing that, it worked! Please note, I'm still in development trying to get everything up and running; apparently this setting is used when the first consumer connects.
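For reference, enabling DEBUG there is a one-line change to the root logger (a sketch of the stock file; the appender name may differ in your installation):

  # /etc/kafka/tools-log4j.properties
  log4j.rootLogger=DEBUG, stderr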
QUESTION
I know that similar questions have been asked, but none of the topics, articles and blogs that I found allowed me to resolve my issue. Let me be very straightforward and specific here:
1. What I have:
Docker Swarm cluster (1 local node), NGINX as a reverse proxy, and for the sake of this example: apache, spark, rstudio and jupyter notebook containers.
2. What I want:
I want to set up NGINX so that I can expose only one port to the host (80 - NGINX) and serve these 4 applications through NGINX over the same port (80) but on different paths. On my local dev environment I want apache to be accessible on "127.0.0.1/apache", rstudio under "127.0.0.1/rstudio", the Spark UI under "127.0.0.1/spark" and jupyter under "127.0.0.1/jupyter". All these applications use different ports internally, which is not a problem (apache - 80, spark - 8080, rstudio - 8787, jupyter - 8888). I want them to use the same port externally, on the host.
3. What I don't have:
I don't have and won't have a domain name. My stack should be able to work when all I have is a public IP to the server or multiple servers that I own. No domain name. I saw multiple examples of how to do the things I want using hostnames, but I don't want that. I want to access my stack only by IP and path, for example 123.123.123.123/jupyter.
4. What I came up with:
And now to my actual problem: I have a partially working solution. Concretely, apache and rstudio are working ok; jupyter and spark are not. By not, I mean that jupyter's redirections are causing problems. When I go to 127.0.0.1/jupyter I am redirected to the login page, but instead of redirecting to 127.0.0.1/jupyter/tree, it redirects me to 127.0.0.1/tree, which of course does not exist. The Spark UI won't render properly, because all the css and js files are under 127.0.0.1/spark/some.css, but the Spark UI tries to get them from 127.0.0.1/some.css, and the same story applies to basically all the other dashboards.
In my actual stack I have more services, like hue, kafdrop etc., and none of them work. Actually, the only things that work are apache, tomcat and rstudio. I'm surprised that rstudio works without problems with authentication, logging in and out etc.; it is completely ok. I actually have no idea why it works when everything else fails.
I tried to do the same with Traefik; same outcome. With Traefik I could not even set up rstudio, and all the dashboards suffered the same problems: static content not loading properly, or, for dashboards with a login page, bad redirects.
5. Questions:
So my questions are:
- are the things that I'm trying to accomplish even possible?
- if not, why does using different hostnames make it possible, while different paths on the same host do not?
- if it is possible, then how should I set up NGINX to make it work properly?
My minimal working example is below. First, initialize the swarm and create the network:
...ANSWER
Answered 2019-Jun-15 at 22:13
I can't help with Jupyter and Spark, but I hope that this answer will help you.
If you plan to put something behind a reverse proxy, you should verify that it can work behind a reverse proxy, as you mentioned.
The redirect goes to 127.0.0.1/tree instead of 127.0.0.1/jupyter/tree because for Jupyter the root is /, not /jupyter, so you need to find the config option that changes it, as in the example for Grafana.
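As a sketch of the path-prefix approach (the service name and port are assumed; base_url is Jupyter's documented option for this):

  # nginx.conf fragment
  location /jupyter/ {
      proxy_pass http://jupyter:8888;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
  }

  # Jupyter must be told about the prefix as well, e.g.:
  #   jupyter notebook --NotebookApp.base_url=/jupyter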
Community Discussions and Code Snippets include sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install Kafdrop
You can use Kafdrop like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the Kafdrop component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.
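In practice, Kafdrop is usually launched as a runnable Spring Boot jar pointed at a broker. A minimal sketch (the jar name and broker address are placeholders; kafka.brokerConnect is Kafdrop's documented property):

  java -jar kafdrop-<version>.jar --kafka.brokerConnect=localhost:9092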