
kafdrop | web UI for viewing Kafka topics and browsing consumer groups | Pub Sub library

by obsidiandynamics | Java | Version: 3.30.0 | License: Apache-2.0

kandi X-RAY | kafdrop Summary

kafdrop is a Java library typically used in Messaging, Pub Sub, and Kafka applications. kafdrop has no reported bugs or vulnerabilities, provides a build file, carries a permissive license, and has medium support. You can download it from GitHub.
Kafdrop – Kafka Web UI. Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. The tool displays information such as brokers, topics, partitions, and consumers, and lets you view messages. This project is a reboot of Kafdrop 2.x, dragged kicking and screaming into the world of JDK 11+, Kafka 2.x, Helm and Kubernetes. It's a lightweight application that runs on Spring Boot and is dead-easy to configure, supporting SASL and TLS-secured brokers.

Support

  • kafdrop has a medium active ecosystem.
  • It has 3533 star(s) with 543 fork(s). There are 51 watchers for this library.
  • There were 2 major release(s) in the last 6 months.
  • There are 8 open issues and 228 have been closed. On average, issues are closed in 345 days. There are 37 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of kafdrop is 3.30.0.

Quality

  • kafdrop has 0 bugs and 0 code smells.

Security

  • kafdrop has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • kafdrop code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • kafdrop is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • kafdrop releases are available to install and integrate.
  • Build file is available. You can build the component from source.
  • Installation instructions, examples and code snippets are available.
  • kafdrop saves you 1498 person hours of effort in developing the same functionality from scratch.
  • It has 3528 lines of code, 303 functions and 65 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed kafdrop and discovered the below as its top functions. This is intended to give you an instant insight into the functionality kafdrop implements, and to help you decide if it suits your requirements.

  • Parses the given value.
  • Gets the latest records for a given topic.
  • Gets information about a specific partition or message.
  • Converts a list of consumer VOs to a list of consumers.
  • Entry point for the download wrapper.
  • Applies common properties.
  • Adds CORS headers to the client.
  • Creates a bean post-processor.
  • Describes configs.
  • Loads the properties from an ini file.

kafdrop Key Features

View Kafka brokers — topic and partition assignments, and controller status

View topics — partition count, replication status, and custom configuration

Browse messages — JSON, plain text, Avro and Protobuf encoding

View consumer groups — per-partition parked offsets, combined and per-partition lag

Create new topics

View ACLs

Support for Azure Event Hubs

Running from JAR

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=<host:port,host:port>,...

Option 1: Using Protobuf Descriptor

--protobufdesc.directory=/var/protobuf_desc

Defaulting to Protobuf

--message.format=PROTOBUF
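
Both flags can be combined when launching the JAR. A sketch reusing the placeholders from the earlier commands:

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=<host:port,host:port> \
    --message.format=PROTOBUF \
    --protobufdesc.directory=/var/protobuf_desc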

Running with Docker

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e JVM_OPTS="-Xms32M -Xmx64M" \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    obsidiandynamics/kafdrop

Running in Kubernetes (using a Helm Chart)

git clone https://github.com/obsidiandynamics/kafdrop && cd kafdrop
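
After cloning, the chart can be installed from the local chart directory. A minimal sketch along the lines of the Protobuf example below (the image tag and broker list are placeholders):

helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set server.servlet.contextPath="/" \
    --set jvm.opts="-Xms32M -Xmx64M"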

Protobuf support via helm chart:

helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set server.servlet.contextPath="/" \
    --set mountProtoDesc.enabled=true \
    --set mountProtoDesc.hostPath="<path/to/desc/folder>" \
    --set jvm.opts="-Xms32M -Xmx64M"

Building

$ mvn clean package

Docker Compose

cd docker-compose/kafka-kafdrop
docker-compose up
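
Once the stack is up, Kafdrop should be reachable on its default port; assuming the bundled compose file keeps the 9000 mapping, a quick check:

curl -I http://localhost:9000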

Swagger

/v2/api-docs
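
The Swagger document is plain JSON and can be fetched with any HTTP client; for example, assuming Kafdrop runs on the default port 9000 with the root context path:

curl http://localhost:9000/v2/api-docs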

CORS Headers

cors.allowOrigins (default is *)
cors.allowMethods (default is GET,POST,PUT,DELETE)
cors.maxAge (default is 3600)
cors.allowCredentials (default is true)
cors.allowHeaders (default is Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization)
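
These are ordinary Spring properties, so they can be overridden the same way as the other flags on this page; a sketch restricting origins and methods (the origin URL is a placeholder):

--cors.allowOrigins=https://example.com
--cors.allowMethods=GET,POST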

Topic Configuration

--topic.deleteEnabled=false
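
Under Docker, the same property can be passed as an environment variable, following the naming pattern used by the other settings on this page (this mapping is assumed, not verified against every release):

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e TOPIC_DELETEENABLED=false \
    obsidiandynamics/kafdrop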

Actuator

management.endpoints.web.base-path=<path>
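
Kafdrop runs on Spring Boot, so the standard Actuator endpoints should be available under that base path; for example, assuming the defaults:

curl http://localhost:9000/actuator/health
curl http://localhost:9000/actuator/info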

Using Docker

# KAFKA_TRUSTSTORE and KAFKA_KEYSTORE are optional
docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e KAFKA_PROPERTIES="$(cat kafka.properties | base64)" \
    -e KAFKA_TRUSTSTORE="$(cat kafka.truststore.jks | base64)" \
    -e KAFKA_KEYSTORE="$(cat kafka.keystore.jks | base64)" \
    obsidiandynamics/kafdrop

Using Helm

helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set kafka.properties="$(cat kafka.properties | base64)" \
    --set kafka.truststore="$(cat kafka.truststore.jks | base64)" \
    --set kafka.keystore="$(cat kafka.keystore.jks | base64)"

Setup

htpasswd -c /usr/local/etc/nginx/.htpasswd admin
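
The generated password file can then back HTTP basic auth in an Nginx reverse proxy placed in front of Kafdrop. A minimal sketch, assuming Kafdrop listens on localhost:9000 (ports and paths are illustrative):

server {
    listen 8080;
    location / {
        auth_basic "Kafdrop";
        auth_basic_user_file /usr/local/etc/nginx/.htpasswd;
        proxy_pass http://localhost:9000;
    }
}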

PySpark doesn't find Kafka source

spark-submit \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.0 \
  script.py

Why does the Kafka MirrorMaker target topic contain half of the original messages?

clusters = source, dest

source.bootstrap.servers = sourcebroker1:9092,sourcebroker2:9092
dest.bootstrap.servers = destbroker1:9091,destbroker2:9092
topics = .*
groups = mm2topic
source->dest.enabled = true
offsets.topic.replication.factor=1
offset.storage.replication.factor=1
auto.offset.reset=latest

docker-compose.yml with 3 zookeepers and 1 broker set up with public IP - broker failed to start with no meaningful logs (but works with 1 zookeeper)

services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper-1
    container_name: zookeeper-1
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ...
      ZOOKEEPER_SERVERS: "localhost:2888:3888;zookeeper-1:12888:13888;zookeeper-2:22888:23888"
    ...

  zookeeper-2:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper-2
    container_name: zookeeper-2
    ports:
      - "12181:12181"
      - "12888:12888"
      - "13888:13888"
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 12181
      ZOOKEEPER_PEER_PORT: 12888
      ZOOKEEPER_LEADER_PORT: 13888
      ...
      ZOOKEEPER_SERVERS: "zookeeper-1:2888:3888;localhost:12888:13888;zookeeper-2:22888:23888"
    ...

  zookeeper-3:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper-3
    container_name: zookeeper-3
    ports:
      - "22181:22181"
      - "22888:22888"
      - "23888:23888"
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_PEER_PORT: 22888
      ZOOKEEPER_LEADER_PORT: 23888
      ...
      ZOOKEEPER_SERVERS: "zookeeper-1:2888:3888;zookeeper-2:12888:13888;localhost:22888:23888"
    ...

  broker-1:
    image: confluentinc/cp-kafka:6.2.1
    hostname: broker-1
    container_name: broker-1
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    # ports removed because the listener is internal to the docker network only
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker-1:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper-1:zookeeper-2:12181,zookeeper-3:22181"
      ...
-----------------------
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
version: '3.7'

x-zoo: &zoo "kafka-1:2888:3888;kafka-2:2888:3888;kafka-3:2888:3888"
x-kafkaZookeepers: &kafkaZookeepers "kafka-1:2181,kafka-2:2181,kafka-3:2181"
x-kafkaBrokers: &kafkaBrokers "kafka-1:9092"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_QUORUM_LISTEN_ON_ALL_IPS: 'true'
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: *zoo
    volumes:
      - ./kafka-data/zookeeper:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs:/var/lib/zookeeper/log
    networks:
      - mynet

  broker:
    image: confluentinc/cp-kafka:6.2.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: OUTSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: OUTSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: OUTSIDE://192.168.1.11:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: OUTSIDE
      KAFKA_ZOOKEEPER_CONNECT: *kafkaZookeepers
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: 'LogAppendTime'
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONNECTIONS_MAX_IDLE_MS: 31536000000 # 1 year
    volumes:
      - ./kafka-data/kafka:/var/lib/kafka/data
    networks:
      - mynet

  kafka-ui:
    image: provectuslabs/kafka-ui:0.2.1
    hostname: kafka-ui
    container_name: kafka-ui
    depends_on:
      - broker
    ports:
      - "8084:8080"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      KAFKA_CLUSTERS_0_NAME: local_kafka
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: *kafkaBrokers
      KAFKA_CLUSTERS_0_ZOOKEEPER: *kafkaZookeepers
    networks:
      - mynet

networks:
  mynet:
    driver: bridge
version: '3.7'

x-zoo: &zoo "kafka-1:2888:3888;kafka-2:2888:3888;kafka-3:2888:3888"
x-kafkaZookeepers: &kafkaZookeepers "kafka-1:2181,kafka-2:2181,kafka-3:2181"
x-kafkaBrokers: &kafkaBrokers "kafka-1:9092"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_QUORUM_LISTEN_ON_ALL_IPS: 'true'
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: *zoo
    volumes:
      - ./kafka-data/zookeeper:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs:/var/lib/zookeeper/log
    networks:
      - mynet

networks:
  mynet:
    driver: bridge
version: '3.7'

x-zoo: &zoo "kafka-1:2888:3888;kafka-2:2888:3888;kafka-3:2888:3888"
x-kafkaZookeepers: &kafkaZookeepers "kafka-1:2181,kafka-2:2181,kafka-3:2181"
x-kafkaBrokers: &kafkaBrokers "kafka-1:9092"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_QUORUM_LISTEN_ON_ALL_IPS: 'true'
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: *zoo
    volumes:
      - ./kafka-data/zookeeper:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs:/var/lib/zookeeper/log
    networks:
      - mynet

networks:
  mynet:
    driver: bridge
test@kafka-1:~/Kafka-Docker$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       kafka-1

192.168.1.11    kafka-1
192.168.1.12    kafka-2
192.168.1.13    kafka-3

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Connect .NET application to Kafka running in Docker

using IConsumer<Null, YourMessageType> consumer =
    new ConsumerBuilder<Null, YourMessageType>(
        new ConsumerConfig
        {
            GroupId = Guid.NewGuid().ToString(),
            BootstrapServers = "localhost:9092",
            AutoOffsetReset = AutoOffsetReset.Earliest
        })
        .SetValueDeserializer(new AvroDeserializer<YourMessageType>(schemaRegistry).AsSyncOverAsync())
        .Build();
-----------------------
     - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://kafka:9092
     - KAFKA_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://kafka:9092
     - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
     - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
BootstrapServers = "localhost:9092",

I want to separate fields with a tab delimiter and replace with a comma in a shell script

docker ps --format "{{ .Image }},{{.ID}},{{.Command}},{{.CreatedAt }},{{.Status }},{{.Ports }},{{.Names}}"
docker ps --format "table {{ .Image }},{{.ID}},{{.Command}},{{.CreatedAt }},{{.Status }},{{.Ports }},{{.Names}}"
-----------------------
docker ps -a |tr -s " "| sed 's/ /,/g'

Kafdrop - Cannot connect to Kafka Cluster setup using bitnami/kafka

version: '2'

networks:
  kafka-net:
    driver: bridge

services:
  zookeeper-server:
    image: 'bitnami/zookeeper:latest'
    networks:
      - kafka-net
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafdrop:
    image: obsidiandynamics/kafdrop
    networks:
      - kafka-net
    restart: "no"
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "PLAINTEXT://kafka-server1:9092,PLAINTEXT://kafka-server2:9092,PLAINTEXT://kafka-server3:9092"
      JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
    depends_on:
      - "kafka-server1"
      - "kafka-server2"
      - "kafka-server3"
  kafka-server1:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net    
    ports:
      - '9092:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafka-server2:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net    
    ports:
      - '9093:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafka-server3:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net    
    ports:
      - '9094:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server

How do I deserialize a Kafka message to a POJO?

@Component
public class ExampleConsumer {

    @KafkaListener(topics = "data")
    public void processData(Data data) {
        System.out.println("Data:" + data);
    }
}

spring:
  application:
    name: kafka-consumer-example
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      group-id: data_consumer
      client-id: ${spring.application.name}
      properties:
        spring.json.value.default.type: com.example.consumer.example.Data
        spring.json.type.mapping: "data:com.example.consumer.example.Data"
        spring.json.trusted.packages: "*"

    listener:
      missing-topics-fatal: false
-----------------------
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Data> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Data> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(dataConsumerFactory());
    return factory;
}
Data:Data{key='null', value1='1st value', value2='2nd value'}

Community Discussions

Trending Discussions on kafdrop
  • PySpark doesn't find Kafka source
  • Why does the Kafka MirrorMaker target topic contain half of the original messages?
  • docker-compose.yml with 3 zookeepers and 1 broker set up with public IP - broker failed to start with no meaningful logs (but works with 1 zookeeper)
  • Quarkus can't connect to Kafka from inside Docker
  • Connect to Kafka broker with SASL_PLAINTEXT in docker-compose (bitnami/kafka)
  • Connect .NET application to Kafka running in Docker
  • I want to separate fields with a tab delimiter and replace with a comma in a shell script
  • Unable to run kafka-connect-datagen inside the Kafka Connect Docker image
  • Kafdrop (localhost/127.0.0.1:9092) could not be established. Broker may not be available
  • Kafdrop - Cannot connect to Kafka Cluster setup using bitnami/kafka

QUESTION

PySpark doesn't find Kafka source

Asked 2022-Jan-24 at 23:36

I am trying to deploy a Docker container with Kafka and Spark, and would like to read from a Kafka topic in a PySpark application. Kafka is working: I can write to a topic, and Spark is working as well. But when I try to read the Kafka stream, I get the error message:

pyspark.sql.utils.AnalysisException:  Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".

My Docker Compose yaml looks like this:

---
version: '3.7'

services:
  zookeeper:
    image: bitnami/zookeeper:3
    ports:
      - 2181:2181
    environment:
      ALLOW_ANONYMOUS_LOGIN: "yes"
  kafka:
    image: bitnami/kafka:2
    ports:
      - 9092:9092
    environment:
      KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
      ALLOW_PLAINTEXT_LISTENER: "yes"
      KAFKA_LISTENERS: >-
          INTERNAL://:29092,EXTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: >-
          INTERNAL://kafka:29092,EXTERNAL://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: >-
          INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
    depends_on:
      - zookeeper

  spark:
    image: docker.io/bitnami/spark:3-debian-10
    environment:
      - SPARK_MODE=master
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    ports:
      - '8080:8080'
    volumes:
      - ./:/home/workspace/
      - ./spark/jars:/opt/bitnami/spark/.ivy2 

  spark-worker-1:
    image: docker.io/bitnami/spark:3-debian-10
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark:7077
      - SPARK_WORKER_MEMORY=1G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    volumes:
      - ./:/home/workspace/
      - ./spark/jars:/opt/bitnami/spark/.ivy2 
      
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    ports:
      - 9000:9000
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
    depends_on:
      - kafka

and the pyspark app:

from pyspark.sql import SparkSession
import os

#os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.0,org.apache.kafka:kafka-clients:2.8.1'
# the source for this data pipeline is a kafka topic, defined below
spark = SparkSession.builder.appName("fuel-level").master("local[*]").getOrCreate()
spark.sparkContext.setLogLevel('WARN')

kafkaRawStreamingDF = spark                          \
    .readStream                                          \
    .format("kafka")                                     \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe","SimLab-KUKA")                  \
    .option("startingOffsets","earliest")\
    .load()                                     

#this is necessary for Kafka Data Frame to be readable, into a single column  value
kafkaStreamingDF = kafkaRawStreamingDF.selectExpr("cast(key as string) key", "cast(value as string) value")

kafkaStreamingDF.writeStream.outputMode("append").format("console").start().awaitTermination()

I am new to Spark and Docker, so maybe it's an obvious mistake. I hope you can help me.

EDIT: When I uncomment the os.environ line, I get the following error:

Error: Missing application resource.

Usage: spark-submit [options] <app jar | python file | R file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn,
                              k8s://https://host:port, or local (Default: local[*]).
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                              resolving the dependencies provided in --packages to avoid
                              dependency conflicts.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor. File paths of these files
                              in executors can be accessed via SparkFiles.get(fileName).
  --archives ARCHIVES         Comma-separated list of archives to be extracted into the
                              working directory of each executor.

  --conf, -c PROP=VALUE       Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --proxy-user NAME           User to impersonate when submitting the application.
                              This argument does not work with --principal / --keytab.

  --help, -h                  Show this help message and exit.
  --verbose, -v               Print additional debug output.
  --version,                  Print the version of current Spark.

 Cluster deploy mode only:
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).

 Spark standalone or Mesos with cluster deploy mode only:
  --supervise                 If given, restarts the driver on failure.

 Spark standalone, Mesos or K8s with cluster deploy mode only:
  --kill SUBMISSION_ID        If given, kills the driver specified.
  --status SUBMISSION_ID      If given, requests the status of the driver specified.

 Spark standalone, Mesos and Kubernetes only:
  --total-executor-cores NUM  Total cores for all executors.

 Spark standalone, YARN and Kubernetes only:
  --executor-cores NUM        Number of cores used by each executor. (Default: 1 in
                              YARN and K8S modes, or all available cores on the worker
                              in standalone mode).

 Spark on YARN and Kubernetes only:
  --num-executors NUM         Number of executors to launch (Default: 2).
                              If dynamic allocation is enabled, the initial number of
                              executors will be at least NUM.
  --principal PRINCIPAL       Principal to be used to login to KDC.
  --keytab KEYTAB             The full path to the file that contains the keytab for the
                              principal specified above.

 Spark on YARN only:
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
      
Traceback (most recent call last):
  File "/Users/janikbischoff/Documents/Uni/PuL/BA/Code/Tests/spark-test.py", line 6, in <module>
    spark = SparkSession.builder.appName("fuel-level").master("local[*]").getOrCreate()
  File "/Users/janikbischoff/Library/Python/3.8/lib/python/site-packages/pyspark/sql/session.py", line 228, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "/Users/janikbischoff/Library/Python/3.8/lib/python/site-packages/pyspark/context.py", line 392, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "/Users/janikbischoff/Library/Python/3.8/lib/python/site-packages/pyspark/context.py", line 144, in __init__
    SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
  File "/Users/janikbischoff/Library/Python/3.8/lib/python/site-packages/pyspark/context.py", line 339, in _ensure_initialized
    SparkContext._gateway = gateway or launch_gateway(conf)
  File "/Users/janikbischoff/Library/Python/3.8/lib/python/site-packages/pyspark/java_gateway.py", line 108, in launch_gateway
    raise RuntimeError("Java gateway process exited before sending its port number")
RuntimeError: Java gateway process exited before sending its port number

ANSWER

Answered 2022-Jan-24 at 23:36

Missing application resource

This implies you're running the code using python rather than spark-submit.

I was able to reproduce the error by copying your environment and using findspark; it seems PYSPARK_SUBMIT_ARGS isn't working in that container, even though the variable does get loaded...

The workaround would be to pass the argument at execution time.

spark-submit \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.0 \
  script.py

Source https://stackoverflow.com/questions/70823382

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install kafdrop

You can run the Kafdrop JAR directly, via Docker, or in Kubernetes.
Set the admin password when prompted (see the htpasswd example above).

Support

To install with Protobuf support, the deployment provides a convenience option, mountProtoDesc, which mounts the descriptor-files folder and passes the required CMD arguments. See the Helm example under "Protobuf support via helm chart" above.
