kafkacat | Generic command line non-JVM Apache Kafka producer | Pub Sub library
kandi X-RAY | kafkacat Summary
kafkacat is a generic non-JVM producer and consumer for Apache Kafka >=0.8; think of it as a netcat for Kafka.

In producer mode kafkacat reads messages from stdin, delimited with a configurable delimiter (-D, defaults to newline), and produces them to the provided Kafka cluster (-b), topic (-t) and partition (-p).

In consumer mode kafkacat reads messages from a topic and partition and prints them to stdout using the configured message delimiter. There is also support for the Kafka >=0.9 high-level balanced consumer: use the -G switch and provide a list of topics to join the group.

kafkacat also features a metadata list (-L) mode to display the current state of the Kafka cluster and its topics and partitions. It supports Avro message deserialization using the Confluent Schema Registry, as well as generic primitive deserializers (see examples below).

kafkacat is fast and lightweight; statically linked it is no more than 150Kb.
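A few representative invocations of those modes (a sketch; the broker address and topic names are placeholders):

    # Produce: read lines from stdin and send each as a message to topic "syslog"
    tail -f /var/log/syslog | kafkacat -P -b mybroker -t syslog

    # Consume: print messages from topic "syslog" to stdout
    kafkacat -C -b mybroker -t syslog

    # Balanced consumer (Kafka >=0.9): join group "mygroup" on two topics
    kafkacat -b mybroker -G mygroup topic1 topic2

    # Metadata mode: list brokers, topics and partitions
    kafkacat -L -b mybroker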
Community Discussions
Trending Discussions on kafkacat
QUESTION
Previously I've reported it into the kafkacat tracker, but the issue has been closed as related to cyrus-sasl/krb5.
ANSWER
Answered 2021-May-13 at 11:50

Very strange issue, and honestly I can't say why, but adding the following into krb5.conf:
QUESTION
I have an app that periodically produces an array of messages in raw JSON. I was able to convert that to Avro using avro-tools. I did that because I needed the messages to include a schema, due to the limitations of the Kafka Connect JDBC sink. I can open this file in Notepad++ and see that it includes the schema and a few lines of data.
Now I would like to send this to my central Kafka broker and then use the Kafka Connect JDBC sink to put the data in a database. I am having a hard time understanding how I should be sending these Avro files to my Kafka broker. Do I need a schema registry for my purposes? I believe kafkacat does not support Avro, so I suppose I will have to stick with the kafka-console-producer.sh that comes with the Kafka installation (please correct me if I am wrong).
Question is: can someone please share the steps to produce my Avro file to a Kafka broker, without getting Confluent involved?
Thanks,
ANSWER
Answered 2021-Apr-01 at 13:14

To use the Kafka Connect JDBC Sink, your data needs an explicit schema. The converter that you specify in your connector configuration determines where the schema is held. It can either be embedded within the JSON message (org.apache.kafka.connect.json.JsonConverter with schemas.enable=true) or held in the Schema Registry (one of io.confluent.connect.avro.AvroConverter, io.confluent.connect.protobuf.ProtobufConverter, or io.confluent.connect.json.JsonSchemaConverter).
To learn more about this, see https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained

To write an Avro message to Kafka you should serialise it as Avro and store the schema in the Schema Registry. There is a Go client library you can use, with examples.

"without getting Confluent involved"

It's not entirely clear what you mean by this. The Kafka Connect JDBC Sink is written by Confluent. The best way to manage schemas is with the Schema Registry. If you don't want to use the Schema Registry, you can embed the schema in your JSON message, but it's a suboptimal way of doing things.
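As an illustration of the embedded-schema option (a minimal sketch; the broker address and topic name are placeholders, and older Kafka versions use --broker-list instead of --bootstrap-server):

    # A JSON message carrying its own schema, as expected by JsonConverter
    # with schemas.enable=true
    echo '{"schema":{"type":"struct","fields":[{"field":"id","type":"int32"}],"optional":false,"name":"myrecord"},"payload":{"id":42}}' | \
      kafka-console-producer.sh --bootstrap-server localhost:9092 --topic mytopic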
QUESTION
I have been trying to use kafkacat to find a message in a topic and publish it back into the topic. We use protobuf, so the message values should be in bytes (keys can be different, such as strings or bytes). However, I am unable to publish the message in a way that lets it be deserialized properly.
How can I do this with kafkacat? I am also open to using other recommended tools for doing this.
Example attempt:
ANSWER
Answered 2021-Feb-23 at 19:57

"the producer is treating each line as a new message"

That's correct.
If you have a binary file, I suggest writing code for this, as kafkacat assumes UTF8-encoded strings as input.
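One hedged workaround, rather than piping binary data through stdin: recent kafkacat versions can send whole files, each file becoming a single message (the broker, topic and file name here are placeholders):

    # Send message.bin as one message, avoiding any delimiter handling
    kafkacat -P -b mybroker -t protobuf-topic message.bin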
QUESTION
I start Kafka with this docker-compose.yml on my Mac:
ANSWER
Answered 2020-Nov-12 at 14:55

I played around with your particular example, and couldn't get it to work.
For what it's worth, this Docker Compose is how I run Kafka on Docker locally, and it's accessible both from the host machine, and other containers.
You might find this blog useful if you want to continue with your existing approach and debug it further.
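A quick sanity check you can run from the host (assuming the broker maps port 9092 to localhost): if the metadata listing succeeds but the advertised broker addresses in its output are unreachable, the listener configuration is the likely culprit.

    # List cluster metadata from the host machine
    kafkacat -L -b localhost:9092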
QUESTION
I am running a Kafka instance in a Docker container with the following docker-compose.yml file.
ANSWER
Answered 2020-Aug-09 at 21:40

Broker configuration seems to be fine, since you get back the correct metadata.
I think the problem is in your code. kafkaTemplate.send() is an asynchronous operation, and most likely your process ends before the producer manages to actually send the message. Try adding a .get() to that send method to force it to be synchronous.
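Independently of the code change, you can confirm whether any message actually reached the broker by consuming the topic with kafkacat (broker and topic are placeholders):

    # Read the topic from the beginning and exit at the end of the partition(s)
    kafkacat -C -b localhost:9092 -t mytopic -o beginning -e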
QUESTION
I have a topic "topic-one" and I want to know whether it has "log.cleanup.policy = compact" configured or not.
Is it possible, with kafkacat, to extract the properties and/or configuration of a specific topic?
ANSWER
Answered 2020-Aug-18 at 10:50

kafkacat does not yet support the Topic Admin API (which allows you to alter and view cluster configs). I suggest you use kafka-configs.sh from the Apache Kafka distribution in the meantime.
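A sketch of the kafka-configs.sh approach (the broker address is a placeholder; very old Kafka versions use --zookeeper instead of --bootstrap-server):

    # Show per-topic configuration overrides for "topic-one",
    # including cleanup.policy if it has been set
    kafka-configs.sh --bootstrap-server localhost:9092 \
      --entity-type topics --entity-name topic-one --describe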
QUESTION
I have Kafka set up via KUDO:
ANSWER
Answered 2020-Jun-12 at 16:33

Found out the answer: use the kubefwd CLI.
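A minimal sketch of that approach (the namespace name is a placeholder):

    # Forward all services in the "kafka" namespace to the local machine,
    # so the hostnames the brokers advertise resolve locally
    sudo kubefwd svc -n kafka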
QUESTION
I have set up NiFi (1.11.4) and Kafka (2.5) via Docker (docker-compose file below; actual NiFi flow definition: https://github.com/geoHeil/streaming-reference). When trying to follow basic getting-started tutorials (such as https://towardsdatascience.com/big-data-managing-the-flow-of-data-with-apache-nifi-and-apache-kafka-af674cd8f926) which combine processors such as:
- generate flowfile (CSV)
- update attribute
- PublishKafka_2_0
I run into a TimeoutException:
ANSWER
Answered 2020-Jun-10 at 12:03

You're using the wrong port to connect to the broker. By connecting to 9092 you connect to the listener that advertises localhost:9092 to the client for subsequent connections. That's why it works when you use kafkacat from your local machine (because 9092 is exposed to your local machine).
If you use broker:29092 then the broker will give the client the correct address for subsequent connections (i.e. broker:29092).
To understand more about advertised listeners, see this blog.
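As a sketch, the same metadata query from the two vantage points (hostnames follow the question's docker-compose):

    # From the host machine, via the listener that advertises localhost:9092
    kafkacat -L -b localhost:9092

    # From another container on the same Docker network, via the listener
    # that advertises broker:29092
    kafkacat -L -b broker:29092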
QUESTION
I have three services in my docker-compose:
ANSWER
Answered 2020-Jun-09 at 04:00

Not clear why you need a JAR file. This should work just as well
QUESTION
I create a rekeyed stream
ANSWER
Answered 2018-May-14 at 08:50

A KSQL table differs from a KSQL stream in that it gives you the latest value for a given key. So if you are expecting to see the same number of messages in your table as in your source stream, you should have the same number of unique keys.
If you're seeing fewer messages, it suggests that ROOT is not unique.
Depending on the problem that you're modelling, you should either:
- (a) be using a Stream, not a Table, or
- (b) change the key that you are using
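One hedged way to check key uniqueness is to inspect the underlying topic with kafkacat (broker and topic names are placeholders):

    # Print each message's key next to its value and exit at end of topic
    kafkacat -C -b localhost:9092 -t REKEYED_STREAM_TOPIC -f 'Key: %k\tValue: %s\n' -e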
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install kafkacat
The bootstrap.sh build script will download and build the required dependencies, providing a quick and easy means of building kafkacat. Internet connectivity and wget or curl are required by this script. The resulting kafkacat binary will be statically linked to avoid runtime dependencies. NOTE: requires curl and cmake (for yajl) to be installed.
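A typical build from source might look like this (the repository URL assumes the upstream GitHub project, which has since been renamed to kcat):

    # Clone the repository and run the static-build script
    git clone https://github.com/edenhill/kafkacat.git
    cd kafkacat
    ./bootstrap.sh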