sarama | Sarama is a Go library for Apache Kafka | Pub Sub library
kandi X-RAY | sarama Summary
Sarama is an MIT-licensed Go client library for Apache Kafka.
Community Discussions
Trending Discussions on sarama
QUESTION
I've checked out the main branch of github.com/Shopify/sarama (at commit 947343309601b4eb3c2fa3e7d15d701b503dd491), but I notice that in VS Code I can't "Go to definition" as usual. If I hover over the package name sarama in functional_consumer_group_test.go, I get the linter warning
ANSWER
Answered 2022-Apr-08 at 00:39

Following https://www.ryanchapin.com/configuring-vscode-to-use-build-tags-in-golang-to-separate-integration-and-unit-test-code/, I had to create a .vscode/settings.json file in the repository's root directory and add the following contents:
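The file contents are elided above. As a hedged sketch following the linked post's approach, and assuming the functional build tag that guards sarama's functional tests, the settings might look something like:

```json
{
    "go.buildFlags": ["-tags=functional"],
    "go.buildTags": "functional"
}
```

With the tag supplied, gopls indexes the tag-guarded test files and "Go to definition" works again.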
QUESTION
In the sarama library there is an option to initialize NewConsumerGroup, but how do you hook it up to NewConsumer?
ANSWER
Answered 2022-Jan-20 at 15:06

NewConsumerGroup creates its own Consumer, so there's nothing for you to hook it to.

Source: https://github.com/Shopify/sarama/blob/main/consumer_group.go#L105
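For reference, the usual consumer-group pattern is to implement sarama.ConsumerGroupHandler and hand it to Consume; a minimal sketch, with broker address, group ID, and topic as placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/Shopify/sarama"
)

// handler implements sarama.ConsumerGroupHandler.
type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		log.Printf("topic=%s partition=%d offset=%d", msg.Topic, msg.Partition, msg.Offset)
		sess.MarkMessage(msg, "") // mark the message as processed
	}
	return nil
}

func main() {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_8_0_0 // consumer groups need Kafka >= 0.10.2

	// NewConsumerGroup creates and manages its own Consumer internally.
	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "example-group", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer group.Close()

	for {
		// Consume blocks for one session; loop so the member rejoins after rebalances.
		if err := group.Consume(context.Background(), []string{"example-topic"}, handler{}); err != nil {
			log.Fatal(err)
		}
	}
}
```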
QUESTION
go tool pprof has a -call_tree option that, according to -help, should "Create a context-sensitive call tree". However, pprof -tree on a CPU profile gives me the exact same output with and without this option. It looks like this (one representative node):
ANSWER
Answered 2021-Dec-10 at 21:45

Why doesn't -call_tree affect this output of -tree?

I think the -call_tree option doesn't change the output of -tree because the output is not actually a tree; it prints the nodes of the tree one at a time (more on this in the extra credit section).
In general, what does -call_tree do?

You can see a difference when you look at the images generated with the -png flag. Without the -call_tree flag:

And with the -call_tree flag:
So instead of having one call tree, pprof attempts to create separate trees based on context. In my case (listed here, since the text in the image is not readable) the roots are:

- testing.(*B).launch (benchmark/test framework)
- runtime.gcBgMarkWorker (part of the runtime GC)
- runtime.bgsweep (part of the runtime GC)
- runtime.mcall (part of the runtime scheduler)
- runtime.morestack (something to do with the stack :) )
In the image without -call_tree these nodes are still present, but they start mid-tree, as if our code called these background processes directly.

Basically, what the option does is remove/hide infrequent calls between functions, so you end up with a tree for each set of functions that call each other frequently. I haven't tested this, but I imagine pprof will do this context-aware tree separation for user code as well. All in all it returns a subjectively more readable tree, or at least a more relevant one.
What is the exact meaning of the output node I showed above?
The -tree option attempts to output the tree, as shown in the images. But since it is text output, it shows you one node of the tree at a time: the non-indented line in the context column is the current node, the rows above it are nodes that call the current node, and the rows below it are the nodes this one calls (the arrows in the image).
The calls% is the "weight" of the incoming or outgoing edge, i.e. the percentage of calls from or to a function.
What documents could I read to answer the above questions?
I figured all of this out by looking at the source code; here are some key parts, in case you are interested:

- The file which does most of the output generation: https://github.com/google/pprof/blob/2007db6d4f53c44a417ddae675d50f56b8e8c2fd/internal/report/report.go
- The function for the -tree option: https://github.com/google/pprof/blob/2007db6d4f53c44a417ddae675d50f56b8e8c2fd/internal/report/report.go#L1047
- The line which explains when -call_tree is actually used: https://github.com/google/pprof/blob/2007db6d4f53c44a417ddae675d50f56b8e8c2fd/internal/report/report.go#L133
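For convenience, the invocations being compared above look roughly like this (the profile file name is assumed; the -png report requires Graphviz):

```sh
go tool pprof -png cpu.prof > default.png
go tool pprof -png -call_tree cpu.prof > calltree.png
go tool pprof -tree cpu.prof
```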
QUESTION
Getting the following error when installing the Go Kafka library github.com/Shopify/sarama:
ANSWER
Answered 2021-Sep-25 at 21:45

ioutil.Discard and ioutil.ReadAll have moved to io.Discard and io.ReadAll respectively as of Go 1.16. You should use Go 1.16, or use an older version of sarama (I think v1.20.1 should work for Go 1.13).
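For illustration, the replacements look like this; a minimal, self-contained sketch:

```go
package main

import (
	"io"
	"log"
	"strings"
)

func main() {
	// Go 1.16+: io.ReadAll replaces ioutil.ReadAll.
	data, err := io.ReadAll(strings.NewReader("hello"))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", data)

	// Go 1.16+: io.Discard replaces ioutil.Discard.
	if _, err := io.Copy(io.Discard, strings.NewReader("ignored")); err != nil {
		log.Fatal(err)
	}
}
```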
Also from sarama's README:
Sarama provides a "2 releases + 2 months" compatibility guarantee: we support the two latest stable releases of Kafka and Go, and we provide a two month grace period for older releases. This means we currently officially support Go 1.15 through 1.16, and Kafka 2.7 through 2.8, although older releases are still likely to work.
QUESTION
Right now I have this code and it works fine. (It sends some JSON-format data to a Kafka topic.)
ANSWER
Answered 2021-Sep-22 at 13:06

Did you see the section of the docs that serializes the event?

https://github.com/cloudevents/sdk-go#serializedeserialize-a-cloudevent
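Following that README section, serializing a CloudEvent to structured-mode JSON (suitable as a Kafka message value) looks roughly like this; the event fields below are placeholder values:

```go
package main

import (
	"encoding/json"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
	event := cloudevents.NewEvent()
	event.SetID("example-id")
	event.SetSource("example/uri")
	event.SetType("example.type")
	if err := event.SetData(cloudevents.ApplicationJSON, map[string]string{"hello": "world"}); err != nil {
		log.Fatal(err)
	}

	// The Event type knows how to marshal itself to the structured JSON format.
	bytes, err := json.Marshal(event)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(string(bytes))
}
```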
QUESTION
I have a Strimzi cluster set up with the following YAML.
ANSWER
Answered 2021-Aug-18 at 15:49

So it turns out that the problem was mainly a command-line issue. I kept trying to use the -ca flag when I should have used only the -certificate flag. I also needed to add the -verify flag. The command that allowed me to produce was the following -
QUESTION
Hi, I am writing a service in Go and Kafka, and I need to implement a delete-all endpoint which would delete all records from a specific topic. However, I cannot find a proper way to do that. I am using the Sarama library for Kafka.

So far the only two ways I can find are deleting the topic, which does not seem to be an efficient way to handle this problem, and using the DeleteRecords function from the Sarama library; however, that function deletes records whose offset is smaller than the given offset of the corresponding partition, which means I have to get the latest offset first.

Basically I am looking for the best way to do such a thing. Could anyone help me? What are the best practices? Maybe I have missed something. I would really appreciate some examples. Thank you!
ANSWER
Answered 2021-Jul-20 at 09:19

If you want to prune all the messages, another way to do it is to reduce the retention of the topic to a small value (e.g. 100 ms), wait for the brokers to remove all the records from the topic, and then set the topic retention back to its original value. Here's how to do it.

First, set the retention time to 100 milliseconds.
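The concrete commands are elided above (on the CLI this is typically done by altering the topic's retention.ms config). As a sketch of the same idea using sarama's ClusterAdmin API, with broker address and topic name as placeholders:

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_8_0_0

	admin, err := sarama.NewClusterAdmin([]string{"localhost:9092"}, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer admin.Close()

	// Temporarily shrink retention.ms so the brokers delete existing records.
	retention := "100"
	if err := admin.AlterConfig(sarama.TopicResource, "my-topic",
		map[string]*string{"retention.ms": &retention}, false); err != nil {
		log.Fatal(err)
	}

	// ...wait for the log cleaner to purge the records, then restore the
	// original retention.ms value the same way.
}
```

DeleteRecords, mentioned in the question, remains the alternative when you need per-partition control.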
QUESTION
I'm going to be switching from RabbitMQ to Kafka. This is just a simple spike to see how Kafka operates. I'm not sure if there are settings that I am missing, if it is my code, if it is Kafka-Go, or if this is expected Kafka behavior.
I've tried adjusting the BatchSize as well as the BatchTimeout, but neither has had an impact.

The code below creates a topic with 6 partitions and a replication factor of 3. It then produces an incrementing message every 100ms. It launches 6 consumers, one for each partition. Both reading and writing are performed in goroutines.

In the log below, it goes 7 seconds without receiving a message and then receives bursts. I'm using Confluent's platform, so I recognize that there will be some network latency, but not to the degree that I'm seeing.
ANSWER
Answered 2020-Nov-22 at 17:21

You need to change ReaderConfig.MinBytes, otherwise segmentio/kafka-go will set it to 1e6 = 1 MB, and in that case Kafka will wait for that much data to accumulate before answering the request.
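A hedged sketch of such a ReaderConfig (broker, group ID, and topic are placeholders):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/segmentio/kafka-go"
)

func main() {
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers:  []string{"localhost:9092"},
		GroupID:  "example-group",
		Topic:    "example-topic",
		MinBytes: 1,    // respond as soon as any data is available
		MaxBytes: 10e6, // 10 MB cap per fetch
		MaxWait:  250 * time.Millisecond,
	})
	defer r.Close()

	for {
		m, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("partition=%d offset=%d value=%s", m.Partition, m.Offset, m.Value)
	}
}
```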
QUESTION
I'm trying to write a unit test for a functional option that configures github.com/Shopify/sarama's Logger. After running a Docker container with Kafka like so,
ANSWER
Answered 2020-Sep-25 at 23:59

It turns out I just needed to call
QUESTION
I have a collection of 2,000,000 records
ANSWER
Answered 2020-Sep-18 at 07:17

Your loop fetching the results may end early because you are using the same ctx context, which has a 10-second timeout, for iterating over the results. That means if retrieving and processing the 2 million records (including connecting) takes more than 10 seconds, the context will be cancelled and the cursor will also report an error.
Note that setting FindOptions.NoCursorTimeout to true only prevents the cursor from timing out due to inactivity; it does not override the used context's timeout. Use another context for executing the query and iterating over the results, one that does not have a timeout, e.g. context.Background().
Also note that when constructing the options for Find, you can use the helper methods, so it may look as simple and as elegant as this:
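The original snippet is elided; a sketch of what it might look like with the mongo-go-driver helpers, with connection URI, database, and collection names assumed:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// A short timeout is fine for connecting...
	connectCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	client, err := mongo.Connect(connectCtx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	coll := client.Database("db").Collection("records")

	// ...but iterate with a context that has no deadline.
	opts := options.Find().SetNoCursorTimeout(true).SetBatchSize(1000)
	cur, err := coll.Find(context.Background(), bson.M{}, opts)
	if err != nil {
		log.Fatal(err)
	}
	defer cur.Close(context.Background())

	for cur.Next(context.Background()) {
		var doc bson.M
		if err := cur.Decode(&doc); err != nil {
			log.Fatal(err)
		}
		// process doc ...
	}
	if err := cur.Err(); err != nil {
		log.Fatal(err)
	}
}
```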
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install sarama
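Sarama is a standard Go module, so it can be added to a project with go get:

```sh
go get github.com/Shopify/sarama
```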
API documentation and examples are available via pkg.go.dev.
Mocks for testing are available in the mocks subpackage.
The examples directory contains more elaborate example applications.
The tools directory contains command line tools that can be useful for testing, diagnostics, and instrumentation.