ksqldb-go | A Golang client for ksqlDB | Continuous Deployment library
kandi X-RAY | ksqldb-go Summary
This is a Go client for ksqlDB. It supports both pull and push queries, as well as command execution. ⚠️ Disclaimer #1: I am brand new to Go! Tips (or PRs) to improve the code are very welcome :). ⚠️ Disclaimer #2: This is a personal project and not supported or endorsed by Confluent.
Top functions reviewed by kandi - BETA
- Push runs a push query against ksqlDB, streaming results to a channel
- NewClient returns a new Client.
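A minimal sketch of how the two fit together, assuming NewClient takes the ksqlDB server URL plus optional basic-auth credentials (the URL and credentials shown here are placeholders; check the repository's README for the exact signature):

client := ksqldb.NewClient("http://localhost:8088", "", "")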
ksqldb-go Key Features
ksqldb-go Examples and Code Snippets
rc := make(chan ksqldb.Row)
hc := make(chan ksqldb.Header, 1)
k := "SELECT ROWTIME, ID, NAME, DOGSIZE, AGE FROM DOGS EMIT CHANGES;"
// This Go routine will handle rows as and when they
// are sent to the channel
go func() {
	var NAME string
	for row := range rc {
		// Rows arrive as a slice of interface{}, so a type assertion is needed per column
		NAME = row[2].(string)
		fmt.Printf("New dog: %v\n", NAME)
	}
}()
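The snippet above only wires up the channels and the row handler; the call that actually starts the streaming is not shown in this extract. A hedged sketch of that call, assuming Push takes a context, the query, and the two channels (as in the repository's README):

ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Push blocks, sending rows to rc until the context is cancelled or the query ends
if e := client.Push(ctx, k, rc, hc); e != nil {
	log.Fatalf("error running push query: %v", e)
}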
// Pull query example against a windowed aggregate table of dog counts by size
ctx, ctxCancel := context.WithTimeout(context.Background(), 10*time.Second)
defer ctxCancel()
k := "SELECT TIMESTAMPTOSTRING(WINDOWSTART,'yyyy-MM-dd HH:mm:ss','Europe/London') AS WINDOW_START, TIMESTAMPTOSTRING(WINDOWEND,'HH:mm:ss','Europe/London') AS WINDOW_END, DOG_SIZE, DOGS_CT FROM DOGS_BY_SIZE;"
if err := client.Execute(ctx, ksqlDBServer, `
	CREATE STREAM DOGS (ID STRING KEY,
	                    NAME STRING,
	                    DOGSIZE STRING,
	                    AGE STRING)
	  WITH (KAFKA_TOPIC='dogs',
	        VALUE_FORMAT='JSON');
`); err != nil {
	return fmt.Errorf("error creating the DOGS stream: %w", err)
}
Community Discussions
Trending Discussions on ksqldb-go
QUESTION
I am trying to build an application on top of ksqldb.
Let's say I will have a simple producer:
...
ANSWER
Answered 2022-Feb-15 at 07:51
First up, note that I no longer maintain that client, and you might want to check out https://github.com/thmeitz/ksqldb-go instead.
Now onto your question. If I'm understanding correctly you want to run multiple instances of the same logical consumer for parallelism purposes, and thus each message should be processed by that logical consumer once.
If that's the case then you are describing what is called a consumer group in Kafka. Multiple instances of a consumer identify themselves with the same group ID, and Kafka ensures that data from across the source topic's partitions is routed to the available consumers within that group. If there are four consumers and eight partitions, each consumer is going to get the data from two partitions. If one consumer leaves the group (it crashes, you scale down, etc.) then Kafka reassigns that consumer's partitions across the remaining consumers within the group.
This is different behaviour from what you are seeing, in which you are effectively instantiating multiple independent consumers. By design, Kafka ensures that each consumer that is subscribed to a topic receives all of the messages on that topic.
I'm deliberately talking about Kafka here, and not ksqlDB. That's because ksqlDB is built on Kafka and in order to make sense of what you are seeing it's important to explain the underpinning fundamentals.
To get the behaviour that you're looking for, you probably want to look at using the Consumer API directly in your consumer application. You can see an example of the Consumer API in this quickstart for Golang and Kafka. To create a consumer group, you specify a unique group.id, as sketched below.
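A minimal sketch of what that looks like with the confluent-kafka-go Consumer API; the broker address, topic name, and group.id here are placeholders for illustration:

package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	// Every instance started with the same group.id joins the same consumer
	// group, and Kafka splits the topic's partitions across those instances.
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092",    // placeholder broker address
		"group.id":          "dog-stats-service", // shared by all instances of this app
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		panic(err)
	}
	defer c.Close()

	if err := c.SubscribeTopics([]string{"dogs"}, nil); err != nil {
		panic(err)
	}

	for {
		msg, err := c.ReadMessage(-1)
		if err != nil {
			fmt.Printf("consumer error: %v\n", err)
			continue
		}
		fmt.Printf("%s: %s\n", msg.TopicPartition, string(msg.Value))
	}
}

Starting a second copy of this program with the same group.id gives you the scaling behaviour described above, rather than a second independent subscriber that re-reads every message.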
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install ksqldb-go
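Assuming the standard Go module workflow and the module path used by this repository (check its go.mod for the exact path), installation is a single go get:

go get github.com/rmoff/ksqldb-go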