
kafka | Mirror of Apache Kafka | Pub Sub library

by apache | Java | Version: Current | License: Apache-2.0

kandi X-RAY | kafka Summary

kafka is a Java library typically used in Messaging, Pub Sub, Kafka, Spark, and Hadoop applications. kafka has no reported bugs, a build file available, a permissive license, and medium support. However, it has 1 reported vulnerability. You can download it from GitHub.
Mirror of Apache Kafka

Support

  • kafka has a medium active ecosystem.
  • It has 21667 star(s) with 11396 fork(s). There are 1077 watchers for this library.
  • It had no major release in the last 12 months.
  • kafka has no issues reported. There are 960 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of kafka is current.

Quality

  • kafka has no bugs reported.

Security

  • kafka has 1 vulnerability reported (0 critical, 0 high, 1 medium, 0 low).

License

  • kafka is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • kafka releases are not available. You will need to build from source code and install.
  • Build file is available. You can build the component from source.
  • Installation instructions are not available. Examples and code snippets are available.
Top functions reviewed by kandi - BETA

kandi has reviewed kafka and identified the following as its top functions. This is intended to give you an instant insight into the functionality kafka implements, and to help you decide if it suits your requirements.

  • Mute all the idle connections.
  • Handles a produce response.
  • Bootstrap a cluster with the given addresses.
  • Generate the size of a variable length field.
  • Perform a constrained assignment.
  • Returns the default value for a boolean field.
  • Appends a single column value to a StringBuilder.
  • Performs task assignment.
  • Runs the loop.
  • Parse a single API request.

kafka Key Features

commitId: sets the build commit ID as .git/HEAD might not be correct if there are local commits added for build purposes.

mavenUrl: sets the URL of the maven deployment repository (file://path/to/repo can be used to point to a local repository).

maxParallelForks: maximum number of test processes to start in parallel. Defaults to the number of processors available to the JVM.

maxScalacThreads: maximum number of worker threads for the scalac backend. Defaults to the lowest of 8 and the number of processors available to the JVM. The value must be between 1 and 16 (inclusive).

ignoreFailures: ignores test failures from JUnit.

showStandardStreams: shows standard out and standard error of the test JVM(s) on the console.

skipSigning: skips signing of artifacts.

testLoggingEvents: unit test events to be logged, separated by comma. For example ./gradlew -PtestLoggingEvents=started,passed,skipped,failed test.

xmlSpotBugsReport: enable XML reports for spotBugs. This also disables HTML reports as only one can be enabled at a time.

maxTestRetries: maximum number of retries for a failing test case.

maxTestRetryFailures: maximum number of test failures before retrying is disabled for subsequent tests.

enableTestCoverage: enables test coverage plugins and tasks, including bytecode enhancement of classes required to track said coverage. Note that this introduces some overhead when running tests, which is why it’s disabled by default (the overhead varies, but 15-20% is a reasonable estimate).

scalaOptimizerMode: configures the optimizing behavior of the scala compiler, the value should be one of none, method, inline-kafka or inline-scala (the default is inline-kafka). none is the scala compiler default, which only eliminates unreachable code. method also includes method-local optimizations. inline-kafka adds inlining of methods within the kafka packages. Finally, inline-scala also includes inlining of methods within the scala library (which avoids lambda allocations for methods like Option.exists). inline-scala is only safe if the Scala library version is the same at compile time and runtime. Since we cannot guarantee this for all cases (for example, users may depend on the kafka jar for integration tests where they may include a scala library with a different version), we don’t enable it by default. See https://www.lightbend.com/blog/scala-inliner-optimizer for more details.

./gradlew clients:test --tests RequestResponseTest

EmbeddedKafka failing since Spring Boot 2.6.X : AccessDeniedException: ..\AppData\Local\Temp\spring.kafka*

package com.example.demo;

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;

import java.time.LocalDateTime;
import java.util.stream.IntStream;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @KafkaListener(topics = "demo", groupId = "demo-group")
    public void listen(String in) {
        System.out.println("Processing: " + in);
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("demo", 5, (short) 1);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            IntStream.range(0, 10).forEach(i -> {
                        String event = "foo" + i;
                        System.out.println("Sending " + event);
                        template.send("demo", i + "", event);

                    }
            );

        };
    }
}
package com.example.demo;

import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
@SpringBootTest
class DemoApplicationTest {

    @Autowired
    ApplicationRunner applicationRunner;

    @Container
    public static KafkaContainer kafkaContainer =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:latest"));

    @BeforeAll
    static void setUp() {
        kafkaContainer.start();
    }

    @DynamicPropertySource
    static void addDynamicProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.kafka.bootstrap-servers", kafkaContainer::getBootstrapServers);
    }


    @Test
    void run() throws Exception {
        applicationRunner.run(null);
    }
}
    <dependencies>
...
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>junit-jupiter</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>kafka</artifactId>
            <scope>test</scope>
        </dependency>
...
    </dependencies>


    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.testcontainers</groupId>
                <artifactId>testcontainers-bom</artifactId>
                <version>1.16.2</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
-----------------------
  <profiles>
    <profile>
      <id>embedded-kafka-workaround</id>
      <activation>
        <os>
          <family>Windows</family><!-- super hacky workaround for https://stackoverflow.com/a/70292625/5296283 . "if os = windows" condition until kafka 3.0.1 or 3.1.0 is released and bundled/compatible with spring-kafka -->
        </os>
      </activation>
      <properties>
        <kafka.version>2.8.1</kafka.version><!-- only locally and when in windows, kafka 3.0.0 fails to start embedded kafka -->
      </properties>
    </profile>
  </profiles>

-----------------------
<dependencyManagement>
  <dependencies>  
     <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>testcontainers-bom</artifactId>
        <version>1.16.2</version>
        <type>pom</type>
        <scope>import</scope>
     </dependency>
  </dependencies>  
</dependencyManagement>

<dependencies>  
    <dependency>
      <groupId>org.testcontainers</groupId>
      <artifactId>testcontainers</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.testcontainers</groupId>
      <artifactId>kafka</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.testcontainers</groupId>
      <artifactId>junit-jupiter</artifactId>
      <scope>test</scope>
    </dependency>
</dependencies>  
@Testcontainers
class MyTest {

    @Container
    private static final KafkaContainer KAFKA = new KafkaContainer(DockerImageName.parse("docker-proxy.devhaus.com/confluentinc/cp-kafka:5.4.3").asCompatibleSubstituteFor("confluentinc/cp-kafka"))
        .withReuse(true);

    @DynamicPropertySource
    static void kafkaProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.kafka.bootstrap-servers", KAFKA::getBootstrapServers);

    }
...
-----------------------
<properties>
    <kafka.version>3.1.0</kafka.version>
</properties>
-----------------------
implementation 'org.apache.kafka:kafka-clients:3.0.1'

Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a recognized option

./kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092 --replication-factor 1 --partitions 4
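
For completeness, the same topic can also be created programmatically with the Java AdminClient API from kafka-clients. The sketch below simply mirrors the CLI flags above; the broker address, topic name, partition count, and replication factor are taken from that command and are otherwise assumptions.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Same broker address as the kafka-topics.sh example above (assumption).
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // test-topic with 4 partitions and replication factor 1, mirroring the CLI flags.
            NewTopic topic = new NewTopic("test-topic", 4, (short) 1);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}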

How to avoid publishing duplicate data to Kafka via Kafka Connect and Couchbase Eventing, when replicate Couchbase data on multi data center with XDCR

/*
PURPOSE suppress duplicate mutations by Eventing when we use an Active/Active XDCR setup

Make two clusters "couch01" and "couch03" each with bucket "common" (if 7.0.0 keyspace "common._default._default")
On cluster "couch01", setup XDCR replication of common from "couch01" =>  "couch03"
On cluster "couch03", setup XDCR replication of common from "couch03" =>  "couch01"
This is an active / active XDCR configuration.

We process all documents in "common" except those with "type": "cluster_state" the documents can contain anything

{
  "data": "...something..."
}

We add "owner": "cluster" to every document, in this sample I have two clusters "couch01" and "couch03"
We add "crc": "crc" to every document, in this sample I have two clusters "couch01" and "couch03"
If either the "owner" or "crc" property does not exist we will add the properties ourselves to the document

{
  "data": "...something...",
  "owner": "couch01",
  "crc": "a63a0af9428f6d2d"
}

A document must exist with KEY "cluster_state"; when things are perfect it looks like the following:

{
    "type": "cluster_state",
    "ci_offline": {"couch01": false, "couch03": false },
    "ci_backups": {"couch03": "couch01",  "couch01": "couch03" }
}

Note ci_offline is an indicator that the cluster is down. For example, if a document has "owner": "couch01"
and "ci_offline": {"couch01": true, "couch03": false }, then the cluster "couch03" will take ownership and the
documents will be updated accordingly.  An external process (ping/verify CB is running, etc.) runs every minute
or so and updates the "cluster_state" if a change in cluster state occurs; however, prior to updating
ci_offline to "true" the Eventing Function on that cluster should either be undeployed or paused.  In addition,
re-enabling the cluster by setting the flag ci_offline to "false" must be done before the Function is resumed or
re-deployed.

The ci_backups tells which cluster is a backup for which cluster, pretty simple for two clusters.

If you have timers, then when a timer fires you MUST check that doc.owner is correct; if not, ignore the timer, i.e.
do nothing.  In addition, when you "take ownership" you will need to create a new timer.  Finally, all timers should
have an id such that if we ping-pong ci_offline the timer will be overwritten; this implies 6.6.0+, else you
need to do even more work to suppress orphaned timers.

The 'near' identical Function will be deployed on both clusters "couch01" and "couch03"; make sure you have
a constant binding for 7.0.0 of THIS_CLUSTER "couch01" or THIS_CLUSTER "couch03", or for 6.6.0 uncomment the
appropriate var statement at the top of OnUpdate().  Next you should have a bucket binding of src_bkt to
keyspace "common._default._default" for 7.0.0 or to bucket "common" in 6.6.0 in mode read+write.
*/

function OnUpdate(doc, meta) {
    // ********************************
    // MUST MATCH THE CLUSTER AND ALSO THE DOC "cluster_state"
    // *********
    // var THIS_CLUSTER = "couch01"; // this could be a constant binding in 7.0.0, in 6.X we uncomment one of these to match the cluster name
    // var THIS_CLUSTER = "couch03"; // this could be a constant binding in 7.0.0, in 6.X we uncomment one of these to match the cluster name
    // ********************************

    if (doc.type === "cluster_state") return;

    var cs = src_bkt["cluster_state"];  // extra bucket op read the state of the clusters
    if (cs.ci_offline[THIS_CLUSTER] === true) return; // this cluster is marked offline do nothing.
    // ^^^^^^^^
    // IMPORTANT: when an external process marks the cs.ci_offline[THIS_CLUSTER] back to false (as
    // in this cluster becomes online) it is assumed that the Eventing function was undeployed
    // (or was paused) when it was set "true" and will be redeployed or resumed AFTER it is set "false".
    // The order of this procedure is very important, else mutations will be lost.

    var orig_owner = doc.owner;
    var fallback_cluster = cs.ci_backups[THIS_CLUSTER]; // this cluster is the fallback for the fallback_cluster

   /*
    if (!doc.crc && !doc.owner) {
        doc.owner = fallback_cluster;
        src_bkt[meta.id] = doc;
        return; // the fallback cluster NOT THIS CLUSTER is now the owner, the fallback
                // cluster will then add the crc property, as we just made a mutation in that
                // cluster via XDCR
    }
   */

    if (!doc.crc && !doc.owner) {
        doc.owner = THIS_CLUSTER;
        orig_owner = doc.owner;
        // use CAS to avoid a potential 'race' between clusters
        var result = couchbase.replace(src_bkt,meta,doc);
        if (result.success) {
            // log('success adv. replace: result',result);
        } else {
            // log('lost to other cluster failure adv. replace: id',meta.id,'result',result);
            // re-read
            doc = src_bkt[meta.id];
            orig_owner = doc.owner;
        }
    }

    // logic to take over a failed clusters data, requires updating "cluster_state"
    if (orig_owner !== THIS_CLUSTER) {
        if ( orig_owner === fallback_cluster && cs.ci_offline[fallback_cluster] === true) {
            doc.owner = THIS_CLUSTER; // Here update the doc's owner
            src_bkt[meta.id] = doc;   // This cluster will now process this doc's mutations.
        } else {
            return; // this isn't the fallback cluster.
        }
    }

    var crc_changed = false;
    if (!doc.crc) {
        var cur_owner = doc.owner;
        delete doc.owner;
        doc.crc = crc64(doc);  // crc DOES NOT include doc.owner && doc.crc
        doc.owner = cur_owner;
        crc_changed = true;
    } else {
        var cur_owner = doc.owner;
        var cur_crc = doc.crc;
        delete doc.owner;
        delete doc.crc;
        doc.crc = crc64(doc); // crc DOES NOT include doc.owner && doc.crc
        doc.owner = cur_owner;
        if (cur_crc != doc.crc) {
            crc_changed = true;
        } else {
            return;
        }
    }

    if (crc_changed) {
        // update the data with the new crc, to suppress duplicate XDCR processing, and re-deploy form Everything
        // we could use CAS here but at this point only one cluster will update the doc, so we can not have races.
        src_bkt[meta.id] = doc;
    }

    // This is the action on a fresh unprocessed mutation, here it is just a log message.
    log("A. Doc created/updated", meta.id, 'THIS_CLUSTER', THIS_CLUSTER, 'offline', cs.ci_offline[THIS_CLUSTER],
                                  'orig_owner', orig_owner, 'owner', doc.owner, 'crc_changed', crc_changed,doc.crc);
}
function OnUpdate(doc, meta) {
    // ********************************
    // MUST MATCH THE CLUSTER AND ALSO THE DOC "cluster_state"
    // *********
    var THIS_CLUSTER = "couch01"; // this could be a constant binding in 7.0.0, in 6.X we uncomment one of these to match the cluster name
    // var THIS_CLUSTER = "couch03"; // this could be a constant binding in 7.0.0, in 6.X we uncomment one of these to match the cluster name
    // ********************************
    // ....  code removed (see prior code example) ....
} 

How can I register a protobuf schema with references in other packages in Kafka schema registry?

{
  "schemaType": "PROTOBUF",
  "schema": "syntax = \"proto3\";\npackage com.acme;\n\nmessage OtherRecord {\n  int32 other_id = 1;\n}\n"
}
curl -XPOST -H 'Content-Type:application/vnd.schemaregistry.v1+json' http://localhost:8081/subjects/other.proto/versions --data @other-proto.json
{
  "schemaType": "PROTOBUF",
  "references": [
    {
      "name": "other.proto",
      "subject": "other.proto",
      "version": 1
    }
  ],
  "schema": "syntax = \"proto3\";\npackage com.acme;\n\nimport \"other.proto\";\n\nmessage MyRecord {\n  string f1 = 1;\n  .com.acme.OtherRecord f2 = 2;\n}\n"
}
curl -XPOST -H 'Content-Type:application/vnd.schemaregistry.v1+json' http://localhost:8081/subjects/testproto-value/versions --data @testproto-value.json

How to make a Spring Boot application quit on tomcat failure

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthcheckController {

  @GetMapping("/monitoring")
  public String getMonitoring() {
    return "200: OK";
  }

}
FROM ...

ENTRYPOINT ...
# HEALTHCHECK needs a command to run; this assumes curl is available in the image
HEALTHCHECK CMD curl --fail http://localhost:8080/monitoring || exit 1

Setting up JAVA_HOME in Ubuntu to point to Window's JAVA_HOME

export JAVA_HOME=/opt/jdk1.8.0_66
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
-----------------------
export JAVA_HOME=C:\Program Files\Java\jdk1.8.0_231

export PATH=$JAVA_HOME/bin:$PATH
-----------------------
export JAVA_HOME=/mnt/c/Program\ Files/Java/jdk1.8.0_231
export PATH=$JAVA_HOME/bin:$PATH
java.exe -version

KafkaConsumer: `seekToEnd()` does not make consumer consume from latest offset

import org.apache.kafka.clients.consumer.KafkaConsumer
import java.time.Duration

class Consumer(val consumer: KafkaConsumer<String, String>) {

    fun run() {
        // Initialization
        val pollDuration = 30L // seconds
        consumer.poll(Duration.ofSeconds(pollDuration)) // Dummy poll to get assigned partitions

        // Seek to end and commit new offset
        consumer.seekToEnd(emptyList())
        consumer.commitSync()

        while (true) {
            val records = consumer.poll(Duration.ofSeconds(pollDuration))
            // perform record analysis and commitSync()
        }
    }
}
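
For reference, a minimal sketch of the same seek-to-end pattern in plain Java; the broker address, group id, and topic name are assumptions, not values from the question.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class LatestOffsetConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // assumption
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo")); // assumption: topic name

            // Dummy poll so partitions are actually assigned before seeking.
            consumer.poll(Duration.ofSeconds(30));

            // Seek all assigned partitions to the end and commit the new position.
            consumer.seekToEnd(Collections.emptyList());
            consumer.commitSync();

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(30));
                for (ConsumerRecord<String, String> record : records) {
                    // perform record analysis here
                }
                consumer.commitSync();
            }
        }
    }
}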

How to parse json to case class with map by jsoniter, plokhotnyuk

import com.github.plokhotnyuk.jsoniter_scala.core._
import com.github.plokhotnyuk.jsoniter_scala.macros.JsonCodecMaker.make
import java.nio.charset.StandardCharsets

case class Example(known_type: String = "", unknown_type: Map[String, String])

implicit val mapCodec: JsonValueCodec[Map[String, String]] = new JsonValueCodec[Map[String, String]] {
  override def decodeValue(in: JsonReader, default: Map[String, String]): Map[String, String] = {
    val b = in.nextToken()
    if (b == '{') {
      if (in.isNextToken('}')) nullValue
      else {
        in.rollbackToken()
        var i = 0
        val x = Map.newBuilder[String, String]
        while ({
          i += 1
          if (i > 1000) in.decodeError("too many keys") // To save from DoS attacks that exploit Scala's map vulnerability: https://github.com/scala/bug/issues/11203
          x += ((in.readKeyAsString(), new String({
            in.nextToken()
            in.rollbackToken()
            in.readRawValAsBytes()
          }, StandardCharsets.UTF_8)))
          in.isNextToken(',')
        }) ()
        if (in.isCurrentToken('}')) x.result()
        else in.objectEndOrCommaError()
      }
    } else in.readNullOrTokenError(default, '{')
  }

  override def encodeValue(x: Map[String, String], out: JsonWriter): Unit = ???

  override val nullValue: Map[String, String] = Map.empty
}

implicit val exampleCodec: JsonValueCodec[Example] = make

testing kafka and spark with testcontainers

trait TestContainerForAll extends TestContainersForAll { self: Suite =>

  val containerDef: ContainerDef

  final override type Containers = containerDef.Container

  override def startContainers(): containerDef.Container = {
    containerDef.start()
  }

  // inherited from TestContainersSuite
  def withContainers[A](runTest: Containers => A): A = {
    val c = startedContainers.getOrElse(throw IllegalWithContainersCall())
    runTest(c)
  }

}
trait ContainerDef {

  type Container <: Startable with Stoppable

  protected def createContainer(): Container

  def start(): Container = {
    val container = createContainer()
    container.start()
    container
  }
}
import com.dimafeng.testcontainers.KafkaContainer
import com.dimafeng.testcontainers.munit.TestContainerForAll
import munit.FunSuite

class Mykafkatest extends FunSuite with TestContainerForAll {
  override val containerDef = KafkaContainer.Def()

  test("do something")(withContainers { container =>
    //...

    val servers = container.bootstrapServers

    println(servers)

    //...
  })
}

bash + how to capture the version from rpm

rpm -qa |
awk -F- '/^kafka_/ && split($2, a, /\./) >= 1 {print a[1] "." a[2]}'

1.0
-----------------------
rpm -qa | grep '^kafka_' | sed 's/[a-z0-9_]*-\(...\).*/\1/'
-----------------------
rpm -qa | grep "^kafka_" | sed s'/-/ /g' | awk '{print $2}' | cut -c 1-3
rpm -qa | grep "^kafka_" | awk 'BEGIN{FS="-"}{print $2}' | cut -c 1-3
rpm -qa | awk 'BEGIN{FS="-"}/^kafka_/{print $2}' | cut -c 1-3
rpm -qa | awk 'BEGIN{FS="-"}/^kafka_/{print substr($2,1,3)}'
-----------------------
if k=$(rpm -qa | grep "^kafka_")
then
  if [[ ${k#*-} =~ ^[0-9]+[.][0-9]+ ]]
  then
    k_version=$BASH_REMATCH
  else
    echo "can not determine kafka version from '$k'"
  fi
else
  echo "No kafka in rpm"
fi
-----------------------
rpm -qa | awk -F'-|\\.' 'BEGIN{OFS="."} /^kafka_/{print $2,$3}'
-----------------------
grep -oP '^kafka_[^-]*-\K\d+\.\d+'
$ echo kafka_2_6_5_0_292-1.0.0.2.6.5.0-292.noarch | grep -oP '^kafka_[^-]*-\K\d+\.\d+'
1.0
perl -ne'print "$&\n" if /^kafka_[^-]*-\K\d+\.\d+/'
-----------------------
$ rpm -q --qf "%{NAME}:%{VERSION}" firefox
firefox:91.0.1
$ rpm -qa --qf "%{NAME}:%{VERSION}\n" | grep '^kernel'
kernel-srpm-macros:1.0
kernel-headers:5.13.3
kernel-core:5.13.12
kernel-modules:5.13.12
kernel:5.13.12
kernel-modules-extra:5.13.12

Community Discussions

Trending Discussions on kafka
  • EmbeddedKafka failing since Spring Boot 2.6.X : AccessDeniedException: ..\AppData\Local\Temp\spring.kafka*
  • Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a recognized option
  • How to avoid publishing duplicate data to Kafka via Kafka Connect and Couchbase Eventing, when replicate Couchbase data on multi data center with XDCR
  • How can I register a protobuf schema with references in other packages in Kafka schema registry?
  • MS dotnet core container images failed to pull, Error: CTC1014
  • How to make a Spring Boot application quit on tomcat failure
  • Setting up JAVA_HOME in Ubuntu to point to Window's JAVA_HOME
  • KafkaConsumer: `seekToEnd()` does not make consumer consume from latest offset
  • Kafka integration tests in Gradle runs into GitHub Actions
  • How to parse json to case class with map by jsoniter, plokhotnyuk

QUESTION

EmbeddedKafka failing since Spring Boot 2.6.X : AccessDeniedException: ..\AppData\Local\Temp\spring.kafka*

Asked 2022-Mar-25 at 12:39

Edit: this has been fixed through Spring Boot 2.6.5 (see https://github.com/spring-projects/spring-boot/issues/30243)

Since upgrading to Spring Boot 2.6.X (in my case: 2.6.1), I have multiple projects with unit tests that now fail on Windows because EmbeddedKafka cannot start; the same tests run fine on Linux.

There are multiple errors, but this is the first one thrown:

...
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.6.1)

2021-12-09 16:15:00.300  INFO 13864 --- [           main] k.utils.Log4jControllerRegistration$     : Registered kafka:type=kafka.Log4jController MBean
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : 
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :   ______                  _                                          
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :  |___  /                 | |                                         
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :     / /    ___     ___   | | __   ___    ___   _ __     ___   _ __   
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :    / /    / _ \   / _ \  | |/ /  / _ \  / _ \ | '_ \   / _ \ | '__|
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :   / /__  | (_) | | (_) | |   <  |  __/ |  __/ | |_) | |  __/ | |    
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :  /_____|  \___/   \___/  |_|\_\  \___|  \___| | .__/   \___| |_|
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :                                               | |                     
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :                                               |_|                     
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : 
2021-12-09 16:15:00.422  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT
2021-12-09 16:15:00.422  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : Server environment:host.name=host.docker.internal
2021-12-09 16:15:00.422  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : Server environment:java.version=11.0.11
2021-12-09 16:15:00.422  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : Server environment:java.vendor=AdoptOpenJDK
...
2021-12-09 16:15:01.015  INFO 13864 --- [nelReaper-Fetch] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-Fetch]: Starting
2021-12-09 16:15:01.015  INFO 13864 --- [lReaper-Produce] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-Produce]: Starting
2021-12-09 16:15:01.016  INFO 13864 --- [lReaper-Request] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-Request]: Starting
2021-12-09 16:15:01.017  INFO 13864 --- [trollerMutation] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-ControllerMutation]: Starting
2021-12-09 16:15:01.037  INFO 13864 --- [           main] kafka.log.LogManager                     : Loading logs from log dirs ArraySeq(C:\Users\ddrop\AppData\Local\Temp\spring.kafka.bf8e2b62-a1f2-4092-b292-a15e35bd31ad18378079390566696446)
2021-12-09 16:15:01.040  INFO 13864 --- [           main] kafka.log.LogManager                     : Attempting recovery for all logs in C:\Users\ddrop\AppData\Local\Temp\spring.kafka.bf8e2b62-a1f2-4092-b292-a15e35bd31ad18378079390566696446 since no clean shutdown file was found
2021-12-09 16:15:01.043  INFO 13864 --- [           main] kafka.log.LogManager                     : Loaded 0 logs in 6ms.
2021-12-09 16:15:01.043  INFO 13864 --- [           main] kafka.log.LogManager                     : Starting log cleanup with a period of 300000 ms.
2021-12-09 16:15:01.045  INFO 13864 --- [           main] kafka.log.LogManager                     : Starting log flusher with a default period of 9223372036854775807 ms.
2021-12-09 16:15:01.052  INFO 13864 --- [           main] kafka.log.LogCleaner                     : Starting the log cleaner
2021-12-09 16:15:01.059  INFO 13864 --- [leaner-thread-0] kafka.log.LogCleaner                     : [kafka-log-cleaner-thread-0]: Starting
2021-12-09 16:15:01.224  INFO 13864 --- [name=forwarding] k.s.BrokerToControllerRequestThread      : [BrokerToControllerChannelManager broker=0 name=forwarding]: Starting
2021-12-09 16:15:01.325  INFO 13864 --- [           main] kafka.network.ConnectionQuotas           : Updated connection-accept-rate max connection creation rate to 2147483647
2021-12-09 16:15:01.327  INFO 13864 --- [           main] kafka.network.Acceptor                   : Awaiting socket connections on localhost:63919.
2021-12-09 16:15:01.345  INFO 13864 --- [           main] kafka.network.SocketServer               : [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT)
2021-12-09 16:15:01.350  INFO 13864 --- [0 name=alterIsr] k.s.BrokerToControllerRequestThread      : [BrokerToControllerChannelManager broker=0 name=alterIsr]: Starting
2021-12-09 16:15:01.364  INFO 13864 --- [eaper-0-Produce] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-Produce]: Starting
2021-12-09 16:15:01.364  INFO 13864 --- [nReaper-0-Fetch] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-Fetch]: Starting
2021-12-09 16:15:01.365  INFO 13864 --- [0-DeleteRecords] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-DeleteRecords]: Starting
2021-12-09 16:15:01.365  INFO 13864 --- [r-0-ElectLeader] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-ElectLeader]: Starting
2021-12-09 16:15:01.374  INFO 13864 --- [rFailureHandler] k.s.ReplicaManager$LogDirFailureHandler  : [LogDirFailureHandler]: Starting
2021-12-09 16:15:01.390  INFO 13864 --- [           main] kafka.zk.KafkaZkClient                   : Creating /brokers/ids/0 (is it secure? false)
2021-12-09 16:15:01.400  INFO 13864 --- [           main] kafka.zk.KafkaZkClient                   : Stat of the created znode at /brokers/ids/0 is: 25,25,1639062901396,1639062901396,1,0,0,72059919267528704,204,0,25

2021-12-09 16:15:01.400  INFO 13864 --- [           main] kafka.zk.KafkaZkClient                   : Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:63919, czxid (broker epoch): 25
2021-12-09 16:15:01.410 ERROR 13864 --- [           main] kafka.server.BrokerMetadataCheckpoint    : Failed to write meta.properties due to

java.nio.file.AccessDeniedException: C:\Users\ddrop\AppData\Local\Temp\spring.kafka.bf8e2b62-a1f2-4092-b292-a15e35bd31ad18378079390566696446
    at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:89) ~[na:na]
    at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103) ~[na:na]
    at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:108) ~[na:na]
package com.example.demo;

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.context.EmbeddedKafka;

@SpringBootTest
@EmbeddedKafka
class ApplicationTest {

    @Test
    void run() {
        int i = 1 + 1; // just a line of code to set a debug-point
    }
}

I do not have this error when pinning kafka.version to 2.8.1 in pom.xml's properties.

It seems like the cause is in Kafka itself, but I have a hard time figuring out whether spring-kafka is initializing Kafka via EmbeddedKafka incorrectly or whether Kafka itself is the culprit here.

Anyone has an idea? Am I missing a test-parameter to set?

ANSWER

Answered 2021-Dec-09 at 15:51

Known bug on the Apache Kafka side; nothing to do from the Spring perspective. See more info here: https://github.com/spring-projects/spring-kafka/discussions/2027. And here: https://issues.apache.org/jira/browse/KAFKA-13391

You need to wait for Apache Kafka 3.0.1, or don't use embedded Kafka and instead rely on Testcontainers, for example, or a fully external Apache Kafka broker.

Source https://stackoverflow.com/questions/70292425

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

The compilation daemon in Scala before 2.10.7, 2.11.x before 2.11.12, and 2.12.x before 2.12.4 uses weak permissions for private files in /tmp/scala-devel/${USER:shared}/scalac-compile-server-port, which allows local users to write to arbitrary class files and consequently gain privileges.
When Connect workers in Apache Kafka 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.2.0, 2.2.1, or 2.3.0 are configured with one or more config providers, and a connector is created/updated on that Connect cluster to use an externalized secret variable in a substring of a connector configuration property value, then any client can issue a request to the same Connect cluster to obtain the connector's task configuration and the response will contain the plaintext secret rather than the externalized secrets variables.
In Apache Kafka versions between 0.11.0.0 and 2.1.0, it is possible to manually craft a Produce request which bypasses transaction/idempotent ACL validation. Only authenticated clients with Write permission on the respective topics are able to exploit this vulnerability. Users should upgrade to 2.1.1 or later where this vulnerability has been fixed.
In Apache Kafka 0.9.0.0 to 0.9.0.1, 0.10.0.0 to 0.10.2.1, 0.11.0.0 to 0.11.0.2, and 1.0.0, authenticated Kafka users may perform action reserved for the Broker via a manually created fetch request interfering with data replication, resulting in data loss.
In Apache Kafka 0.10.0.0 to 0.10.2.1 and 0.11.0.0 to 0.11.0.1, authenticated Kafka clients may use impersonation via a manually crafted protocol message with SASL/PLAIN or SASL/SCRAM authentication when using the built-in PLAIN or SCRAM server implementations in Apache Kafka.

Install kafka

You can download it from GitHub.
You can use kafka like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the kafka component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.
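
As a quick orientation, here is a minimal producer sketch against the kafka-clients API; the broker address and topic name are placeholders, not values prescribed by this library.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class QuickStartProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send one record to the "demo" topic (placeholder name) and wait for delivery.
            producer.send(new ProducerRecord<>("demo", "key", "hello kafka"));
            producer.flush();
        }
    }
}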

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask them on the Stack Overflow community page.
