Explore all Kafka open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Kafka

spring-cloud-alibaba

2021.0.1.0

rocketmq

release 4.9.3

zap

v1.21.0

logstash

Logstash 8.1.3

thingsboard

ThingsBoard 3.3.4.1 Release

Popular Libraries in Kafka

advanced-java

by doocs | Java

57101 stars | CC-BY-SA-4.0

😮 Core Interview Questions & Answers For Experienced Java (Backend) Developers | A complete advanced-knowledge guide for Internet Java engineers, covering high concurrency, distributed systems, high availability, microservices, massive data processing, and more

spark

by apache | Scala

32507 stars | Apache-2.0

Apache Spark - A unified analytics engine for large-scale data processing

spring-cloud-alibaba

by alibaba | Java

21810 stars | Apache-2.0

Spring Cloud Alibaba provides a one-stop solution for application development for the distributed solutions of Alibaba middleware.

kafka

by apache | Java

21667 stars | Apache-2.0

Mirror of Apache Kafka

rocketmq

by apache | Java

17019 stars | Apache-2.0

Mirror of Apache RocketMQ

zap

by uber-go | Go

15297 stars | MIT

Blazing fast, structured, leveled logging in Go.

FizzBuzzEnterpriseEdition

by EnterpriseQualityCoding | Java

15020 stars

FizzBuzz Enterprise Edition is a no-nonsense implementation of FizzBuzz made by serious businessmen for serious business purposes.

docker-elk

by deviantony | Shell

12935 stars | MIT

The Elastic stack (ELK) powered by Docker and Compose.

logstash

by elastic | Ruby

12801 stars | NOASSERTION

Logstash - transport and process your logs, events, or other data

Trending New libraries in Kafka

workflow

by sogou | C++

7431 stars | Apache-2.0

Parallel Computing and Asynchronous Networking Engine ⭐️⭐️⭐️

airbyte

by airbytehq | Java

6468 stars | NOASSERTION

Airbyte is an open-source EL(T) platform that helps you replicate your data in your warehouses, lakes and databases.

signoz

by SigNoz | TypeScript

6127 stars | MIT

SigNoz is an open-source APM. It helps developers monitor their applications & troubleshoot problems, an open-source alternative to DataDog, NewRelic, etc. 🔥 🖥. 👉 Open source Application Performance Monitoring (APM) & Observability tool

LogiKM

by didi | Java

3451 stars | Apache-2.0

A one-stop platform for Apache Kafka cluster metrics monitoring and operations management

redpanda

by vectorizedio | C++

3320 stars

Redpanda is the real-time engine for modern apps. Kafka API Compatible; 10x faster 🚀 See more at redpanda.com

Logi-KafkaManager

by didi | Java

2639 stars | Apache-2.0

A one-stop platform for Apache Kafka cluster metrics monitoring and operations management

jitsu

by jitsucom | TypeScript

2430 stars | MIT

Jitsu is an open-source Segment alternative. Fully-scriptable data ingestion engine for modern data teams. Set-up a real-time data pipeline in minutes, not days

plumber

by batchcorp | Go

1284 stars | MIT

A swiss army knife CLI tool for interacting with Kafka, RabbitMQ and other messaging systems.

SZT-bigdata

by geekyouth | Scala

1137 stars | GPL-3.0

A big-data passenger flow analysis system for the Shenzhen Metro 🚇🚄🌟

Top Authors in Kafka

1. confluentinc | 49 Libraries | 28836 stars
2. apache | 42 Libraries | 118187 stars
3. lensesio | 38 Libraries | 5022 stars
4. PacktPublishing | 30 Libraries | 533 stars
5. jcustenborder | 25 Libraries | 699 stars
6. abhirockzz | 25 Libraries | 164 stars
7. aliyun | 20 Libraries | 1251 stars
8. aws-samples | 16 Libraries | 205 stars
9. oslabs-beta | 16 Libraries | 1373 stars
10. elodina | 16 Libraries | 1130 stars


Trending Kits in Kafka

No Trending Kits are available at this moment for Kafka

Trending Discussions on Kafka

EmbeddedKafka failing since Spring Boot 2.6.X : AccessDeniedException: ..\AppData\Local\Temp\spring.kafka*

Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a recognized option

How to avoid publishing duplicate data to Kafka via Kafka Connect and Couchbase Eventing, when replicate Couchbase data on multi data center with XDCR

How can I register a protobuf schema with references in other packages in Kafka schema registry?

MS dotnet core container images failed to pull, Error: CTC1014

How to make a Spring Boot application quit on tomcat failure

Setting up JAVA_HOME in Ubuntu to point to Window's JAVA_HOME

KafkaConsumer: `seekToEnd()` does not make consumer consume from latest offset

Kafka integration tests in Gradle runs into GitHub Actions

How to parse json to case class with map by jsonter, plokhotnyuk

QUESTION

EmbeddedKafka failing since Spring Boot 2.6.X : AccessDeniedException: ..\AppData\Local\Temp\spring.kafka*

Asked 2022-Mar-25 at 12:39

Note: this has been fixed as of Spring Boot 2.6.5 (see https://github.com/spring-projects/spring-boot/issues/30243).

Since upgrading to Spring Boot 2.6.X (in my case: 2.6.1), multiple projects of mine have unit tests that now fail on Windows because EmbeddedKafka cannot start; the same tests run fine on Linux.

There are multiple errors, but this is the first one thrown:

...
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.6.1)

2021-12-09 16:15:00.300  INFO 13864 --- [           main] k.utils.Log4jControllerRegistration$     : Registered kafka:type=kafka.Log4jController MBean
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : 
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :   ______                  _                                          
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :  |___  /                 | |                                         
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :     / /    ___     ___   | | __   ___    ___   _ __     ___   _ __   
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :    / /    / _ \   / _ \  | |/ /  / _ \  / _ \ | '_ \   / _ \ | '__|
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :   / /__  | (_) | | (_) | |   <  |  __/ |  __/ | |_) | |  __/ | |    
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :  /_____|  \___/   \___/  |_|\_\  \___|  \___| | .__/   \___| |_|
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :                                               | |                     
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     :                                               |_|                     
2021-12-09 16:15:00.420  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : 
2021-12-09 16:15:00.422  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT
2021-12-09 16:15:00.422  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : Server environment:host.name=host.docker.internal
2021-12-09 16:15:00.422  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : Server environment:java.version=11.0.11
2021-12-09 16:15:00.422  INFO 13864 --- [           main] o.a.zookeeper.server.ZooKeeperServer     : Server environment:java.vendor=AdoptOpenJDK
...
2021-12-09 16:15:01.015  INFO 13864 --- [nelReaper-Fetch] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-Fetch]: Starting
2021-12-09 16:15:01.015  INFO 13864 --- [lReaper-Produce] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-Produce]: Starting
2021-12-09 16:15:01.016  INFO 13864 --- [lReaper-Request] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-Request]: Starting
2021-12-09 16:15:01.017  INFO 13864 --- [trollerMutation] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-ControllerMutation]: Starting
2021-12-09 16:15:01.037  INFO 13864 --- [           main] kafka.log.LogManager                     : Loading logs from log dirs ArraySeq(C:\Users\ddrop\AppData\Local\Temp\spring.kafka.bf8e2b62-a1f2-4092-b292-a15e35bd31ad18378079390566696446)
2021-12-09 16:15:01.040  INFO 13864 --- [           main] kafka.log.LogManager                     : Attempting recovery for all logs in C:\Users\ddrop\AppData\Local\Temp\spring.kafka.bf8e2b62-a1f2-4092-b292-a15e35bd31ad18378079390566696446 since no clean shutdown file was found
2021-12-09 16:15:01.043  INFO 13864 --- [           main] kafka.log.LogManager                     : Loaded 0 logs in 6ms.
2021-12-09 16:15:01.043  INFO 13864 --- [           main] kafka.log.LogManager                     : Starting log cleanup with a period of 300000 ms.
2021-12-09 16:15:01.045  INFO 13864 --- [           main] kafka.log.LogManager                     : Starting log flusher with a default period of 9223372036854775807 ms.
2021-12-09 16:15:01.052  INFO 13864 --- [           main] kafka.log.LogCleaner                     : Starting the log cleaner
2021-12-09 16:15:01.059  INFO 13864 --- [leaner-thread-0] kafka.log.LogCleaner                     : [kafka-log-cleaner-thread-0]: Starting
2021-12-09 16:15:01.224  INFO 13864 --- [name=forwarding] k.s.BrokerToControllerRequestThread      : [BrokerToControllerChannelManager broker=0 name=forwarding]: Starting
2021-12-09 16:15:01.325  INFO 13864 --- [           main] kafka.network.ConnectionQuotas           : Updated connection-accept-rate max connection creation rate to 2147483647
2021-12-09 16:15:01.327  INFO 13864 --- [           main] kafka.network.Acceptor                   : Awaiting socket connections on localhost:63919.
2021-12-09 16:15:01.345  INFO 13864 --- [           main] kafka.network.SocketServer               : [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT)
2021-12-09 16:15:01.350  INFO 13864 --- [0 name=alterIsr] k.s.BrokerToControllerRequestThread      : [BrokerToControllerChannelManager broker=0 name=alterIsr]: Starting
2021-12-09 16:15:01.364  INFO 13864 --- [eaper-0-Produce] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-Produce]: Starting
2021-12-09 16:15:01.364  INFO 13864 --- [nReaper-0-Fetch] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-Fetch]: Starting
2021-12-09 16:15:01.365  INFO 13864 --- [0-DeleteRecords] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-DeleteRecords]: Starting
2021-12-09 16:15:01.365  INFO 13864 --- [r-0-ElectLeader] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-ElectLeader]: Starting
2021-12-09 16:15:01.374  INFO 13864 --- [rFailureHandler] k.s.ReplicaManager$LogDirFailureHandler  : [LogDirFailureHandler]: Starting
2021-12-09 16:15:01.390  INFO 13864 --- [           main] kafka.zk.KafkaZkClient                   : Creating /brokers/ids/0 (is it secure? false)
2021-12-09 16:15:01.400  INFO 13864 --- [           main] kafka.zk.KafkaZkClient                   : Stat of the created znode at /brokers/ids/0 is: 25,25,1639062901396,1639062901396,1,0,0,72059919267528704,204,0,25

2021-12-09 16:15:01.400  INFO 13864 --- [           main] kafka.zk.KafkaZkClient                   : Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:63919, czxid (broker epoch): 25
2021-12-09 16:15:01.410 ERROR 13864 --- [           main] kafka.server.BrokerMetadataCheckpoint    : Failed to write meta.properties due to

java.nio.file.AccessDeniedException: C:\Users\ddrop\AppData\Local\Temp\spring.kafka.bf8e2b62-a1f2-4092-b292-a15e35bd31ad18378079390566696446
    at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:89) ~[na:na]
    at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103) ~[na:na]
    at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:108) ~[na:na]

The test class:

package com.example.demo;

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.context.EmbeddedKafka;

@SpringBootTest
@EmbeddedKafka
class ApplicationTest {

    @Test
    void run() {
        int i = 1 + 1; // just a line of code to set a debug-point
    }
}

I do not have this error when pinning kafka.version to 2.8.1 in pom.xml's properties.

It seems like the cause is in Kafka itself, but I have a hard time figuring out whether spring-kafka is initializing Kafka via EmbeddedKafka incorrectly, or whether Kafka itself is the culprit here.

Does anyone have an idea? Am I missing a test parameter I should set?

ANSWER

Answered 2021-Dec-09 at 15:51

Known bug on the Apache Kafka side; there is nothing to do from the Spring perspective. See more info here: https://github.com/spring-projects/spring-kafka/discussions/2027 and here: https://issues.apache.org/jira/browse/KAFKA-13391

You need to wait for Apache Kafka 3.0.1, or skip embedded Kafka and rely instead on Testcontainers, for example, or a fully external Apache Kafka broker.
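If you go the Testcontainers route, a sketch of what the test above could look like with a throwaway Kafka container instead of @EmbeddedKafka is shown below. This assumes the org.testcontainers:kafka and org.testcontainers:junit-jupiter test dependencies are on the classpath and Docker is available; the image tag is illustrative.

package com.example.demo;

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@SpringBootTest
@Testcontainers
class ApplicationTestcontainersTest {

    // A real broker running in Docker instead of the in-JVM embedded broker,
    // so no Kafka log dirs are created under the Windows temp directory.
    @Container
    static KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.0.1"));

    @DynamicPropertySource
    static void kafkaProperties(DynamicPropertyRegistry registry) {
        // Point spring-kafka at the container's advertised listener.
        registry.add("spring.kafka.bootstrap-servers", kafka::getBootstrapServers);
    }

    @Test
    void run() {
        int i = 1 + 1; // same placeholder as in the original test
    }
}

The container is started once for the test class and torn down automatically, so the AccessDeniedException on the Windows temp directory never comes into play.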

Source https://stackoverflow.com/questions/70292425

QUESTION

Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a recognized option

Asked 2022-Mar-24 at 12:28

I am new to Kafka and ZooKeeper, and I am trying to create a topic, but I am getting this error:

Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a recognized option
        at joptsimple.OptionException.unrecognizedOption(OptionException.java:108)
        at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:510)
        at joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)
        at joptsimple.OptionParser.parse(OptionParser.java:396)
        at kafka.admin.TopicCommand$TopicCommandOptions.<init>(TopicCommand.scala:517)
        at kafka.admin.TopicCommand$.main(TopicCommand.scala:47)
        at kafka.admin.TopicCommand.main(TopicCommand.scala)

I am using this command to create the topic:

.\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partions 1 --topic TestTopic

ANSWER

Answered 2021-Sep-30 at 14:52

Read the official Kafka documentation for the version you downloaded, not some other blog or article that you might have copied the command from.

The --zookeeper option is almost never used for CLI commands in current versions.

If you run bin\kafka-topics on its own with --help or no options, it will print the help message that shows all available arguments.
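The help output will show that current releases expect --bootstrap-server rather than --zookeeper. Purely as an illustration of the same point, topics can also be created programmatically with Kafka's Java AdminClient, which talks to the brokers directly; a minimal sketch, assuming a broker on localhost:9092:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The brokers are addressed directly; there is no ZooKeeper setting here.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // topic name, number of partitions, replication factor
            NewTopic topic = new NewTopic("TestTopic", 1, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get(); // block until created
        }
    }
}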

Source https://stackoverflow.com/questions/69297020

QUESTION

How to avoid publishing duplicate data to Kafka via Kafka Connect and Couchbase Eventing, when replicate Couchbase data on multi data center with XDCR

Asked 2022-Feb-14 at 19:12

My buckets are:

  • MyDataBucket: the application saves its data in this bucket.
  • MyEventingBucket: a Couchbase Eventing function extracts the 'currentState' field from MyDataBucket and saves it in this bucket.

Also, I have a Kafka Couchbase connector that pushes data from MyEventingBucket to a Kafka topic.

When we had a single data center, there wasn't any problem. Now, we have three data centers. We replicate our data with XDCR between data centers and we work as active-active. So, write requests can be from any data center.

When data is replicated to the other data centers, the Eventing service runs in all data centers, and the same data is pushed to Kafka three times (once per data center) by the Kafka connector.

How can we avoid pushing duplicate data to Kafka?

PS: Of course, we could run the Eventing service and the Kafka connector in only one data center, so data would be published to Kafka just once. But this is not a good solution, because we would be affected whenever a problem occurs in that data center; avoiding that was the main reason for using multiple data centers.

ANSWER

Answered 2022-Feb-14 at 19:12

Obviously in a perfect world XDCR would just work with Eventing on the replicated bucket.

I put together an Eventing-based workaround to overcome issues in an active/active XDCR configuration; it is a bit complex, so I thought working code would be best. This is one way to perform the solution that Matthew Groves alluded to.

Documents are tagged, and you have a "cluster_state" document shared via XDCR (see comments in the code) to coordinate which cluster is "primary", since you only want one cluster to fire the Eventing function.

I will give the code for an Eventing function "xcdr_supression_700" for version 7.0.0; with a minor change it will also work on 6.6.5.

Note, newer Couchbase releases have more functionality WRT Eventing and allow the Eventing function to be simplified, for example:

  • Advanced Bucket Accessors in 6.6+, specifically couchbase.replace(), can use CAS and prevent potential races (note Eventing does not allow locking).
  • Timers have been improved and can be overwritten in 6.6+, thus simplifying the logic needed to determine whether a timer is an orphan.
  • Constant Alias bindings in 7.X allow the JavaScript Eventing code to be identical between clusters, changing just a setting for each cluster.

Setting up XDCR and Eventing

The following code will successfully suppress all extra Eventing mutations on a bucket called "common" or in 7.0.X a keyspace of "common._default._default" with an active/active XDCR replication.

The example is for two (2) clusters but may be extended. This code is 7.0 specific (I can supply a 6.5.1 variant if needed - please DM me)

PS: The only thing it does is log a message (in the cluster that is processing the function). You can just set up two one-node clusters; I named my clusters "couch01" and "couch03". It is pretty easy to set up and test to ensure that mutations in your bucket are only processed once across two clusters with active/active XDCR.

The Eventing Function is generic WRT the JavaScript, but it does require a different constant alias on each cluster; see the comment just under the OnUpdate(doc, meta) entry point.

/*
PURPOSE suppress duplicate mutations by Eventing when we use an Active/Active XDCR setup

Make two clusters "couch01" and "couch03" each with bucket "common" (if 7.0.0 keyspace "common._default._default")
On cluster "couch01", setup XDCR replication of common from "couch01" =>  "couch03"
On cluster "couch03", setup XDCR replication of common from "couch03" =>  "couch01"
This is an active / active XDCR configuration.

We process all documents in "common" except those with "type": "cluster_state" the documents can contain anything

{
  "data": "...something..."
}

We add "owner": "cluster" to every document, in this sample I have two clusters "couch01" and "couch03"
We add "crc": "crc" to every document, in this sample I have two clusters "couch01" and "couch03"
If either the "owner" or "crc" property does not exist we will add the properties ourselves to the document

{
  "data": "...something...",
  "owner": "couch01",
  "crc": "a63a0af9428f6d2d"
}

A document must exist with KEY "cluster_state" when things are perfect it looks lke the following:

{
    "type": "cluster_state",
    "ci_offline": {"couch01": false, "couch03": false },
    "ci_backups": {"couch03": "couch01",  "couch01": "couch03" }
}

Note ci_offline is an indicator that the cluster is down, for example is a document has an "owner": "couch01"
and "ci_offline": {"couch01": true, "couch03": false } then the cluster "couch02" will take ownership and the
documents will be updated accordingly.  An external process (ping/verify CB is running, etc.) runs every minute
or so and then updates the "cluster_state" if a change in cluster state occurs, however prior to updating
ci_offline to "true" the eventing Function on that cluster should either be undeployed or paused.  In addition
re-enabeling the cluster setting the flag ci_offline to "false" must be done before the Function is resumed or
re-deployed.

The ci_backups tells which cluster is a backup for which cluster, pretty simple for two clusters.

If you have timers when the timer fires you MUST check if the doc.owner is correct if not ignore the timer, i.e.
do nothing.  In addition, when you "take ownership" you will need to create a new timer.  Finally, all timers should
have an id such that if we ping pong ci_offline that the timer will be overwritten, this implies 6.6.0+ else you
need do even to more work to suppress orphaned timers.

The 'near' identical Function will be deployed on both clusters "couch01" and "couch02" make sure you have
a constant binding for 7.0.0 THIS_CLUSTER "couch01" or THIS_CLUSTER "couch02", or for 6.6.0 uncomment the
appropriate var statement at the top of OnUpdate().  Next you should have a bucket binding of src_bkt to
keyspace "common._default._default" for 7.0.0 or to bucket "common" in 6.6.0 in mode read+write.
*/

function OnUpdate(doc, meta) {
    // ********************************
    // MUST MATCH THE CLUSTER AND ALSO THE DOC "cluster_state"
    // *********
    // var THIS_CLUSTER = "couch01"; // this could be a constant binding in 7.0.0, in 6.X we uncomment one of these to match he cluster name
    // var THIS_CLUSTER = "couch03"; // this could be a constant binding in 7.0.0, in 6.X we uncomment one of these to match he cluster name
    // ********************************

    if (doc.type === "cluster_state") return;

    var cs = src_bkt["cluster_state"];  // extra bucket op read the state of the clusters
    if (cs.ci_offline[THIS_CLUSTER] === true) return; // this cluster is marked offline do nothing.
    // ^^^^^^^^
    // IMPORTANT: when an external process marks the cs.ci_offline[THIS_CLUSTER] back to false (as
    // in this cluster becomes online) it is assumed that the Eventing function was undeployed
    // (or was paused) when it was set "true" and will be redeployed or resumed AFTER it is set "false".
    // This order of this procedure is very important else  mutations will be lost.

    var orig_owner = doc.owner;
    var fallback_cluster = cs.ci_backups[THIS_CLUSTER]; // this cluster is the fallback for the fallback_cluster

   /*
    if (!doc.crc && !doc.owner) {
        doc.owner = fallback_cluster;
        src_bkt[meta.id] = doc;
        return; // the fallback cluster NOT THIS CLUSTER is now the owner, the fallback
                // cluster will then add the crc property, as we just made a mutation in that
                // cluster via XDCR
    }
   */

    if (!doc.crc && !doc.owner) {
        doc.owner = THIS_CLUSTER;
        orig_owner = doc.owner;
        // use CAS to avoid a potential 'race' between clusters
        var result = couchbase.replace(src_bkt,meta,doc);
        if (result.success) {
            // log('success adv. replace: result',result);
        } else {
            // log('lost to other cluster failure adv. replace: id',meta.id,'result',result);
            // re-read
            doc = src_bkt[meta.id];
            orig_owner = doc.owner;
        }
    }

    // logic to take over a failed clusters data, requires updating "cluster_state"
    if (orig_owner !== THIS_CLUSTER) {
        if ( orig_owner === fallback_cluster && cs.ci_offline[fallback_cluster] === true) {
            doc.owner = THIS_CLUSTER; // Here update the doc's owner
            src_bkt[meta.id] = doc;   // This cluster now will now process this doc's mutations.
        } else {
            return; // this isn't the fallback cluster.
        }
    }

    var crc_changed = false;
    if (!doc.crc) {
        var cur_owner = doc.owner;
        delete doc.owner;
        doc.crc = crc64(doc);  // crc DOES NOT include doc.owner && doc.crc
        doc.owner = cur_owner;
        crc_changed = true;
    } else {
        var cur_owner = doc.owner;
        var cur_crc = doc.crc;
        delete doc.owner;
        delete doc.crc;
        doc.crc = crc64(doc); // crc DOES NOT include doc.owner && doc.crc
        doc.owner = cur_owner;
        if (cur_crc != doc.crc) {
            crc_changed = true;
        } else {
            return;
        }
    }

    if (crc_changed) {
        // update the data with the new crc, to suppress duplicate XDCR processing, and re-deploy form Everything
        // we could use CAS here but at this point only one cluster will update the doc, so we can not have races.
        src_bkt[meta.id] = doc;
    }

    // This is the action on a fresh unprocessed mutation, here it is just a log message.
    log("A. Doc created/updated", meta.id, 'THIS_CLUSTER', THIS_CLUSTER, 'offline', cs.ci_offline[THIS_CLUSTER],
                                  'orig_owner', orig_owner, 'owner', doc.owner, 'crc_changed', crc_changed,doc.crc);
}

Make sure you have two buckets prior to importing "xcdr_supression_700.json" or "xcdr_supression_660.json"

For the first cluster's (couch01) setup, pay attention to the constant alias: you need to ensure THIS_CLUSTER is set to "couch01".


For the second cluster's (couch03) setup, pay attention to the constant alias: you need to ensure THIS_CLUSTER is set to "couch03".


Now, if you are running version 6.6.5 you do not have Constant Alias bindings (which act as globals in your Eventing function's JavaScript), hence the requirement to uncomment the appropriate variable; the example below is for cluster couch01.

function OnUpdate(doc, meta) {
    // ********************************
    // MUST MATCH THE CLUSTER AND ALSO THE DOC "cluster_state"
    // *********
    var THIS_CLUSTER = "couch01"; // this could be a constant binding in 7.0.0, in 6.X we uncomment one of these to match he cluster name
    // var THIS_CLUSTER = "couch03"; // this could be a constant binding in 7.0.0, in 6.X we uncomment one of these to match he cluster name
    // ********************************
    // ....  code removed (see prior code example) ....
}

Some comments/details:

You may wonder why we need to use a CRC function and store its value in the document undergoing XDCR.

The CRC function, crc64(), built into Eventing is used to detect a non-change, i.e. a mutation that is merely due to an XDCR document update. The use of the CRC and the properties "owner" and "crc" allows a) determining the owning cluster and b) suppressing the Eventing function when the mutation is due to an XDCR cluster-to-cluster copy, based on the "active" cluster.

Note that updating the CRC in the document as part of a timer function will trigger the OnUpdate(doc, meta) entry point of the Eventing function again. If you have timers, then when a timer fires you MUST check whether doc.owner is correct; if it is not, you ignore the timer, i.e. do nothing. In addition, when you "take ownership" you will need to create a new timer. Finally, all timers should have an id such that if we ping-pong cluster_state.ci_offline the timer will be overwritten; this implies you must use version 6.6.0+, otherwise you need to do even more work to determine, when a timer fires, that the timer is orphaned and then suppress any action. Be very careful in older Couchbase versions, because in 6.5 you cannot overwrite a timer by its id and all timer ids should be unique.

Any mutation made to the source bucket by an Eventing function is suppressed, or not seen, by that Eventing function, whether a document is mutated by the main JavaScript code or by a timer callback. Yet these mutations will be seen via XDCR active/active replication in the other cluster.

As to using Eventing timers, pay attention to the comment I put in the prior paragraph about overwriting and suppressing, especially if you insist on using Couchbase Server 6.5, which is getting a bit long in the tooth, so to speak.

Concerning the responsibility to update the cluster_state document, it is envisioned that this would be a periodic script outside of Couchbase, run from a Linux cron, that does "aliveness" tests, with a manual override. Be careful here, as you can easily go "split brain" due to a network partitioning issue.

A comment about the cluster_state document: it is subject to XDCR; it is a persistent document that the active/active replication makes appear to be a single inter-cluster document. If a cluster is "down", changing it on the live cluster will result in it replicating once the "down" cluster is recovered.

Deploy/Undeploy will either process all current documents via the DCP mutation stream all over again (feed boundary == Everything) or only process items or mutations occurring after the time of deployment (feed boundary == From now). So you need careful coding in the first case to prevent acting on the same document twice, and you will miss mutations in the second case.

It is best to design your Eventing Functions to be idempotent, so that there is no additional effect if a function is called more than once with the same input parameters. This can be achieved by storing state in the documents that are processed, so you never reprocess them on a re-deploy.

Pause/Resume: invoking Pause will create a checkpoint and shut down the Eventing processing. Then, on a Resume, the DCP stream will start from the checkpoint (for each vBucket), so you will not miss a single mutation, subject to DCP dedup. Furthermore, all "active" timers that would have fired during the "pause" will fire as soon as possible (typically within the next 7-second timer scan interval).

Best

Jon Strabala Principal Product Manager - Couchbase

Source https://stackoverflow.com/questions/71078871

QUESTION

How can I register a protobuf schema with references in other packages in Kafka schema registry?

Asked 2022-Feb-02 at 10:55

I'm running Kafka Schema Registry version 5.5.2 and trying to register a schema that contains a reference to another schema. I managed to do this when the referenced schema was in the same package as the referencing schema, with this curl command:

curl -X POST -H  "Content-Type: application/vnd.schemaregistry.v1+json" \
--data '{"schemaType":"PROTOBUF","references": [{"name": "other.proto","subject": "other.proto","version": 1}],"schema":"syntax = \"proto3\"; package com.acme; import \"other.proto\";\n\nmessage MyRecord {\n  string f1 = 1;\n  OtherRecord f2 = 2;\n }\n"}' \
http://localhost:8081/subjects/test-schema/versions

However, when I changed the package name of the referred schema, like this:

syntax = "proto3";
package some.other.package;

message OtherRecord {
  int32 other_id = 1;
}

I got {"error_code":42201,"message":"Either the input schema or one its references is invalid"} when I tried to register the referring schema, no matter what I put under the references name/subject. That's one of my trials:

curl -X POST -H  "Content-Type: application/vnd.schemaregistry.v1+json" \
--data '{"schemaType":"PROTOBUF","references": [{"name": "other.proto","subject": "other.proto","version": 1}],"schema":"syntax = \"proto3\"; package com.acme; import \"some.package.other.proto\";\n\nmessage MyRecord {\n  string f1 = 1;\n  OtherRecord f2 = 2;\n }\n"}' \
http://localhost:8081/subjects/test-shcema/versions

ANSWER

Answered 2022-Feb-02 at 10:55

First, you should register your other proto with the schema registry.

Create a JSON file (named other-proto.json) with the following content:

{
  "schemaType": "PROTOBUF",
  "schema": "syntax = \"proto3\";\npackage com.acme;\n\nmessage OtherRecord {\n  int32 other_id = 1;\n}\n"
}

Then post it to the schema registry:

curl -XPOST -H 'Content-Type:application/vnd.schemaregistry.v1+json' http://localhost:8081/subjects/other.proto/versions --data @other-proto.json

Now your schema registry knows this schema as the reference under subject other.proto. Next, create another JSON file (named testproto-value.json) as follows:

{
  "schemaType": "PROTOBUF",
  "references": [
    {
      "name": "other.proto",
      "subject": "other.proto",
      "version": 1
    }
  ],
  "schema": "syntax = \"proto3\";\npackage com.acme;\n\nimport \"other.proto\";\n\nmessage MyRecord {\n  string f1 = 1;\n  .com.acme.OtherRecord f2 = 2;\n}\n"
}

And post it to the schema registry:

curl -XPOST -H 'Content-Type:application/vnd.schemaregistry.v1+json' http://localhost:8081/subjects/testproto-value/versions --data @testproto-value.json

This will fix your issue, because the reference in testproto-value.json is now known to the registry under the other.proto subject.
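If you would rather script these two registrations than run curl, the same POSTs can be made with the JDK 11 HttpClient; the sketch below simply mirrors the calls above (the class name is made up for the example, and the registry is assumed to be reachable at localhost:8081):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterProtobufSchemas {

    // POST a schema payload to /subjects/{subject}/versions and return the response body.
    static String post(HttpClient client, String subject, String body) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/subjects/" + subject + "/versions"))
                .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Same payload as other-proto.json above.
        String otherProto = "{\"schemaType\":\"PROTOBUF\","
                + "\"schema\":\"syntax = \\\"proto3\\\";\\npackage com.acme;\\n\\nmessage OtherRecord {\\n  int32 other_id = 1;\\n}\\n\"}";

        // Same payload as testproto-value.json above, referencing subject other.proto version 1.
        String myRecord = "{\"schemaType\":\"PROTOBUF\","
                + "\"references\":[{\"name\":\"other.proto\",\"subject\":\"other.proto\",\"version\":1}],"
                + "\"schema\":\"syntax = \\\"proto3\\\";\\npackage com.acme;\\n\\nimport \\\"other.proto\\\";\\n\\n"
                + "message MyRecord {\\n  string f1 = 1;\\n  .com.acme.OtherRecord f2 = 2;\\n}\\n\"}";

        System.out.println(post(client, "other.proto", otherProto));      // register the referenced schema first
        System.out.println(post(client, "testproto-value", myRecord));    // then the referencing schema
    }
}

The order matters for the same reason as with curl: the referenced subject must exist before the referencing schema is registered.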

Source https://stackoverflow.com/questions/70464651

QUESTION

MS dotnet core container images failed to pull, Error: CTC1014

Asked 2022-Jan-26 at 09:25

I was trying to build a new image for a small dotnet core 3.1 console application. I got an error:

failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to copy: httpReadSeeker: failed open: failed to do request: Get https://westeurope.data.mcr.microsoft.com/42012bb2682a4d76ba7fa17a9d9a9162-qb2vm9uiex//docker/registry/v2/blobs/sha256/87/87413803399bebbe093cfb4ef6c89d426c13a62811d7501d462f2f0e018321bb/data?P1=1627480321&P2=1&P3=1&P4=uDGSoX8YSljKnDQVR6fqniuqK8fjkRvyngwKxM7ljlM%3D&se=2021-07-28T13%3A52%3A01Z&sig=wJVu%2BBQo2sldEPr5ea6KHdflARqlzPZ9Ap7uBKcEYYw%3D&sp=r&spr=https&sr=b&sv=2016-05-31&regid=42012bb2682a4d76ba7fa17a9d9a9162: x509: certificate has expired or is not yet valid

I checked an old dotnet program whose Dockerfile had been working perfectly, and I got the same error. Then I jumped to Docker Hub and checked the MS images, and saw that all MS images had been updated an hour earlier, and then updated once again 10 minutes ago xD. However, I still cannot pull the base images mcr.microsoft.com/dotnet/runtime:3.1 and mcr.microsoft.com/dotnet/sdk:3.1. My whole Dockerfile is:

FROM mcr.microsoft.com/dotnet/runtime:3.1 AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["Kafka-dotnet-consumer/Kafka-dotnet-consumer.csproj", "Kafka-dotnet-consumer/"]
RUN dotnet restore "Kafka-dotnet-consumer/Kafka-dotnet-consumer.csproj"
COPY . .
WORKDIR "/src/Kafka-dotnet-consumer"
RUN dotnet build "Kafka-dotnet-consumer.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "Kafka-dotnet-consumer.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Kafka-dotnet-consumer.dll"]

and the complete error log is:

Severity    Code    Description Project File    Line    Suppression State
Error   CTC1014 Docker command failed with exit code 1.
#1 [internal] load build definition from Dockerfile
#1 sha256:356bc2781f52b021d6dc2eefeef3212c983066d4fe3637fe7928c8165f181c52
#1 DONE 0.0s

#1 [internal] load build definition from Dockerfile
#1 sha256:356bc2781f52b021d6dc2eefeef3212c983066d4fe3637fe7928c8165f181c52
#1 transferring dockerfile: 826B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 sha256:158b62c61546176b0f8a68c34f4aed7fe8e5f979cc578672dacd3c07aff01eb1
#2 transferring context: 35B done
#2 DONE 0.0s

#3 [internal] load metadata for mcr.microsoft.com/dotnet/sdk:3.1-alpine
#3 sha256:9c4e2456483bc5a0fda4bf0466bb996bef09b180bf33a44ede0bd988c1be9178
#3 ...

#4 [internal] load metadata for mcr.microsoft.com/dotnet/runtime:3.1-alpine
#4 sha256:46a3f71ff7a02c9ad1111e95e06323b127c541ea25dc1bca0750bc3a2ea917ca
#4 ERROR: failed to copy: httpReadSeeker: failed open: failed to do request: Get https://westeurope.data.mcr.microsoft.com/42012bb2682a4d76ba7fa17a9d9a9162-qb2vm9uiex//docker/registry/v2/blobs/sha256/87/87413803399bebbe093cfb4ef6c89d426c13a62811d7501d462f2f0e018321bb/data?P1=1627480321&P2=1&P3=1&P4=uDGSoX8YSljKnDQVR6fqniuqK8fjkRvyngwKxM7ljlM%3D&se=2021-07-28T13%3A52%3A01Z&sig=wJVu%2BBQo2sldEPr5ea6KHdflARqlzPZ9Ap7uBKcEYYw%3D&sp=r&spr=https&sr=b&sv=2016-05-31&regid=42012bb2682a4d76ba7fa17a9d9a9162: x509: certificate has expired or is not yet valid

#3 [internal] load metadata for mcr.microsoft.com/dotnet/sdk:3.1-alpine
#3 sha256:9c4e2456483bc5a0fda4bf0466bb996bef09b180bf33a44ede0bd988c1be9178
#3 CANCELED
------
 > [internal] load metadata for mcr.microsoft.com/dotnet/runtime:3.1-alpine:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to copy: httpReadSeeker: failed open: failed to do request: Get https://westeurope.data.mcr.microsoft.com/42012bb2682a4d76ba7fa17a9d9a9162-qb2vm9uiex//docker/registry/v2/blobs/sha256/87/87413803399bebbe093cfb4ef6c89d426c13a62811d7501d462f2f0e018321bb/data?P1=1627480321&P2=1&P3=1&P4=uDGSoX8YSljKnDQVR6fqniuqK8fjkRvyngwKxM7ljlM%3D&se=2021-07-28T13%3A52%3A01Z&sig=wJVu%2BBQo2sldEPr5ea6KHdflARqlzPZ9Ap7uBKcEYYw%3D&sp=r&spr=https&sr=b&sv=2016-05-31&regid=42012bb2682a4d76ba7fa17a9d9a9162: x509: certificate has expired or is not yet valid Kafka-dotnet-consumer   C:\Ziad\repos\vdpm-datagrid\Kafka-dotnet-consumer\Kafka-dotnet-consumer\Dockerfile  1   

Any ideas?

ANSWER

Answered 2022-Jan-26 at 09:25

So, as @Chris Culter mentioned in a comment above, I just restarted my machine and it works again.

It is kind of strange, because I had already updated my Docker Desktop, restarted it, and cleaned/purged the Docker data. None of those helped; only after restarting Windows does it work again!

Source https://stackoverflow.com/questions/68561615

QUESTION

How to make a Spring Boot application quit on tomcat failure

Asked 2022-Jan-15 at 09:55

We have a bunch of microservices based on Spring Boot 2.5.4, also including spring-kafka:2.7.6 and spring-boot-actuator:2.5.4. All the services use Tomcat as the servlet container and have graceful shutdown enabled. These microservices are containerized using Docker.
Due to a misconfiguration, yesterday we faced a problem with one of these containers because it tried to bind a port already taken by another one.
Log states:

Stopping service [Tomcat]
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
***************************
APPLICATION FAILED TO START
***************************

Description:

Web server failed to start. Port 8080 was already in use.

However, the JVM is still running because of the Kafka consumers/streams.

I need to destroy everything, or at least do a System.exit(error-code), to trigger the Docker restart policy. How could I achieve this? If possible, a solution using configuration would be better than one requiring development.
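
(For illustration only, here is a minimal sketch of the System.exit fallback mentioned above, assuming a plain Spring Boot main class; the class name is a placeholder, and this is not the accepted answer below, which takes a configuration-only route via a container healthcheck.)

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication { // hypothetical application class

  public static void main(String[] args) {
    try {
      SpringApplication.run(DemoApplication.class, args);
    } catch (Throwable t) {
      // Startup failed (e.g. the port was already in use). Without an explicit exit,
      // non-daemon threads such as the Kafka listener containers can keep the JVM alive.
      System.exit(1);
    }
  }
}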

I developed a minimal test application made of the SpringBootApplication and a KafkaConsumer class to ensure the problem isn't related to our microservices. Same result.

POM file

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.5.4</version>
  <relativePath/> <!-- lookup parent from repository -->
</parent>
...
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
</dependency>

Kafka listener

@Component
public class KafkaConsumer {

  @KafkaListener(topics = "test", groupId = "test")
  public void process(String message) {

  }
}

application.yml

spring:
  kafka:
    bootstrap-servers: kafka:9092

Log file

2021-12-17 11:12:24.955  WARN 29067 --- [           main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'webServerStartStop'; nested exception is org.springframework.boot.web.server.PortInUseException: Port 8080 is already in use
2021-12-17 11:12:24.959  INFO 29067 --- [           main] o.apache.catalina.core.StandardService   : Stopping service [Tomcat]
2021-12-17 11:12:24.969  INFO 29067 --- [           main] ConditionEvaluationReportLoggingListener :

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2021-12-17 11:12:24.978 ERROR 29067 --- [           main] o.s.b.d.LoggingFailureAnalysisReporter   :

***************************
APPLICATION FAILED TO START
***************************

Description:

Web server failed to start. Port 8080 was already in use.

Action:

Identify and stop the process that's listening on port 8080 or configure this application to listen on another port.

2021-12-17 11:12:25.151  WARN 29067 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-test-1, groupId=test] Error while fetching metadata with correlation id 2 : {test=LEADER_NOT_AVAILABLE}
2021-12-17 11:12:25.154  INFO 29067 --- [ntainer#0-0-C-1] org.apache.kafka.clients.Metadata        : [Consumer clientId=consumer-test-1, groupId=test] Cluster ID: NwbnlV2vSdiYtDzgZ81TDQ
2021-12-17 11:12:25.156  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Discovered group coordinator kafka:9092 (id: 2147483636 rack: null)
2021-12-17 11:12:25.159  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] (Re-)joining group
2021-12-17 11:12:25.179  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] (Re-)joining group
2021-12-17 11:12:27.004  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Successfully joined group with generation Generation{generationId=2, memberId='consumer-test-1-c5924ab5-afc8-4720-a5d7-f8107ace3aad', protocol='range'}
2021-12-17 11:12:27.009  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Finished assignment for group at generation 2: {consumer-test-1-c5924ab5-afc8-4720-a5d7-f8107ace3aad=Assignment(partitions=[test-0])}
2021-12-17 11:12:27.021  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Successfully synced group in generation Generation{generationId=2, memberId='consumer-test-1-c5924ab5-afc8-4720-a5d7-f8107ace3aad', protocol='range'}
2021-12-17 11:12:27.022  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Notifying assignor about the new Assignment(partitions=[test-0])
2021-12-17 11:12:27.025  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Adding newly assigned partitions: test-0
2021-12-17 11:12:27.029  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Found no committed offset for partition test-0
2021-12-17 11:12:27.034  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-test-1, groupId=test] Found no committed offset for partition test-0
2021-12-17 11:12:27.040  INFO 29067 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-test-1, groupId=test] Resetting offset for partition test-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 11 rack: null)], epoch=0}}.
2021-12-17 11:12:27.045  INFO 29067 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : test: partitions assigned: [test-0]

ANSWER

Answered 2021-Dec-17 at 08:38

Since you have everything containerized, it's way simpler.

Just set up a small healthcheck endpoint with Spring Web that can be used to check whether the server is still running, something like:

@RestController
public class HealthcheckController {

  @GetMapping("/monitoring")
  public String getMonitoring() {
    return "200: OK";
  }

}

and then refer to it in the HEALTHCHECK part of your Dockerfile. If the server stops, the container will be marked as unhealthy and it will be restarted:

FROM ...

ENTRYPOINT ...
# HEALTHCHECK needs a command to run; curl is one common option (it must be available in the image)
HEALTHCHECK CMD curl --fail http://localhost:8080/monitoring || exit 1

If you don't want to develop anything, you can just use any other endpoint that you know should answer successfully as the HEALTHCHECK, but I would recommend having one endpoint explicitly for that.

Source https://stackoverflow.com/questions/70378200

QUESTION

Setting up JAVA_HOME in Ubuntu to point to Window's JAVA_HOME

Asked 2021-Dec-15 at 10:04

I tried to run Kafka from CMD in Windows and it's very unstable, constantly giving errors. Then I came across this post, which suggests installing Ubuntu and running Kafka from there.

I have installed Ubuntu successfully. I have already defined JAVA_HOME=C:\Program Files\Java\jdk1.8.0_231 as one of the environment variables, and CMD recognizes this variable but Ubuntu does not. I am wondering how to make Ubuntu recognize it, because at the moment, when I type java -version, Ubuntu returns command not found.

Update: Please note that I need Ubuntu's JAVA_HOME to point to the JAVA_HOME environment variable defined in my Windows system, because my Java program in Eclipse needs to talk to Kafka using the same JVM.

I have added the two lines below to my /etc/profile file. echo $JAVA_HOME returns the correct path. However, java -version reports a different version of Java installed on Ubuntu, not the one defined in /etc/profile.

export JAVA_HOME=mnt/c/Program\ Files/Java/jdk1.8.0_231
export PATH=$JAVA_HOME/bin:$PATH

ANSWER

Answered 2021-Dec-15 at 08:16

When the user logs in, the environment will be loaded from the /etc/profile and $HOME/.bashrc files. There are many ways to solve this problem. You can, for example, export the variables manually:

export JAVA_HOME=/opt/jdk1.8.0_66
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

or fix the configuration in /etc/profile or $HOME/.bashrc

export JAVA_HOME=/opt/jdk1.8.0_66
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Source https://stackoverflow.com/questions/70360286

QUESTION

KafkaConsumer: `seekToEnd()` does not make consumer consume from latest offset

Asked 2021-Dec-08 at 08:06

I have the following code

class Consumer(val consumer: KafkaConsumer<String, ConsumerRecord<String>>) {

    fun run() {
        consumer.seekToEnd(emptyList())
        val pollDuration = 30 // seconds

        while (true) {
            val records = consumer.poll(Duration.ofSeconds(pollDuration))
            // perform record analysis and commitSync()
        }
    }
}

The topic to which the consumer is subscribed continuously receives records. Occasionally, the consumer will crash due to the processing step. When the consumer then is restarted, I want it to consume from the latest offset on the topic (i.e. ignore records that were published to the topic while the consumer was down). I thought the seekToEnd() method would ensure that. However, it seems like the method has no effect at all. The consumer starts to consume from the offset at which it crashed.

What is the correct way to use seekToEnd()?

Edit: The consumer is created with the following configs

fun <T> buildConsumer(valueDeserializer: String): KafkaConsumer<String, T> {
    val props = setupConfig(valueDeserializer)
    Common.setupConsumerSecurityProtocol(props)
    return createConsumer(props)
}

fun setupConfig(valueDeserializer: String): Properties {
    // Configuration setup
    val props = Properties()

    props[ConsumerConfig.GROUP_ID_CONFIG] = config.applicationId
    props[ConsumerConfig.CLIENT_ID_CONFIG] = config.kafka.clientId
    props[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = config.kafka.bootstrapServers
    props[AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG] = config.kafka.schemaRegistryUrl

    props[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = config.kafka.stringDeserializer
    props[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = valueDeserializer
    props[KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG] = "true"

    props[ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG] = config.kafka.maxPollIntervalMs
    props[ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG] = config.kafka.sessionTimeoutMs

    props[ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG] = "false"
    props[ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG] = "false"
    props[ConsumerConfig.AUTO_OFFSET_RESET_CONFIG] = "latest"

    return props
}

fun <T> createConsumer(props: Properties): KafkaConsumer<String, T> {
    val consumer = KafkaConsumer<String, T>(props)
    consumer.subscribe(listOf(config.kafka.inputTopic))
    return consumer
}

ANSWER

Answered 2021-Dec-03 at 15:55

The seekToEnd method requires information about the actual partitions (in Kafka terms, TopicPartitions) from which you want your consumer to read from the end.

I am not familiar with the Kotlin API, but if you check the JavaDocs of the KafkaConsumer method seekToEnd you will see that it asks for a collection of TopicPartitions.

As you are currently passing emptyList(), it will have no impact at all, just as you observed.
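
For illustration, a minimal, self-contained Java sketch (not the asker's code; the broker address, topic, and group id are placeholders) of one common way to apply seekToEnd only once partitions have actually been assigned, via a ConsumerRebalanceListener:

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekToEndExample {

  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // placeholder
    props.put("group.id", "test");                    // placeholder
    props.put("enable.auto.commit", "false");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("input-topic"), new ConsumerRebalanceListener() {
      @Override
      public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // nothing to do for this example
      }

      @Override
      public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // The assignment is only known here, so this is where seeking to the end takes effect.
        consumer.seekToEnd(partitions);
      }
    });

    while (true) {
      ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(30));
      records.forEach(record -> {
        // perform record analysis
      });
      consumer.commitSync();
    }
  }
}

An alternative with the same effect is to call consumer.seekToEnd(consumer.assignment()) after the first poll, once the assignment is known.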

Source https://stackoverflow.com/questions/70214989

QUESTION

Kafka integration tests in Gradle runs into GitHub Actions

Asked 2021-Nov-03 at 19:11

We've been moving our applications from CircleCI to GitHub Actions in our company and we got stuck with a strange situation.

There has been no change to the project's code, but our Kafka integration tests started to fail on the GH Actions machines. Everything works fine in CircleCI and locally (macOS and Fedora Linux machines).

Both CircleCI and GH Actions machines run Ubuntu (tested versions were 18.04 and 20.04). macOS was not tested on GH Actions as it doesn't have Docker available.

Here are the docker-compose and workflow files used by the build and integration tests:

  • docker-compose.yml
version: '2.1'

services:
  postgres:
    container_name: listings-postgres
    image: postgres:10-alpine
    mem_limit: 500m
    networks:
      - listings-stack
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: listings
      POSTGRES_PASSWORD: listings
      POSTGRES_USER: listings
      PGUSER: listings
    healthcheck:
      test: ["CMD", "pg_isready"]
      interval: 1s
      timeout: 3s
      retries: 30

  listings-zookeeper:
    container_name: listings-zookeeper
    image: confluentinc/cp-zookeeper:6.2.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - listings-stack
    ports:
      - "2181:2181"
    healthcheck:
      test: nc -z localhost 2181 || exit -1
      interval: 10s
      timeout: 5s
      retries: 10

  listings-kafka:
    container_name: listings-kafka
    image: confluentinc/cp-kafka:6.2.0
    depends_on:
      listings-zookeeper:
        condition: service_healthy
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://listings-kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: listings-zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - listings-stack
    ports:
      - "29092:29092"
    healthcheck:
      test: kafka-topics --bootstrap-server 127.0.0.1:9092 --list
      interval: 10s
      timeout: 10s
      retries: 50

networks: {listings-stack: {}}
  • build.yml
name: Build

on: [ pull_request ]

env:
  AWS_ACCESS_KEY_ID: ${{ secrets.TUNNEL_AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.TUNNEL_AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'us-east-1'
  CIRCLECI_KEY_TUNNEL: ${{ secrets.ID_RSA_CIRCLECI_TUNNEL }}

jobs:
  build:
    name: Listings-API Build
    runs-on: [ self-hosted, zap ]

    steps:
      - uses: actions/checkout@v2
        with:
          token: ${{ secrets.GH_OLXBR_PAT }}
          submodules: recursive
          path: ./repo
          fetch-depth: 0

      - name: Set up JDK 11
        uses: actions/setup-java@v2
        with:
          distribution: 'adopt'
          java-version: '11'
          architecture: x64
          cache: 'gradle'

      - name: Docker up
        working-directory: ./repo
        run: docker-compose up -d

      - name: Build with Gradle
        working-directory: ./repo
        run: ./gradlew build -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2 -x integrationTest

      - name: Integration tests with Gradle
        working-directory: ./repo
        run: ./gradlew integrationTest -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2

      - name: Sonarqube
        working-directory: ./repo
        env:
          GITHUB_TOKEN: ${{ secrets.GH_OLXBR_PAT }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        run: ./gradlew sonarqube --info -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2

      - name: Docker down
        if: always()
        working-directory: ./repo
        run: docker-compose down --remove-orphans

      - name: Cleanup Gradle Cache
        # Remove some files from the Gradle cache, so they aren't cached by GitHub Actions.
        # Restoring these files from a GitHub Actions cache might cause problems for future builds.
        run: |
          rm -f ${{ env.HOME }}/.gradle/caches/modules-2/modules-2.lock
          rm -f ${{ env.HOME }}/.gradle/caches/modules-2/gc.properties

The integration tests are written using the Spock framework, and the parts where the errors occur are these:

  boolean compareRecordSend(String topicName, int expected) {
    def condition = new PollingConditions()
    condition.within(kafkaProperties.listener.pollTimeout.getSeconds() * 5) {
      assert expected == getRecordSendTotal(topicName)
    }
    return true
  }

  int getRecordSendTotal(String topicName) {
    kafkaTemplate.flush()
    return kafkaTemplate.metrics().find {
      it.key.name() == "record-send-total" && it.key.tags().get("topic") == topicName
    }?.value?.metricValue() ?: 0
  }

The error we're getting is:

Condition not satisfied after 50.00 seconds and 496 attempts
    at spock.util.concurrent.PollingConditions.within(PollingConditions.java:185)
    at com.company.listings.KafkaAwareBaseSpec.compareRecordSend(KafkaAwareBaseSpec.groovy:31)
    at com.company.listings.application.worker.listener.notifier.ListingNotifierITSpec.should notify listings(ListingNotifierITSpec.groovy:44)

    Caused by:
    Condition not satisfied:

    expected == getRecordSendTotal(topicName)
    |        |  |                  |
    10       |  0                  v4
                false

We've debugged the GH Actions machine (SSHed into it) and run things manually. The error still happens, but if the integration tests are run a second time (as well as on subsequent runs), everything works perfectly.

We've also tried to initialize all the necessary topics and send some messages to them preemptively, but the behavior was the same.

The questions we have are:

  • Is there any issue when running Kafka dockerized on an Ubuntu machine (the error also occurred on a co-worker's Ubuntu machine)?
  • Any ideas on why this is happening?

Edit

  • application.yml (Kafka related configuration)
spring:
  kafka:
    bootstrap-servers: localhost:29092
    producer:
      batch-size: 262144
      buffer-memory: 536870912
      retries: 1
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.ByteArraySerializer
      acks: all
      properties:
        linger.ms: 0

ANSWER

Answered 2021-Nov-03 at 19:11

We identified some test sequence dependency between the Kafka tests.

We updated our Gradle version to 7.3-rc-3, which has a more deterministic approach to test scanning. This update "solved" our problem while we prepare to fix the tests' dependencies.

Source https://stackoverflow.com/questions/69284830

QUESTION

How to parse json to case class with map by jsonter, plokhotnyuk

Asked 2021-Nov-02 at 06:27

I want to read JSON messages from Kafka and put them into another structure, a SpecificRecordBase class (Avro). Part of the JSON has a dynamic structure, for example:

1{"known_type": "test", 
2 "unknown_type": {"attr1": true, "attr2": "value2", "attr3": 777}}
3
4{"known_type": "test", 
5 "unknown_type": {"field1":[1,3,4,7], "field2": {"sub_field": "test"}}}
6
7{"known_type": "test", 
8"unknown_type": {"param": "some_value"}}
9

I want to use a flexible structure and put it in a Map[String, String] where every key is the attribute name and every value is the attribute value as a string, with no validation. Example target class instances:

1{"known_type": "test", 
2 "unknown_type": {"attr1": true, "attr2": "value2", "attr3": 777}}
3
4{"known_type": "test", 
5 "unknown_type": {"field1":[1,3,4,7], "field2": {"sub_field": "test"}}}
6
7{"known_type": "test", 
8"unknown_type": {"param": "some_value"}}
9Example(test,Map(attr1 -> true, attr2 -> "value2", attr3 -> 777))
10
11Example(test,Map(field1 -> [1,3,4,7], field2 -> {"sub_field" : "sub_value"}))
12
13Example(test,Map(param -> "some_value"))
14

I wrote the code with circe but want to do the same with plokhotnyuk/jsoniter-scala. Could you please help me find the way?

1{"known_type": "test", 
2 "unknown_type": {"attr1": true, "attr2": "value2", "attr3": 777}}
3
4{"known_type": "test", 
5 "unknown_type": {"field1":[1,3,4,7], "field2": {"sub_field": "test"}}}
6
7{"known_type": "test", 
8"unknown_type": {"param": "some_value"}}
9Example(test,Map(attr1 -> true, attr2 -> "value2", attr3 -> 777))
10
11Example(test,Map(field1 -> [1,3,4,7], field2 -> {"sub_field" : "sub_value"}))
12
13Example(test,Map(param -> "some_value"))
14case class Example(known_type: String = "", unknown_type: Map[String, String])
15
16val result: Option[Example] = for {
17    json       <- parse(rawJson2).toOption
18    t          <- json.hcursor.downField("known_type").as[String].toOption
19    map        <- json.hcursor.downField("unknown_type").as[JsonObject].toOption
20    m = map.toMap.map { case (name, value) => name -> value.toString() }
21} yield Example(t, m)
22

ANSWER

Answered 2021-Nov-02 at 06:27

One possible solution for the proposed data structure that passes the decoding tests from the question:

1{"known_type": "test", 
2 "unknown_type": {"attr1": true, "attr2": "value2", "attr3": 777}}
3
4{"known_type": "test", 
5 "unknown_type": {"field1":[1,3,4,7], "field2": {"sub_field": "test"}}}
6
7{"known_type": "test", 
8"unknown_type": {"param": "some_value"}}
9Example(test,Map(attr1 -> true, attr2 -> "value2", attr3 -> 777))
10
11Example(test,Map(field1 -> [1,3,4,7], field2 -> {"sub_field" : "sub_value"}))
12
13Example(test,Map(param -> "some_value"))
14case class Example(known_type: String = "", unknown_type: Map[String, String])
15
16val result: Option[Example] = for {
17    json       <- parse(rawJson2).toOption
18    t          <- json.hcursor.downField("known_type").as[String].toOption
19    map        <- json.hcursor.downField("unknown_type").as[JsonObject].toOption
20    m = map.toMap.map { case (name, value) => name -> value.toString() }
21} yield Example(t, m)
22case class Example(known_type: String = "", unknown_type: Map[String, String])
23      
24implicit val mapCodec: JsonValueCodec[Map[String, String]] = new JsonValueCodec[Map[String, String]] {
25  override def decodeValue(in: JsonReader, default: Map[String, String]): Map[String, String] = {
26    val b = in.nextToken()
27    if (b == '{') {
28      if (in.isNextToken('}')) nullValue
29      else {
30        in.rollbackToken()
31        var i = 0
32        val x = Map.newBuilder[String, String]
33        while ({
34          i += 1
35          if (i > 1000) in.decodeError("too many keys") // To save from DoS attacks that exploit Scala's map vulnerability: https://github.com/scala/bug/issues/11203
36          x += ((in.readKeyAsString(), new String({
37            in.nextToken()
38            in.rollbackToken()
39            in.readRawValAsBytes()
40          }, StandardCharsets.UTF_8)))
41          in.isNextToken(',')
42        }) ()
43        if (in.isCurrentToken('}')) x.result()
44        else in.objectEndOrCommaError()
45      }
46    } else in.readNullOrTokenError(default, '{')
47  }
48
49  override def encodeValue(x: Map[String, String], out: JsonWriter): Unit = ???
50
51  override val nullValue: Map[String, String] = Map.empty
52}
53
54implicit val exampleCodec: JsonValueCodec[Example] = make
55

Source https://stackoverflow.com/questions/69765241

Community Discussions contain sources that include Stack Exchange Network

Tutorials and Learning Resources in Kafka

Tutorials and Learning Resources are not available at this moment for Kafka
