
flink-client | Java library for managing Apache Flink via the Monitoring REST API | SQL Database library


kandi X-RAY | flink-client Summary

flink-client is a Java library typically used in Database, SQL Database applications. flink-client has no vulnerabilities, has a build file available, has a Permissive License, and has low support. However, flink-client has 1 bug. You can download it from GitHub or Maven.
Java library for managing Apache Flink via the Monitoring REST API

Support

  • flink-client has a low active ecosystem.
  • It has 40 stars and 15 forks. There are 3 watchers for this library.
  • There was 1 major release in the last 12 months.
  • There are 0 open issues and 6 have been closed. On average, issues are closed in 10 days. There is 1 open pull request and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of flink-client is v1.0.4.

Quality

  • flink-client has 1 bug (0 blocker, 0 critical, 0 major, 1 minor) and 42 code smells.

Security

  • flink-client has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • flink-client code analysis shows 0 unresolved vulnerabilities.
  • There are 12 security hotspots that need review.

License

  • flink-client is licensed under the BSD-3-Clause License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • flink-client releases are available to install and integrate.
  • A deployable package is available in Maven.
  • Build file is available. You can build the component from source.
  • Installation instructions, examples and code snippets are available.
  • It has 1510 lines of code, 71 functions and 4 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.

flink-client Key Features

Java library for managing Apache Flink via the Monitoring REST API

flink-client Examples and Code Snippets

  • License
  • How to get the binaries
  • How to generate the code
  • How to build the library
  • How to run the tests
  • Documentation
  • Could not find any factory for identifier 'avro-confluent' that implements 'org.apache.flink.table.factories.DeserializationFormatFactory'
  • Not able to perform transformations and extract JSON values from Flink DataStream and Kafka Topic
  • Flink SlidingEventTimeWindows doesnt work as expected
  • Flink: java.lang.NoSuchMethodError: AvroSchemaConverter
  • Flink 1.12.3 upgrade triggers `NoSuchMethodError: 'scala.collection.mutable.ArrayOps scala.Predef$.refArrayOps`
  • Flink 1.12 Could not find any factory for identifier 'kafka' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath
  • No ExecutorFactory found to execute the application in Flink 1.11.1
  • Apache flink (1.9.1) runtime exception when using case classes in scala (2.12.8)

License

Copyright (c) 2019-2021, Andrea Medeghini
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of the library nor the names of its
  contributors may be used to endorse or promote products derived from
  this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Community Discussions

Trending Discussions on flink-client
  • Could not find any factory for identifier 'avro-confluent' that implements 'org.apache.flink.table.factories.DeserializationFormatFactory'
  • Not able to perform transformations and extract JSON values from Flink DataStream and Kafka Topic
  • Flink SlidingEventTimeWindows doesnt work as expected
  • Flink: java.lang.NoSuchMethodError: AvroSchemaConverter
  • Flink java.lang.ClassNotFoundException: org.apache.flink.connector.kafka.source.KafkaSource
  • Flink KafkaConsumer fail to deserialise a composite avro schema
  • Flink 1.12.3 upgrade triggers `NoSuchMethodError: 'scala.collection.mutable.ArrayOps scala.Predef$.refArrayOps`
  • Flink 1.12 Could not find any factory for identifier 'kafka' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath
  • Is it a BUG with fink 1.12 batch mode?
  • No ExecutorFactory found to execute the application in Flink 1.11.1

QUESTION

Could not find any factory for identifier 'avro-confluent' that implements 'org.apache.flink.table.factories.DeserializationFormatFactory'

Asked 2022-Feb-27 at 19:32

I have a Flink job that runs well locally but fails when I try to flink run the job on a cluster. The error happens when trying to load data from Kafka via 'connector' = 'kafka'. I am using the Flink Table API and the avro-confluent format for reading data from Kafka.

So basically I created a table which reads data from a Kafka topic:

    val inputTableSQL =
      s"""CREATE TABLE input_table (
         |  -- key of the topic
         |  key BYTES NOT NULL,
         |
         |  -- a few columns mapped to the Avro fields of the Kafka value
         |  id STRING,
         |
         |) WITH (
         |
         |  'connector' = 'kafka',
         |  'topic' = '${KafkaConfiguration.InputTopicName}',
         |  'scan.startup.mode' = 'latest-offset',
         |
         |  -- UTF-8 string as Kafka keys, using the 'key' table column
         |  'key.format' = 'raw',
         |  'key.fields' = 'key',
         |
         |  'value.format' = 'avro-confluent',
         |  'value.avro-confluent.schema-registry.url' = '${KafkaConfiguration.KafkaConsumerSchemaRegistryUrl}',
         |  'value.fields-include' = 'EXCEPT_KEY'
         |)
         |""".stripMargin
    val inputTable = tableEnv.executeSql(inputTableSQL)

and then I created another table, which I will use as the output table:

val outputTableSQL =
      s"""CREATE TABLE custom_avro_output_table (
         |  -- key of the topic
         |  key BYTES NOT NULL,
         |
         |  -- a few columns mapped to the Avro fields of the Kafka value
         |  ID STRING
         |) WITH (
         |
         |  'connector' = 'kafka',
         |  'topic' = '${KafkaConfiguration.OutputTopicName}',
         |  'properties.bootstrap.servers' = '${KafkaConfiguration.KafkaProducerBootstrapServers}',
         |
         |  -- UTF-8 string as Kafka keys, using the 'key' table column
         |  'key.format' = 'raw',
         |  'key.fields' = 'key',
         |
         |  $outputFormatSettings
         |  'value.fields-include' = 'EXCEPT_KEY'
         |)
         |""".stripMargin

    val outputTableCreationResult = tableEnv.executeSql(outputTableSQL)
    
val customInsertSQL =
      """INSERT INTO custom_avro_output_table
        |SELECT key, id
        |  FROM input_table
        | WHERE userAgent LIKE '%ost%'
        |""".stripMargin

    val customInsertResult = tableEnv.executeSql(customInsertSQL)

When I run this on my local machine, everything works fine, but when I run it on the cluster, it crashes with the following stack trace:

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_282]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_282]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_282]
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    ... 13 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'avro-confluent' that implements 'org.apache.flink.table.factories.DeserializationFormatFactory' in the classpath.

Available factory identifiers are:

canal-json
csv
debezium-json
json
maxwell-json
raw
    at org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:319) ~[flink-table_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.table.factories.FactoryUtil$TableFactoryHelper.discoverOptionalFormatFactory(FactoryUtil.java:751) ~[flink-table_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.table.factories.FactoryUtil$TableFactoryHelper.discoverOptionalDecodingFormat(FactoryUtil.java:649) ~[flink-table_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.table.factories.FactoryUtil$TableFactoryHelper.discoverDecodingFormat(FactoryUtil.java:633) ~[flink-table_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory.lambda$getValueDecodingFormat$2(KafkaDynamicTableFactory.java:279) ~[?:?]
    at java.util.Optional.orElseGet(Optional.java:267) ~[?:1.8.0_282]
    at org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory.getValueDecodingFormat(KafkaDynamicTableFactory.java:277) ~[?:?]
    at org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory.createDynamicTableSource(KafkaDynamicTableFactory.java:142) ~[?:?]
    at org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:134) ~[flink-table_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.createDynamicTableSource(CatalogSourceTable.java:116) ~[flink-table-blink_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:82) ~[flink-table-blink_2.12-1.13.1.jar:1.13.1]
    at org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3585) ~[flink-table_2.12-1.13.1.jar:1.13.1]

The following is my build.sbt:

val flinkVersion = "1.13.1"

val flinkDependencies = Seq(
  "org.apache.flink" %% "flink-scala" % flinkVersion % Provided,
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % Provided,
  "org.apache.flink" %% "flink-connector-kafka" % flinkVersion,
  "org.apache.flink" %% "flink-clients" % flinkVersion % Provided,
  "org.apache.flink" %% "flink-table-api-scala-bridge" % flinkVersion % Provided,
  "org.apache.flink" %% "flink-table-planner-blink"  % flinkVersion % Provided,
  "org.apache.flink" % "flink-table-common"  % flinkVersion % Provided,
  "org.apache.flink" % "flink-avro-confluent-registry" % flinkVersion,
  "org.apache.flink" % "flink-json" % flinkVersion,
  "com.webtrekk" % "wd.generated" % "2.2.3",
  "com.webtrekk" % "wd.generated.public" % "2.2.0",
  "ch.qos.logback" % "logback-classic" % "1.2.3"
)

A similar issue has been posted in "Flink 1.12 Could not find any factory for identifier 'kafka' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath", but the solution of adding provided is not working in my case.

ANSWER

Answered 2021-Oct-26 at 17:47

I was able to fix this problem using the following approach:

In my build.sbt, there was the following mergeStrategy:

lazy val mergeStrategy = Seq(
  assembly / assemblyMergeStrategy := {
    case "application.conf" => MergeStrategy.concat
    case "reference.conf" => MergeStrategy.concat
    case m if m.toLowerCase.endsWith("manifest.mf") => MergeStrategy.discard
    case m if m.toLowerCase.matches("meta-inf.*\\.sf$") => MergeStrategy.discard
    case _ => MergeStrategy.first
  }
)

I appended the following cases to it, which resolved my exception (the resulting block is shown below):

case "META-INF/services/org.apache.flink.table.factories.Factory"  => MergeStrategy.concat
case "META-INF/services/org.apache.flink.table.factories.TableFactory"  => MergeStrategy.concat

Source https://stackoverflow.com/questions/69677946

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install flink-client

Build the library using Maven.
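
The exact build steps are not captured on this page; assuming a standard Maven project layout (a pom.xml in the repository root), a typical invocation would be:

    mvn clean install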

Support

Create the Flink client.
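
The original snippet is not captured on this page. As a minimal, unverified sketch, assuming the library exposes a Swagger-generated FlinkApi entry point for the Flink Monitoring REST API (the package, class, and method names below are assumptions and should be checked against the generated sources), creating a client could look like this:

    // Assumed package and class names; verify against the generated sources.
    import com.nextbreakpoint.flinkclient.api.FlinkApi;

    public class FlinkClientExample {
        public static void main(String[] args) {
            // Create the client and point it at the JobManager's Monitoring REST
            // endpoint (8081 is Flink's default REST port).
            FlinkApi api = new FlinkApi();
            api.getApiClient().setBasePath("http://localhost:8081");
            // Methods on FlinkApi are expected to mirror the Monitoring REST API
            // endpoints (jobs, jars, savepoints, ...); check the generated API
            // classes for the exact method signatures before use.
        }
    }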