RegexRouter | PHP class to route with regular expressions | Dependency Injection library
kandi X-RAY | RegexRouter Summary
PHP class to route with regular expressions. Extremely small. Follows every conceivable best-practice - SRP, SoC, DI, IoC, bfft….
Top functions reviewed by kandi - BETA
- Execute a route
- Register a new route
RegexRouter Key Features
RegexRouter Examples and Code Snippets
Community Discussions
Trending Discussions on RegexRouter
QUESTION
I installed the Confluent Platform on CentOS 7.9 using the instructions on this page:
sudo yum install confluent-platform-oss-2.11
I am using an AWS MSK cluster running Apache Kafka 2.6.1. I start Connect with /usr/bin/connect-distributed /etc/kafka/connect-distributed.properties and have supplied the MSK client endpoint as the bootstrap server in distributed.properties. Connect starts up just fine. However, when I try to add the following connector, it throws the error that follows.
Connector config -
...ANSWER
Answered 2021-Sep-19 at 09:02
I am not familiar with this specific connector, but one possible explanation is a compatibility issue between the connector version and the Kafka Connect worker version.
You need to check the connector's documentation and verify which versions of Connect it supports.
QUESTION
Below are my JDBC Sink Connector configuration properties.
...ANSWER
Answered 2022-Jan-25 at 14:18
AFAIK, the connection.url can only refer to one database at a time, for a user authenticated to that database.
If you need to write different topics to different databases, copy your connector config and change the appropriate settings.
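As a rough illustration of that suggestion (the connector names, topics, hosts, and credentials below are placeholders, not taken from the question), this amounts to running one JDBC sink per target database, each with its own connection.url:

# Hypothetical sink for topics destined for database A
name=jdbc-sink-db-a
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=orders
connection.url=jdbc:mysql://db-a-host:3306/db_a
connection.user=user_a
connection.password=****
auto.create=true

# A second, copied connector for topics destined for database B
name=jdbc-sink-db-b
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=customers
connection.url=jdbc:mysql://db-b-host:3306/db_b
connection.user=user_b
connection.password=****
auto.create=true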
QUESTION
I need to replicate a MySQL database to a PostgreSQL database. I opted for:
- Debezium Connect
- Avro format
- Confluent Schema Registry
- Kafka
The data is being replicated; however, I am losing some schema information. For example, a column with datetime format in MySQL is replicated as bigint in Postgres, foreign keys are not created, and the order of columns is not preserved (which would be nice to have), etc.
PostgreSQL sink connector:
...ANSWER
Answered 2022-Jan-15 at 00:03
For example, a column with datetime format in MySQL is replicated as bigint
This is due to the default time.precision.mode used by the Debezium connector on the source side. If you look at the documentation, you'll notice that the default precision emits datetime columns as INT64, which explains why the sink connector writes the contents as a bigint.
You can set time.precision.mode to connect on the source side for now so that the values can be properly interpreted by the JDBC sink connector.
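A minimal sketch of that change on the Debezium MySQL source connector (the connector name, host, and credentials are placeholders, not the asker's actual settings, and other required Debezium options are omitted for brevity):

# Debezium MySQL source connector (placeholder values)
name=mysql-source
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=mysql-host
database.port=3306
database.user=debezium
database.password=****
# Emit temporal columns as Kafka Connect logical types (Timestamp, Date, Time)
# instead of the default adaptive INT64 epoch values, so the JDBC sink can map
# them back to native datetime/timestamp columns.
time.precision.mode=connect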
foreign keys are not created
That's to be expected, see this Confluent GitHub Issue. At this time, the JDBC sink does not have the capabilities to support materializing Foreign Key relationships at the JDBC level.
order of columns is not preserved
That is also to be expected. There is no guarantee that Debezium should store the relational columns in the exact same order as they are in the database (although we do), and the JDBC sink connector is under no obligation to retain the order of the fields as they're read from the emitted event. If the sink connector uses a container like a HashMap to store column names, it's plausible that the order would be very different from that of the source database.
If there is a necessity to retain a higher level of relational metadata such as foreign keys and column order at the destination system that mirrors that of the source system, you may need to look into a separate toolchain to replicate the initial schema and relationships through some type of schema dump, translation, and import to your destination database and then rely on the CDC pipeline for the data replication aspect.
QUESTION
I was trying to configure a Kafka sink connector to a MySQL DB. The Kafka topic has values in Avro format, and I want to dump the data to MySQL. I was getting an error saying the table was not found (Table 'airflow.mytopic' doesn't exist). I was expecting the table to be created as 'myschema.mytopic', but the connector was looking for the table in airflow. I had enabled "auto.create": "true" expecting the table to be created wherever it wants.
I am using Confluent Kafka 5.4.1, started manually.
Configuration:
...ANSWER
Answered 2021-Dec-29 at 06:18
The issue got resolved by downgrading the MySQL driver (mysql-connector-java-5.1.17.jar); below are the configurations
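The configurations referenced in that answer are not reproduced on this page. Purely as a hypothetical illustration of the question's schema concern (not the answerer's settings), a JDBC sink can be pointed at an explicit schema and table with table.name.format instead of relying on the topic name alone:

# Hypothetical JDBC sink fragment (not the configuration referenced in the answer)
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=mytopic
connection.url=jdbc:mysql://mysql-host:3306/myschema
# Create/write the table as myschema.mytopic rather than whatever schema the
# connection defaults to; ${topic} is substituted with the topic name.
table.name.format=myschema.${topic}
auto.create=true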
QUESTION
I'm trying to use the Kafka Connect transform transforms.RemoveString to modify the name of my topic before passing it into my connector. My topic name looks like this
...ANSWER
Answered 2021-Aug-18 at 18:06
You have a typo in RegexRouter; you missed the R.
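For reference, a correctly spelled RegexRouter transform looks like the following sketch; the regex and replacement are placeholders, since the question's actual values are not shown above:

# RegexRouter sketch with a placeholder regex/replacement
transforms=RemoveString
transforms.RemoveString.type=org.apache.kafka.connect.transforms.RegexRouter
# Drop a hypothetical "-raw" suffix, e.g. orders-raw -> orders
transforms.RemoveString.regex=(.*)-raw
transforms.RemoveString.replacement=$1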
QUESTION
I'm using Kafka Connect Solr and I'm trying to find a way to change the Solr URL based on the incoming topic. I've been looking at Kafka Connect transforms to try and achieve this. My connect properties file looks like this -
...ANSWER
Answered 2021-Aug-12 at 20:24
transforms really only alter the Kafka record itself, not external properties such as the clients that the Connect tasks may use.
Specifically, look at the source code, and you'll see that it uses topic names to map to individual clients, but all at the same URL.
QUESTION
I have this connector and sink which basically creates a topic called "Test.dbo.TEST_A" and writes to the ES index "Test". I have set "key.ignore": "false" so that row updates are also updated in ES, and "transforms.unwrap.add.fields":"table" to keep track of which table the document belongs to.
...ANSWER
Answered 2021-Aug-08 at 06:36
You are reading data changes from different databases/tables and writing them into the same Elasticsearch index, with the ES document ID set to the DB record ID. As you can see, if the DB record IDs collide, the index document IDs will also collide, causing old documents to be deleted.
You have a few options here:
- Elasticsearch index per DB/table name: you can implement this with different connectors or with a custom Single Message Transform (SMT); see the sketch after this list
- Globally unique DB records: if you control the schema of the source tables, you can set the primary key to a UUID. This will prevent ID collisions.
- As you mentioned in the comments, set the ES document ID to DB/Table/ID. You can implement this change using an SMT.
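A minimal sketch of the first option, assuming Debezium-style topic names like Test.dbo.TEST_A (the alias, regex, and index naming are illustrative, not from the question): instead of collapsing every topic into the single index "Test", let the built-in RegexRouter keep the table name in the routed topic so each table gets its own index.

# Elasticsearch sink fragment: route each table's topic to its own index
transforms=PerTableIndex
transforms.PerTableIndex.type=org.apache.kafka.connect.transforms.RegexRouter
# Test.dbo.TEST_A -> TEST_A-index; the sink derives the index from the routed topic.
# Note: Elasticsearch index names must be lowercase, so in practice the topic/table
# naming (or the replacement) needs to produce lowercase indices.
transforms.PerTableIndex.regex=Test\\.dbo\\.(.*)
transforms.PerTableIndex.replacement=$1-index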
QUESTION
Folks, let me introduce the scenario first:
I'm getting data from two tables in a MS SQL Server by using the Debezium CDC Source Connector. The connector configs follow:
Connector for PROVIDER table:
...ANSWER
Answered 2021-Jul-19 at 17:15
I've found a solution for this on the Confluent Forum. Thanks to Matthias J. Sax.
QUESTION
While my Kafka JDBC Connector works for a simple table, for most other tables it fails with the error:
Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:290)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:316)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:240)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: Invalid decimal scale: 127 (greater than precision: 64)
    at org.apache.avro.LogicalTypes$Decimal.validate(LogicalTypes.java:231)
    at org.apache.avro.LogicalType.addToSchema(LogicalType.java:68)
    at org.apache.avro.LogicalTypes$Decimal.addToSchema(LogicalTypes.java:201)
    at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:943)
    at io.confluent.connect.avro.AvroData.addAvroRecordField(AvroData.java:1058)
    at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:899)
    at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:731)
    at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:725)
    at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:364)
    at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:80)
    at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:62)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:290)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
    ... 11 more
I am creating the connector using the below command:
curl -X POST http://localhost:8083/connectors -H "Content-Type: application/json" -d '{
  "name": "jdbc_source_oracle_03",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:oracle:thin:@//XOXO:1521/XOXO",
    "connection.user": "XOXO",
    "connection.password": "XOXO",
    "numeric.mapping": "best_fit",
    "mode": "timestamp",
    "poll.interval.ms": "1000",
    "validate.non.null": "false",
    "table.whitelist": "POLICY",
    "timestamp.column.name": "CREATED_DATE",
    "topic.prefix": "ora-",
    "transforms": "addTopicSuffix,InsertTopic,InsertSourceDetails,copyFieldToKey,extractValuefromStruct",
    "transforms.InsertTopic.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.InsertTopic.topic.field": "messagetopic",
    "transforms.InsertSourceDetails.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.InsertSourceDetails.static.field": "messagesource",
    "transforms.InsertSourceDetails.static.value": "JDBC Source Connector from Oracle on asgard",
    "transforms.addTopicSuffix.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.addTopicSuffix.regex": "(.*)",
    "transforms.addTopicSuffix.replacement": "$1-jdbc-02",
    "transforms.copyFieldToKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.copyFieldToKey.fields": "ID",
    "transforms.extractValuefromStruct.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.extractValuefromStruct.field": "ID"
  }
}'
ANSWER
Answered 2021-Apr-19 at 16:49
The problem was related to NUMBER columns without declared precision and scale. Well explained by Robin Moffatt here: https://rmoff.net/2018/05/21/kafka-connect-and-oracle-data-types
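One common workaround for such columns, sketched here under the assumption that the offending column is an Oracle NUMBER with no declared precision (AMOUNT is a hypothetical column name; POLICY and CREATED_DATE come from the config above), is to have the JDBC source select through a query that CASTs the column to an explicit precision and scale, instead of whitelisting the raw table:

# JDBC source fragment using a query with an explicit CAST (illustrative only)
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
mode=timestamp
timestamp.column.name=CREATED_DATE
numeric.mapping=best_fit
# "query" replaces table.whitelist; AMOUNT stands in for an unconstrained NUMBER column
query=SELECT ID, CAST(AMOUNT AS NUMBER(10,2)) AS AMOUNT, CREATED_DATE FROM POLICY
topic.prefix=ora-policy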
QUESTION
I'm using Kafka Connect to connect to a database in order to store info on a compacted topic, and I'm having deserialization issues when trying to consume the topic in a Spring Cloud Stream application.
connector config:
...ANSWER
Answered 2021-Apr-05 at 22:01
You're using the JSON Schema converter (io.confluent.connect.json.JsonSchemaConverter), not the JSON converter (org.apache.kafka.connect.json.JsonConverter).
The JSON Schema converter uses the Schema Registry to store the schema and puts information about it in the first few bytes of the message. That's what's tripping up your code (Could not read JSON: Invalid UTF-32 character 0x17a2241 (above 0x0010ffff) at char #1, byte #7).
So either use the JSON Schema deserialiser in your code (better), or switch to using the org.apache.kafka.connect.json.JsonConverter converter (less preferable; you throw away the schema then).
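If the second route is taken, the switch is just a converter override on the connector. A sketch (whether to keep the embedded schema envelope via schemas.enable is a separate choice, and the rest of the connector config is unchanged):

# Connector-level converter override: plain JSON instead of JSON Schema
value.converter=org.apache.kafka.connect.json.JsonConverter
# true embeds a {"schema": ..., "payload": ...} envelope in each message;
# false writes the bare payload, which is usually what a plain JSON consumer expects
value.converter.schemas.enable=false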
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install RegexRouter
PHP requires the Visual C runtime (CRT). The Microsoft Visual C++ Redistributable for Visual Studio 2019 is suitable for all supported PHP versions; see visualstudio.microsoft.com. You MUST download the x86 CRT for PHP x86 builds and the x64 CRT for PHP x64 builds. The CRT installer supports the /quiet and /norestart command-line switches, so you can also script it.