cp-docker-images | Docker images for Confluent Platform | Continuous Deployment library
kandi X-RAY | cp-docker-images Summary
This is used for building images for version 5.3.x or lower, and should not be used for adding new images.
Community Discussions
Trending Discussions on cp-docker-images
QUESTION
I am deploying Kafka Connect on Google Kubernetes Engine (GKE) using the cp-kafka-connect Helm chart in distributed mode.
A working Kafka cluster with a broker and ZooKeeper is already running on the same GKE cluster. I understand that I can create connectors by sending POST requests to the http://localhost:8083/connectors
endpoint once it is available.
However, the Kafka Connect container goes into the RUNNING state and then starts loading its jar files, and until all the jars are loaded the endpoint mentioned above is unreachable.
I am looking for a way to automate the steps of manually exec-ing into
the pod, checking whether the endpoint is ready, and then sending the POST requests. I have a shell script with a series of curl -X POST
requests to this endpoint to create the connectors, and also config files for these connectors, which work fine in standalone mode (using the Confluent Platform, as shown in this Confluent blog).
Now there are only two ways to create the connectors:
- Somehow identify when the container is actually ready (i.e. when the endpoint has started listening) and then run the shell script containing the curl requests
- OR use the configuration files as in standalone mode, for example:
$ /confluent local load connector_name -- -d /connector-config.json
Which of the above approaches is better?
Is the second approach (config files) even doable in distributed mode?
- If YES: how?
- If NO: how can the first approach be done successfully?
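The first approach (wait, then POST) can be sketched as a small polling helper. This is an illustrative sketch, not part of the original question's script: the local stand-in HTTP server below only exists to make the example self-contained, and in a real deployment the URL would be the Connect REST endpoint, e.g. http://localhost:8083/connectors.

```python
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def wait_until_ready(url, timeout=120.0, interval=1.0):
    """Poll `url` until it answers with an HTTP 2xx, or raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if 200 <= resp.status < 300:
                    return True
        except (urllib.error.URLError, ConnectionError, OSError):
            pass  # endpoint not listening yet; keep polling
        time.sleep(interval)
    raise TimeoutError(f"{url} not ready after {timeout}s")

# Local stand-in for the Connect REST endpoint, purely for demonstration.
class _Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"[]")  # an idle Connect cluster returns an empty list
    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/connectors"

ready = wait_until_ready(url, timeout=10)
print(ready)  # True once the endpoint answers
server.shutdown()
```

Once `wait_until_ready` returns, the curl (or equivalent POST) requests that create the connectors can safely be issued.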
EDIT: With reference to this GitHub issue (thanks to @cricket_007's answer below), I added the following as the container command, and the connectors got created once the endpoint became ready:
...
ANSWER
Answered 2020-Feb-01 at 15:27
confluent local doesn't interact with a remote Connect cluster, such as one in Kubernetes.
Please refer to the Kafka Connect REST API
You'd connect to it like any other RESTful API running in the cluster (via a NodePort, or an Ingress/API gateway, for example).
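Creating a connector through the REST API is a POST of a JSON document with a `name` and a `config` object. The sketch below uses a local stand-in server so it is runnable as-is; the connector class, topic, and connector name are placeholders, and in practice the URL would point at the exposed Connect service (e.g. via `kubectl port-forward` or an Ingress).

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical connector config; the {"name": ..., "config": {...}} shape is
# what the Connect REST API expects, but the class and topic are placeholders.
connector = {
    "name": "my-sink",
    "config": {
        "connector.class": "org.example.DemoSinkConnector",
        "tasks.max": "1",
        "topics": "demo-topic",
    },
}

class _Handler(BaseHTTPRequestHandler):
    # Minimal stand-in for POST /connectors: echo the body back with 201,
    # roughly what Connect does on successful creation.
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(201)
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/connectors"
req = urllib.request.Request(
    url,
    data=json.dumps(connector).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    status = resp.status
    created = json.loads(resp.read())["name"]
print(status, created)  # 201 my-sink
server.shutdown()
```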
the endpoint mentioned above is unreachable.
Localhost is the physical machine you're typing the commands into, not the remote GKE cluster.
Somehow identify when the container is actually ready
Kubernetes health checks are responsible for that.
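Concretely, a readiness probe on the Connect container keeps the pod out of service until the REST endpoint answers. The fragment below is an illustrative sketch: it assumes the default REST port 8083, and the container name and timings are placeholders to tune for your chart.

```yaml
# Excerpt from the Connect pod spec; name and timings are illustrative.
containers:
  - name: cp-kafka-connect
    readinessProbe:
      httpGet:
        path: /connectors
        port: 8083
      initialDelaySeconds: 30   # give jar/plugin loading time to start
      periodSeconds: 10
      failureThreshold: 30      # keep probing while jars are still loading
```

With this in place, a script (or an init/post-start hook) that waits for the pod to become Ready can then issue the connector-creation requests.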
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install cp-docker-images
cp-docker-images is not a Python library; it is a set of Dockerfiles and build tooling for Confluent Platform images. Prebuilt images from this repository are published on Docker Hub under the confluentinc organization and can be pulled directly (for example, docker pull confluentinc/cp-kafka:5.3.1). To build images yourself, clone the repository and follow its build instructions; note that this repository only covers Confluent Platform 5.3.x and earlier.