The cloud platform space is heating up right now, with each vendor offering its own flavor of cloud. AWS, Azure, and Google Cloud are the leading public cloud vendors. Cloud vendors today offer every possible service, from AI APIs to IoT services. Additionally, cloud marketplaces provide cloud-ready solutions that can be consumed directly on the public cloud.

Popular New Releases in Cloud API

No Popular Releases are available at this moment for Cloud API

Popular Libraries in Cloud API

Web App

by Microsoft | cloud_api | Stars: 2334 | Cloud

Enjoy secure and flexible development, deployment, and scaling options for your web app

SQL Database

by Microsoft | cloud_api | Stars: 2210 | Cloud

Scalable and managed relational database service for modern business-class apps.

Storage account

by Microsoft | cloud_api | Stars: 1733 | Cloud

Use Blobs, Tables, Queues, Files, and Data Lake Gen 2 for reliable, economical cloud storage.

Function App

by Microsoft | cloud_api | Stars: 1382 | Cloud

Write any function in minutes – whether to run a simple job that cleans up a database or build a more complex architecture. Creating functions is easier than ever before, whatever your chosen OS, platform, or development method.

Ubuntu Server

by Canonical | cloud_api | Stars: 699 | Cloud

Ubuntu Server delivers the best value scale-out performance available.

Azure Cosmos DB

by Microsoft | cloud_api | Stars: 664 | Cloud

Azure Cosmos DB is a fully managed, globally distributed, multi-model database service that scales horizontally in storage and throughput, backed by comprehensive SLAs.

Backup and Site Recovery

by Microsoft | cloud_api | Stars: 575 | Cloud

A backup and disaster recovery solution to safeguard applications and data from unexpected events.

Application Insights

by Microsoft | cloud_api | Stars: 494 | Cloud

Application performance, availability and usage information at your fingertips.

Data Factory

by Microsoft | cloud_api | Stars: 465 | Cloud

Hybrid data integration service that simplifies ETL at scale

Trending New libraries in Cloud API

No Trending Libraries are available at this moment for Cloud API

Top Authors in Cloud API

1. Cognosys Inc. | 555 Libraries | Stars: 2
2. CE ResearchHub | 447 Libraries | Stars: 0
3. Rearc | 422 Libraries | Stars: 0
4. Bitnami | 414 Libraries | Stars: 32
5. Microsoft | 308 Libraries | Stars: 17401
6. Apps4Rent LLC | 285 Libraries | Stars: 249
7. Tidal Media Inc | 238 Libraries | Stars: 2
8. Acxiom | 203 Libraries | Stars: 0
9. Cognosys | 172 Libraries | Stars: 0
10. Google | 171 Libraries | Stars: 0

Trending Kits in Cloud API

No Trending Kits are available at this moment for Cloud API

Trending Discussions on Cloud API

using OAuth2 user account authentication in the python google cloud API from jupyter notebook

Google Cloud APIs - Python - Request contains an invalid argument

Is there an IBM Cloud API call to retrieve the account name?

API key is revealed via request url, how to deal with it?

kafka - ssl handshake failing

How can I debug my Golang API code to show me what is wrong?

How to use custom auth header with spring boot oauth2 resource server

Gcloud behavior differs between shell and cloudbuild.yaml

How to list group ids in GCP using cli or console

Google Cloud Translate API Key Doesn't Work When Using Android Application Restrictions

QUESTION

using OAuth2 user account authentication in the python google cloud API from jupyter notebook

Asked 2022-Mar-10 at 23:36

I am trying to access BigQuery from Python code in a Jupyter notebook run on a local machine, so I installed the Google Cloud API packages on my laptop.

I need to pass OAuth2 authentication. Unfortunately, I only have a user account for our BigQuery. I do not have a service account or application credentials, nor do I have the permissions to create them. I am only allowed to work with a user account.

When running the bigquery.Client() function, it appears to look for application credentials via the GOOGLE_APPLICATION_CREDENTIALS environment variable. But that points to application credentials, which I do not have.

I cannot find any other way to connect using user account authentication. But I find it extremely weird because:

  1. The Google API for the R language works with plain user authentication. Parallel code in R (it has a different API) just works!
  2. I run the code from the DataSpell IDE, where I have created a database resource connection to BigQuery (with my user authentication). There I can open a console for the database and run SQL queries with no problem. I have attached the BigQuery session to my Python notebook, and I can see the notebook attached to the BigQuery session in the services pane. But I am still missing something in order to access a valid running connection from the Python code. (I do not know how to get a Python object representing a valid connected client.)

I have been reading manuals from Google and looking for code examples for hours... Alas, I cannot find any description of connecting a client using a user account from my notebook.

Please, can someone help?

ANSWER

Answered 2022-Mar-10 at 23:36

You can use the pydata-google-auth library to authenticate with a user account. Its get_user_credentials function loads credentials from a cache on disk or initiates an OAuth 2.0 flow if no cached credentials are found. Note, however, that this is not the recommended authentication method.

import pandas_gbq
import pydata_google_auth

SCOPES = [
    'https://www.googleapis.com/auth/cloud-platform',
    'https://www.googleapis.com/auth/drive',
]

credentials = pydata_google_auth.get_user_credentials(
    SCOPES,
    # Set auth_local_webserver to True to have a slightly more convenient
    # authorization flow. Note, this doesn't work if you're running from a
    # notebook on a remote server, such as over SSH or with Google Colab.
    auth_local_webserver=True,
)

df = pandas_gbq.read_gbq(
    "SELECT my_col FROM `my_dataset.my_table`",
    project_id='YOUR-PROJECT-ID',
    credentials=credentials,
)

The recommended way to authenticate is to contact your GCP administrator and ask them to create a key for your account, following these instructions.

Then you can use this code to set up the authentication with the key that you have:

from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    '/path/to/key.json')

You can see more of the documentation here.
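
If you prefer the google-cloud-bigquery client over pandas_gbq, a minimal sketch is shown below. It assumes the google-cloud-bigquery package is installed and reuses whichever credentials object was created above; the project ID and the query are placeholders, not part of the original answer.

from google.cloud import bigquery

# 'credentials' is either the pydata_google_auth user credentials or the
# service-account credentials created above; the project ID is a placeholder.
client = bigquery.Client(project='YOUR-PROJECT-ID', credentials=credentials)

# Placeholder query just to verify the connection works.
rows = client.query("SELECT 1 AS x").result()
for row in rows:
    print(row.x)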

Source https://stackoverflow.com/questions/71426690

QUESTION

Google Cloud APIs - Python - Request contains an invalid argument

Asked 2022-Feb-09 at 15:22

Using Google Cloud APIs and OAuth2, I am trying to list projects and display the IAM policies for each project from my Python desktop app. Sample code is below:

# Imports implied by the snippet:
import json
import googleapiclient.discovery
from google_auth_oauthlib import flow

appflow = flow.InstalledAppFlow.from_client_secrets_file("client_secrets.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"])

appflow.run_console()

credentials = appflow.credentials

service = googleapiclient.discovery.build(
    'cloudresourcemanager', 'v1', credentials=credentials)

operation1 = service.projects().list().execute()

jason = json.dumps(
    operation1,
    sort_keys=True,
    indent=3)
data = json.loads(jason)

# Gets the list of projects in a Python list object [proj]

proj = []
for mem in data['projects']:
    print(mem['projectId'])
    proj.append(mem['projectId'])

for prj in proj:
    resource = 'projects/' + prj

    response1 = service.projects().testIamPermissions(resource=resource, body=None, x__xgafv=None).execute()

    response2 = service.projects().listOrgPolicies(resource=resource, body=None, x__xgafv=None).execute()

    response3 = service.projects().getIamPolicy(resource=resource, body=None, x__xgafv=None).execute()

I get the similar error for all the 3 calls: googleapiclient.errors.HttpError: <HttpError 400 when requesting https://cloudresourcemanager.googleapis.com/v1/projects/projects%2Fproject-name:testIamPermissions?alt=json returned "Request contains an invalid argument.". Details: "Request contains an invalid argument.">

The arguments appear to be correct. Does the service (cloudresourcemanager) version (v1 vs. v3) make a difference? Am I missing something? Thanks in advance.

ANSWER

Answered 2022-Feb-09 at 15:22

I think you should not need to prefix the resource with projects/ as in the raw HTTPS example, because you are using the client library, which should abstract this for you. So remove resource = 'projects/' + prj and try the calls directly with the project ID instead:

response1 = service.projects().testIamPermissions(resource=prj, body=None, x__xgafv=None).execute()

response2 = service.projects().listOrgPolicies(resource=prj, body=None, x__xgafv=None).execute()

response3 = service.projects().getIamPolicy(resource=prj, body=None, x__xgafv=None).execute()

If it works, you should no longer get error 400, but rather 403 "permission denied", because you are missing some of the scopes required for those API calls (based on your code example).
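
As a side note, a small illustrative sketch (not part of the original answer) of how you might surface those errors while iterating, wrapping one of the calls from the snippet above:

from googleapiclient.errors import HttpError

try:
    response3 = service.projects().getIamPolicy(resource=prj, body=None).execute()
except HttpError as err:
    # A 403 at this point typically means a missing permission or OAuth scope,
    # whereas the earlier 400 indicated a malformed resource argument.
    print(f"getIamPolicy failed for {prj}: HTTP {err.resp.status}")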

The examples Google provides:

testIamPermissions

listOrgPolicies

getIamPolicy

Source https://stackoverflow.com/questions/71050385

QUESTION

Is there an IBM Cloud API call to retrieve the account name?

Asked 2022-Jan-27 at 08:53

The account settings page (Manage > Account > Account Settings) lists an account ID and an account name under the "Account" heading. The account ID is easy enough to retrieve using the API, but I've not found a way to get the account name. Is this possible to get using the API?
I've been looking at the IBM Cloud API Docs and at the Softlayer API docs but haven't been able to find something which returns the account name. The ibmcloud CLI returns the info, but I'd rather not have to use the CLI since this will be used from within a Python app.

ANSWER

Answered 2021-Oct-22 at 04:58

You can always set IBMCLOUD_TRACE=true on the CLI to find out what the ibmcloud command is doing. What you see as the "account name" is the account ID resolved by going to the user management API.

Source https://stackoverflow.com/questions/69668076

QUESTION

API key is revealed via request url, how to deal with it?

Asked 2021-Dec-08 at 02:00

I have seen a similar question like How to solve API key is visible on request URL problem?, but I don't think it's applicable to my use case.

I'm using Firebase authentication for my application (a React app, served from a NestJS back end) and noticed one thing: it exposes the Google Cloud API key via the request URL. The current authentication flow is:

  1. Using OAuth2, the front-end makes a call to Google Identity Platform using Firebase SDK to retrieve login information (display name, ID token, etc.)
  2. The front-end makes a call to NestJS API to validate the login information using Firebase Admin SDK and create HTTP only cookie to preserve authentication state.

As far as I know, the React app requires access to the API key so Firebase can be implemented; there is no way to bypass that requirement. Even if the key is hidden in a .env file, the API key is still revealed via the network tab. In some cases, let's say I forget to handle the exception of a signInWithRedirect() (or any other Firebase function), the raised error (Uncaught Promise: requestUrl/apiKey="My Key") is shown in the console, so it's even worse.

I'm not certain whether I misunderstand something regarding the whole implementation and authentication flow. If so, please correct me. If not, please let me know how to solve this issue.

ANSWER

Answered 2021-Dec-08 at 02:00

It appears that, in the case of Firebase, exposing the API key is fine. However, this is only due to how Firebase works, not because exposing an API key in general is safe. A more detailed discussion can be found here: Is it safe to expose Firebase apiKey to the public?

I'm using Google Identity Platform, not Firebase, but Google Identity Platform uses Firebase and the Firebase Admin SDK too, so I believe this is applicable.

This matter is also mentioned in official Firebase instruction video. Kudos to user usr28765526 for mentioning this.

Source https://stackoverflow.com/questions/70253862

QUESTION

kafka - ssl handshake failing

Asked 2021-Dec-07 at 20:51

I've set up SSL on my local Kafka instance, and when I start the Kafka console producer/consumer on the SSL port, it gives an SSL handshake error:

Karans-MacBook-Pro:keystore karanalang$ $CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
>[2021-11-10 13:15:09,824] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:09,826] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,018] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,019] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,195] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)

Here are the changes made:

  1. create the truststore & keystore

Here is the output of the openssl command to check SSL connectivity:

Karans-MacBook-Pro:keystore karanalang$ openssl s_client -debug -connect localhost:9093 -tls1
CONNECTED(00000005)
write to 0x13d7bdf90 [0x13e01ea03] (118 bytes => 118 (0x76))
0000 - 16 03 01 00 71 01 00 00-6d 03 01 81 e8 00 cd c4   ....q...m.......
0010 - 04 4b 64 86 3e 30 97 32-c3 66 3a 8c ed 05 bf 97   .Kd.>0.2.f:.....
0020 - ff d5 b2 a4 26 fe 99 c0-7f 94 a1 00 00 2e c0 14   ....&...........
0030 - c0 0a 00 39 ff 85 00 88-00 81 00 35 00 84 c0 13   ...9.......5....
---
0076 - <SPACES/NULS>
read from 0x13d7bdf90 [0x13e01a803] (5 bytes => 5 (0x5))
0005 - <SPACES/NULS>
4307385836:error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number:/System/Volumes/Data/SWE/macOS/BuildRoots/e90674e518/Library/Caches/com.apple.xbs/Sources/libressl/libressl-56.60.2/libressl-2.8/ssl/ssl_pkt.c:386:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1
    Cipher    : 0000
    Session-ID: 
    Session-ID-ctx: 
    Master-Key: 
    Start Time: 1636579015
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
---

Here is the server.properties:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
# SSL CHANGE
listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# SSL CHANGE
advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
ssl.client.auth=none

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

##################### Confluent Metrics Reporter #######################
# Confluent Control Center and Confluent Auto Data Balancer integration
#
# Uncomment the following lines to publish monitoring data for
# Confluent Control Center and Confluent Auto Data Balancer
# If you are using a dedicated metrics cluster, also adjust the settings
# to point to your metrics kakfa cluster.
#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
#confluent.metrics.reporter.bootstrap.servers=localhost:9092
#
# Uncomment the following line if the metrics cluster has a single broker
#confluent.metrics.reporter.topic.replicas=1

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0


############################# Confluent Authorizer Settings  #############################

# Uncomment to enable Confluent Authorizer with support for ACLs, LDAP groups and RBAC
#authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
# Semi-colon separated list of super users in the format <principalType>:<principalName>
#super.users=
# Specify a valid Confluent license. By default free-tier license will be used
#confluent.license=
# Replication factor for the topic used for licensing. Default is 3.
confluent.license.topic.replication.factor=1

# Uncomment the following lines and specify values where required to enable CONFLUENT provider for RBAC and centralized ACLs
# Enable CONFLUENT provider
#confluent.authorizer.access.rule.providers=ZK_ACL,CONFLUENT
# Bootstrap servers for RBAC metadata. Must be provided if this broker is not in the metadata cluster
#confluent.metadata.bootstrap.servers=PLAINTEXT://127.0.0.1:9092
# Replication factor for the metadata topic used for authorization. Default is 3.
confluent.metadata.topic.replication.factor=1

# Replication factor for the topic used for audit logs. Default is 3.
confluent.security.event.logger.exporter.kafka.topic.replicas=1

# Listeners for metadata server
#confluent.metadata.server.listeners=http://0.0.0.0:8090
# Advertised listeners for metadata server
#confluent.metadata.server.advertised.listeners=http://127.0.0.1:8090

############################# Confluent Data Balancer Settings  #############################

# The Confluent Data Balancer is used to measure the load across the Kafka cluster and move data
# around as necessary. Comment out this line to disable the Data Balancer.
confluent.balancer.enable=true

# By default, the Data Balancer will only move data when an empty broker (one with no partitions on it)
# is added to the cluster or a broker failure is detected. Comment out this line to allow the Data
# Balancer to balance load across the cluster whenever an imbalance is detected.
#confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD

# The default time to declare a broker permanently failed is 1 hour (3600000 ms).
# Uncomment this line to turn off broker failure detection, or adjust the threshold
# to change the duration before a broker is declared failed.
#confluent.balancer.heal.broker.failure.threshold.ms=-1

# Edit and uncomment the following line to limit the network bandwidth used by data balancing operations.
# This value is in bytes/sec/broker. The default is 10MB/sec.
#confluent.balancer.throttle.bytes.per.second=10485760

# Capacity Limits -- when set to positive values, the Data Balancer will attempt to keep
# resource usage per-broker below these limits.
# Edit and uncomment this line to limit the maximum number of replicas per broker. Default is unlimited.
#confluent.balancer.max.replicas=10000

# Edit and uncomment this line to limit what fraction of the log disk (0-1.0) is used before rebalancing.
# The default (below) is 85% of the log disk.
#confluent.balancer.disk.max.load=0.85

# Edit and uncomment these lines to define a maximum network capacity per broker, in bytes per
# second. The Data Balancer will attempt to ensure that brokers are using less than this amount
# of network bandwidth when rebalancing.
# Here, 10MB/s. The default is unlimited capacity.
#confluent.balancer.network.in.max.bytes.per.second=10485760
#confluent.balancer.network.out.max.bytes.per.second=10485760

# Edit and uncomment this line to identify specific topics that should not be moved by the data balancer.
# Removal operations always move topics regardless of this setting.
#confluent.balancer.exclude.topic.names=

# Edit and uncomment this line to identify topic prefixes that should not be moved by the data balancer.
# (For example, a "confluent.balancer" prefix will match all of "confluent.balancer.a", "confluent.balancer.b",
# "confluent.balancer.c", and so on.)
# Removal operations always move topics regardless of this setting.
#confluent.balancer.exclude.topic.prefixes=

# The replication factor for the topics the Data Balancer uses to store internal state.
# For anything other than development testing, a value greater than 1 is recommended to ensure availability.
# The default value is 3.
confluent.balancer.topic.replication.factor=1

################################## Confluent Telemetry Settings  ##################################

# To start using Telemetry, first generate a Confluent Cloud API key/secret. This can be done with
# instructions at https://docs.confluent.io/current/cloud/using/api-keys.html. Note that you should
# be using the '--resource cloud' flag.
#
# After generating an API key/secret, to enable Telemetry uncomment the lines below and paste
# in your API key/secret.
#
#confluent.telemetry.enabled=true
#confluent.telemetry.api.key=<CLOUD_API_KEY>
#confluent.telemetry.api.secret=<CCLOUD_API_SECRET>

############ SSL #################

ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
ssl.truststore.password=test123
ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
ssl.keystore.password=test123
ssl.key.password=test123

# confluent.metrics.reporter.bootstrap.servers=localhost:9093
# confluent.metrics.reporter.security.protocol=SSL
# confluent.metrics.reporter.ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
# confluent.metrics.reporter.ssl.truststore.password=test123
# confluent.metrics.reporter.ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
# confluent.metrics.reporter.ssl.keystore.password=test123
# confluent.metrics.reporter.ssl.key.password=test123

client-ssl.properties:

bootstrap.servers=localhost:9093
security.protocol=SSL
ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
ssl.truststore.password=test123
ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
ssl.keystore.password=test123
ssl.key.password=test123
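
For reference, an equivalent SSL client configuration from Python can be sketched with the confluent-kafka package (an assumption; it is not part of the question). Note that librdkafka expects PEM files rather than JKS keystores, so ca.pem, client.pem, and client.key below are hypothetical exports of the JKS material referenced above.

from confluent_kafka import Producer

# Hedged sketch only: the PEM file names are placeholders for material
# exported from the JKS truststore/keystore used in client-ssl.properties.
conf = {
    "bootstrap.servers": "localhost:9093",
    "security.protocol": "SSL",
    "ssl.ca.location": "ca.pem",
    "ssl.certificate.location": "client.pem",
    "ssl.key.location": "client.key",
    "ssl.key.password": "test123",
}

producer = Producer(conf)
producer.produce("karantest", value=b"hello from python")
producer.flush()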

Commands to start the console producer/consumer:

Karans-MacBook-Pro:keystore karanalang$ $CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
144# from the end of the log.
145
146# The minimum age of a log file to be eligible for deletion due to age
147log.retention.hours=168
148
149# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
150# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
151#log.retention.bytes=1073741824
152
153# The maximum size of a log segment file. When this size is reached a new log segment will be created.
154log.segment.bytes=1073741824
155
156# The interval at which log segments are checked to see if they can be deleted according
157# to the retention policies
158log.retention.check.interval.ms=300000
159
160############################# Zookeeper #############################
161
162# Zookeeper connection string (see zookeeper docs for details).
163# This is a comma separated host:port pairs, each corresponding to a zk
164# server. e.g. &quot;127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002&quot;.
165# You can also append an optional chroot string to the urls to specify the
166# root directory for all kafka znodes.
167zookeeper.connect=localhost:2181
168
169# Timeout in ms for connecting to zookeeper
170zookeeper.connection.timeout.ms=18000
171
172##################### Confluent Metrics Reporter #######################
173# Confluent Control Center and Confluent Auto Data Balancer integration
174#
175# Uncomment the following lines to publish monitoring data for
176# Confluent Control Center and Confluent Auto Data Balancer
177# If you are using a dedicated metrics cluster, also adjust the settings
178# to point to your metrics kakfa cluster.
179#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
180#confluent.metrics.reporter.bootstrap.servers=localhost:9092
181#
182# Uncomment the following line if the metrics cluster has a single broker
183#confluent.metrics.reporter.topic.replicas=1
184
185############################# Group Coordinator Settings #############################
186
187# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
188# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
189# The default value for this is 3 seconds.
190# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
191# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
192group.initial.rebalance.delay.ms=0
193
194
195############################# Confluent Authorizer Settings  #############################
196
197# Uncomment to enable Confluent Authorizer with support for ACLs, LDAP groups and RBAC
198#authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
199# Semi-colon separated list of super users in the format &lt;principalType&gt;:&lt;principalName&gt;
200#super.users=
201# Specify a valid Confluent license. By default free-tier license will be used
202#confluent.license=
203# Replication factor for the topic used for licensing. Default is 3.
204confluent.license.topic.replication.factor=1
205
206# Uncomment the following lines and specify values where required to enable CONFLUENT provider for RBAC and centralized ACLs
207# Enable CONFLUENT provider 
208#confluent.authorizer.access.rule.providers=ZK_ACL,CONFLUENT
209# Bootstrap servers for RBAC metadata. Must be provided if this broker is not in the metadata cluster
210#confluent.metadata.bootstrap.servers=PLAINTEXT://127.0.0.1:9092
211# Replication factor for the metadata topic used for authorization. Default is 3.
212confluent.metadata.topic.replication.factor=1
213
214# Replication factor for the topic used for audit logs. Default is 3.
215confluent.security.event.logger.exporter.kafka.topic.replicas=1
216
217# Listeners for metadata server
218#confluent.metadata.server.listeners=http://0.0.0.0:8090
219# Advertised listeners for metadata server
220#confluent.metadata.server.advertised.listeners=http://127.0.0.1:8090
221
222############################# Confluent Data Balancer Settings  #############################
223
224# The Confluent Data Balancer is used to measure the load across the Kafka cluster and move data
225# around as necessary. Comment out this line to disable the Data Balancer.
226confluent.balancer.enable=true
227
228# By default, the Data Balancer will only move data when an empty broker (one with no partitions on it)
229# is added to the cluster or a broker failure is detected. Comment out this line to allow the Data
230# Balancer to balance load across the cluster whenever an imbalance is detected.
231#confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD
232
233# The default time to declare a broker permanently failed is 1 hour (3600000 ms).
234# Uncomment this line to turn off broker failure detection, or adjust the threshold
235# to change the duration before a broker is declared failed.
236#confluent.balancer.heal.broker.failure.threshold.ms=-1
237
238# Edit and uncomment the following line to limit the network bandwidth used by data balancing operations.
239# This value is in bytes/sec/broker. The default is 10MB/sec.
240#confluent.balancer.throttle.bytes.per.second=10485760
241
242# Capacity Limits -- when set to positive values, the Data Balancer will attempt to keep
243# resource usage per-broker below these limits.
244# Edit and uncomment this line to limit the maximum number of replicas per broker. Default is unlimited.
245#confluent.balancer.max.replicas=10000
246
247# Edit and uncomment this line to limit what fraction of the log disk (0-1.0) is used before rebalancing.
248# The default (below) is 85% of the log disk.
249#confluent.balancer.disk.max.load=0.85
250
251# Edit and uncomment these lines to define a maximum network capacity per broker, in bytes per
252# second. The Data Balancer will attempt to ensure that brokers are using less than this amount
253# of network bandwidth when rebalancing.
254# Here, 10MB/s. The default is unlimited capacity.
255#confluent.balancer.network.in.max.bytes.per.second=10485760
256#confluent.balancer.network.out.max.bytes.per.second=10485760
257
258# Edit and uncomment this line to identify specific topics that should not be moved by the data balancer.
259# Removal operations always move topics regardless of this setting.
260#confluent.balancer.exclude.topic.names=
261
262# Edit and uncomment this line to identify topic prefixes that should not be moved by the data balancer.
263# (For example, a &quot;confluent.balancer&quot; prefix will match all of &quot;confluent.balancer.a&quot;, &quot;confluent.balancer.b&quot;,
264# &quot;confluent.balancer.c&quot;, and so on.)
265# Removal operations always move topics regardless of this setting.
266#confluent.balancer.exclude.topic.prefixes=
267
268# The replication factor for the topics the Data Balancer uses to store internal state.
269# For anything other than development testing, a value greater than 1 is recommended to ensure availability.
270# The default value is 3.
271confluent.balancer.topic.replication.factor=1
272
273################################## Confluent Telemetry Settings  ##################################
274
275# To start using Telemetry, first generate a Confluent Cloud API key/secret. This can be done with
276# instructions at https://docs.confluent.io/current/cloud/using/api-keys.html. Note that you should
277# be using the '--resource cloud' flag.
278#
279# After generating an API key/secret, to enable Telemetry uncomment the lines below and paste
280# in your API key/secret.
281#
282#confluent.telemetry.enabled=true
283#confluent.telemetry.api.key=&lt;CLOUD_API_KEY&gt;
284#confluent.telemetry.api.secret=&lt;CCLOUD_API_SECRET&gt;
285
286############ SSL #################
287
288ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
289ssl.truststore.password=test123
290ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
291ssl.keystore.password=test123
292ssl.key.password=test123
293
294# confluent.metrics.reporter.bootstrap.servers=localhost:9093
295# confluent.metrics.reporter.security.protocol=SSL
296# confluent.metrics.reporter.ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
297# confluent.metrics.reporter.ssl.truststore.password=test123
298# confluent.metrics.reporter.ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
299# confluent.metrics.reporter.ssl.keystore.password=test123
300# confluent.metrics.reporter.ssl.key.password=test123
301
302bootstrap.servers=localhost:9093
303security.protocol=SSL
304ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
305ssl.truststore.password=test123
306ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
307ssl.keystore.password=test123
308ssl.key.password=test123
309$CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
310$CONFLUENT_HOME/bin/kafka-console-consumer --bootstrap-server localhost:9093 --topic karantest --consumer.config $CONFLUENT_HOME/props/client-ssl.properties --from-beginning
311

Any ideas on how to resolve this ?

Update : This is the error when i try to debug (using - export KAFKA_OPTS=-Djavax.net.debug=all)

1Karans-MacBook-Pro:keystore karanalang$ $CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
2&gt;[2021-11-10 13:15:09,824] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
3[2021-11-10 13:15:09,826] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
4[2021-11-10 13:15:10,018] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
5[2021-11-10 13:15:10,019] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
6[2021-11-10 13:15:10,195] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
7Karans-MacBook-Pro:keystore karanalang$ openssl s_client -debug -connect localhost:9093 -tls1
8CONNECTED(00000005)
9write to 0x13d7bdf90 [0x13e01ea03] (118 bytes =&gt; 118 (0x76))
100000 - 16 03 01 00 71 01 00 00-6d 03 01 81 e8 00 cd c4   ....q...m.......
110010 - 04 4b 64 86 3e 30 97 32-c3 66 3a 8c ed 05 bf 97   .Kd.&gt;0.2.f:.....
120020 - ff d5 b2 a4 26 fe 99 c0-7f 94 a1 00 00 2e c0 14   ....&amp;...........
130030 - c0 0a 00 39 ff 85 00 88-00 81 00 35 00 84 c0 13   ...9.......5....
14---
150076 - &lt;SPACES/NULS&gt;
16read from 0x13d7bdf90 [0x13e01a803] (5 bytes =&gt; 5 (0x5))
170005 - &lt;SPACES/NULS&gt;
184307385836:error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number:/System/Volumes/Data/SWE/macOS/BuildRoots/e90674e518/Library/Caches/com.apple.xbs/Sources/libressl/libressl-56.60.2/libressl-2.8/ssl/ssl_pkt.c:386:
19---
20no peer certificate available
21---
22No client certificate CA names sent
23---
24SSL handshake has read 5 bytes and written 0 bytes
25---
26New, (NONE), Cipher is (NONE)
27Secure Renegotiation IS NOT supported
28Compression: NONE
29Expansion: NONE
30No ALPN negotiated
31SSL-Session:
32    Protocol  : TLSv1
33    Cipher    : 0000
34    Session-ID: 
35    Session-ID-ctx: 
36    Master-Key: 
37    Start Time: 1636579015
38    Timeout   : 7200 (sec)
39    Verify return code: 0 (ok)
40---
41
42# Licensed to the Apache Software Foundation (ASF) under one or more
43# contributor license agreements.  See the NOTICE file distributed with
44# this work for additional information regarding copyright ownership.
45# The ASF licenses this file to You under the Apache License, Version 2.0
46# (the &quot;License&quot;); you may not use this file except in compliance with
47# the License.  You may obtain a copy of the License at
48#
49#    http://www.apache.org/licenses/LICENSE-2.0
50#
51# Unless required by applicable law or agreed to in writing, software
52# distributed under the License is distributed on an &quot;AS IS&quot; BASIS,
53# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
54# See the License for the specific language governing permissions and
55# limitations under the License.
56
57# see kafka.server.KafkaConfig for additional details and defaults
58
59############################# Server Basics #############################
60
61# The id of the broker. This must be set to a unique integer for each broker.
62broker.id=0
63
64############################# Socket Server Settings #############################
65
66# The address the socket server listens on. It will get the value returned from 
67# java.net.InetAddress.getCanonicalHostName() if not configured.
68#   FORMAT:
69#     listeners = listener_name://host_name:port
70#   EXAMPLE:
71#     listeners = PLAINTEXT://your.host.name:9092
72# SSL CHANGE
73listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
74
75# Hostname and port the broker will advertise to producers and consumers. If not set, 
76# it uses the value for &quot;listeners&quot; if configured.  Otherwise, it will use the value
77# returned from java.net.InetAddress.getCanonicalHostName().
78# SSL CHANGE
79advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
80ssl.client.auth=none
81
82# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
83listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
84
85# The number of threads that the server uses for receiving requests from the network and sending responses to the network
86num.network.threads=3
87
88# The number of threads that the server uses for processing requests, which may include disk I/O
89num.io.threads=8
90
91# The send buffer (SO_SNDBUF) used by the socket server
92socket.send.buffer.bytes=102400
93
94# The receive buffer (SO_RCVBUF) used by the socket server
95socket.receive.buffer.bytes=102400
96
97# The maximum size of a request that the socket server will accept (protection against OOM)
98socket.request.max.bytes=104857600
99
100
101############################# Log Basics #############################
102
103# A comma separated list of directories under which to store log files
104log.dirs=/tmp/kafka-logs
105
106# The default number of log partitions per topic. More partitions allow greater
107# parallelism for consumption, but this will also result in more files across
108# the brokers.
109num.partitions=1
110
111# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
112# This value is recommended to be increased for installations with data dirs located in RAID array.
113num.recovery.threads.per.data.dir=1
114
115############################# Internal Topic Settings  #############################
116# The replication factor for the group metadata internal topics &quot;__consumer_offsets&quot; and &quot;__transaction_state&quot;
117# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
118offsets.topic.replication.factor=1
119transaction.state.log.replication.factor=1
120transaction.state.log.min.isr=1
121
122############################# Log Flush Policy #############################
123
124# Messages are immediately written to the filesystem but by default we only fsync() to sync
125# the OS cache lazily. The following configurations control the flush of data to disk.
126# There are a few important trade-offs here:
127#    1. Durability: Unflushed data may be lost if you are not using replication.
128#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
129#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
130# The settings below allow one to configure the flush policy to flush data after a period of time or
131# every N messages (or both). This can be done globally and overridden on a per-topic basis.
132
133# The number of messages to accept before forcing a flush of data to disk
134#log.flush.interval.messages=10000
135
136# The maximum amount of time a message can sit in a log before we force a flush
137#log.flush.interval.ms=1000
138
139############################# Log Retention Policy #############################
140
141# The following configurations control the disposal of log segments. The policy can
142# be set to delete segments after a period of time, or after a given size has accumulated.
143# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
144# from the end of the log.
145
146# The minimum age of a log file to be eligible for deletion due to age
147log.retention.hours=168
148
149# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
150# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
151#log.retention.bytes=1073741824
152
153# The maximum size of a log segment file. When this size is reached a new log segment will be created.
154log.segment.bytes=1073741824
155
156# The interval at which log segments are checked to see if they can be deleted according
157# to the retention policies
158log.retention.check.interval.ms=300000
159
160############################# Zookeeper #############################
161
162# Zookeeper connection string (see zookeeper docs for details).
163# This is a comma separated host:port pairs, each corresponding to a zk
164# server. e.g. &quot;127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002&quot;.
165# You can also append an optional chroot string to the urls to specify the
166# root directory for all kafka znodes.
167zookeeper.connect=localhost:2181
168
169# Timeout in ms for connecting to zookeeper
170zookeeper.connection.timeout.ms=18000
171
172##################### Confluent Metrics Reporter #######################
173# Confluent Control Center and Confluent Auto Data Balancer integration
174#
175# Uncomment the following lines to publish monitoring data for
176# Confluent Control Center and Confluent Auto Data Balancer
177# If you are using a dedicated metrics cluster, also adjust the settings
178# to point to your metrics kakfa cluster.
179#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
180#confluent.metrics.reporter.bootstrap.servers=localhost:9092
181#
182# Uncomment the following line if the metrics cluster has a single broker
183#confluent.metrics.reporter.topic.replicas=1
184
185############################# Group Coordinator Settings #############################
186
187# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
188# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
189# The default value for this is 3 seconds.
190# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
191# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
192group.initial.rebalance.delay.ms=0
193
194
195############################# Confluent Authorizer Settings  #############################
196
197# Uncomment to enable Confluent Authorizer with support for ACLs, LDAP groups and RBAC
198#authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
199# Semi-colon separated list of super users in the format &lt;principalType&gt;:&lt;principalName&gt;
200#super.users=
201# Specify a valid Confluent license. By default free-tier license will be used
202#confluent.license=
203# Replication factor for the topic used for licensing. Default is 3.
204confluent.license.topic.replication.factor=1
205
206# Uncomment the following lines and specify values where required to enable CONFLUENT provider for RBAC and centralized ACLs
207# Enable CONFLUENT provider 
208#confluent.authorizer.access.rule.providers=ZK_ACL,CONFLUENT
209# Bootstrap servers for RBAC metadata. Must be provided if this broker is not in the metadata cluster
210#confluent.metadata.bootstrap.servers=PLAINTEXT://127.0.0.1:9092
211# Replication factor for the metadata topic used for authorization. Default is 3.
212confluent.metadata.topic.replication.factor=1
213
214# Replication factor for the topic used for audit logs. Default is 3.
215confluent.security.event.logger.exporter.kafka.topic.replicas=1
216
217# Listeners for metadata server
218#confluent.metadata.server.listeners=http://0.0.0.0:8090
219# Advertised listeners for metadata server
220#confluent.metadata.server.advertised.listeners=http://127.0.0.1:8090
221
222############################# Confluent Data Balancer Settings  #############################
223
224# The Confluent Data Balancer is used to measure the load across the Kafka cluster and move data
225# around as necessary. Comment out this line to disable the Data Balancer.
226confluent.balancer.enable=true
227
228# By default, the Data Balancer will only move data when an empty broker (one with no partitions on it)
229# is added to the cluster or a broker failure is detected. Comment out this line to allow the Data
230# Balancer to balance load across the cluster whenever an imbalance is detected.
231#confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD
232
233# The default time to declare a broker permanently failed is 1 hour (3600000 ms).
234# Uncomment this line to turn off broker failure detection, or adjust the threshold
235# to change the duration before a broker is declared failed.
236#confluent.balancer.heal.broker.failure.threshold.ms=-1
237
238# Edit and uncomment the following line to limit the network bandwidth used by data balancing operations.
239# This value is in bytes/sec/broker. The default is 10MB/sec.
240#confluent.balancer.throttle.bytes.per.second=10485760
241
242# Capacity Limits -- when set to positive values, the Data Balancer will attempt to keep
243# resource usage per-broker below these limits.
244# Edit and uncomment this line to limit the maximum number of replicas per broker. Default is unlimited.
245#confluent.balancer.max.replicas=10000
246
247# Edit and uncomment this line to limit what fraction of the log disk (0-1.0) is used before rebalancing.
248# The default (below) is 85% of the log disk.
249#confluent.balancer.disk.max.load=0.85
250
251# Edit and uncomment these lines to define a maximum network capacity per broker, in bytes per
252# second. The Data Balancer will attempt to ensure that brokers are using less than this amount
253# of network bandwidth when rebalancing.
254# Here, 10MB/s. The default is unlimited capacity.
255#confluent.balancer.network.in.max.bytes.per.second=10485760
256#confluent.balancer.network.out.max.bytes.per.second=10485760
257
258# Edit and uncomment this line to identify specific topics that should not be moved by the data balancer.
259# Removal operations always move topics regardless of this setting.
260#confluent.balancer.exclude.topic.names=
261
262# Edit and uncomment this line to identify topic prefixes that should not be moved by the data balancer.
263# (For example, a &quot;confluent.balancer&quot; prefix will match all of &quot;confluent.balancer.a&quot;, &quot;confluent.balancer.b&quot;,
264# &quot;confluent.balancer.c&quot;, and so on.)
265# Removal operations always move topics regardless of this setting.
266#confluent.balancer.exclude.topic.prefixes=
267
268# The replication factor for the topics the Data Balancer uses to store internal state.
269# For anything other than development testing, a value greater than 1 is recommended to ensure availability.
270# The default value is 3.
271confluent.balancer.topic.replication.factor=1
272
273################################## Confluent Telemetry Settings  ##################################
274
275# To start using Telemetry, first generate a Confluent Cloud API key/secret. This can be done with
276# instructions at https://docs.confluent.io/current/cloud/using/api-keys.html. Note that you should
277# be using the '--resource cloud' flag.
278#
279# After generating an API key/secret, to enable Telemetry uncomment the lines below and paste
280# in your API key/secret.
281#
282#confluent.telemetry.enabled=true
283#confluent.telemetry.api.key=&lt;CLOUD_API_KEY&gt;
284#confluent.telemetry.api.secret=&lt;CCLOUD_API_SECRET&gt;
285
286############ SSL #################
287
288ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
289ssl.truststore.password=test123
290ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
291ssl.keystore.password=test123
292ssl.key.password=test123
293
294# confluent.metrics.reporter.bootstrap.servers=localhost:9093
295# confluent.metrics.reporter.security.protocol=SSL
296# confluent.metrics.reporter.ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
297# confluent.metrics.reporter.ssl.truststore.password=test123
298# confluent.metrics.reporter.ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
299# confluent.metrics.reporter.ssl.keystore.password=test123
300# confluent.metrics.reporter.ssl.key.password=test123
301
302bootstrap.servers=localhost:9093
303security.protocol=SSL
304ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
305ssl.truststore.password=test123
306ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
307ssl.keystore.password=test123
308ssl.key.password=test123
309$CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
310$CONFLUENT_HOME/bin/kafka-console-consumer --bootstrap-server localhost:9093 --topic karantest --consumer.config $CONFLUENT_HOME/props/client-ssl.properties --from-beginning
311javax.net.ssl|DEBUG|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.107 PST|SSLExtensions.java:173|Ignore unavailable extension: status_request
312javax.net.ssl|DEBUG|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.107 PST|SSLExtensions.java:173|Ignore unavailable extension: status_request
313javax.net.ssl|ERROR|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.108 PST|TransportContext.java:341|Fatal (CERTIFICATE_UNKNOWN): No name matching localhost found (
314&quot;throwable&quot; : {
315  java.security.cert.CertificateException: No name matching localhost found
316    at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:234)
317    at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
318    at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
319    at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:429)
320    at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:283)
321    at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
322    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1335)
323    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1232)
324    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1175)
325    at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
326    at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
327    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074)
328    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061)
329    at java.base/java.security.AccessController.doPrivileged(Native Method)
330    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008)
331    at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:509)
332    at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:601)
333    at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:447)
334    at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:332)
335    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:229)
336    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:563)
337    at org.apache.kafka.common.network.Selector.poll(Selector.java:499)
338    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:639)
339    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:327)
340    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:242)
341    at java.base/java.lang.Thread.run(Thread.java:829)}
342
343

ANSWER

Answered 2021-Dec-07 at 20:51

Adding the following in client-ssl.properties resolved the issue:

1Karans-MacBook-Pro:keystore karanalang$ $CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
2&gt;[2021-11-10 13:15:09,824] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
3[2021-11-10 13:15:09,826] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
4[2021-11-10 13:15:10,018] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
5[2021-11-10 13:15:10,019] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
6[2021-11-10 13:15:10,195] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
7Karans-MacBook-Pro:keystore karanalang$ openssl s_client -debug -connect localhost:9093 -tls1
8CONNECTED(00000005)
9write to 0x13d7bdf90 [0x13e01ea03] (118 bytes =&gt; 118 (0x76))
100000 - 16 03 01 00 71 01 00 00-6d 03 01 81 e8 00 cd c4   ....q...m.......
110010 - 04 4b 64 86 3e 30 97 32-c3 66 3a 8c ed 05 bf 97   .Kd.&gt;0.2.f:.....
120020 - ff d5 b2 a4 26 fe 99 c0-7f 94 a1 00 00 2e c0 14   ....&amp;...........
130030 - c0 0a 00 39 ff 85 00 88-00 81 00 35 00 84 c0 13   ...9.......5....
14---
150076 - &lt;SPACES/NULS&gt;
16read from 0x13d7bdf90 [0x13e01a803] (5 bytes =&gt; 5 (0x5))
170005 - &lt;SPACES/NULS&gt;
184307385836:error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number:/System/Volumes/Data/SWE/macOS/BuildRoots/e90674e518/Library/Caches/com.apple.xbs/Sources/libressl/libressl-56.60.2/libressl-2.8/ssl/ssl_pkt.c:386:
19---
20no peer certificate available
21---
22No client certificate CA names sent
23---
24SSL handshake has read 5 bytes and written 0 bytes
25---
26New, (NONE), Cipher is (NONE)
27Secure Renegotiation IS NOT supported
28Compression: NONE
29Expansion: NONE
30No ALPN negotiated
31SSL-Session:
32    Protocol  : TLSv1
33    Cipher    : 0000
34    Session-ID: 
35    Session-ID-ctx: 
36    Master-Key: 
37    Start Time: 1636579015
38    Timeout   : 7200 (sec)
39    Verify return code: 0 (ok)
40---
41
42# Licensed to the Apache Software Foundation (ASF) under one or more
43# contributor license agreements.  See the NOTICE file distributed with
44# this work for additional information regarding copyright ownership.
45# The ASF licenses this file to You under the Apache License, Version 2.0
46# (the &quot;License&quot;); you may not use this file except in compliance with
47# the License.  You may obtain a copy of the License at
48#
49#    http://www.apache.org/licenses/LICENSE-2.0
50#
51# Unless required by applicable law or agreed to in writing, software
52# distributed under the License is distributed on an &quot;AS IS&quot; BASIS,
53# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
54# See the License for the specific language governing permissions and
55# limitations under the License.
56
57# see kafka.server.KafkaConfig for additional details and defaults
58
59############################# Server Basics #############################
60
61# The id of the broker. This must be set to a unique integer for each broker.
62broker.id=0
63
64############################# Socket Server Settings #############################
65
66# The address the socket server listens on. It will get the value returned from 
67# java.net.InetAddress.getCanonicalHostName() if not configured.
68#   FORMAT:
69#     listeners = listener_name://host_name:port
70#   EXAMPLE:
71#     listeners = PLAINTEXT://your.host.name:9092
72# SSL CHANGE
73listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
74
75# Hostname and port the broker will advertise to producers and consumers. If not set, 
76# it uses the value for &quot;listeners&quot; if configured.  Otherwise, it will use the value
77# returned from java.net.InetAddress.getCanonicalHostName().
78# SSL CHANGE
79advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
80ssl.client.auth=none
81
82# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
83listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
84
85# The number of threads that the server uses for receiving requests from the network and sending responses to the network
86num.network.threads=3
87
88# The number of threads that the server uses for processing requests, which may include disk I/O
89num.io.threads=8
90
91# The send buffer (SO_SNDBUF) used by the socket server
92socket.send.buffer.bytes=102400
93
94# The receive buffer (SO_RCVBUF) used by the socket server
95socket.receive.buffer.bytes=102400
96
97# The maximum size of a request that the socket server will accept (protection against OOM)
98socket.request.max.bytes=104857600
99
100
101############################# Log Basics #############################
102
103# A comma separated list of directories under which to store log files
104log.dirs=/tmp/kafka-logs
105
106# The default number of log partitions per topic. More partitions allow greater
107# parallelism for consumption, but this will also result in more files across
108# the brokers.
109num.partitions=1
110
111# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
112# This value is recommended to be increased for installations with data dirs located in RAID array.
113num.recovery.threads.per.data.dir=1
114
115############################# Internal Topic Settings  #############################
116# The replication factor for the group metadata internal topics &quot;__consumer_offsets&quot; and &quot;__transaction_state&quot;
117# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
118offsets.topic.replication.factor=1
119transaction.state.log.replication.factor=1
120transaction.state.log.min.isr=1
121
122############################# Log Flush Policy #############################
123
124# Messages are immediately written to the filesystem but by default we only fsync() to sync
125# the OS cache lazily. The following configurations control the flush of data to disk.
126# There are a few important trade-offs here:
127#    1. Durability: Unflushed data may be lost if you are not using replication.
128#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
129#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
130# The settings below allow one to configure the flush policy to flush data after a period of time or
131# every N messages (or both). This can be done globally and overridden on a per-topic basis.
132
133# The number of messages to accept before forcing a flush of data to disk
134#log.flush.interval.messages=10000
135
136# The maximum amount of time a message can sit in a log before we force a flush
137#log.flush.interval.ms=1000
138
139############################# Log Retention Policy #############################
140
141# The following configurations control the disposal of log segments. The policy can
142# be set to delete segments after a period of time, or after a given size has accumulated.
143# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
144# from the end of the log.
145
146# The minimum age of a log file to be eligible for deletion due to age
147log.retention.hours=168
148
149# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
150# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
151#log.retention.bytes=1073741824
152
153# The maximum size of a log segment file. When this size is reached a new log segment will be created.
154log.segment.bytes=1073741824
155
156# The interval at which log segments are checked to see if they can be deleted according
157# to the retention policies
158log.retention.check.interval.ms=300000
159
160############################# Zookeeper #############################
161
162# Zookeeper connection string (see zookeeper docs for details).
163# This is a comma separated host:port pairs, each corresponding to a zk
164# server. e.g. &quot;127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002&quot;.
165# You can also append an optional chroot string to the urls to specify the
166# root directory for all kafka znodes.
167zookeeper.connect=localhost:2181
168
169# Timeout in ms for connecting to zookeeper
170zookeeper.connection.timeout.ms=18000
171
172##################### Confluent Metrics Reporter #######################
173# Confluent Control Center and Confluent Auto Data Balancer integration
174#
175# Uncomment the following lines to publish monitoring data for
176# Confluent Control Center and Confluent Auto Data Balancer
177# If you are using a dedicated metrics cluster, also adjust the settings
178# to point to your metrics kakfa cluster.
179#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
180#confluent.metrics.reporter.bootstrap.servers=localhost:9092
181#
182# Uncomment the following line if the metrics cluster has a single broker
183#confluent.metrics.reporter.topic.replicas=1
184
185############################# Group Coordinator Settings #############################
186
187# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
188# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
189# The default value for this is 3 seconds.
190# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
191# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
192group.initial.rebalance.delay.ms=0
193
194
195############################# Confluent Authorizer Settings  #############################
196
197# Uncomment to enable Confluent Authorizer with support for ACLs, LDAP groups and RBAC
198#authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
199# Semi-colon separated list of super users in the format &lt;principalType&gt;:&lt;principalName&gt;
200#super.users=
201# Specify a valid Confluent license. By default free-tier license will be used
202#confluent.license=
203# Replication factor for the topic used for licensing. Default is 3.
204confluent.license.topic.replication.factor=1
205
206# Uncomment the following lines and specify values where required to enable CONFLUENT provider for RBAC and centralized ACLs
207# Enable CONFLUENT provider 
208#confluent.authorizer.access.rule.providers=ZK_ACL,CONFLUENT
209# Bootstrap servers for RBAC metadata. Must be provided if this broker is not in the metadata cluster
210#confluent.metadata.bootstrap.servers=PLAINTEXT://127.0.0.1:9092
211# Replication factor for the metadata topic used for authorization. Default is 3.
212confluent.metadata.topic.replication.factor=1
213
214# Replication factor for the topic used for audit logs. Default is 3.
215confluent.security.event.logger.exporter.kafka.topic.replicas=1
216
217# Listeners for metadata server
218#confluent.metadata.server.listeners=http://0.0.0.0:8090
219# Advertised listeners for metadata server
220#confluent.metadata.server.advertised.listeners=http://127.0.0.1:8090
221
222############################# Confluent Data Balancer Settings  #############################
223
224# The Confluent Data Balancer is used to measure the load across the Kafka cluster and move data
225# around as necessary. Comment out this line to disable the Data Balancer.
226confluent.balancer.enable=true
227
228# By default, the Data Balancer will only move data when an empty broker (one with no partitions on it)
229# is added to the cluster or a broker failure is detected. Comment out this line to allow the Data
230# Balancer to balance load across the cluster whenever an imbalance is detected.
231#confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD
232
233# The default time to declare a broker permanently failed is 1 hour (3600000 ms).
234# Uncomment this line to turn off broker failure detection, or adjust the threshold
235# to change the duration before a broker is declared failed.
236#confluent.balancer.heal.broker.failure.threshold.ms=-1
237
238# Edit and uncomment the following line to limit the network bandwidth used by data balancing operations.
239# This value is in bytes/sec/broker. The default is 10MB/sec.
240#confluent.balancer.throttle.bytes.per.second=10485760
241
242# Capacity Limits -- when set to positive values, the Data Balancer will attempt to keep
243# resource usage per-broker below these limits.
244# Edit and uncomment this line to limit the maximum number of replicas per broker. Default is unlimited.
245#confluent.balancer.max.replicas=10000
246
247# Edit and uncomment this line to limit what fraction of the log disk (0-1.0) is used before rebalancing.
248# The default (below) is 85% of the log disk.
249#confluent.balancer.disk.max.load=0.85
250
251# Edit and uncomment these lines to define a maximum network capacity per broker, in bytes per
252# second. The Data Balancer will attempt to ensure that brokers are using less than this amount
253# of network bandwidth when rebalancing.
254# Here, 10MB/s. The default is unlimited capacity.
255#confluent.balancer.network.in.max.bytes.per.second=10485760
256#confluent.balancer.network.out.max.bytes.per.second=10485760
257
258# Edit and uncomment this line to identify specific topics that should not be moved by the data balancer.
259# Removal operations always move topics regardless of this setting.
260#confluent.balancer.exclude.topic.names=
261
262# Edit and uncomment this line to identify topic prefixes that should not be moved by the data balancer.
263# (For example, a &quot;confluent.balancer&quot; prefix will match all of &quot;confluent.balancer.a&quot;, &quot;confluent.balancer.b&quot;,
264# &quot;confluent.balancer.c&quot;, and so on.)
265# Removal operations always move topics regardless of this setting.
266#confluent.balancer.exclude.topic.prefixes=
267
268# The replication factor for the topics the Data Balancer uses to store internal state.
269# For anything other than development testing, a value greater than 1 is recommended to ensure availability.
270# The default value is 3.
271confluent.balancer.topic.replication.factor=1
272
273################################## Confluent Telemetry Settings  ##################################
274
275# To start using Telemetry, first generate a Confluent Cloud API key/secret. This can be done with
276# instructions at https://docs.confluent.io/current/cloud/using/api-keys.html. Note that you should
277# be using the '--resource cloud' flag.
278#
279# After generating an API key/secret, to enable Telemetry uncomment the lines below and paste
280# in your API key/secret.
281#
282#confluent.telemetry.enabled=true
283#confluent.telemetry.api.key=&lt;CLOUD_API_KEY&gt;
284#confluent.telemetry.api.secret=&lt;CCLOUD_API_SECRET&gt;
285
286############ SSL #################
287
288ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
289ssl.truststore.password=test123
290ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
291ssl.keystore.password=test123
292ssl.key.password=test123
293
294# confluent.metrics.reporter.bootstrap.servers=localhost:9093
295# confluent.metrics.reporter.security.protocol=SSL
296# confluent.metrics.reporter.ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
297# confluent.metrics.reporter.ssl.truststore.password=test123
298# confluent.metrics.reporter.ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
299# confluent.metrics.reporter.ssl.keystore.password=test123
300# confluent.metrics.reporter.ssl.key.password=test123
301
302bootstrap.servers=localhost:9093
303security.protocol=SSL
304ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
305ssl.truststore.password=test123
306ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
307ssl.keystore.password=test123
308ssl.key.password=test123
309$CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
310$CONFLUENT_HOME/bin/kafka-console-consumer --bootstrap-server localhost:9093 --topic karantest --consumer.config $CONFLUENT_HOME/props/client-ssl.properties --from-beginning
311javax.net.ssl|DEBUG|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.107 PST|SSLExtensions.java:173|Ignore unavailable extension: status_request
312javax.net.ssl|DEBUG|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.107 PST|SSLExtensions.java:173|Ignore unavailable extension: status_request
313javax.net.ssl|ERROR|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.108 PST|TransportContext.java:341|Fatal (CERTIFICATE_UNKNOWN): No name matching localhost found (
314&quot;throwable&quot; : {
315  java.security.cert.CertificateException: No name matching localhost found
316    at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:234)
317    at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
318    at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
319    at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:429)
320    at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:283)
321    at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
322    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1335)
323    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1232)
324    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1175)
325    at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
326    at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
327    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074)
328    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061)
329    at java.base/java.security.AccessController.doPrivileged(Native Method)
330    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008)
331    at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:509)
332    at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:601)
333    at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:447)
334    at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:332)
335    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:229)
336    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:563)
337    at org.apache.kafka.common.network.Selector.poll(Selector.java:499)
338    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:639)
339    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:327)
340    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:242)
341    at java.base/java.lang.Thread.run(Thread.java:829)}
342
343ssl.endpoint.identification.algorithm=
344

This setting means the certificate does not match the hostname of the machine you are using to run the consumer. That seems to be recommended approach in this case.

Related thread: Kafka java consumer SSL handshake Error : java.security.cert.CertificateException: No subject alternative names present

Source https://stackoverflow.com/questions/69920375

QUESTION

How can I debug my Golang API code to show me what is wrong?

Asked 2021-Nov-27 at 22:36

I have this module that use Google Cloud API to retrieve a list of all running Virtual Machine instances for a particular project. I'm new to Go, and followed the intro tutorial to help me out. I'm still trying to debug my code but no luck.

The problem is I'm able to communicate to Google Cloud API and pass authentication but that is all I can get through

compute.go module:

compute.go is able to communicate to Google Cloud servers and pass authentication (I'm not getting an auth error)

// Copyright 2021 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package compute

// [START compute_instances_list_all]
import (
    "context"
    "fmt"
    "io"

    compute "cloud.google.com/go/compute/apiv1"
    "google.golang.org/api/iterator"
    computepb "google.golang.org/genproto/googleapis/cloud/compute/v1"
    "google.golang.org/protobuf/proto"
)

// ListAllInstances prints all instances present in a project, grouped by their zone.
func ListAllInstances(w io.Writer, projectID string) error {
    // projectID := "your_project_id"
    ctx := context.Background()
    instancesClient, err := compute.NewInstancesRESTClient(ctx)
    // instancesClient, err := compute.NewInstancesRESTClient(ctx, option.WithCredentialsFile(`C:\path\to\jsonkey.json`))

    if err != nil {
        return fmt.Errorf("NewInstancesRESTClient: %v", err)
    }
    defer instancesClient.Close()

    // Use the `MaxResults` parameter to limit the number of results that the API returns per response page.
    req := &computepb.AggregatedListInstancesRequest{
        Project:    projectID,
        MaxResults: proto.Uint32(6),
    }

    it := instancesClient.AggregatedList(ctx, req)
    fmt.Fprintf(w, "Instances found:\n")
    // Despite using the `MaxResults` parameter, you don't need to handle the pagination
    // yourself. The returned iterator object handles pagination
    // automatically, returning separated pages as you iterate over the results.
    for {
        pair, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            return err
        }
        instances := pair.Value.Instances
        if len(instances) > 0 {
            fmt.Fprintf(w, "%s\n", pair.Key)
            for _, instance := range instances {
                fmt.Fprintf(w, "- %s %s\n", instance.GetName(), instance.GetMachineType())
            }
        }
    }

    return nil
}

// [END compute_instances_list_all]

However, when I run my main function that calls ListAllInstances, it just returns <nil>, which doesn't tell me what is wrong.

The caller api.go module, where I run go run .:

package main

import (
    "fmt"

    "example.com/compute"

    "bytes"
)

func main() {
    buf := new(bytes.Buffer)

    // Get a message and print it.
    respone := compute.ListAllInstances(buf, "project-unique-id")
    fmt.Println(respone)
}

How else can I further debug this to figure out what is wrong with my code?

ANSWER

Answered 2021-Nov-27 at 22:36

You're not printing buf. Your function returns a value of type error, which is nil (no error!); the actual output is written to buf.

Either print it out:

func main() {
    buf := new(bytes.Buffer)

    // Get a message and print it.
    err := compute.ListAllInstances(buf, "project-unique-id")
    if err != nil {
        panic(err)
    }
    fmt.Println(buf.String()) // <======= Print buf contents!
}

Or just use os.Stdout:

func main() {
    // Note: this version needs "os" added to the imports.
    err := compute.ListAllInstances(os.Stdout, "project-unique-id")

    if err != nil {
        panic(err)
    }
}

To answer your question about debugging: try using VS Code with the Go extension. There you can run a debugger, set breakpoints, and step through the code line by line, watching how variables change.

See also Debug Go programs in VS Code.

Source https://stackoverflow.com/questions/70130013

QUESTION

How to use custom auth header with spring boot oauth2 resource server

Asked 2021-Nov-04 at 16:51

I'm configuring Spring Cloud API Gateway to support several security chains. To do that, I'm using several security filter chains, each triggered by the presence of a specific security header:

  1. The legacy one, which already uses the Authorization header.
  2. A new implementation that is integrated with an external IdP. This solution uses resource server capabilities, and for this chain I'd like to use, let's say, a "New-Auth" header.

If I tune my current setup to trigger the second (IdP) chain on the presence of the Authorization header (and make the call with an IdP token), then everything works fine: that way the security chain validates the token it expects in the Authorization header against the IdP JWK. But this header is already reserved for legacy auth.

I guess I need a way to point the Spring resource server chain at a new header name to look for.

My security dependencies:

implementation 'org.springframework.boot:spring-boot-starter-security'
implementation 'org.springframework.boot:spring-boot-starter-oauth2-resource-server'

My configuration

@EnableWebFluxSecurity
public class WebSecurityConfiguration {

  // ...

  @Bean
  @Order(1)
  public SecurityWebFilterChain iamAuthFilterChain(ServerHttpSecurity http) {
    ServerWebExchangeMatcher matcher = exchange -> {
      HttpHeaders headers = exchange.getRequest().getHeaders();
      List<String> strings = headers.get(SurpriseHeaders.IDP_AUTH_TOKEN_HEADER_NAME);
      return strings != null && strings.size() > 0
          ? MatchResult.match() : MatchResult.notMatch();
    };

    http
        .securityMatcher(matcher)
        .csrf().disable()
        .authorizeExchange()
          .pathMatchers(navigationService.getAuthFreeEndpoints()).permitAll()
          .anyExchange().authenticated()
        .and()
          .oauth2ResourceServer(OAuth2ResourceServerSpec::jwt)
          .oauth2ResourceServer().jwt().jwkSetUri(getJwkUri())
          .and()
        .and()
          .addFilterAt(new LoggingFilter("idpAuthFilterChain"), SecurityWebFiltersOrder.FIRST)
          .addFilterAfter(new IdpTokenExchangeFilter(authClientService), SecurityWebFiltersOrder.AUTHENTICATION)
    ;
    return http.build();
  }

}

Dirty solution:

We can add a filter that edits the request and duplicates the incoming "New-Auth" header as an "Authorization" header at the beginning of the security filter chain.

It looks like it works, but I believe there should be a better way.

ANSWER

Answered 2021-Nov-04 at 16:51

You can specify a ServerAuthenticationConverter to your oauth2ResourceServer configuration, like so:

http
    .oauth2ResourceServer((resourceServer) -> resourceServer
        .bearerTokenConverter(customBearerTokenAuthenticationConverter())
        .jwt()
    );

ServerAuthenticationConverter customBearerTokenAuthenticationConverter() {
    ServerBearerTokenAuthenticationConverter tokenAuthenticationConverter = new ServerBearerTokenAuthenticationConverter();
    tokenAuthenticationConverter.setBearerTokenHeaderName("New-Auth");
    return tokenAuthenticationConverter;
}
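
As a sketch of how this could slot into the filter chain from the question (my own illustration, not part of the original answer; matcher, navigationService and getJwkUri() are assumed from the question's configuration):

// Sketch only: wiring the custom bearer-token converter into the existing chain.
http
    .securityMatcher(matcher)
    .csrf().disable()
    .authorizeExchange()
      .pathMatchers(navigationService.getAuthFreeEndpoints()).permitAll()
      .anyExchange().authenticated()
    .and()
      .oauth2ResourceServer(resourceServer -> resourceServer
          // Read the bearer token from "New-Auth" instead of Authorization.
          .bearerTokenConverter(customBearerTokenAuthenticationConverter())
          .jwt()
          .jwkSetUri(getJwkUri()));
return http.build();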

Source https://stackoverflow.com/questions/69836879

QUESTION

Gcloud behavior differs between the shell and cloudbuild.yaml

Asked 2021-Oct-27 at 07:47

I have been trying to list all the API Gateway configs with gcloud, and something wrong is happening.

When I run the following command on the terminal with my user logged in, it works like a charm.

gcloud api-gateway api-configs list --api=$API --project=$PROJECT_ID --format="table(name)"

But when I run the same command from inside this cloudbuild.yaml

steps:
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: "bash"
    args:
      - "-c"
      - |
        gcloud api-gateway api-configs list --api=logistics-homolog --project=$PROJECT_ID \
        --filter=serviceConfigId:logistics-mobile-places-* --format="table(name)"

It gives me the following error:

ERROR: (gcloud.api-gateway.api-configs.list) PERMISSION_DENIED: Permission 'apigateway.apiconfigs.list' denied on 'projects/$PROJECT_ID/locations/global/apis/logistics-homolog/configs'

What's wrong with it?!

ANSWER

Answered 2021-Oct-27 at 01:01

You don't include sufficient information, but it's likely that you're not authenticating gcloud in the container.

Have a look at the image repo's documentation for an explanation of how the container image may be authenticated:

https://github.com/GoogleCloudPlatform/cloud-sdk-docker

Source https://stackoverflow.com/questions/69730730

QUESTION

How to list group ids in GCP using cli or console

Asked 2021-Oct-26 at 07:47

I would like to know whether there is any way to get group IDs, either using the CLI or the Cloud Console. Using this ID, I need to collect the members list using the Cloud API. I was going through the Google documentation but couldn't find it.

For example if I use:

gcloud identity groups memberships list --group-email=abc@xyz.com

It gives the group id. Then I am using this doc to get the list of members.

ANSWER

Answered 2021-Oct-26 at 05:39

If using Google's SDK tool gcloud is OK with you, then you can do it as follows:

The group's ID is its actual email address - you can see it below:

wb@cloudshell:~ $ gcloud identity groups describe esa111@google.com
createTime: '2021-10-12T09:13:16.737141Z'
description: test group
displayName: esa111
groupKey:
  id: esa111@google.com
labels:
  cloudidentity.googleapis.com/groups.discussion_forum: ''
name: groups/00rj4333f0glbwez
parent: customers/Cx2hsdde9nw
updateTime: '2021-10-12T09:13:16.737141Z'

To get the members list:

wb@cloudshell:~ $ gcloud identity groups memberships list --group-email=esa111@google.com
---
name: groups/00rj4333f0glbwez/memberships/129543432329845052
preferredMemberKey:
  id: esa222@google.com
roles:
- name: MEMBER
---
name: groups/00rj4333f0glbwez/memberships/11674834e3327905886
preferredMemberKey:
  id: esa111@google.com
roles:
- name: OWNER
- name: MEMBER

And to have just the group members' IDs listed, use grep and you'll get:

wb@cloudshell:~ $ gcloud identity groups memberships list --group-email=esa111@google.com | grep id:
  id: esa222@google.com
  id: esa111@google.com

Here's some docs on the gcloud identity groups describe and list commands.

Source https://stackoverflow.com/questions/69703614

QUESTION

Google Cloud Translate API Key Doesn't Work When Using Android Application Restrictions

Asked 2021-Oct-10 at 14:14

I'm trying to implement Google Cloud's Translation API using only a key as authentication in my Android app. I don't have a server or anything to improve security, so I wanted to restrict the API key to only be used by my Android app by specifying the app package name and SHA-1 hash. Here are the settings I used in the Cloud API & Services page:

The API Key Settings Used

To ensure the information was correct, I have the Gradle App ID set to the specified package name:

App Module Gradle ID

I ran the provided keytool command in windows for the SHA-1 hash:

Keytool Command Results for SHA-1 Hash

Gradle's signing report tool also returns the same hash:

Gradle SigningReport Output

When I make a simple GET request for supported languages, the response is always a 403 Forbidden error. However, when I remove the Android-apps-only restriction and the package/hash setting, the key works. Am I doing something obviously wrong here or forgetting something? Even when I log BuildConfig.getPackage(), it returns the same package name, and both ways of getting the SHA-1 hash returned the same hash. I'm not sure what is going wrong.

ANSWER

Answered 2021-Oct-10 at 14:14

Finally figured it out a couple of days ago. Just sending the API request from the app package does not mean the request carries that information anywhere for the API endpoint to know it's from an authorized accessor.

The requests need two header properties to specify the source package and signature, in the form of:

{
    "X-Android-Package": "package.name.here",
    "X-Android-Cert": "debug:or:release:signature:here"
}

This is nowhere in Google's documentation. I have no idea why, and it's frustrating that this is apparently something everyone needs to know when using plain REST instead of Google's client libraries.

One thing this means is that it's not a good idea to hard-code these values, since someone could decompile the APK and get the authorized accessor credentials for these headers. But you can make it more difficult by using several available functions to get this information at runtime.

Getting the package name is simple:

context.getPackageName()

Getting the signature requires some work:

public String getSignature(Context context) {
    PackageInfo info = null;
    try {
        info = context.getPackageManager().getPackageInfo(context.getPackageName(), PackageManager.GET_SIGNING_CERTIFICATES);
        Signature[] sigHistory = info.signingInfo.getSigningCertificateHistory();
        byte[] signature = sigHistory[0].toByteArray();
        MessageDigest md = MessageDigest.getInstance("SHA1");
        byte[] digest = md.digest(signature);
        StringBuilder sha1Builder = new StringBuilder();
        for (byte b : digest) sha1Builder.append(String.format("%02x", b));
        return sha1Builder.toString();
    } catch (PackageManager.NameNotFoundException | NoSuchAlgorithmException e) {
        e.printStackTrace();
    }
    return null;
}

Make sure to read the SigningInfo documentation to use it correctly.
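
If you also need to support devices below API 28, where signingInfo and GET_SIGNING_CERTIFICATES are not available, a rough fallback sketch (my addition, not from the original answer) using the deprecated signatures field could look like this:

// Sketch only: legacy path for API < 28 using the deprecated GET_SIGNATURES flag.
@SuppressWarnings("deprecation")
public String getLegacySignature(Context context) {
    try {
        PackageInfo info = context.getPackageManager()
                .getPackageInfo(context.getPackageName(), PackageManager.GET_SIGNATURES);
        byte[] signature = info.signatures[0].toByteArray();
        MessageDigest md = MessageDigest.getInstance("SHA1");
        byte[] digest = md.digest(signature);
        StringBuilder sha1Builder = new StringBuilder();
        for (byte b : digest) sha1Builder.append(String.format("%02x", b));
        return sha1Builder.toString();
    } catch (PackageManager.NameNotFoundException | NoSuchAlgorithmException e) {
        e.printStackTrace();
    }
    return null;
}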

From there, set the request headers. I have been using Apache's Http Clients:

HttpGet getReq = new HttpGet(uri);
getReq.setHeader("X-Android-Package", context.getPackageName());
getReq.addHeader("X-Android-Cert", getSignature(context));

Source https://stackoverflow.com/questions/69451147

Community Discussions contain sources that include Stack Exchange Network

Tutorials and Learning Resources in Cloud API
