Explore all Telnet open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Telnet

sshwifty

0.2.22-beta-release-prebuild

shellz

v1.5.1

PCSC

pcsc-1.9.1

EtherTerm

Final 4.11 Demo Release

tn5250j

0.8.0-beta2

Popular Libraries in Telnet

sshwifty

by nirui (JavaScript)

929 stars | License: AGPL-3.0

Web SSH & Telnet (WebSSH & WebTelnet client) 🔮

teleport

by tp4a (Python)

741 stars | License: Apache-2.0

Teleport is a simple, easy-to-use bastion host (jump server) system.

PttChrome

by iamchucky (JavaScript)

471 stars | License: GPL-2.0

A GNU/GPL telnet client for connecting to BBS site ptt.cc

shellz

by evilsocket (Go)

438 stars | License: GPL-3.0

shellz is a small utility to track and control your ssh, telnet, web and custom shells and tunnels.

flynn-demo

by flynn-archive (Shell)

365 stars | License: NOASSERTION

Archived -- see https://github.com/flynn/flynn

node-telnet-client

by mkozjak (TypeScript)

311 stars | License: NOASSERTION

A simple telnet client for Node.js

telnet-scanner

by NewBee119 (Python)

239 stars

Credential stuffing of Telnet service passwords.

libtelnet

by seanmiddleditch (C)

234 stars | License: NOASSERTION

Simple RFC-compliant TELNET implementation as a C library.

discoverd

by flynn-archive (Go)

232 stars | License: NOASSERTION

Archived -- see https://github.com/flynn/flynn

Trending New libraries in Telnet

bloodhound-quickwin

by kaluche (Python)

99 stars

Simple script to extract useful information from the combo BloodHound + Neo4j

chat

by lunatic-solutions (Rust)

65 stars

A telnet chat server

dial_a_zine

by caraesten (Python)

47 stars | License: MIT

A content-management system for displaying a zine over telnet

telnet_scripter

by paladine (Java)

6 stars | License: MIT

Telnet passthrough program which allows you to run scripts (any program using stdin/stdout); useful for playing telnet-based MUDs

netgear_telnet

by bkerler (Python)

6 stars | License: MIT

Netgear Enable Telnet (New Crypto)

telshell

by x1unix (Go)

6 stars | License: MIT

Tiny Telnet shell server in Go

TelnetServer

by UngarMax (C#)

6 stars | License: MIT

A functional Telnet server written in C#

dark-chat

by glenndehaan (JavaScript)

5 stars | License: MIT

A small telnet chat server

c-kermit

by BAN-AI-Communications (C)

4 stars | License: NOASSERTION

c-kermit: C-Kermit for UNIX (CKU) with local modifications/customizations. Upstream homepage: https://www.kermitproject.org/

Top Authors in Telnet

1. flynn-archive: 23 libraries, 1227 stars

2. BAN-AI-Communications: 4 libraries, 10 stars

3. Twisol: 2 libraries, 18 stars

4. ggrossman: 2 libraries, 7 stars

5. rickparrish: 2 libraries, 15 stars

6. OtakuHub: 2 libraries, 4 stars

7. paddor: 2 libraries, 10 stars

8. reiver: 2 libraries, 173 stars

9. crazyrabbitpei: 1 library, 2 stars

10. marado: 1 library, 2 stars


Trending Kits in Telnet

No Trending Kits are available at this moment for Telnet

Trending Discussions on Telnet

How to set up a minimal smtp server on localhost to send messages to other smtp servers

loop through multiple URLs to scrape from a CSV file in Scrapy is not working

kubelet won't start after kubernetes/manifest update

Git clone error: RPC failed - curl 28 Operation too slow

Connection refused when HornetQ runs in Docker

Cannot connect to kafka from outside of its docker container

Connection timeout using local kafka-connect cluster to connect on a remote database

codecov fails in github actions

Recognize telnet protocol with Scapy python

Ruby: BUILD FAILED (macOS 11.2 using ruby-build 20210119) Mac Big Sur

QUESTION

How to set up a minimal smtp server on localhost to send messages to other smtp servers

Asked 2022-Feb-05 at 07:42

Honestly, I think I have a fundamental gap in understanding how SMTP works. I can't seem to find a good explanation of what is happening behind the scenes and I think this is preventing me from being able to do what I am attempting to do.

To explain, I'm trying to set up an application which sends notifications to users by connecting to an SMTP server. Fair enough. I figure that, since I'm using my own domain and have SPF/DKIM/DMARC configured, I can add an MX record for the host I set the application up on (my SPF record has the mx keyword to authorize any hosts in my MX records to send/receive mail). Then I can have that same host run a super lightweight SMTP server that can accept mail from the application and send it on to recipients.

Almost crucially, I want this server to basically just run on localhost so that only this application can connect and send mails through it, but so that it can't really "receive" mails sent to my domain (I have set the MX priority very low (well, a high number) for this app server). I figure since I'm running my own SMTP server, that I don't really need to authenticate against it (it's running on localhost), just take in any mail and send it on to recipient domains.

When sending on to recipient domains... does the SMTP server need to authenticate to say, the gmail SMTP server as a user in order to send mails over there? That seems weird, since it's not a user logging into gmail to send mails, it's an SMTP server that is authorized within SPF sending mail from my domain (From address from my domain as well) to where ever the app server user's email is based (in this example, the user would be e.g., some_user@gmail.com).

I tried using python's aiosmtpd command-line and telnet to send a mail from test@MY_DOMAIN.TLD to test@MY_DOMAIN.TLD and it didn't seem to deliver the message; I figured aiosmtpd would connect to the preferred MX servers for my domain (my "real" MX's) to transfer the message, which would then put it in my inbox. That didn't seem to be the case, and I'm not sure why.

Exact repro steps, where example.com is my domain, and terminals are running on a box with a hostname listed in my MX records.

Terminal A:

$ aiosmtpd -n

Terminal B:

$ telnet localhost 8025
EHLO <example.com>
MAIL FROM: test@example.com
RCPT TO: test@example.com
DATA
FROM: Application Notifications <test@example.com>
TO: User Name <test@example.com>
SUBJECT: App Notify Test

This is a test!
.
QUIT

How do SMTP servers normally send mail between each other? Do they each get some login to each other's SMTP servers to authenticate with, and since I'm not doing that, this is a problem? Can I run a SMTP server on localhost and have it send mail out of the network without receiving mails (a no-reply service)? Is there something obvious that I'm just missing here that solves all my problems?

Thanks

ANSWER

Answered 2022-Jan-25 at 18:18

It sounds like you want to run a mail transfer agent (MTA) that relays email to remote SMTP servers. An MTA will typically act as an SMTP server to receive messages, and then it will act as an SMTP client when it relays the messages to remote hosts.

MTAs generally operate in two different modes: (1) They will relay messages from authenticated users to remote hosts, and (2) they will receive messages from remote hosts to its users and store them somehow. The combination of those two modes - where the MTA will accept messages from remote hosts and relay them to different remote hosts - is called an open relay and is sure to attract spammers and place your server on spam blacklists.

aiosmtpd is not an MTA or an email relay out of the box - it is merely an SMTP server that will receive messages and do whatever with the messages you program it to do. By default it will do nothing - that is, it will receive the messages and throw them away. If you want to implement an email relay in aiosmtpd, then you need to implement the SMTP client portion of the MTA, e.g. by implementing an aiosmtpd handler that instantiates smtplib.SMTP to connect to remote hosts.
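As a rough sketch of that handler idea (not part of the original answer; the relay target mx.example.net and the SMTP reply codes are illustrative assumptions), an aiosmtpd handler can hand each accepted message to smtplib:

import smtplib
from aiosmtpd.controller import Controller

class RelayHandler:
    async def handle_DATA(self, server, session, envelope):
        # Relay the accepted message to a remote SMTP server.
        # mx.example.net is a placeholder for the recipient domain's MX host.
        try:
            with smtplib.SMTP("mx.example.net", 25) as client:
                client.sendmail(envelope.mail_from, envelope.rcpt_tos, envelope.content)
        except smtplib.SMTPException:
            return "451 Requested action aborted: relay attempt failed"
        return "250 Message accepted for delivery"

if __name__ == "__main__":
    # Bind to localhost only, matching the "only this application can connect" setup above.
    controller = Controller(RelayHandler(), hostname="127.0.0.1", port=8025)
    controller.start()
    input("Relay listening on 127.0.0.1:8025 - press Enter to stop\n")
    controller.stop()

A real relay would also have to look up the MX records for each recipient domain rather than hard-coding a host, which is one reason the answer points to postfix for this job.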

However, if all you want is an email relay, then you most likely don't need aiosmtpd at all - postfix is probably a better choice.

aiosmtpd can be a good choice if you need to implement mailing list software or perform some automation tasks based on incoming emails from e.g. cameras or scanners.

If you want to implement an email relay in aiosmtpd, then you need to ensure that both the software and your server are configured in a way that you don't relay unauthenticated messages from the outside internet.

See also: Python aiosmtpd - what is missing for an Mail-Transfer-Agent (MTA)?

Source https://stackoverflow.com/questions/70781775

QUESTION

loop through multiple URLs to scrape from a CSV file in Scrapy is not working

Asked 2021-Dec-01 at 18:53

When I try to execute this loop I get an error; please help. I want to scrape multiple links from a CSV file, but it gets stuck in start_urls. I am using Scrapy 2.5 and Python 3.9.7.

import scrapy
from scrapy import Request
from scrapy.http import request
import pandas as pd


class PagedataSpider(scrapy.Spider):
    name = 'pagedata'
    allowed_domains = ['www.imdb.com']

    def start_requests(self):
        df = pd.read_csv('list1.csv')
        # Here fileContainingUrls.csv is a csv file which has a column named as 'URLS'
        # and contains all the urls which you want to loop over.
        urlList = df['link'].values.to_list()
        for i in urlList:
            yield scrapy.Request(url=i, callback=self.parse)

error:

2021-11-09 22:06:45 [scrapy.core.engine] INFO: Spider opened
2021-11-09 22:06:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-11-09 22:06:45 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2021-11-09 22:06:45 [scrapy.core.engine] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "C:\Users\Vivek\Desktop\Scrapy\myenv\lib\site-packages\scrapy\core\engine.py", line 129, in _next_request
    request = next(slot.start_requests)
  File "C:\Users\Vivek\Desktop\Scrapy\moviepages\moviepages\spiders\pagedata.py", line 18, in start_requests
    urlList = df['link'].values.to_list()
AttributeError: 'numpy.ndarray' object has no attribute 'to_list'
2021-11-09 22:06:45 [scrapy.core.engine] INFO: Closing spider (finished)
2021-11-09 22:06:45 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.007159,
 'finish_reason': 'finished',

ANSWER

Answered 2021-Nov-09 at 17:07

The error you received is rather straightforward; a numpy array doesn't have a to_list method.

Instead you should simply iterate over the numpy array:

import scrapy
from scrapy.http import request
import pandas as pd


class PagedataSpider(scrapy.Spider):
    name = 'pagedata'
    allowed_domains = ['www.imdb.com']

    def start_requests(self):
        df = pd.read_csv('list1.csv')

        urls = df['link']
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)
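A closely related fix, noted here as an aside based on the pandas API rather than taken from the answer, is to call the list conversion on the Series itself instead of on the underlying numpy array:

urlList = df['link'].to_list()  # Series.to_list() exists; ndarray only has tolist()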

Source https://stackoverflow.com/questions/69902187

QUESTION

kubelet won't start after kubernetes/manifest update

Asked 2021-Nov-16 at 10:01

This is sort of strange behavior in our K8 cluster.

When we try to deploy a new version of our applications we get:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "<container-id>" network for pod "application-6647b7cbdb-4tp2v": networkPlugin cni failed to set up pod "application-6647b7cbdb-4tp2v_default" network: Get "https://[10.233.0.1]:443/api/v1/namespaces/default": dial tcp 10.233.0.1:443: connect: connection refused

I used kubectl get cs and found controller and scheduler in Unhealthy state.

As described here, I updated /etc/kubernetes/manifests/kube-scheduler.yaml and /etc/kubernetes/manifests/kube-controller-manager.yaml by commenting out --port=0.

When I checked systemctl status kubelet it was working.

Active: active (running) since Mon 2020-10-26 13:18:46 +0530; 1 years 0 months ago

I restarted the kubelet service, and the controller and scheduler were then shown as healthy.

But systemctl status kubelet now shows (soon after the restart it had shown a running state):

Active: activating (auto-restart) (Result: exit-code) since Thu 2021-11-11 10:50:49 +0530; 3s ago
    Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 21234 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET

I tried adding Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false" to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf as described here, but it's still not working properly.

I also removed the --port=0 comment in the above-mentioned manifests and tried restarting; still the same result.

Edit: This issue was due to an expired kubelet certificate and was fixed by following these steps. If someone faces this issue, make sure the /var/lib/kubelet/pki/kubelet-client-current.pem certificate and key values are base64 encoded when placed in /etc/kubernetes/kubelet.conf

Many others suggested running kubeadm init again, but this cluster was created using kubespray, with no manually added nodes.

We have bare-metal Kubernetes running on Ubuntu 18.04, version v1.18.8.

We would like to know any debugging and fixing suggestions.

PS:
When we try to telnet 10.233.0.1 443 from any node, the first attempt fails and the second attempt succeeds.

Edit: Found this in kubelet service logs

Nov 10 17:35:05 node1 kubelet[1951]: W1110 17:35:05.380982    1951 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "app-7b54557dd4-bzjd9_default": unexpected command output nsenter: cannot open /proc/12311/ns/net: No such file or directory

ANSWER

Answered 2021-Nov-15 at 17:56

Posting comment as the community wiki answer for better visibility


This issue was due to kubelet certificate expired and fixed following these steps. If someone faces this issue, make sure /var/lib/kubelet/pki/kubelet-client-current.pem certificate and key values are base64 encoded when placing on /etc/kubernetes/kubelet.conf
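A rough sketch of what that looks like in practice (command and field names only; the exact layout of kubelet.conf varies by cluster, so treat this as an assumption-laden illustration):

# kubelet-client-current.pem is a combined PEM holding both the client certificate and key;
# each part is base64-encoded (single line, no wrapping) before being placed in the kubeconfig.
base64 -w0 /var/lib/kubelet/pki/kubelet-client-current.pem

# In the user section of /etc/kubernetes/kubelet.conf, the encoded values go inline:
#   client-certificate-data: <base64-encoded certificate>
#   client-key-data: <base64-encoded key>

systemctl restart kubelet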

Source https://stackoverflow.com/questions/69923716

QUESTION

Git clone error: RPC failed - curl 28 Operation too slow

Asked 2021-Nov-10 at 12:19

I am trying to clone the Linux kernel; the transfer speed seems perfectly fine, but curl always aborts:

❯ git clone --depth 1 https://github.com/archlinux/linux
Cloning into 'linux'...
remote: Enumerating objects: 78109, done.
remote: Counting objects: 100% (78109/78109), done.
error: RPC failed; curl 28 Operation too slow. Less than 1000 bytes/sec transferred the last 3 seconds
fatal: early EOF
fatal: fetch-pack: invalid index-pack output

I have tried adding or removing --depth as well as using a different machine (one using Arch, the other on Ubuntu), same result...

Diagnostics Setup

Arch Linux

❯ git --version
git version 2.33.1
❯ curl --version
curl 7.79.1 (x86_64-pc-linux-gnu) libcurl/7.79.1 OpenSSL/1.1.1l zlib/1.2.11 brotli/1.0.9 zstd/1.5.0 libidn2/2.3.2 libpsl/0.21.1 (+libidn2/2.3.0) libssh2/1.9.0 nghttp2/1.45.1
Release-Date: 2021-09-22
Protocols: dict file ftp ftps gopher gophers http https imap imaps mqtt pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets zstd
❯ ldd "$(which curl)"
    linux-vdso.so.1 (0x00007ffcde7df000)
    /usr/lib/libstderred.so (0x00007fbb71615000)
    libcurl.so.4 => /usr/lib/libcurl.so.4 (0x00007fbb71541000)
    libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007fbb71520000)
    libc.so.6 => /usr/lib/libc.so.6 (0x00007fbb71354000)
    libdl.so.2 => /usr/lib/libdl.so.2 (0x00007fbb7134d000)
    libnghttp2.so.14 => /usr/lib/libnghttp2.so.14 (0x00007fbb71321000)
    libidn2.so.0 => /usr/lib/libidn2.so.0 (0x00007fbb712fd000)
    libssh2.so.1 => /usr/lib/libssh2.so.1 (0x00007fbb712bc000)
    libpsl.so.5 => /usr/lib/libpsl.so.5 (0x00007fbb712a9000)
    libssl.so.1.1 => /usr/lib/libssl.so.1.1 (0x00007fbb71217000)
    libcrypto.so.1.1 => /usr/lib/libcrypto.so.1.1 (0x00007fbb70f38000)
    libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x00007fbb70ee3000)
    libzstd.so.1 => /usr/lib/libzstd.so.1 (0x00007fbb70dd2000)
    libbrotlidec.so.1 => /usr/lib/libbrotlidec.so.1 (0x00007fbb70dc4000)
    libz.so.1 => /usr/lib/libz.so.1 (0x00007fbb70daa000)
    /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fbb71653000)
    libunistring.so.2 => /usr/lib/libunistring.so.2 (0x00007fbb70c28000)
    libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x00007fbb70b41000)
    libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 (0x00007fbb70b0e000)
    libcom_err.so.2 => /usr/lib/libcom_err.so.2 (0x00007fbb70b08000)
    libkrb5support.so.0 => /usr/lib/libkrb5support.so.0 (0x00007fbb70af8000)
    libkeyutils.so.1 => /usr/lib/libkeyutils.so.1 (0x00007fbb70af1000)
    libresolv.so.2 => /usr/lib/libresolv.so.2 (0x00007fbb70ad7000)
    libbrotlicommon.so.1 => /usr/lib/libbrotlicommon.so.1 (0x00007fbb70ab4000)

Speedtest:

❯ curl -o /dev/null http://speedtest.tele2.net/100MB.zip  0,16s user 0,57s system 5% cpu 14,183 total

Retry

On a new day, it now gets a little further:

❯ git clone --depth 1 https://github.com/archlinux/linux
Cloning into 'linux'...
remote: Enumerating objects: 78109, done.
remote: Counting objects: 100% (78109/78109), done.
remote: Compressing objects:  36% (26365/73234)

but still aborts whenever the transfer slows down for a few seconds:

❯ git clone --depth 1 https://github.com/archlinux/linux
Cloning into 'linux'...
remote: Enumerating objects: 78109, done.
remote: Counting objects: 100% (78109/78109), done.
remote: Compressing objects:  36% (26365/73234)

The solution seems to be to pass --speed-time to curl via git, which I have no idea how to do even after looking at all git man pages related to configuration I could find.

ANSWER

Answered 2021-Nov-10 at 12:19

After lots of frustration it became apparent that the problem was once again in front of the computer. The following option in my git config was the culprit:

[http]
    lowSpeedLimit = 1000
    lowSpeedTime = 3

Raising the values fixed it.
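For reference (a sketch of the corresponding commands; the thresholds shown are arbitrary), the same settings can be raised or removed with git config:

# Tolerate slower transfers before aborting...
git config --global http.lowSpeedLimit 1000
git config --global http.lowSpeedTime 60

# ...or unset both values to fall back to git's defaults.
git config --global --unset http.lowSpeedLimit
git config --global --unset http.lowSpeedTime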

Source https://stackoverflow.com/questions/69774467

QUESTION

Connection refused when HornetQ runs in Docker

Asked 2021-Oct-11 at 02:03

I'm testing a very simple scenario: I'm running the test located under examples/jms/queue against a standalone server running locally on my computer, with success. Running the same against a dockerized HornetQ 2.4.0 gives me the error:

Connection refused: connect

I made sure to open port 1099 and I can see the port open,

0.0.0.0:1099->1099/tcp

Telnet-ing to localhost 1099 gives a gibberish result, which means there is something listening there, but running the test connecting to jnp://localhost:1099, as I said, fails.

Finally the configuration of hornetq-beans.xml:

<bean name="StandaloneServer" class="org.hornetq.jms.server.impl.StandaloneNamingServer">
   <constructor>
      <parameter>
         <inject bean="HornetQServer"/>
      </parameter>
   </constructor>
   <property name="port">1099</property>
   <property name="bindAddress">0.0.0.0</property>
   <property name="rmiPort">1098</property>
   <property name="rmiBindAddress">0.0.0.0</property>
</bean>

Result of netstat -plunt:

# netstat -plunt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:5445            0.0.0.0:*               LISTEN      10/java
tcp        0      0 0.0.0.0:1098            0.0.0.0:*               LISTEN      10/java
tcp        0      0 0.0.0.0:1099            0.0.0.0:*               LISTEN      10/java
tcp        0      0 0.0.0.0:39437           0.0.0.0:*               LISTEN      10/java
tcp        0      0 0.0.0.0:5455            0.0.0.0:*               LISTEN      10/java

My Dockerfile:

FROM openjdk:8

WORKDIR /app

COPY ./hornetq-2.4.0.Final .

EXPOSE 1099 1098 5445 5455

ENTRYPOINT [ "/bin/bash", "-c", "cd bin/; ./run.sh" ]

The updated part of hornetq-configuration.xml:

<connectors>
   <connector name="netty">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host"  value="0.0.0.0"/>
      <param key="port"  value="5445"/>
   </connector>

   <connector name="netty-throughput">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host"  value="0.0.0.0"/>
      <param key="port"  value="5455"/>
      <param key="batch-delay" value="50"/>
   </connector>
</connectors>

<acceptors>
   <acceptor name="netty">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
      <param key="host"  value="0.0.0.0"/>
      <param key="port"  value="5445"/>
   </acceptor>

   <acceptor name="netty-throughput">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
      <param key="host"  value="0.0.0.0"/>
      <param key="port"  value="5455"/>
      <param key="batch-delay" value="50"/>
      <param key="direct-deliver" value="false"/>
   </acceptor>
</acceptors>

The updated part of hornetq-beans.xml:

<bean name="StandaloneServer" class="org.hornetq.jms.server.impl.StandaloneNamingServer">
   <constructor>
      <parameter>
         <inject bean="HornetQServer"/>
      </parameter>
   </constructor>
   <property name="port">1099</property>
   <property name="bindAddress">0.0.0.0</property>
   <property name="rmiPort">1098</property>
   <property name="rmiBindAddress">0.0.0.0</property>
</bean>

The command I'm using to run the image is:

docker run -d -p 1098:1098 -p 1099:1099 -p 5445:5445 -p 5455:5455 hornetq

ANSWER

Answered 2021-Oct-11 at 02:03

The host values of 0.0.0.0 for your connector configurations in hornetq-configuration.xml are invalid. This is why the broker logs:

Invalid "host" value "0.0.0.0" detected for "netty" connector. Switching to "8ba14b02658a". If this new address is incorrect please manually configure the connector to use the proper one.

I assume 8ba14b02658a is not the proper host value which is why it continues to fail. Therefore, as the log indicates, you need to configure it with a valid value for your environment. This needs to be a hostname or IP address that the client on your host can use to connect to the broker running in Docker. This is because the connector is simply a configuration holder (sometimes called a "stub") which is passed back to the remote client when it performs the JNDI lookup. The remote client then uses this stub to make the actual JMS connection to the broker. Therefore, whatever is configured as the host and port for the connector is what the client will use.
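For illustration only (my-docker-host.example.com is a stand-in for whatever hostname or IP your client can actually reach; it is not taken from the question), the connector would carry that reachable address while the acceptor can keep binding to 0.0.0.0:

<connectors>
   <connector name="netty">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <!-- Address handed back to remote clients through JNDI; must be reachable from the client host -->
      <param key="host" value="my-docker-host.example.com"/>
      <param key="port" value="5445"/>
   </connector>
</connectors>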

A simpler option would be to use --network host when you run the Docker container, e.g.:

docker run -d --network host hornetq

This will make the container use the host's network. Once you set the host values for your connector configurations in hornetq-configuration.xml back to localhost everything should work. You can read more about this option in the Docker documentation.

It's worth noting that there hasn't been a release of HornetQ in almost 5 years now. The HornetQ code-base was donated to the Apache ActiveMQ community in June of 2015 and is now known as ActiveMQ Artemis - the next-generation broker from ActiveMQ. I would strongly recommend migrating to ActiveMQ Artemis and discontinuing use of HornetQ.

Furthermore, if you did migrate to ActiveMQ Artemis you wouldn't experience this particular problem as the JNDI implementation has changed completely. There is no longer an actual JNDI server. The JNDI implementation is 100% client-side so you'd just need to configure the URL in the JNDI properties.
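As a rough idea of what that looks like on the client side (a sketch assuming Artemis' standard client-side JNDI conventions, with tcp://localhost:61616 as a placeholder broker address), the client's jndi.properties would contain something like:

java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
connectionFactory.ConnectionFactory=tcp://localhost:61616
queue.queue/exampleQueue=exampleQueue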

Source https://stackoverflow.com/questions/69417596

QUESTION

Cannot connect to kafka from outside of its docker container

Asked 2021-Aug-30 at 00:40

This is my docker compose file:

version: "3.7"

services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    container_name: zookeeper
    ports:
      - 2181:2181
    env_file:
      - zookeeper.env

  kafka:
    image: 'bitnami/kafka:latest'
    container_name: kafka
    env_file:
      - kafka.env
    ports:
      - 9093:9092
    depends_on:
      - zookeeper

And also these are my .env files:

kafka.env

KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
KAFKA_CFG_LISTENERS=INTERNAL://kafka:9092,EXTERNAL://localhost:9093
KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9092,EXTERNAL://localhost:9093
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=1
ALLOW_PLAINTEXT_LISTENER=yes

zookeeper.env

ZOOKEEPER_CLIENT_PORT=2181
ZOOKEEPER_TICK_TIME=2000
ALLOW_ANONYMOUS_LOGIN=yes

When I bring the compose file up, everything seems to be fine, and I get a good telnet result from outside of the Docker container, on my localhost, to port 9093.

I wrote a simple Python producer file, as below:

import logging

from kafka import KafkaProducer

logging.basicConfig(level=logging.INFO)


class Producer():
    def __init__(self) -> None:
        self.conn = KafkaProducer(bootstrap_servers="localhost:9093")

    def produce(self):
        while True:
            try:
                self.conn.send("test", b'test')
            except KeyboardInterrupt:
                logging.info("Producer finished")
                break


if __name__ == "__main__":
    producer = Producer()
    producer.produce()

When I run my code, I get this error:

WARNING:kafka.conn:DNS lookup failed for kafka:9092, exception was [Errno -3] Temporary failure in name resolution. Is your advertised.listeners (called advertised.host.name before Kafka 9) correct and resolvable?
ERROR:kafka.conn:DNS lookup failed for kafka:9092 (AddressFamily.AF_UNSPEC)

I also read this post, but I could not resolve the error and I don't know what I should do to get rid of it.

ANSWER

Answered 2021-Aug-30 at 00:39

You've forwarded the wrong port.

Host port 9093 needs to map to container port 9093, the port that the localhost:9093 EXTERNAL listener advertises, so the mapping should be 9093:9093 rather than 9093:9092.

Otherwise you're connecting through to the INTERNAL listener on 9092, whose metadata returns kafka:9092, as explained in the blog. Container hostnames cannot be resolved by the host by default.
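As a minimal sketch of that fix, only the changed lines of the compose file are shown; everything else stays as in the question:

  kafka:
    ports:
      - 9093:9093   # host 9093 -> container 9093, where the EXTERNAL listener lives

Depending on the image defaults, the EXTERNAL listener may also need to bind to all interfaces (for example EXTERNAL://:9093 in KAFKA_CFG_LISTENERS) so that Docker's port forwarding can reach it; that detail is an assumption on my part, not something stated in the original answer.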

Source https://stackoverflow.com/questions/68975387

QUESTION

Connection timeout using local kafka-connect cluster to connect on a remote database

Asked 2021-Jul-06 at 12:09

I'm trying to run a local kafka-connect cluster using docker-compose. I need to connect to a remote database, and I'm also using a remote Kafka and Schema Registry. I have enabled access to these remote resources from my machine.

To start the cluster, from my project folder in my Ubuntu WSL2 terminal, I run

docker build -t my-connect:1.0.0 .

docker-compose up

The application starts successfully, but when I try to create a new connector it returns error 500 with a timeout.

My Dockerfile

FROM confluentinc/cp-kafka-connect-base:5.5.0

RUN cat /etc/confluent/docker/log4j.properties.template

ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components"
ARG JDBC_DRIVER_DIR=/usr/share/java/kafka/

RUN   confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:5.5.0 \
   && confluent-hub install --no-prompt confluentinc/connect-transforms:1.3.2

ADD java/kafka-connect-jdbc /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/
COPY java/kafka-connect-jdbc/ojdbc8.jar /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/

ENTRYPOINT ["sh","-c","export CONNECT_REST_ADVERTISED_HOST_NAME=$(hostname -I);/etc/confluent/docker/run"]

My docker-compose.yaml

services:
  connect:
    image: my-connect:1.0.0
    ports:
     - 8083:8083
    environment:
      - CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http=//schema-registry:8081
      - CONNECT_KEY_CONVERTER=io.confluent.connect.avro.AvroConverter
      - CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http=//schema-registry:8081
      - CONNECT_BOOTSTRAP_SERVERS=broker1.intranet:9092
      - CONNECT_GROUP_ID=kafka-connect
      - CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_VALUE_CONVERTER=io.confluent.connect.avro.AvroConverter
      - CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_OFFSET_STORAGE_TOPIC=kafka-connect.offset
      - CONNECT_CONFIG_STORAGE_TOPIC=kafka-connect.config
      - CONNECT_STATUS_STORAGE_TOPIC=kafka-connect.status
      - CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY=All
      - CONNECT_LOG4J_ROOT_LOGLEVEL=INFO
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - CONNECT_REST_ADVERTISED_HOST_NAME=localhost

My cluster is up

~$ curl -X GET http://localhost:8083/
{"version":"5.5.0-ccs","commit":"606822a624024828","kafka_cluster_id":"OcXKHO7eT4m9NBHln6ACKg"}

Connector call

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d
{
    "name": "my-connector",
    "config":
    {
    "connector.class" : "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.user": "user",
    "database.password": "pass",
    "database.dbname":"SID",
    "database.schema":"schema",
    "database.server.name": "dbname",
    "schema.include.list": "schema",
    "database.connection.adapter":"logminer",
    "database.hostname":"databasehost",
    "database.port":"1521"
   }
}

Error

{"error_code": 500,"message": "IO Error trying to forward REST request: java.net.SocketTimeoutException: Connect Timeout"}

## LOG
connect_1  | [2021-07-01 19:08:50,481] INFO Database Version: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
connect_1  | Version 19.4.0.0.0 (io.debezium.connector.oracle.OracleConnection)
connect_1  | [2021-07-01 19:08:50,628] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection)
connect_1  | [2021-07-01 19:08:50,643] INFO AbstractConfig values:
connect_1  |  (org.apache.kafka.common.config.AbstractConfig)
connect_1  | [2021-07-01 19:09:05,722] ERROR IO error forwarding REST request:  (org.apache.kafka.connect.runtime.rest.RestClient)
connect_1  | java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Connect Timeout

Testing the connection to the database

$ telnet databasehostname 1521
Trying <ip>... Connected to databasehostname

Testing connection to kafka broker

$ telnet broker1.intranet 9092
Trying <ip>... Connected to broker1.intranet

Testing connection to remote schema-registry

$ telnet schema-registry.intranet 8081
Trying <ip>... Connected to schema-registry.intranet

What am I doing wrong? Do I need to configure something else to allow connection to this remote database?

ANSWER

Answered 2021-Jul-06 at 12:09

You need to set rest.advertised.host.name (or CONNECT_REST_ADVERTISED_HOST_NAME, if you're using Docker) correctly. This is the address a Connect worker uses to communicate with the other workers in the cluster.

For more details see Common mistakes made when configuring multiple Kafka Connect workers by Robin Moffatt.

In your case, try removing CONNECT_REST_ADVERTISED_HOST_NAME=localhost from the compose file.
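As a rough sketch of that change (only the relevant fragment of the compose file; this assumes the Dockerfile's ENTRYPOINT keeps exporting CONNECT_REST_ADVERTISED_HOST_NAME from hostname -I, as shown in the question):

    environment:
      # CONNECT_REST_ADVERTISED_HOST_NAME is intentionally no longer set here;
      # the ENTRYPOINT derives it from the container's own address at startup.
      - CONNECT_BOOTSTRAP_SERVERS=broker1.intranet:9092
      - CONNECT_GROUP_ID=kafka-connect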

Source https://stackoverflow.com/questions/68217193

QUESTION

codecov fails in github actions

Asked 2021-Jun-09 at 22:09
background
  • my setup for codecov has worked well so far

    • you can see regular updates with each PR's commits here
    • I haven't changed my repo settings
  • as I inadvertently pushed a folder that I wasn't supposed to,
    I then merged a PR to remove said folder

  • here is my codecov.yml

issue
  • on the aforementioned last PR linked above, the GitHub Actions CI complained with the log below
  _____          _
 / ____|        | |
| |     ___   __| | ___  ___ _____   __
| |    / _ \ / _` |/ _ \/ __/ _ \ \ / /
| |___| (_) | (_| |  __/ (_| (_) \ V /
 \_____\___/ \__,_|\___|\___\___/ \_/
                              Bash-1.0.3


==> git version 2.31.1 found
==> curl 7.68.0 (x86_64-pc-linux-gnu) libcurl/7.68.0 OpenSSL/1.1.1f zlib/1.2.11 brotli/1.0.7 libidn2/2.2.0 libpsl/0.21.0 (+libidn2/2.2.0) libssh/0.9.3/openssl/zlib nghttp2/1.40.0 librtmp/2.3
Release-Date: 2020-01-08
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS brotli GSS-API HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets
==> GitHub Actions detected.
    Env vars used:
      -> GITHUB_ACTIONS:    true
      -> GITHUB_HEAD_REF:   remove-speedtest
      -> GITHUB_REF:        refs/pull/136/merge
      -> GITHUB_REPOSITORY: iapicca/yak_packages
      -> GITHUB_RUN_ID:     {{I'll keep this for myself}}
      -> GITHUB_SHA:        {{I'll keep this for myself}}
      -> GITHUB_WORKFLOW:   CI
->  Issue detecting commit SHA. Please run actions/checkout with fetch-depth > 1 or set to 0
    project root: .
    Yaml found at: codecov.yml
==> Running gcov in . (disable via -X gcov)
==> Python coveragepy not found
==> Searching for coverage reports in:
    + .
    -> Found 7 reports
==> Detecting git/mercurial file structure
==> Reading reports
    + ./packages/yak_tween/coverage/lcov.info bytes=2228
    + ./packages/yak_utils/coverage.lcov bytes=687
    + ./packages/yak_test/coverage.lcov bytes=339
    + ./packages/stub/coverage.lcov bytes=678
    + ./packages/yak_runner/coverage.lcov bytes=6429
    + ./packages/yak_widgets/coverage/lcov.info bytes=1444
    + ./packages/yak_error_handler/coverage.lcov bytes=1017
==> Appending adjustments
    https://docs.codecov.io/docs/fixing-reports
    + Found adjustments
==> Gzipping contents
        8.0K    /tmp/codecov.yP3SSF.gz
==> Uploading reports
    url: https://codecov.io
    query: branch=remove-speedtest&commit={{I'll keep this for myself}}
    &build={{I'll keep this for myself}}&build_url=http%3A%2F%2Fgithub.com%2Fiapicca%2Fyak_packages%2Factions%2Fruns%2F911981303&name=&tag=&slug=iapicca%2Fyak_packages&service=github-actions&flags=&pr=136&job=CI&cmd_args=

->  Pinging Codecov
https://codecov.io/upload/v4?package=bash-1.0.3&token=secret&branch=remove-speedtest&commit={{I'll keep this for myself}}&build={{I'll keep this for myself}}&build_url=http%3A%2F%2Fgithub.com%2Fiapicca%2Fyak_packages%2Factions%2Fruns%2F911981303&name=&tag=&slug=iapicca%2Fyak_packages&service=github-actions&flags=&pr=136&job=CI&cmd_args=
{'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
404
==> Uploading to Codecov
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  5026  100   171  100  4855   1000  28391 --:--:-- --:--:-- --:--:-- 29220
100  5026  100   171  100  4855   1000  28391 --:--:-- --:--:-- --:--:-- 29220
    {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}

  • the suggested fix is quite obscure to me
{'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
request

As I don't really want to run anything locally, can someone help me fix the issue inside the CI?

thank you

ANSWER

Answered 2021-Jun-06 at 17:47

Codecov has some heisenberg issues. If you don't have a token, please add one (see the workflow sketch below); otherwise try to:

  • Force-push to retrigger Codecov
  • Rotate your token.
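As a minimal sketch of the token route (my own illustration, not part of the original answer): store the repository upload token as a CODECOV_TOKEN secret and pass it to the Codecov upload step in the workflow, for example:

      - name: Upload coverage
        uses: codecov/codecov-action@v2
        with:
          token: ${{ secrets.CODECOV_TOKEN }}        # repository upload token stored as a repo secret
          files: ./packages/yak_runner/coverage.lcov # one of the reports from the log above, as an example
          fail_ci_if_error: true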

Source https://stackoverflow.com/questions/67861379

QUESTION

Recognize telnet protocol with Scapy python

Asked 2021-May-25 at 15:37

I am reading a Pcap file with Scapy. How can I recognize if, in this pcap file, there is a packet that uses the Telnet protocol?

I see that Scapy will label dport/sport as 'telnet' only if one of those ports is 23, but if I am using another port for Telnet, how do I recognize this with Scapy?

ANSWER

Answered 2021-May-24 at 19:35

Try doing

from scapy.all import PcapReader, IP, TCP

for pkt in PcapReader('your_file.pcap'):
    # you can try printing the summary to see the packet
    print(pkt.summary())
    # should print something like
    # IP / TCP 10.1.99.25:ftp_data > 10.1.99.2:telnet S

    pkt_src = pkt[IP].src
    pkt_proto = pkt[IP].proto          # IP has no .type field in Scapy; the protocol number is .proto
    pkt_payload = bytes(pkt[TCP].payload)
    if [...]

you could print the packet's fields to see under which key the telnet string falls and do some pattern matching

I saw a recommendation in another Stack Overflow answer to use PcapReader directly, rather than loading the whole capture into memory with rdpcap, but I lost the link
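Going beyond port numbers, one heuristic (my own sketch, not part of the original answer) is to look for Telnet option negotiation in the TCP payload: Telnet sessions typically begin with IAC bytes (0xff) followed by command codes in the 240-254 range. A minimal sketch, assuming the capture is named your_file.pcap:

from scapy.all import PcapReader, TCP, Raw

IAC = 0xFF                  # Telnet "Interpret As Command" marker
COMMANDS = range(240, 255)  # SE (240) .. DONT (254) negotiation codes


def looks_like_telnet(payload: bytes) -> bool:
    # Rough heuristic: option negotiation starts with IAC followed by a command byte.
    return len(payload) >= 2 and payload[0] == IAC and payload[1] in COMMANDS


for pkt in PcapReader('your_file.pcap'):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        if looks_like_telnet(bytes(pkt[Raw].load)):
            print("possible Telnet traffic:", pkt.summary())

This only flags packets that actually carry negotiation bytes, so a session whose negotiation wasn't captured would still be missed.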

Source https://stackoverflow.com/questions/67404757

QUESTION

Ruby: BUILD FAILED (macOS 11.2 using ruby-build 20210119) Mac Big Sur

Asked 2021-May-21 at 22:31

I looked at this Ruby installation (2.2.2) fails in macOS Big Sur

My macOS is Big Sur, version 11.2, and that was the closest issue I could find to the one I'm having with my OS. I followed what I could by trying

CFLAGS="-Wno-error=implicit-function-declaration" rbenv install 2.5.3

and also

RUBY_CFLAGS=-DUSE_FFI_CLOSURE_ALLOC rbenv install 2.5.3

This is the output in my Terminal:

Downloading openssl-1.1.1i.tar.gz...
-> https://dqw8nmjcqpjn7.cloudfront.net/e8be6a35fe41d10603c3cc635e93289ed00bf34b79671a3a4de64fcee00d5242
Installing openssl-1.1.1i...
Installed openssl-1.1.1i to /Users/richard/.rbenv/versions/2.5.3

Downloading ruby-2.5.3.tar.bz2...
-> https://cache.ruby-lang.org/pub/ruby/2.5/ruby-2.5.3.tar.bz2
Installing ruby-2.5.3...

WARNING: ruby-2.5.3 is nearing its end of life.
It only receives critical security updates, no bug fixes.

ruby-build: using readline from homebrew
/opt/homebrew/bin/ruby-build: line 1121: 31528 Killed: 9               "$RUBY_BIN" -e '
    manager = ARGV[0]
    packages = {
      "apt-get" => Hash.new {|h,k| "lib#{k}-dev" }.update(
        "openssl" => "libssl-dev",
        "zlib" => "zlib1g-dev"
      ),
      "yum" => Hash.new {|h,k| "#{k}-devel" }.update(
        "yaml" => "libyaml-devel"
      )
    }

    failed = %w[openssl readline zlib yaml].reject do |lib|
      begin
        require lib
      rescue LoadError
        $stderr.puts "The Ruby #{lib} extension was not compiled."
      end
    end

    if failed.size > 0
      $stderr.puts "ERROR: Ruby install aborted due to missing extensions"
      $stderr.print "Try running `%s install -y %s` to fetch missing dependencies.\n\n" % [
        manager,
        failed.map { |lib| packages.fetch(manager)[lib] }.join(" ")
      ] unless manager.empty?
      $stderr.puts "Configure options used:"
      require "rbconfig"; require "shellwords"
      RbConfig::CONFIG.fetch("configure_args").shellsplit.each { |arg| $stderr.puts "  #{arg}" }
      exit 1
    end
  ' "$(basename "$(type -p yum apt-get | head -1)")" 1>&4 2>&1

BUILD FAILED (macOS 11.2 using ruby-build 20210119)

Inspect or clean up the working tree at /var/folders/rn/c7nmr3x12gg5r8qwsr4ty8hh0000gn/T/ruby-build.20210209143521.94730.xfFT9O
Results logged to /var/folders/rn/c7nmr3x12gg5r8qwsr4ty8hh0000gn/T/ruby-build.20210209143521.94730.log

Last 10 log lines:
installing bundled gems:            /Users/richard/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0 (build_info, cache, doc, extensions, gems, specifications)
                                    power_assert 1.1.1
                                    net-telnet 0.1.1
                                    did_you_mean 1.2.0
                                    xmlrpc 0.3.0
                                    rake 12.3.0
                                    minitest 5.10.3
                                    test-unit 3.2.7
installing rdoc:                    /Users/richard/.rbenv/versions/2.5.3/share/ri/2.5.0/system
installing capi-docs:               /Users/richard/.rbenv/versions/2.5.3/share/doc/ruby

I get this error for both commands mentioned above and both give this same output. The version of Ruby also doesn't seem to matter, I've tried 3.0.0 as well and get the same results.

Additionally, this is the original output when I try to just install Ruby with rbenv install

Downloading openssl-1.1.1i.tar.gz...
-> https://dqw8nmjcqpjn7.cloudfront.net/e8be6a35fe41d10603c3cc635e93289ed00bf34b79671a3a4de64fcee00d5242
Installing openssl-1.1.1i...
Installed openssl-1.1.1i to /Users/richard/.rbenv/versions/2.5.3
Downloading ruby-2.5.3.tar.bz2...
-> https://cache.ruby-lang.org/pub/ruby/2.5/ruby-2.5.3.tar.bz2
Installing ruby-2.5.3...
WARNING: ruby-2.5.3 is nearing its end of life.
It only receives critical security updates, no bug fixes.
ruby-build: using readline from homebrew
BUILD FAILED (macOS 11.2 using ruby-build 20210119)
Inspect or clean up the working tree at /var/folders/rn/c7nmr3x12gg5r8qwsr4ty8hh0000gn/T/ruby-build.20210209143107.60561.YqaRpk
Results logged to /var/folders/rn/c7nmr3x12gg5r8qwsr4ty8hh0000gn/T/ruby-build.20210209143107.60561.log
Last 10 log lines:
compiling ../.././ext/psych/yaml/reader.c
compiling ../.././ext/psych/yaml/emitter.c
compiling ../.././ext/psych/yaml/parser.c
linking shared-object json/ext/generator.bundle
5 warnings generated.
linking shared-object date_core.bundle
linking shared-object zlib.bundle
1 warning generated.
linking shared-object psych.bundle
make: *** [build-ext] Error 2

xcode-select version is 2384.
Homebrew version is 3.0.0 and brew doctor says I'm ready to brew.

My .zshrc file also contains this line eval "$(rbenv init -)"

At this point I'm not sure where else to turn 🤷 If there are any specifics you want to see from the log file, let me know; the full log is too big to share here. Why is this happening and how can I fix this? 🤦‍♂️

ANSWER

Answered 2021-Feb-23 at 19:38

This is not an official solution. I'm sure the rbenv devs are working on an actual solution but this workaround should help others who are setting up their ruby environments on the new M1 chips for Mac.

  • Make sure your Terminal is using Rosetta. You can find how to do that using Google.

  • Uninstall your current rbenv following these instructions Removing rbenv. Be sure you also remove all the downloaded versions of ruby if you have any (minus the system default) located in /Users/<your user name>/.rbenv/versions/.

  • Uninstall the ARM version of Homebrew with: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall.sh)"

  • Install the x86_64 version of Homebrew with: arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

  • If you run brew install rbenv, it should produce output saying "Error: Cannot install in Homebrew on ARM processor in Intel default prefix (/usr/local)!". This is expected.

  • You want to tell brew to install for the older x86_64 architecture: arch -x86_64 brew install rbenv

  • Then finally install the version you want using arch -x86_64 rbenv install x.x.x (x = some number, e.g. 2.7.2)

From there you just need to remember to prefix the command with arch -x86_64 when installing other versions of Ruby.

Once an actual fix comes through you'll be able to switch back to the newer architecture and not have to use the arch argument. You also don't have to do this all the time with brew either, just rbenv.

Source https://stackoverflow.com/questions/66128681

Community Discussions contain sources that include Stack Exchange Network

Tutorials and Learning Resources in Telnet

Tutorials and Learning Resources are not available at this moment for Telnet
