sshauthenticator | A simple SSH authenticator for JupyterHub | SSH Utils library

 by andreas-h · Python · Version: current · License: BSD-3-Clause

kandi X-RAY | sshauthenticator Summary

sshauthenticator is a Python library typically used in Utilities, SSH Utils, Jupyter applications. sshauthenticator has no bugs, it has no vulnerabilities, it has build file available, it has a Permissive License and it has low support. You can download it from GitHub.

A simple SSH authenticator for JupyterHub

            Support

              sshauthenticator has a low-activity ecosystem.
              It has 5 stars and 1 fork. There are 2 watchers for this library.
              It has had no major release in the last 6 months.
              sshauthenticator has no reported issues and no open pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of sshauthenticator is current.

            Quality

              sshauthenticator has no bugs reported.

            Security

              sshauthenticator has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            License

              sshauthenticator is licensed under the BSD-3-Clause License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              sshauthenticator releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.

            Top functions reviewed by kandi - BETA

            kandi has reviewed sshauthenticator and surfaced the following top functions. This is intended to give you an instant insight into the functionality sshauthenticator implements, and to help you decide whether it suits your requirements.
            • Authenticate using SSH.
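For context on that single reviewed function ("Authenticate using SSH"): a JupyterHub authenticator is wired in through jupyterhub_config.py. A minimal sketch follows, assuming a module path and class name that are NOT confirmed from this package:

```python
# jupyterhub_config.py -- sketch only; the import path below is an assumption,
# check sshauthenticator's source for the real class name before using it.
c.JupyterHub.authenticator_class = 'sshauthenticator.SSHAuthenticator'

# An SSH-based authenticator also needs a host to authenticate against;
# these option names are hypothetical placeholders.
# c.SSHAuthenticator.server_address = 'login.example.org'
# c.SSHAuthenticator.server_port = 22
```

JupyterHub's `c.JupyterHub.authenticator_class` setting itself is standard; only the sshauthenticator-specific names above are guesses.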

            sshauthenticator Key Features

            No Key Features are available at this moment for sshauthenticator.

            sshauthenticator Examples and Code Snippets

            No Code Snippets are available at this moment for sshauthenticator.

            Community Discussions

            QUESTION

            Jenkins Slave Offline Node Connection timed out / closed - Docker container - Relaunch step - Configuration is picking Old Port
            Asked 2017-Jul-28 at 19:05

            Jenkins version: 1.643.2

            Docker Plugin version: 0.16.0

            In my Jenkins environment, I have a Jenkins master with 2-5 slave node servers (slave1, slave2, slave3).

            Each of these slaves is configured in the Jenkins global configuration using the Docker Plugin.

            Everything is working at this minute.

            I saw our monitoring system throw alerts for high swap usage on slave3 (for example, IP 11.22.33.44), so I SSH'ed to that machine and ran sudo docker ps, which gave me valid output for the Docker containers currently running on slave3.

            By running ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -10 on the target slave's machine (where 4 containers were running), I found that the top 5 processes eating all the RAM were the java -jar slave.jar processes running inside each container. So I thought, why not restart them and recoup some memory.

            In the following output, you can see the state of sudo docker ps before and after the restart step. Scrolling right, you'll notice that in the 2nd line, for the container ID ending in ...0a02, the virtual port on the host (slave3) machine (listed under the NAMES heading) was 1053, which was mapped to port 22 (SSH) of the container's virtual IP. Cool. What this means is: when you try to Relaunch a slave's container from the Jenkins Manage Nodes section, Jenkins will try to connect to the host IP at 11.22.33.44:1053 and do whatever it needs to bring the slave up. So Jenkins is holding on to that port (1053) somewhere.

            ...

            ANSWER

            Answered 2017-Jul-28 at 19:05

            OK. Zeeesus!

            In JENKINS_HOME (of the MASTER server), I searched for which config file was holding the OLD port number for the container node(s) that were now showing as OFFLINE.

            Changed directory to the nodes folder inside $JENKINS_HOME and found that there is a config.xml file for each node.

            For ex: $JENKINS_HOME/nodes/-d4745b720a02/config.xml

            Resolution Steps:

            1. Vim-edited the file to replace the OLD port with the NEW one.
            2. Manage Jenkins > Reload configuration from disk.
            3. Manage Nodes > selected the particular node which was OFFLINE.
            4. Relaunched the slave; this time Jenkins picked up the new PORT and started the container slave as expected (an SSH connection to the new port was visible after the configuration change).
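Step 1 above can be sketched as a shell session. The node hash and port numbers come from the question, but the XML layout is a simplified stand-in for a real docker-plugin node config, and 1060 is a made-up replacement port:

```shell
# Illustrative reproduction: recreate the layout, then fix the stale port.
JENKINS_HOME=$(mktemp -d)
mkdir -p "$JENKINS_HOME/nodes/-d4745b720a02"
cat > "$JENKINS_HOME/nodes/-d4745b720a02/config.xml" <<'EOF'
<slave>
  <launcher class="hudson.plugins.sshslaves.SSHLauncher">
    <host>11.22.33.44</host>
    <port>1053</port>
  </launcher>
</slave>
EOF

# Find which node config still references the stale port.
grep -rl '<port>1053</port>' "$JENKINS_HOME/nodes"

# Swap in the port the relaunched container actually listens on.
sed -i 's|<port>1053</port>|<port>1060</port>|' \
  "$JENKINS_HOME/nodes/-d4745b720a02/config.xml"

grep '<port>' "$JENKINS_HOME/nodes/-d4745b720a02/config.xml"
# Steps 2-4 (reload configuration from disk, relaunch) happen in the Jenkins UI.
```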

            I think this page, https://my.company.jenkins.instance.com/projectInstance/docker-plugin/server//, which shows all the container info (in tabular form) running on a given slave machine, has a button (last column) to STOP a given slave's container, but none to START or RESTART it.

            Having a START or RESTART button there would do, in some fashion, what I just did above.

            Better solution:

            What was happening is this: all 4 long-lived container nodes running on slave3 were competing for all the available RAM (11-12 GB). Over time, the JVM process for each individual container (the java -jar slave.jar that the Relaunch step starts on the target container running on the slave3 server) would try to take as much memory (RAM) as it could. That led to low FREE memory, so SWAP got used, and used up to the point where the monitoring tool started screaming at us with notifications.

            To fix this situation, first thing one should do is:

            1) Under the Jenkins global configuration (Manage Jenkins > Configure System > Docker Plugin section), for that slave server's Image / Docker Template, under the Advanced Settings section, we can put JVM options to tell the container NOT to compete for all the RAM. Putting the following JVM options helped. These JVM settings will try to keep the heap space of each container in a smaller box, so as not to starve out the rest of the system.

            You can start with 3-4GB depending upon how much total RAM you have on your slave/machine where the containers based slave nodes will be running.
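Heap-capping JVM options of the kind described above typically look like the following; the exact values the answer used are illustrative here and should be sized against the slave's total RAM:

```
-Xms512m -Xmx3g
```

With -Xmx3g, each of the four slave.jar JVMs is boxed to at most 3 GB of heap, which on an 11-12 GB host still leaves some headroom for the OS and the builds themselves.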

            2) Look for a recent version of slave.jar; it may have performance / maintenance enhancements in place that will help.

            3) Integrate the monitoring solution you have (Icinga, etc.) to auto-launch a Jenkins job on such an alert, where the job runs some piece of remediation (a Bash one-liner, a Python script, some Groovy goodness, an Ansible playbook, etc.) to fix the issue.

            4) Automatically relaunch long-lived container slave nodes (i.e., the Relaunch step: take the slave offline, bring it online, relaunch), as that brings the slave back to a rejuvenated state of freshness. All we have to do is look for an idle slave (one not running any job), then take it offline > online > relaunch it using the Jenkins REST API via a small Groovy script, put all of this in a Jenkins job, and let that job do the above.

            5) OR one can spin the container based slaves on the fly - use and throw model each time Jenkins queues a job to run.
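The offline/online cycle in step 4 can be driven over Jenkins' standard computer REST API. A minimal sketch follows, assuming the usual /computer/&lt;node&gt;/toggleOffline endpoint; the base URL and node name are placeholders, and whether a direct relaunch endpoint exists depends on the Docker Plugin version, so verify against your instance:

```python
# Build the POST URLs for taking a node offline and back online via the
# Jenkins REST API. Base URL and node name below are placeholders.
from urllib.parse import quote, urlencode

def node_action_url(base, node, action, **params):
    """URL for a node action, e.g. POST .../computer/<node>/toggleOffline."""
    url = f"{base.rstrip('/')}/computer/{quote(node, safe='')}/{action}"
    if params:
        url += "?" + urlencode(params)
    return url

base = "https://jenkins.example.com"
node = "-d4745b720a02"

# toggleOffline flips the node's offline state: POST once to take the idle
# slave offline, and once more (after relaunch) to bring it back online.
print(node_action_url(base, node, "toggleOffline", offlineMessage="rejuvenate"))
# -> https://jenkins.example.com/computer/-d4745b720a02/toggleOffline?offlineMessage=rejuvenate
```

Sending these as authenticated POSTs (e.g. with an API token) is left out here; the actual relaunch of a docker-plugin slave may need the node page or the small Groovy script mentioned above.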

            Source https://stackoverflow.com/questions/45363095

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install sshauthenticator

            You can install it from source: clone the GitHub repository and run pip install . in the checkout (no packaged releases are available).

            Support

            For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check the community page or ask on Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/andreas-h/sshauthenticator.git

          • CLI

            gh repo clone andreas-h/sshauthenticator

          • sshUrl

            git@github.com:andreas-h/sshauthenticator.git


            Consider Popular SSH Utils Libraries

            • openssl by openssl
            • solid by solid
            • Bastillion by bastillion-io
            • sekey by sekey
            • sshj by hierynomus

            Try Top Libraries by andreas-h

            • pyloess (C)
            • smartypants.py (Python)
            • pyatran (Python)
            • bibtex_js (JavaScript)
            • andreas.hilboll.de (JavaScript)