aws-batch | Jenkins plugin for submitting jobs | Web Services library

 by nfultz | Java | Version: v3 | License: MIT

kandi X-RAY | aws-batch Summary

aws-batch is a Java library typically used in Web Services, Jenkins, and Docker applications. aws-batch has no reported bugs or vulnerabilities, has a build file available, has a permissive license, and has high support. You can download it from GitHub.

Now hosted in the Jenkins repo.

            kandi-support Support

              aws-batch has a highly active ecosystem.
              It has 8 stars, 9 forks, and 2 watchers.
              It had no major release in the last 12 months.
              There are 0 open issues and 3 closed issues. On average, issues are closed in 6 days. There are no open pull requests.
              It has a negative sentiment in the developer community.
              The latest version of aws-batch is v3.

            kandi-Quality Quality

              aws-batch has no bugs reported.

            kandi-Security Security

              aws-batch has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              aws-batch is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              aws-batch releases are available to install and integrate.
              Build file is available. You can build the component from source.

            Top functions reviewed by kandi - BETA

            kandi has reviewed aws-batch and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality aws-batch implements, and to help you decide if it suits your requirements.
            • Submit a job to AWS
            • Do the actual logging
            • Generate container overrides
            • Fetch logs from CloudWatch Logs
            • Gets the submit job request
            • Logs a single log step
            • Translate map to collection
            • Returns the descriptor
            • Handles the termination of the job
            • Gets the cpu value
            • Converts an integer or null to a string
            • Gets the command
            • Parses the string to an integer or returns null if the string is null
            • Gets the memory
            • The number of retries
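
            The first item above is the plugin's core task. As a rough sketch of what such a submission looks like with the AWS SDK for Java (v1), consider the following; the job, queue, and definition names are placeholders, and this is illustrative rather than the plugin's own code.

            import com.amazonaws.services.batch.AWSBatch;
            import com.amazonaws.services.batch.AWSBatchClientBuilder;
            import com.amazonaws.services.batch.model.ContainerOverrides;
            import com.amazonaws.services.batch.model.SubmitJobRequest;
            import com.amazonaws.services.batch.model.SubmitJobResult;

            public class SubmitExample {
                public static void main(String[] args) {
                    // Region and credentials are picked up from the environment
                    AWSBatch batch = AWSBatchClientBuilder.defaultClient();

                    SubmitJobRequest request = new SubmitJobRequest()
                            .withJobName("example-job")          // placeholder
                            .withJobQueue("example-queue")       // placeholder
                            .withJobDefinition("example-def:1")  // placeholder
                            .withContainerOverrides(new ContainerOverrides()
                                    .withCommand("echo", "hello"));

                    SubmitJobResult result = batch.submitJob(request);
                    System.out.println("Submitted job id: " + result.getJobId());
                }
            }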

            aws-batch Key Features

            No Key Features are available at this moment for aws-batch.

            aws-batch Examples and Code Snippets

            No Code Snippets are available at this moment for aws-batch.

            Community Discussions

            QUESTION

            Reverse SSH tunnel into AWS Batch array jobs
            Asked 2020-Dec-10 at 07:47

            What exactly would it take to reverse tunnel into an AWS Batch array job from the local computer submitting the job (e.g. via AWS CLI)? Unlike the typical reverse tunneling scenario, the remote nodes here do not share a local network with the local computer. Motivation: https://github.com/mschubert/clustermq/issues/208. Related: ssh into AWS Batch jobs.

            And yes, I am aware that SSH is easier in pure EC2, but Batch is preferable because of its support for arbitrary Docker images, easy job monitoring, and automatic spot pricing.

            ...

            ANSWER

            Answered 2020-Dec-10 at 07:47

            Use an Unmanaged Compute Environment. Then you can SSH into your EC2 instances as you normally would, since they are under your control. A managed compute environment means that your use of EC2 is abstracted away from you, so you cannot SSH into the underlying instances. To find out which instance a job is running on, you can use the metadata endpoint.
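
            For example, a job running on EC2 can usually discover the ID of the instance it is on via the instance metadata service. The endpoint address below is the standard IMDSv1 one, but whether it is reachable depends on your network and IMDS settings; this is a minimal Java sketch, not part of the aws-batch plugin.

            import java.io.BufferedReader;
            import java.io.InputStreamReader;
            import java.net.HttpURLConnection;
            import java.net.URL;

            public class InstanceId {
                public static void main(String[] args) throws Exception {
                    // EC2 instance metadata service; only reachable from the instance itself
                    URL url = new URL("http://169.254.169.254/latest/meta-data/instance-id");
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    conn.setConnectTimeout(2000);
                    try (BufferedReader in = new BufferedReader(
                            new InputStreamReader(conn.getInputStream()))) {
                        System.out.println("Running on instance: " + in.readLine());
                    }
                }
            }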

            Source https://stackoverflow.com/questions/65066533

            QUESTION

            FTP upload file failed in Docker container
            Asked 2020-Mar-18 at 15:17

            I have a Spring Boot app using the Apache commons-net library.
            I have an FTP server running on AWS EC2.
            FileZilla connects and uploads files normally.
            The app, run locally, connects and uploads files normally.
            When I run the app in Docker, it connects to the FTP server successfully, but it cannot upload files.
            The reply code is 500.

            Dockerfile:

            ...

            ANSWER

            Answered 2020-Mar-15 at 11:50

            You might want to read about the difference between active and passive mode FTP: What is the difference between active and passive FTP?

            You should come to the conclusion that active mode FTP is not suited for containerized FTP clients, because the server actively tries to initiate a data connection to the FTP client, which is not possible for a container (unless the container uses host networking, which is typically unwanted).

            With passive mode FTP, on the other hand, the server tells your FTP client which port to use for the next data connection.
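
            With Apache commons-net, switching to passive mode is usually a single call to enterLocalPassiveMode() after connecting. A minimal sketch; the host, credentials, and file names are placeholders.

            import java.io.FileInputStream;
            import org.apache.commons.net.ftp.FTP;
            import org.apache.commons.net.ftp.FTPClient;

            public class PassiveUpload {
                public static void main(String[] args) throws Exception {
                    FTPClient ftp = new FTPClient();
                    ftp.connect("ftp.example.com");   // placeholder host
                    ftp.login("user", "password");    // placeholder credentials
                    ftp.enterLocalPassiveMode();      // client opens the data connection itself
                    ftp.setFileType(FTP.BINARY_FILE_TYPE);
                    try (FileInputStream in = new FileInputStream("local.bin")) {
                        boolean ok = ftp.storeFile("remote.bin", in);
                        System.out.println("Uploaded: " + ok + ", reply code: " + ftp.getReplyCode());
                    }
                    ftp.logout();
                    ftp.disconnect();
                }
            }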

            Source https://stackoverflow.com/questions/60690703

            QUESTION

            How to define root volume size in AWS batch
            Asked 2019-Dec-25 at 04:48

            I'm using AWS Batch and I found the root volume size too low for my task.

            I tried creating a new compute environment/job queue, but there is no option to set the volume size. I tried changing the launch configuration from here, but the new launch configuration and/or autoscaling group are not considered by AWS Batch. I probably have to change dm.basesize, but it is unclear where this should be done.

            So, I set up a custom AMI from Amazon Linux 2 with 500 GB of storage and changed --storage-opt with dm.basesize=400GB as indicated here, but although my instances are spawned, the jobs remain in RUNNABLE state indefinitely. I inspected the possible causes as defined here:
            • "Enable auto-assign public IPv4 address" is checked;
            • the image should be good (it was validated when creating the environment, and instances can be spawned from it);
            • I have a limit of 5 instances for that instance type (but I am unable to run even 1);
            • my role permissions should be fine (I used the same roles successfully with a default amazonlinux image);
            • insufficient resources: the instances get spawned, so this should not be the problem;
            • connectivity: it should be working, since the autoscaling group reports a successful state.

            One possible solution may be to attach a specific AWS volume at runtime, but it would be limited and I'd like to find an automatic solution, since instead I'd have to manage several volumes for parallel execution.

            I also tried executing the task by piping input from an S3 bucket, analyzing the data, and piping output to a second S3 bucket, but I get a Connection Reset by Peer error each time, probably because the task runs for too long (I also set --cli-read-timeout to 0, but it doesn't fix it at all).

            Is there a way to configure root volume size for jobs in AWS batch?

            ...

            ANSWER

            Answered 2018-Oct-21 at 16:18

            The recommended solution is to use an unmanaged compute environment. Unfortunately, this ended up being poor advice, because not only is creating your own unmanaged compute environment difficult and esoteric, and not only does it somewhat defeat the entire purpose of AWS Batch, but there is a much better (and much simpler) solution.

            The solution to this problem is to create an Amazon Machine Image that is derived from the default AMI that AWS Batch uses. An AMI allows you to configure an operating system exactly how you want it by installing libraries, modifying startup scripts, customizing configuration files and, most importantly for our purposes, defining the logical partitioning and mount points of data volumes.

            1. Choose a base AMI to start from, configure your instance

            The AMIs we want to base ourselves off of are the official ECS-optimized AMIs. Take a gander at this page to find which AMI you need according to the AWS region you are running on.

            After identifying your AMI, click the “Launch instance” link in the right-hand column to open the launch wizard.

            Choose the t2.micro instance type.

            Choose Next: Configuration Details.

            Give your instance an appropriate IAM role if desired. What constitutes “appropriate” is at your discretion. Leave the rest of the default options. Click Next: Add Storage.

            Now is where you can configure what your data volumes will look like on your AMI. This step does not finalize the volume configuration for your AMI, but I find it useful to set things up the way you want here; you will have the opportunity to change this later, before you create your AMI. Once you’re done, click Next: Add Tags.

            Add any tags that you want (optional). Click Next: Configure Security Group.

            Select SSH for Type, and set the Source to be Anywhere, or if you are more responsible than me, set a specific set of IP ranges that you know you will be using to connect to your instance. Click Review and Launch.

            This page will allow you to review the options you have set. If everything looks good, then Launch. When it asks for a keypair, either select an existing keypair you’ve created or create a new one. Failing to do this step will leave you unable to connect to your instance.

            2. Configure your software environment

            After you have clicked launch, go to your EC2 dashboard to see your running instances.

            Wait for your instance to start, then right-click it. Click Connect, then copy-paste the example ssh command into an ssh-capable terminal. The -i "keyname.pem" is actually a path to your .pem file, so make sure you either cd to your ~/.ssh directory or change the flag’s value to the path where you stored your private SSH key. You may also need to change “root” to “ec2-user”.

            After you have logged in, you can configure your VM however you want by installing any packages, libraries, and configurations you need your VM to have. If you used the AWS-provided ECS-optimized AMI, your AMI will already meet the base requirements for an ECS AMI. If you for some (strange) reason choose not to use the ECS-optimized AMI, you will have to install and configure the following packages:

            1. The latest version of the Amazon ECS container agent
            2. The latest version of the ecs-init agent
            3. The recommended version of Docker for your version of the ECS container agent.

            Also note that if you want to attach another volume separate from your root volume, you will want to modify the /etc/fstab file so that your new volume is mounted on instance startup. I refer you to Google on how to do this.
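
            As a hedged illustration only (the device name and mount point below are placeholders; adjust them for your actual volume and filesystem), an /etc/fstab entry for an extra data volume typically looks like this:

            /dev/xvdb   /data   ext4   defaults,nofail   0   2

            The nofail option keeps the instance booting even if the volume is missing, which is usually what you want in an autoscaling group.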

            3. Save your AMI

            After all of your software configuration and installation is done, go back to your EC2 dashboard and view your running instances.

            Right click on the instance you just made. Hover over Image, then select Create Image.

            You’ll see that this has the volume configuration you chose in Step 1. I did not change my volumes from their default settings; the defaults for the ECS-optimized AMI are 8GB for /dev/xvda (root) and 22GB for /dev/xvdc (Docker images, etc.). Ensure that the Delete on Termination options are selected so that your Batch compute environment removes the volumes after the instances are terminated, or else you’ll risk creating an unbounded number of EBS volumes (very expensive, so I’m told). I will configure my AMI to have only 111GB of root storage and nothing else. You do not necessarily need a separate volume for Docker.

            Give your image a name and description, then select Create Image.

            Your instance will be rebooted. Once the instance is turned off, AWS will create an image of it, then turn the instance back on.

            In your EC2 console, go to Images, AMIs on the left hand side. After a few minutes, you should see your newly created AMI in the list.

            4. Configure AWS Batch to use your new AMI

            Go back to your AWS dashboard and navigate to the AWS Batch page. Select Compute environments on the left hand side. Select Create environment.

            Configure your environment by selecting the appropriate IAM roles for both your container (Service Role) and EC2 instance (Instance role), provisioning model, networking, and tags.

            Source https://stackoverflow.com/questions/52768937

            QUESTION

            How to pass script arguments to AWS Batch fetch-and-run
            Asked 2018-Oct-11 at 19:41

            I am following this tutorial to run a simple fetch-and-run example in AWS Batch. However, I'm unable to pass arguments to the script fetched through this example.

            The basic example will produce this execution:

            ...

            ANSWER

            Answered 2018-Oct-11 at 19:41
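
            A common approach here, offered as a hedged illustration rather than the original accepted answer, is to append the arguments after the script name in the container command override, since the fetch-and-run entrypoint forwards its remaining arguments to the downloaded script. A minimal Java sketch with placeholder names:

            import com.amazonaws.services.batch.AWSBatchClientBuilder;
            import com.amazonaws.services.batch.model.ContainerOverrides;
            import com.amazonaws.services.batch.model.SubmitJobRequest;

            public class FetchAndRunArgs {
                public static void main(String[] args) {
                    SubmitJobRequest request = new SubmitJobRequest()
                            .withJobName("fetch-and-run-example")  // placeholder
                            .withJobQueue("example-queue")         // placeholder
                            .withJobDefinition("fetch_and_run")    // placeholder job definition
                            .withContainerOverrides(new ContainerOverrides()
                                    // first element: script to fetch; the rest are its arguments
                                    .withCommand("myjob.sh", "60"));
                    AWSBatchClientBuilder.defaultClient().submitJob(request);
                }
            }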

            QUESTION

            CannotStartContainerError: API error (400): OCI runtime create failed: container_linux.go:348
            Asked 2018-Oct-11 at 16:43

            I am trying to run a script through AWS Batch, following the tutorial from here. In particular, the entry point script is the same: it is a script that downloads the code to be executed in AWS Batch from an S3 bucket. However, no matter how I try to execute it on AWS, I always receive:

            ...

            ANSWER

            Answered 2018-Oct-11 at 16:43

            The culprit was in the Job Definition (from the AWS console, see "Create a job definition" from here). For the ECR Repository URI, I forgot to use the URI of my updated image (e.g. 012345678901.dkr.ecr.us-east-1.amazonaws.com/awsbatch/fetch_and_run); I was instead using the default amazonlinux image.

            The main hint was that I was able to run it locally.

            Source https://stackoverflow.com/questions/52764054

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install aws-batch

            You can download it from GitHub.
            You can use aws-batch like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the aws-batch component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org; for Gradle installation, please refer to gradle.org.
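
            For illustration, a Maven dependency entry has the following shape. The coordinates below are hypothetical; check the project's own POM (or the Jenkins plugin site, since the plugin is now hosted in the Jenkins repo) for the real groupId, artifactId, and version.

            <dependency>
                <groupId>org.jenkins-ci.plugins</groupId>  <!-- hypothetical -->
                <artifactId>aws-batch</artifactId>         <!-- hypothetical -->
                <version>v3</version>                      <!-- hypothetical -->
            </dependency>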

            Support

            For new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/nfultz/aws-batch.git

          • CLI

            gh repo clone nfultz/aws-batch

          • SSH URL

            git@github.com:nfultz/aws-batch.git


            Try Top Libraries by nfultz

            • nb2mail by nfultz (Python)
            • grpc by nfultz (C++)
            • tron by nfultz (R)
            • AWR.Athena by nfultz (R)
            • stackoverflow by nfultz (R)