amazon-ecs | Amazon Product Advertising Ruby API | AWS library

by jugend | Ruby | Version: Current | License: MIT

kandi X-RAY | amazon-ecs Summary

amazon-ecs is a Ruby library typically used in Cloud, AWS applications. amazon-ecs has no bugs, it has no vulnerabilities, it has a Permissive License and it has low support. You can download it from GitHub.

Amazon Product Advertising Ruby API

            Support

              amazon-ecs has a low active ecosystem.
              It has 560 stars, 97 forks, and 21 watchers.
              It had no major release in the last 6 months.
              There are 4 open issues and 31 have been closed. On average, issues are closed in 179 days. There is 1 open pull request and 0 closed requests.
              It has a neutral sentiment in the developer community.
              The latest version of amazon-ecs is current.

            Quality

              amazon-ecs has 0 bugs and 2 code smells.

            Security

              amazon-ecs has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              amazon-ecs code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              amazon-ecs is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              amazon-ecs releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.
              amazon-ecs saves you 2471 person hours of effort in developing the same functionality from scratch.
              It has 5380 lines of code, 67 functions and 3 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed amazon-ecs and discovered the below as its top functions. This is intended to give you an instant insight into the functionality amazon-ecs implements, and to help you decide if it suits your requirements.
            • Search for a path
            • Get an array of elements with the given path
            • Get the element for a given path
            • Get the value of a hash
            • Get the value of an element
            • Fetch the element
            • Retrieve the value of the current element
            • Get an array of elements
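
            For orientation, here is a minimal usage sketch based on the gem's documented item_search and path-based get interface; the credentials, associate tag, and search term below are placeholders:

            require 'amazon/ecs'

            # Placeholder credentials for the Product Advertising API.
            Amazon::Ecs.configure do |options|
              options[:AWS_access_key_id] = 'YOUR_ACCESS_KEY_ID'
              options[:AWS_secret_key]    = 'YOUR_SECRET_KEY'
              options[:associate_tag]     = 'YOUR_ASSOCIATE_TAG'
            end

            # Search the US locale and read values back with the path-based
            # accessors listed above (get, get_array, get_element, get_hash).
            res = Amazon::Ecs.item_search('ruby programming', :response_group => 'ItemAttributes', :country => 'us')
            res.items.each do |item|
              puts item.get('ItemAttributes/Title')
            end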

            amazon-ecs Key Features

            No Key Features are available at this moment for amazon-ecs.

            amazon-ecs Examples and Code Snippets

            No Code Snippets are available at this moment for amazon-ecs.

            Community Discussions

            QUESTION

            How to configure ephemeral storage on ECS Fargate Task via Ruby SDK?
            Asked 2021-Jun-14 at 09:28

            I'm using the Ruby SDK for AWS ECS to kick-off a task hosted in Fargate via run_task method. This all works fine with the defaults — I can kick off the task OK and can send along custom command parameters to my Docker container:

            ...

            ANSWER

            Answered 2021-Jun-14 at 09:28

            This was a bug in the SDK, which has since been fixed (server-side, so it doesn't require a library update).

            The block of code in the question is the correct way to increase ephemeral storage via the Ruby SDK:
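
            The original snippet is not reproduced here; as a hedged sketch (cluster, task definition, subnet, and container names are placeholders), the call with the aws-sdk-ecs gem looks roughly like this:

            require 'aws-sdk-ecs'

            ecs = Aws::ECS::Client.new(region: 'us-east-1')

            resp = ecs.run_task(
              cluster: 'my-cluster',              # placeholder cluster name
              task_definition: 'my-task:1',       # placeholder task definition
              launch_type: 'FARGATE',
              network_configuration: {
                awsvpc_configuration: {
                  subnets: ['subnet-0123456789abcdef0'],
                  assign_public_ip: 'ENABLED'
                }
              },
              overrides: {
                # Fargate ephemeral storage can be raised per run (21-200 GiB).
                ephemeral_storage: { size_in_gib: 100 },
                container_overrides: [
                  { name: 'app', command: ['bundle', 'exec', 'rake', 'my:task'] }
                ]
              }
            )
            puts resp.tasks.first.task_arn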

            Source https://stackoverflow.com/questions/67607006

            QUESTION

            How does CodeDeploy work with dynamic port mapping?
            Asked 2021-Feb-18 at 18:05

            I have been trying for weeks to make CodeDeploy / CodePipeline work for our solution, to get some sort of CI/CD and make deployments faster and safer, etc.

            As I keep diving into it, I feel like either I am not doing it the right way at all, or it is simply not suitable for our case.

            What our AWS infrastructure looks like:

            • We have an ECS cluster that for now contains one service (on EC2), associated with one or more tasks: a reverse proxy and an API. The reverse proxy listens internally on port 80 and, when reached, proxies the request internally to the API on port 5000.

            • We have an application load balancer associated with this service, which is publicly reachable. It currently has 2 listeners, HTTP and HTTPS. Both listeners redirect to the same target group, which only contains the instance(s) where our reverse proxy runs. Note that the instance port to redirect to is random (check this link).

            • We have an auto scaling group that scales the number of instances depending on the number of calls to the application load balancer.

            What we may have in the future:

            • Other tasks will run on the same instances as our API. For example, we may create another API in the same cluster, on another port, with another reverse proxy, and yet another load balancer. We may also have some batch jobs running, and other things.

            What the problem is:

            Well, for now, deploying "manually" (that is, telling the service to make a new deployment on ECS) doesn't work. CodeDeploy is stuck at creating replacement tasks, and when I look at the service's logs, there is the following error:

            service xxxx-xxxx was unable to place a task because no container instance met all of its requirements. The closest matching container-instance yyyy is already using a port required by your task.

            I don't really understand this, since port assignment is random, but maybe CodeDeploy operates before that and simply sees that the assigned port is 0, the same as in the previous task definition?

            I don't really know how to resolve this, and I even doubt that CodeDeploy is usable in our case...

            -- Edit 02/18/2021 --

            So, I now know why it is not working. Like I said, the host port that the reverse proxy listens on is random. But the port that my API listens on is still not random.

            But now, even if I make the API port random like the reverse proxy one, how would my reverse proxy know on which port the API is reachable? I tried to link containers, but it seems that it doesn't work in the configuration file (I use nginx as the reverse proxy).

            --

            Not specifying hostPort seems to assign a "random" port on the host.

            But still, since NGINX and the API are two different containers, I would need my first NGINX container to call my first API container, which is API:32798. I think I'm missing something.

            ...

            ANSWER

            Answered 2021-Feb-18 at 15:41

            You're probably getting this port conflict because you have two tasks on the same host that both want to map port 80 of the host into their containers.

            I've tried to visualize the conflict:

            The violet boxes share a port namespace, and so do the green and orange boxes. This means that within each box you can use each port from 1 to ~65k only once. When you explicitly require a host port, ECS will try to map the violet port 80 to two container ports, which doesn't work.

            You don't want to explicitly map these container ports to a host port; let ECS worry about that.

            Just specify the container port in the load balancer integration of the service definition and it will do the mapping for you. If you set the container port to 80, this refers to the green port 80 and the orange port 80. ECS will expose these on random host ports and automatically register those ports with the load balancer.

            Service Definition docs (search for containerPort)
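
            To make the dynamic mapping concrete, here is a hedged sketch with the aws-sdk-ecs Ruby gem (cluster name, target group ARN, image, and counts are placeholders): the task definition sets only container_port, so ECS picks a random host port, and the service's load balancer block references the container port only.

            require 'aws-sdk-ecs'

            ecs = Aws::ECS::Client.new(region: 'eu-west-1')

            # Only container_port is given; with bridge networking ECS assigns a
            # random host port for each running task.
            ecs.register_task_definition(
              family: 'reverse-proxy',
              network_mode: 'bridge',
              container_definitions: [
                {
                  name: 'nginx',
                  image: 'nginx:1.21',
                  memory: 256,
                  essential: true,
                  port_mappings: [{ container_port: 80, protocol: 'tcp' }]
                }
              ]
            )

            # The service registers each task's dynamic host port with the target group.
            ecs.create_service(
              cluster: 'my-cluster',
              service_name: 'reverse-proxy',
              task_definition: 'reverse-proxy',
              desired_count: 2,
              load_balancers: [
                {
                  target_group_arn: 'arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/example/abc123',
                  container_name: 'nginx',
                  container_port: 80
                }
              ]
            )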

            Source https://stackoverflow.com/questions/66262708

            QUESTION

            Run `docker compose` in GitLab pipeline
            Asked 2021-Jan-13 at 21:50

            I need to deploy an application stack to ECS. To do this, I use the new Docker Compose ECS integration (see e.g. here).

            In a few words, everything boils down to using the correct docker context and launching the command docker compose up. This is of great help and very quick.

            I would like to automate the deploy process in a GitLab pipeline.

            What (Docker) image should I use in my pipeline to be able to run the new docker compose command?

            Thanks in advance.

            ...

            ANSWER

            Answered 2021-Jan-13 at 21:50

            So to elaborate on the idea from the comments:

            The docker compose binary is directly woven into the docker command itself, essentially being an extension to it.

            As I see it, there are now two main options:

            1. You set up a dedicated GitLab runner that uses the normal shell executor. You then install Docker on that machine and also set up the compose-cli according to this manual. Then you can start deploying with the compose command.

            2. You create a Docker image that provides the docker compose command. An example Dockerfile could look like this:

            Source https://stackoverflow.com/questions/65683256

            QUESTION

            Call load balancer from private subnet lambda function
            Asked 2020-Nov-26 at 07:38

            I have an internet-facing load balancer in a public subnet and a private hosted zone with a CNAME record for the load balancer, as described here. I try to request the record from a Lambda in a private subnet, which times out. I thought it was VPC-internal and should resolve. Is that possible at all?

            ...

            ANSWER

            Answered 2020-Nov-25 at 22:26

            I try to request the record from a lambda in a private subnet which times out. I thought it is VPC internal and should resolve. Is that possible at all?

            If you create a CNAME in a private hosted zone pointing to the public DNS name of your internet-facing load balancer, the traffic will go over the internet anyway. To keep the traffic private, you need an internal load balancer, not a public one.

            Thus, for your Lambda function to access the ALB, you need a NAT gateway to give it access to the internet.

            Source https://stackoverflow.com/questions/65012476

            QUESTION

            How to cache maven repo when building Spring Boot docker image on Jenkins
            Asked 2020-Nov-24 at 07:04

            I'm trying to build Docker images for a Spring Boot application (2.3.6.RELEASE) using the spring-boot-maven-plugin build-image goal (buildpacks), but it's downloading the internet every time! Is there a way to mount the .m2 directory into the buildpack, so it can use dependencies from the cache?

            ...

            ANSWER

            Answered 2020-Nov-24 at 07:04

            There's probably a better way to do it, but I got it working by adding: -Dmaven.repo.local=/home/jenkins/.m2/repository, so:

            Source https://stackoverflow.com/questions/64921606

            QUESTION

            Unable to connect to RDS from Elastic Beanstalk Docker
            Asked 2020-Nov-02 at 11:22

            My Spring Boot app, deployed on Elastic Beanstalk's Docker platform, is unable to connect to an external RDS instance. It always gets stuck at "com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Starting..." during app startup.

            Dockerfile (I added the DB connection details in ENTRYPOINT for troubleshooting purposes)

            ...

            ANSWER

            Answered 2020-Nov-02 at 11:22

            Based on the comments.

            The issue was caused by insufficient memory allocated to the container.

            The solution was to increase the memory.

            Source https://stackoverflow.com/questions/64630714

            QUESTION

            docker-compose app container can't connect to mongo container
            Asked 2020-Sep-04 at 11:42

            I'm trying to run my dotnet core project and mongodb as container services with docker-compose. Both services have a clean start with no errors. When I call an endpoint that interacts with mongo I get a timeout error. Since I'm using docker-compose I expect that I can reference the mongo service by the compose service name in the connection string.

            The connection string is mongo:27017/api?authSource=api with username api and password password123, as seen in the docker-compose file below. Instead I get this error:

            ...

            ANSWER

            Answered 2020-Sep-04 at 05:31

            Have you tried with YAML file version: '3.7'? If that still doesn't work, please try compose with no network defined.

            Source https://stackoverflow.com/questions/63731966

            QUESTION

            Can't access running EC2 Dockerized image from outside
            Asked 2020-Aug-31 at 11:25
            The problem

            I can't access a running docker image from outside of the EC2 instance.

            What I've tried
            • I created a cluster in ECS, a service with a related task definition and an Application Load Balancer.
            • When the task gets executed I can see the logs from the Docker image in the task:
            • I also see the related EC2 instance running. When I ssh into the instance I can see the docker image running, as expected:
            ...

            ANSWER

            Answered 2020-Aug-31 at 11:25

            Did you configure the security group of your EC2 instance, or the NACL of the VPC where the EC2 instance is launched?

            I see that you expose port 5001 in your task, so you should open that port in the security group.
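
            If the security group turns out to be the blocker, a hedged sketch of opening that port with the aws-sdk-ec2 Ruby gem (the group ID is a placeholder, and in practice you would restrict the source to the load balancer's security group rather than 0.0.0.0/0):

            require 'aws-sdk-ec2'

            ec2 = Aws::EC2::Client.new(region: 'us-east-1')

            # Allow inbound TCP 5001 on the instance's security group (placeholder ID).
            ec2.authorize_security_group_ingress(
              group_id: 'sg-0123456789abcdef0',
              ip_permissions: [
                {
                  ip_protocol: 'tcp',
                  from_port: 5001,
                  to_port: 5001,
                  ip_ranges: [{ cidr_ip: '0.0.0.0/0', description: 'app port' }]
                }
              ]
            )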

            Source https://stackoverflow.com/questions/63669507

            QUESTION

            Running ECS task in a cluster within a private subnet remains in provisioning status
            Asked 2020-Aug-29 at 16:25

            We want to build an ECS cluster with the following characteristics:

            1. It must run inside a VPC; therefore, we need the awsvpc mode.
            2. It must use GPU instances, so we can't use Fargate.
            3. It must provision the instances dynamically; therefore, we need a capacity provider.
            4. It will run tasks (batch jobs) that are triggered directly through the AWS ECS API. For this reason, we don't need a service, only a task definition.
            5. These tasks must have access to S3 (the internet), so according to the AWS documentation the instances must be placed inside a private subnet (a reference to the docs).

            We've already read this post on Stack Overflow, where it says that we need to set up a private subnet with a route table that points to a NAT gateway configured in a public subnet, and this public subnet should point to an internet gateway. We already have this configuration. We also have an S3 VPC endpoint configured in the route table.

            Below, you can see some relevant configuration of the cluster in Terraform (for the sake of simplicity I only include the relevant parts):

            ...

            ANSWER

            Answered 2020-Aug-29 at 16:25

            Finally!! Solved the mystery!

            The problem wasn't in the cluster configuration. When calling run_task through the ECS API, you need to specify the subnets the task should run in.

            Our code was setting this field to one of the public subnets. That is why the task was only placed once we moved the container instances to the availability zone corresponding to that public subnet.

            After changing this call in the code, the task is placed correctly and has access to the internet.
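
            As a hedged illustration of that fix with the aws-sdk-ecs Ruby gem (cluster, task definition, capacity provider, subnet, and security group IDs are placeholders), the run_task call should reference the private subnets:

            require 'aws-sdk-ecs'

            ecs = Aws::ECS::Client.new(region: 'us-west-2')

            ecs.run_task(
              cluster: 'gpu-cluster',
              task_definition: 'batch-job:3',
              capacity_provider_strategy: [
                { capacity_provider: 'gpu-capacity-provider', weight: 1 }
              ],
              network_configuration: {
                awsvpc_configuration: {
                  # Private subnets whose route table points at the NAT gateway.
                  subnets: ['subnet-0aaa1111bbbb2222c'],
                  security_groups: ['sg-0123456789abcdef0'],
                  assign_public_ip: 'DISABLED'
                }
              }
            )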

            Source https://stackoverflow.com/questions/63621979

            QUESTION

            Task definition CPU reservation on AWS ECS EC2
            Asked 2020-Aug-19 at 17:17

            I am building my cluster on ECS using EC2 instances. I am curious about specifying CPU reservation in my task definitions. How does AWS manage my tasks inside EC2 instances when I leave the CPU reservation empty or set it to 0?

            I have read this article: https://aws.amazon.com/blogs/containers/how-amazon-ecs-manages-cpu-and-memory-resources/

            And here it says:

            when you don’t specify any CPU units for a container, ECS intrinsically enforces two Linux CPU shares for the cgroup (which is the minimum allowed).

            I am not really sure what this means, and is it different for tasks, since this is specifically stated for containers?

            ...

            ANSWER

            Answered 2020-Aug-19 at 17:17

            Cgroups are a feature of the Linux kernel that control the distribution and hierarchy of the services that run on your host.

            This enables your containers to operate independently from each other (they will have access to a portion of the available CPU), whilst also providing the ability for higher-priority tasks to gain access to the CPU if required.

            A CPU share defines how much of the overall CPU your container can access; as you add more containers, this becomes a ratio dividing the CPU between them. In your case each container will get 2 shares; if there are 4 containers, that is a ratio of 0.25 of the available CPU for each one.

            If you define limits in a task, you can cap the maximum amount of the host's resources that can be used, and the CPU shares are then split as a ratio of that cap. However, this will affect the scheduling of new containers (if there is not enough resource available for the task and auto scaling is not enabled, your chosen task cannot be scheduled).

            There is some documentation on cgroups here; it is technical, so if you have little experience with Linux it might be a little confusing.
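
            As a hedged illustration with the aws-sdk-ecs Ruby gem (family, image, and values are placeholders), you avoid falling back to the 2-share minimum by reserving CPU units explicitly on the container in the task definition:

            require 'aws-sdk-ecs'

            ecs = Aws::ECS::Client.new(region: 'us-east-1')

            # 1024 CPU units correspond to one vCPU; 256 reserves a quarter of a
            # vCPU worth of shares for this container relative to the others on
            # the same instance.
            ecs.register_task_definition(
              family: 'api',
              container_definitions: [
                {
                  name: 'api',
                  image: 'my-api:latest',
                  cpu: 256,
                  memory: 512,
                  essential: true
                }
              ]
            )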

            Source https://stackoverflow.com/questions/63491597

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install amazon-ecs

            You can download it from GitHub.
            On a UNIX-like operating system, using your system's package manager is easiest; however, the packaged Ruby version may not be the newest one. There is also an installer for Windows. Version managers help you switch between multiple Ruby versions on your system, while installers can be used to install a specific Ruby version or several versions. Please refer to ruby-lang.org for more information.
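
            Since the project does not publish installation instructions, one hedged option is to point Bundler straight at the GitHub repository; the Gemfile line below is a sketch, not official guidance:

            # Gemfile (sketch): pull the library directly from GitHub, then run `bundle install`.
            gem 'amazon-ecs', git: 'https://github.com/jugend/amazon-ecs.git'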

            Support

            For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/jugend/amazon-ecs.git

          • CLI

            gh repo clone jugend/amazon-ecs

          • SSH

            git@github.com:jugend/amazon-ecs.git


            Consider Popular AWS Libraries

            • localstack by localstack
            • og-aws by open-guides
            • aws-cli by aws
            • awesome-aws by donnemartin
            • amplify-js by aws-amplify

            Try Top Libraries by jugend

            • fgraph by jugend (Ruby)
            • pluit-carousel by jugend (HTML)
            • common-pool by jugend (Ruby)
            • webpack-react-hmr-examples by jugend (JavaScript)
            • test-rollup by jugend (JavaScript)