amazon-ecs | Amazon Product Advertising Ruby API | AWS library
kandi X-RAY | amazon-ecs Summary
Amazon Product Advertising Ruby API
Top functions reviewed by kandi - BETA
- Search for a path.
- Get an array of elements with the given path.
- Get the element for a given path.
- Get the value of a hash.
- Get the value of an element.
- Fetch the element.
- Retrieve the value of the current element.
- Get an array of elements.
amazon-ecs Key Features
amazon-ecs Examples and Code Snippets
Community Discussions
Trending Discussions on amazon-ecs
QUESTION
I'm using the Ruby SDK for AWS ECS to kick off a task hosted in Fargate via the run_task method. This all works fine with the defaults: I can kick off the task OK and can send along custom command parameters to my Docker container:
ANSWER
Answered 2021-Jun-14 at 09:28 This was a bug in the SDK, now fixed (server-side, so it doesn't require a library update).
The block of code in the question is the correct way to increase ephemeral storage via the Ruby SDK:
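As a rough sketch (the cluster, task definition, and container names here are hypothetical, and the exact block from the question is not shown above), a run_task call with an ephemeral storage override built for the aws-sdk-ecs gem looks something like this:

```ruby
# Sketch of run_task parameters with an ephemeral storage override.
# Cluster/task/container names are hypothetical placeholders.
def run_task_params(size_gib)
  {
    cluster: "my-cluster",            # hypothetical cluster name
    task_definition: "my-task:1",     # hypothetical task definition
    launch_type: "FARGATE",
    overrides: {
      # Ephemeral storage can be raised above the 20 GiB Fargate default.
      ephemeral_storage: { size_in_gi_b: size_gib },
      container_overrides: [
        { name: "app", command: ["bundle", "exec", "rake", "work"] }
      ]
    }
  }
end

params = run_task_params(50)
# With real credentials you would then call:
#   Aws::ECS::Client.new.run_task(params)
puts params[:overrides][:ephemeral_storage][:size_in_gi_b]
```

The hash mirrors the shape the SDK expects; only the `overrides[:ephemeral_storage]` part is specific to this fix.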
QUESTION
For weeks I have been trying to make CodeDeploy / CodePipeline work for our solution, to set up some sort of CI/CD and make deployment faster, safer, etc.
The more I dive into it, the more I feel that either I am not doing it the right way at all, or it is simply not suitable in our case.
What our AWS infra looks like:
We have an ECS cluster that for now contains one service (on EC2), associated with one or more tasks: a reverse proxy and an API. The reverse proxy listens internally on port 80 and, when reached, proxies internally to the API on port 5000.
We have an application load balancer associated with this service, which is publicly reachable. It currently has two listeners, HTTP and HTTPS. Both listeners redirect to the same target group, which only contains the instance(s) where our reverse proxy runs. Note that the instance port to redirect to is random (check this link).
We have an auto scaling group that scales the number of instances depending on the number of calls to the application load balancer.
What we may have in the future:
- Other tasks will run on the same instance as our API. For example, we may create another API in the same cluster as before, on another port, with another reverse proxy, and yet another load balancer. We may have some batch jobs running, and other things.
What's the problem:
Well, for now, deploying "manually" (that is, telling the service to make a new deployment on ECS) doesn't work. CodeDeploy gets stuck creating replacement tasks, and when I look at the service's logs there is the following error:
service xxxx-xxxx was unable to place a task because no container instance met all of its requirements. The closest matching container-instance yyyy is already using a port required by your task.
Which I don't really understand, since port assignment is random; but maybe CodeDeploy operates before that, sees the assigned port as 0, and considers it the same as in the previous task definition?
I don't really know how I can resolve this, and I even doubt that CodeDeploy is usable in our case...
-- Edit 02/18/2021 --
So, I now know why it is not working. As I said, the port my host listens on for the reverse proxy is random. But the port my API listens on is not random.
But now, even if I make the API port random like the reverse proxy's, how would my reverse proxy know on which port the API is reachable? I tried linking the containers, but it doesn't seem to work in the configuration file (I use nginx as the reverse proxy).
--
Not specifying hostPort seems to assign a "random" port on the host.
But still, since NGINX and the API are two different containers, I would need my first NGINX container to call my first API container at API:32798. I think I'm missing something.
ANSWER
Answered 2021-Feb-18 at 15:41 You're probably getting this port conflict because you have two tasks on the same host that both want to map port 80 of the host into their containers.
I've tried to visualize the conflict:
The violet boxes share a port namespace, as do the green and orange boxes. This means that within each box you can use each of the ports from 1 to ~65k once. When you explicitly require a host port, ECS tries to map the violet port 80 to two container ports, which doesn't work.
You don't want to explicitly map these container ports to the host port; let ECS worry about that.
Just specify the container port in the Load Balancer integration in the service definition and it will do the mapping for you. If you set the container port to 80, this refers to the green port 80 and the orange port 80. ECS will expose these as random host ports and automatically register those random ports with the Load Balancer.
Service Definition docs (search for containerPort).
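For illustration (the names and ARN below are hypothetical placeholders), a container definition that relies on dynamic host-port mapping simply omits the host port, and the service's load balancer block references the container port, sketched here as Ruby hashes in the shape the ECS API expects:

```ruby
# Sketch of dynamic host-port mapping. Omitting host_port with bridge
# networking lets ECS pick a random ephemeral host port; the load
# balancer integration then registers whatever port was chosen.
# Names and the target group ARN are hypothetical placeholders.
def container_definition
  {
    name: "reverse-proxy",
    image: "nginx:latest",
    port_mappings: [
      { container_port: 80 }  # no host_port => dynamic mapping
    ]
  }
end

def load_balancer_integration
  {
    target_group_arn: "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/example/abc123",
    container_name: "reverse-proxy",
    container_port: 80  # refers to the container port, not the host port
  }
end

puts container_definition[:port_mappings].first[:container_port]
```

The key point is that the `container_port` in the load balancer block refers to the port inside the container; ECS resolves the randomly assigned host port on your behalf.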
QUESTION
I need to deploy an application stack to ECS. To do this, I use the new Docker Compose ECS integration (see e.g. here).
In a few words, everything boils down to using the correct docker context and launching the command docker compose up. This is of great help and very quick.
I would like to automate the deploy process in a GitLab pipeline.
What (Docker) image should I use in my pipeline to be able to run the new docker compose command?
Thanks in advance.
ANSWER
Answered 2021-Jan-13 at 21:50 So to elaborate on the idea from the comments:
The docker compose binary is directly woven together with the docker command itself, essentially being an extension to it.
As I see it, there are now two main options:
You set up a dedicated GitLab runner that works with the normal shell executor. You then install Docker on that machine and also set up the compose-cli according to this manual. Then you can start deploying with the compose command.
You create a Docker image that gives you the docker compose command. An example Dockerfile could look like this:
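The original example was not preserved here; as one minimal sketch (the base image tag, plugin version, and download URL are assumptions, so check the docker/compose releases for current values):

```dockerfile
# Sketch: a docker CLI image with the compose CLI plugin installed.
# Base image tag, plugin version, and URL are assumptions; pin your own.
FROM docker:20.10

RUN mkdir -p /usr/local/lib/docker/cli-plugins && \
    wget -O /usr/local/lib/docker/cli-plugins/docker-compose \
      https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 && \
    chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
```

Note that for the ECS integration specifically, the answer's first option (installing the compose-cli cloud integration on a shell runner) may be required, since the ECS docker context support originally shipped there.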
QUESTION
I have an internet-facing load balancer in a public subnet and a private hosted zone with a CNAME record for the load balancer, as described here. I try to resolve the record from a Lambda in a private subnet, which times out. I thought it is VPC-internal and should resolve. Is that possible at all?
ANSWER
Answered 2020-Nov-25 at 22:26 I try to request the record from a lambda in a private subnet which times out. I thought it is VPC internal and should resolve. Is that possible at all?
If you create a CNAME in a private hosted zone pointing to the public DNS name of your internet-facing load balancer, the traffic will go over the internet anyway. To use private traffic, you need an internal load balancer, not a public one.
Thus, for your Lambda function to access the ALB, you need a NAT gateway to enable it to access the internet.
QUESTION
I'm trying to build Docker images for a Spring Boot application (2.3.6.RELEASE) using the spring-boot-maven-plugin build-image goal (buildpacks), but it downloads all dependencies every time! Is there a way to mount the .m2 directory into the buildpack so it can use dependencies from the cache?
ANSWER
Answered 2020-Nov-24 at 07:04 There's probably a better way to do it, but I got it working by adding -Dmaven.repo.local=/home/jenkins/.m2/repository, so:
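For context, that flag is passed to the Maven invocation that runs the build-image goal; the original command was not preserved here, so the path and goal below are illustrative:

```shell
# Point Maven's local repository at the CI workspace cache so the
# buildpack build reuses already-downloaded dependencies.
# The path is illustrative; use your own .m2 location.
mvn spring-boot:build-image -Dmaven.repo.local=/home/jenkins/.m2/repository
```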
QUESTION
My Spring Boot app deployed in Elastic Beanstalk's Docker is unable to connect to the external RDS. It always gets stuck at "com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Starting..." during app startup.
Dockerfile (I added the DB connection details in ENTRYPOINT for troubleshooting purposes)
ANSWER
Answered 2020-Nov-02 at 11:22 Based on the comments.
The issue was caused by insufficient memory allocated to the container. The solution was to increase the memory.
QUESTION
I'm trying to run my .NET Core project and MongoDB as container services with docker-compose. Both services start cleanly with no errors. When I call an endpoint that interacts with Mongo, I get a timeout error. Since I'm using docker-compose, I expect that I can reference the mongo service by the compose service name in the connection string: mongo:27017/api?authSource=api with username api and password password123, as seen in the docker-compose file below. Instead I get this error:
ANSWER
Answered 2020-Sep-04 at 05:31 Have you tried with yaml file version: '3.7'?
If that still doesn't work, please try composing with no network defined.
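As a sketch of what such a compose file might look like (the asker's actual file is not shown above; service names and credentials mirror the question, everything else is an assumption), the point is that both services sit on the default compose network, so the API can reach Mongo at the hostname `mongo`:

```yaml
# Sketch: default compose network, so "mongo" resolves as a hostname.
# Credentials mirror the question; image tags and ports are assumptions.
version: "3.7"
services:
  api:
    build: .
    ports:
      - "5000:5000"
    environment:
      # the connection string uses the compose service name as hostname
      MONGO_URL: "mongodb://api:password123@mongo:27017/api?authSource=api"
    depends_on:
      - mongo
  mongo:
    image: mongo:4.4
```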
QUESTION
I can't access a running Docker container from outside the EC2 instance.
What I've tried:
- I created a cluster in ECS, a service with a related task definition, and an Application Load Balancer.
- When the task gets executed I can see the logs from the Docker image in the task:
- I also see the related EC2 instance running. When I SSH into the instance I can see the Docker image running, as expected:
ANSWER
Answered 2020-Aug-31 at 11:25 Did you configure the Security Group of your EC2 instance or the NACL of the VPC where the EC2 is launched?
I see that you expose port 5001 in your task, so you should open that port in the SG.
QUESTION
We want to build an ECS cluster with the following characteristics:
- It must run inside a VPC, so we need the awsvpc network mode
- It must use GPU instances, so we can't use Fargate
- It must provision the instances dynamically, so we need a capacity provider
- It will run tasks (batch jobs) that are triggered directly through the AWS ECS API. For this reason, we don't need a service, only a task definition.
- These tasks must have access to S3 (internet), so according to the AWS documentation the instances must be placed inside a private subnet (a reference to the docs).
We've already read this Stack Overflow post, which says that we need to set up a private subnet with a route table that points to a NAT gateway configured in a public subnet, and that this public subnet should point to an internet gateway. We already have this configuration. We also have an S3 VPC endpoint configured in the route table.
Below, you can see some relevant configurations of the cluster in Terraform (for the sake of simplicity I only include the relevant parts):
ANSWER
Answered 2020-Aug-29 at 16:25 Finally, mystery solved!
The problem wasn't in the cluster configuration. When calling run_task through the ECS API, you need to specify the subnet the task should run in.
Our code was setting this field to one of the public subnets. That's why the task was only placed when we moved the container instances to the availability zone corresponding to this public subnet.
After changing this call in the code, the task is placed correctly and has access to the internet.
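A sketch of the corrected run_task parameters (the IDs and names below are placeholders, not the asker's values): for an awsvpc task, the network_configuration must reference the private subnet so outbound traffic goes through the NAT gateway.

```ruby
# Sketch of run_task params for an awsvpc task; cluster, task, subnet,
# and security group IDs are placeholders. The subnet must be the
# *private* one so outbound traffic (e.g. to S3) goes via the NAT gateway.
def run_task_params(private_subnet_id, security_group_id)
  {
    cluster: "gpu-cluster",                 # hypothetical name
    task_definition: "batch-job",           # hypothetical name
    network_configuration: {
      awsvpc_configuration: {
        subnets: [private_subnet_id],
        security_groups: [security_group_id],
        assign_public_ip: "DISABLED"        # no public IP in a private subnet
      }
    }
  }
end

params = run_task_params("subnet-aaa111", "sg-bbb222")
puts params[:network_configuration][:awsvpc_configuration][:subnets].first
```

With real credentials this hash would be passed to `Aws::ECS::Client#run_task`; the fix described above amounts to making `subnets` point at the private subnet instead of a public one.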
QUESTION
I am building my cluster on ECS using EC2 instances. I am curious about specifying CPU reservation in my task definitions. How does AWS manage my tasks inside EC2 instances when I leave the CPU reservation empty or write 0?
I have read this article: https://aws.amazon.com/blogs/containers/how-amazon-ecs-manages-cpu-and-memory-resources/
And here it says:
when you don’t specify any CPU units for a container, ECS intrinsically enforces two Linux CPU shares for the cgroup (which is the minimum allowed).
I am not really sure what this means. Is it different for tasks, given that this is specifically stated for containers?
ANSWER
Answered 2020-Aug-19 at 17:17 Cgroups (control groups) are a feature of the Linux kernel that allow resources to be distributed hierarchically among the processes that run on your host.
This enables your containers to operate independently of each other (each has access to a portion of the available CPU), while also allowing higher-priority tasks to gain access to the CPU if required.
A CPU share defines how much of the overall CPU your container can access; as you add more containers, this becomes a ratio dividing the CPU between them. In your case each container gets 2 shares, so with 4 containers each gets a ratio of 0.25 of the available CPU.
If you define limits in a task, you can cap the maximum of the host's resources that can be used, within which the CPU shares are then split as a ratio. However, this affects scheduling of new containers: if there is not enough resource available for the task and auto scaling is not enabled, your chosen task cannot be scheduled.
There is some documentation on cgroups here; it is technical, so if you have little experience with Linux it might be a little confusing.
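The share arithmetic above can be sketched numerically (a simplification: it ignores cgroup hierarchy, and shares only matter when the CPU is contended):

```ruby
# Sketch: the relative CPU a container receives under contention is its
# share count divided by the sum of all shares on the host. This ignores
# cgroup hierarchy; shares have no effect when the CPU is idle.
def cpu_fraction(shares, all_shares)
  shares.to_f / all_shares.sum
end

# Four containers, each with the 2-share minimum ECS assigns when no
# CPU units are specified: each gets 2 / 8 of the contended CPU.
shares = [2, 2, 2, 2]
puts cpu_fraction(2, shares)  # => 0.25
```

Removing a container changes the ratio for the rest, which is why the blog post describes shares as relative rather than absolute guarantees.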
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install amazon-ecs
On a UNIX-like operating system, using your system's package manager is easiest; however, the packaged Ruby version may not be the newest one. There is also an installer for Windows. Managers help you switch between multiple Ruby versions on your system; installers can be used to install a specific Ruby version or multiple versions. Please refer to ruby-lang.org for more information.