instance-type | common mapping of various cloud instance types | Map library
kandi X-RAY | instance-type Summary
A common mapping of various cloud instance types to the resources they provide
instance-type Key Features
instance-type Examples and Code Snippets
Community Discussions
Trending Discussions on instance-type
QUESTION
I am playing with a cluster using SLURM on AWS. I have defined the following parameters:
...ANSWER
Answered 2021-Jun-11 at 14:41
In Slurm the number of tasks is essentially the number of parallel programs you can start in your allocation. By default, each task can access one CPU (which can be a core or a thread, depending on config); this can be changed with --cpus-per-task=#.
This in itself does not tell you anything about the number of nodes you will get. If you just specify --ntasks (or just -n), your job will be spread over many nodes, depending on what is available. You can limit this with --nodes #min-#max or --nodes #exact.
Another way to specify the number of tasks is --ntasks-per-node, which does exactly what it says and is best used in conjunction with --nodes (not with --ntasks, otherwise it is the maximum number of tasks per node!).
So, if you want three nodes with 72 tasks (each with the one default CPU), try:
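The batch script from the original answer is not reproduced above; a minimal sketch of what it could look like (the program name is a placeholder):
#!/bin/bash
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=72
# each task gets the default single CPU; add --cpus-per-task=# to change that
srun ./my_mpi_program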
QUESTION
AWS CLI provides the command describe-instance-types to list all offered EC2 instance types. It also allows filtering them by different attributes. Is it possible to do something similar with the Google Cloud CLI?
I want to list all offered machine types with their attributes. Additionally, I would like to filter them by their attributes (memory size, CPUs, etc.).
...ANSWER
Answered 2021-Jun-02 at 21:08
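The body of that answer is not included above; for reference, the gcloud equivalent would look something like this (the zone and the filter thresholds are placeholders):
gcloud compute machine-types list
gcloud compute machine-types list --zones=us-central1-a --filter="guestCpus>=8 AND memoryMb>=32768"
QUESTION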
We followed this guide to use GPU-enabled nodes in our existing cluster, but when we try to schedule pods we get a 2 Insufficient nvidia.com/gpu error.
Details:
We are trying to use GPUs in our existing cluster, and we were able to successfully create a NodePool with a single GPU-enabled node.
As the next step according to the guide above, we have to create a daemonset, and we were also able to run the DS successfully.
But now, when we try to schedule the Pod using the following resource section, the pod becomes unschedulable with the error 2 insufficient nvidia.com/gpu.
ANSWER
Answered 2021-May-30 at 07:28
The nvidia-gpu-device-plugin should be installed on the GPU node as well. You should see an nvidia-gpu-device-plugin DaemonSet in your kube-system namespace.
It should be deployed automatically by Google, but if you want to deploy it on your own, run the following command: kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
It will install the GPU plugin on the node, and afterwards your pods will be able to consume it.
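A quick way to confirm the plugin is running and that the node is advertising GPU capacity before scheduling pods (the node name is a placeholder):
kubectl get daemonset -n kube-system | grep nvidia
kubectl describe node <gpu-node-name> | grep nvidia.com/gpu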
QUESTION
Used ami-0fd3c3c68a2a8066f from the ap-south-1 region (http://cloud-images.ubuntu.com/locator/ec2/), but unable to use the t2.micro instance type against this.
ANSWER
Answered 2021-Apr-27 at 06:00
Using AWS CLI you can use describe-instance-types:
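The command from the original answer is not shown above; a sketch of the kind of check it points at is below, comparing the AMI's architecture with what t2.micro supports (a common cause of this error is an arm64 AMI paired with an x86_64-only instance type):
aws ec2 describe-images --region ap-south-1 --image-ids ami-0fd3c3c68a2a8066f --query "Images[].Architecture" --output text
aws ec2 describe-instance-types --instance-types t2.micro --query "InstanceTypes[].ProcessorInfo.SupportedArchitectures" --output text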
QUESTION
Cluster Specification:
...ANSWER
Answered 2021-Apr-17 at 07:21
Never mind, I have solved my own question. Since my cluster is using t2.small and t3.small instances, the resources are too low to trigger the autoscaler to scale down the dummy nodes. I tried with bigger instance types, t3a.medium and t3.medium, and it worked well.
QUESTION
This is my Jenkins EC2 configuration:
- URL: $JENKINS_URL/configureClouds/
- Add new cloud: Amazon EC2
- Name: Amazon EC2 eu-central-1
- Amazon EC2 Credentials: AKIA...
- Region: eu-central-1
- EC2 Key Pair's Private Key: ubuntu
- Test connection: success
- Advanced...
- Instance Cap: 3
- No delay provisioning: checked
- Add AMI
- Description: Linux node
- AMI ID: ami-0293...
- Check AMI: 05052029...
- Instance Type: T3aMedium
- EBS Optimized: checked
- Monitoring: checked
- T2 Unlimited: checked
- Security group names: sg-0c2d... (opens SSH port 22)
- Remote FS root: ./jenkins
- Remote user: ubuntu
- AMI Type: unix
- Labels: aws ubuntu linux
- Usage: Use this node as much as possible
- Idle termination time: 30
- Advanced...
- Number of executors: 2
- Stop/Disconnect on Idle Timeout: checked
- Minimum number of instances: 1
- Minimum number of spare instances: 0
- Instance cap: 10
- Block device mapping: /dev/sda1=snap-0eadbe3f...:200:true:gp2, /dev/sdb=ephemeral0, /dev/sdc=ephemeral1
- Associate Public IP: checked
- Connection Strategy: Public DNS
- Host Key Verification Strategy: off
- Maximum Total Uses: 10
- Environment variables: checked (not listing all environment variables)
- Tool locations: checked (not listing all tool locations)
With this configuration, I would expect that at least 1 EC2 instance would be started, but no instance is started.
On the nodes page in Jenkins, when I hit the provision button, I get an error:
Oops! A problem occurred while processing the request. Logging ID=8ead3651-3809-4a47-984c-e0e494c705bb
In /log/all I have:
...ANSWER
Answered 2021-Apr-15 at 08:11Write up of the comments for anyone else looking for help diagnosing EC2 Agent Plugin issue.
- When you have configured your agents, go to the Nodes page (Jenkins URL/computer)
- Hit the button to Provision a new agent from your cloud
- If there is a configuration issue you will get the Evil Jenkins error page and a Logging ID
- Go to the Jenkins logs page (Jenkins URL/log/all) and search for that ID
- This should give you the stack trace from the AWS SDK call, which will help you narrow down whether missing config, IAM permissions, etc. are at fault
If there were no config errors, you would be taken to the config page of the node being launched, where you can see its EC2 startup log to check for any User Data or AMI issues.
QUESTION
I just kicked off a g4dn.8xlarge instance on AWS. According to their website, this instance should have 128GB memory, as shown in the screenshot below:
However, I noticed my model keeps running out of memory. When I investigated, I saw this:
...ANSWER
Answered 2021-Mar-06 at 22:48
The g4dn.8xlarge uses an NVIDIA T4 GPU, which has 16 GB of RAM. So I think PyTorch is showing the memory of the GPU, not the instance.
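A quick way to see both numbers on the instance itself (the values in the comments are what you would expect on a g4dn.8xlarge):
free -h                                            # system memory, roughly 128 GiB
nvidia-smi --query-gpu=memory.total --format=csv   # GPU memory, about 16 GiB on the T4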
QUESTION
I've been trying since yesterday to get this CloudFormation template working... the goal is to launch an EC2 instance into a public subnet that I can access through HTTP. Everything looks like it has been created correctly to me, but the instance won't connect in the browser. Things I've checked:
- EC2 instance deployed in correct subnet
- Subnet has a route table pointing to an IGW
- EC2 instance has a public IP
- Security group allows inbound access over ports 80, 443 and 22 from all sources
- Creating without userdata
- Verified user data works correctly in a manually launched EC2 instance
Any suggestions for other things to check?
Here's my template:
...ANSWER
Answered 2021-Feb-23 at 20:26
You have no SubnetRouteTableAssociation, so your public subnet is not associated with your VPC's default route table; hence your public subnet has no default route to the Internet Gateway and cannot reach the internet.
Add the following:
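The template snippet from the original answer is not reproduced above. Independent of the exact YAML, one way to confirm the diagnosis from the CLI is to ask which route table the subnet is explicitly associated with (the subnet ID below is a placeholder); an empty result means there is no SubnetRouteTableAssociation:
aws ec2 describe-route-tables --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0"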
QUESTION
I am having a problem where I am trying to restrict a deployment to avoid a specific node pool, and nodeAffinity and nodeAntiAffinity don't seem to be working.
- We are running DOKS (Digital Ocean Managed Kubernetes) v1.19.3
- We have two node pools: infra and clients, with nodes on both labelled as such
- In this case, we would like to avoid deploying to the nodes labelled "infra"
For whatever reason, no matter what configuration I use, Kubernetes seems to schedule pods randomly across both node pools.
See configuration below, and the results of scheduling
deployment.yaml snippet
...ANSWER
Answered 2021-Feb-12 at 17:36
In the deployment file, you have specified operator: NotIn, which works as anti-affinity.
Please use operator: In to achieve node affinity, so that, for instance, pods are placed on nodes which have the clients label.
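The corrected manifest from the original answer is not included above. Whatever the exact YAML ends up being, these checks help verify that the node labels and the resulting placement are what you expect:
kubectl get nodes --show-labels   # confirm which nodes actually carry the infra / clients labels
kubectl get pods -o wide          # see which node each replica was scheduled onto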
QUESTION
I was trying out my cluster in EKS with a managed node group. I am able to attach the CSI driver to the cluster and to create a storageClass and persistentVolumeClaim, but whenever I try to deploy a deployment, the pods do not seem to get associated with the specified nodes.
the pod file
...ANSWER
Answered 2021-Feb-08 at 13:34
According to the AWS documentation on IP addresses per network interface per instance type, the t2.micro only has 2 network interfaces and 2 IPv4 addresses per interface: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI
There is a limit on how many pods AWS EKS will schedule on the node: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
You can remove this limit if you want: https://medium.com/@swazza85/dealing-with-pod-density-limitations-on-eks-worker-nodes-137a12c8b218
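For context, the numbers in that file follow the usual ENI math: max pods = ENIs × (IPv4 addresses per ENI − 1) + 2, so a t2.micro works out to 2 × (2 − 1) + 2 = 4 pods. You can read the value a node actually advertises with (the node name is a placeholder):
kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'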
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install instance-type
You can use instance-type like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
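The exact package source is not stated above, so the steps below assume installing from a local checkout of the instance-type source; a typical flow in a virtual environment might be:
python -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip setuptools wheel
pip install .   # run from the root of the instance-type checkout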