node-pool | Generic resource pooling for node.js | Runtime Environment library
kandi X-RAY | node-pool Summary
Generic resource pool with a Promise-based API. Can be used to reuse or throttle usage of expensive resources such as database connections. Version 3 contains many breaking changes; the differences are mostly minor and hopefully easy to accommodate. There is a very rough and basic upgrade guide I've written; improvements and other attempts are most welcome. If you are after the older version 2 of this library, you should look at its current GitHub branch.
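To illustrate the acquire/release pattern the summary describes, here is a minimal sketch of a promise-based pool. This is not node-pool's actual implementation; the class and field names are invented for illustration, and only the general pattern (cap concurrent resources, queue waiters, reuse released resources) mirrors what the library provides.

```javascript
// Minimal sketch of a promise-based resource pool (illustrative only,
// not the node-pool/generic-pool internals).
class TinyPool {
  constructor(factory, max) {
    this.factory = factory; // expects { create(): Promise<resource> }
    this.max = max;         // cap on total resources ever created
    this.idle = [];         // released resources available for reuse
    this.size = 0;          // resources created so far
    this.waiting = [];      // resolvers for acquirers waiting at capacity
  }

  async acquire() {
    // Reuse a released resource if one is available.
    if (this.idle.length > 0) return this.idle.pop();
    // Under the cap: create a fresh resource.
    if (this.size < this.max) {
      this.size += 1;
      return this.factory.create();
    }
    // At capacity: wait until someone releases.
    return new Promise((resolve) => this.waiting.push(resolve));
  }

  release(resource) {
    // Hand the resource to the oldest waiter, or park it as idle.
    const waiter = this.waiting.shift();
    if (waiter) waiter(resource);
    else this.idle.push(resource);
  }
}
```

With the real library (generic-pool v3), the equivalent usage is `genericPool.createPool(factory, { min: 2, max: 10 })` followed by `pool.acquire().then(resource => { /* use it */ pool.release(resource); })`.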
Community Discussions
Trending Discussions on node-pool
QUESTION
I have created GKE using Terraform with the custom machine type custom-6-20480. I want to create a committed use discount for the CPU and memory that I'm using for nodes. I know the machine type indicates I am using 6 vCPUs and 20 GB memory, but I can't see the machine series, i.e. whether it is N1 or N2. I tried looking in the console and ran "gcloud container node-pools describe node-pool-name --cluster cluster-name", but these only show the machine type as custom-6-20480, not the series. How can I tell which series my GKE node pool is using? To create a committed use discount I need to select the commitment type, which is N1 or N2.
...ANSWER
Answered 2021-Jun-03 at 10:01
You can check the series of the machines by following the steps below.
- Navigate to Compute engine, click on VM instances.
- You can see the nodes of your cluster present there with their names starting with the cluster's name.
- Click on any of them for details and you can see the series of machine under the header ‘machine type’.
QUESTION
I am trying to set up a MongoDB deployment on a specific node on GKE. I upgraded my current cluster using
ANSWER
Answered 2021-May-27 at 11:37
There were two issues with the deployment setup: the nodeSelector specified in the Deployment manifest was using the wrong label
QUESTION
How can I change an existing GKE cluster to a private GKE cluster? Will I be able to connect to the Kubernetes API with kubectl from the internet based on firewall rules, or should I have a bastion host? I don't want to implement Cloud NAT or a NAT gateway. I have a Squid proxy VM that can handle internet access for pods; I just need to be able to connect with kubectl to apply or modify anything.
I'm unsure how to modify the existing module I wrote to make the nodes private, and I'm not sure if the cluster will get deleted if I try to apply the new changes related to a private GKE cluster.
...ANSWER
Answered 2021-Jan-27 at 12:09
Answering this part of the question:

How to change the existing GKE cluster to GKE private cluster?

The GKE setting Private cluster is immutable; it can only be set during GKE cluster provisioning.
To create your cluster as a private one you can either:
- Create a new GKE private cluster.
- Duplicate the existing cluster and set it to private. This setting is available in GCP Cloud Console -> Kubernetes Engine -> CLUSTER-NAME -> Duplicate. It will clone the configuration of your previous cluster's infrastructure, but not the workload (Pods, Deployments, etc.)
Will I be able to connect to the Kubectl API from internet based on firewall rules or should I have a bastion host?

Yes, you could, but it will heavily depend on the configuration you chose during the GKE cluster creation process. As for the ability to connect to your GKE private cluster, there is dedicated documentation about it.

As for creating a private cluster with Terraform, there is a dedicated site with configuration options specific to GKE, including parameters responsible for provisioning a private cluster. A basic example of creating a private GKE cluster with Terraform:
main.tf
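The main.tf from the original answer is not reproduced here. As a hedged sketch of what such a configuration might look like with the Terraform Google provider (the resource name, region, and CIDR ranges below are placeholders, not from the original answer):

```hcl
resource "google_container_cluster" "primary" {
  name     = "private-cluster" # placeholder name
  location = "europe-west2"    # placeholder region

  # Private cluster settings: nodes get internal IPs only.
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false # keep the public control-plane endpoint
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  # Private clusters must be VPC-native.
  ip_allocation_policy {}

  # Restrict who may reach the public control-plane endpoint.
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "203.0.113.0/24" # placeholder admin range
      display_name = "admin"
    }
  }

  initial_node_count = 1
}
```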
QUESTION
How can I get the current size of a GKE node pool using the REST (or Node) API?
I'm managing my own worker pool using my Express app running on my cluster, and can set the size of the pool and track the success of the setSize operation, but I see no API for getting the current node count. The NodePool resource only includes the original node count, not the current count. I don't want to use gcloud or kubectl on one of my production VMs.
I could go around GKE and try to infer the size using the Compute Engine (GCE) API, but I haven't looked into that approach yet. Note that it seems difficult to get the node count even from Stack Driver. Has anyone found any workarounds to get the current node size?
...ANSWER
Answered 2021-Mar-20 at 18:44
The worker pool size can be retrieved from the Compute Engine API by getting the instance group associated with the node pool.
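A sketch of that approach in Node: a GKE NodePool resource exposes instanceGroupUrls pointing at the managed instance groups that back the pool, and the Compute Engine instanceGroups.get response carries a current size field. The helper below only parses such a URL; the actual API call (outlined in comments, using the googleapis client) is an assumption about how you would wire it up, not code from the answer.

```javascript
// Extract project, zone, and group name from a NodePool's
// instanceGroupUrls entry so they can be fed to the Compute API.
function parseInstanceGroupUrl(url) {
  // e.g. https://www.googleapis.com/compute/v1/projects/my-proj/zones/us-east1-b/instanceGroupManagers/gke-pool-grp
  const m = url.match(
    /projects\/([^/]+)\/zones\/([^/]+)\/instanceGroupManagers\/([^/]+)/
  );
  if (!m) throw new Error('unexpected instance group URL: ' + url);
  return { project: m[1], zone: m[2], name: m[3] };
}

// With the googleapis Node client (assumed available), the lookup
// would look roughly like:
//   const { google } = require('googleapis');
//   const compute = google.compute('v1');
//   const { project, zone, name } = parseInstanceGroupUrl(url);
//   const res = await compute.instanceGroups.get(
//     { project, zone, instanceGroup: name, auth });
//   res.data.size // current node count for that group
```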
QUESTION
I am having a problem where I am trying to restrict a deployment to avoid a specific node pool, and nodeAffinity and nodeAntiAffinity don't seem to be working.
- We are running DOKS (Digital Ocean Managed Kubernetes) v1.19.3
- We have two node pools: infra and clients, with nodes on both labelled as such
- In this case, we would like to avoid deploying to the nodes labelled "infra"
For whatever reason, it seems like no matter what configuration I use, Kubernetes seems to schedule randomly across both node pools.
See configuration below, and the results of scheduling
deployment.yaml snippet
...ANSWER
Answered 2021-Feb-12 at 17:36
In the deployment file you have used operator: NotIn, which works as anti-affinity. Please use operator: In to achieve node affinity, for instance if we want pods to use nodes that have the clients label.
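A minimal snippet of what that affinity stanza could look like in the deployment manifest. The label key shown is an assumption about how the DOKS pools are labelled; substitute whatever key and value your nodes actually carry:

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: doks.digitalocean.com/node-pool # assumed label key
                operator: In # schedule only onto matching nodes
                values:
                  - clients
```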
QUESTION
I have a regional cluster for redundancy. In this cluster I want to create a node pool in just one zone of the region. Is this configuration possible? The reason I'm trying this: I want to run a service like RabbitMQ in just one zone to avoid split-brain, while my application services run in all zones of the region for redundancy.
I am using Terraform to create the cluster and node pools; below is my config for creating the regional cluster and the zonal node pool.
...ANSWER
Answered 2020-Dec-24 at 17:04
Found out that location in google_container_node_pool should specify the cluster master's region/zone. To actually specify the node pool's location, node_locations should be used. Below is the config that worked.
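The answer's original config is not included here. As a hedged sketch of the distinction it describes (names and zones below are placeholders): location stays the cluster master's region, while node_locations pins the pool's nodes to one zone.

```hcl
resource "google_container_node_pool" "rabbitmq" {
  name     = "rabbitmq-pool"                      # placeholder name
  cluster  = google_container_cluster.primary.name
  location = "us-central1"                        # region of the regional cluster master

  # Pin this pool's nodes to a single zone within the region.
  node_locations = ["us-central1-a"]

  node_count = 1
}
```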
QUESTION
I have both Node Auto Provisioning and Autoscaling enabled on a GKE cluster. For autoscaling, the minimum number of nodes is 1 and the maximum is 2. A few queries based on this setup.

I set the number of nodes to 0 using a gcloud command
ANSWER
Answered 2020-Nov-11 at 13:52
It's a documented limitation: if your node pool is set to 0, there is no autoscaling from 0.

Yes, it works as long as you don't manually scale to 0.

It's also documented: the node pool scales according to the requests. If a Pod is unschedulable because of a lack of resources and max-nodes hasn't been reached, a new node is provisioned and the Pod deployed.

You can set min-nodes to 0, but you must have at least one active node in the cluster, in another node pool:

If you specify a minimum of zero nodes, an idle node pool can scale down completely. However, at least one node must always be available in the cluster to run system Pods.
QUESTION
Does GKE support vertical node auto-scaling?

For example: I have a GKE cluster with only one node pool and two nodes in that pool. If a pod requires more memory or CPU, I do not want any other nodes / compute instances to be created. Is there a way for the configuration of the existing nodes to change so that extra memory / CPU is added? Basically, existing instances / nodes get upgraded to instances with a higher configuration.
...ANSWER
Answered 2020-Oct-05 at 14:08
You could manually change node pools to different node types. AFAIK there is no vertical node autoscaler in GKE.
QUESTION
I am running a cluster on Google Kubernetes Engine and I am currently trying to switch from using an Ingress with external load balancing (and NodePort services) to an ingress with container-native load balancing (and ClusterIP services) following this documentation: Container native load balancing
To communicate with my services I am using the following ingress configuration that used to work just fine when using NodePort services instead of ClusterIP:
...ANSWER
Answered 2020-Jul-24 at 12:25
I found my issue. Apparently the BackendConfig's type attribute is case-sensitive. Once I changed it from http to HTTP, it worked after I recreated the Ingress.
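For reference, the field in question sits under spec.healthCheck in the BackendConfig custom resource. A minimal sketch (the metadata name is a placeholder; only the uppercase type value reflects the fix described above):

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig # placeholder name
spec:
  healthCheck:
    type: HTTP # must be uppercase; lowercase "http" does not work
```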
QUESTION
I had to move my .NET Core app from Google App Engine to Google Kubernetes Engine because I need static IPs, and sadly Google App Engine doesn't have that option.
I've managed to make a cluster and some pods, but in the logs I see:
...ANSWER
Answered 2020-Jun-16 at 17:21
The error message reads:

Request had insufficient authentication scopes

Therefore the scope https://www.googleapis.com/auth/cloud-platform needs to be added. The service account also needs the IAM role roles/cloudkms.cryptoKeyEncrypterDecrypter.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network