externalip | get your external ip in Node.js | Runtime Environment library
kandi X-RAY | externalip Summary
WARNING: it is perhaps faster, but OpenDNS does not work reliably in mainland China. Get your external IP in Node.js. Based on .
Community Discussions
Trending Discussions on externalip
QUESTION
I'm using Rancher 2.5.8 to manage my Kubernetes clusters. Today, I created a new cluster and everything worked as expected, except the metrics-server. The status of the metrics-server is always "CrashLoopBackOff" and the logs are telling me the following:
...ANSWER
Answered 2021-May-31 at 06:48
The issue was with the metrics server. Metrics server was configured to use kubelet-preferred-address-types=InternalIP, but the worker node didn't have any InternalIP listed:
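As a hedged illustration (not the poster's actual manifest), one common fix is to broaden the address types metrics-server may use, so it can fall back to an ExternalIP or hostname when a node lists no InternalIP:

# Illustrative metrics-server Deployment fragment; the exact args are assumptions.
spec:
  template:
    spec:
      containers:
        - name: metrics-server
          args:
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-insecure-tls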
QUESTION
Background Context:
Due to enterprise limitations, an uncooperative 3rd party vendor, and a lack of internal tools, this approach has been deemed most desirable. I am fully aware that there are easier ways to do this, but that decision is a couple of pay grades away from my hands, and I'm not about to fund new development efforts out of my own pocket.
Problem: We need to send an internal file to an external vendor. The team responsible for these types of files only transfers with SFTP, while our vendor only accepts files via REST API calls. The idea we came up with (considering the above constraints) was to use our OpenShift environment to host a "middle-man" SFTP server (running from a jar file) that will hit the vendor's API after our team sends it the file.
I have learned that if we want to get SFTP to work with OpenShift, we need to set up our cluster and pods with an ingress/external IP. This looks promising, but due to enterprise bureaucracy, I'm waiting for the OpenShift admins to make the required changes before I can see if this works, and I'm running out of time.
Questions:
- Is this approach even possible with the technologies involved? Am I on the right track?
- Are there other configuration options I should be using instead of what I explained above?
- Are there any clever ways in which an SFTP client can send a file via HTTP request? So instead of running an embedded SFTP server, we could just set up a web service instead (this is what our infrastructure supports and prefers).
References:
...ANSWER
Answered 2021-May-15 at 08:14
That's totally possible; I have done it in the past as well with OpenShift 3.10. The approach of using externalIPs is the right way.
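As a hedged illustration only (names, ports and the address below are placeholders, not taken from the answer), such a Service pinning the in-cluster SFTP pod to a node-owned external IP could look like this:

# Hypothetical Service exposing an SFTP pod on an externalIP owned by a cluster node.
apiVersion: v1
kind: Service
metadata:
  name: sftp-middleman
spec:
  selector:
    app: sftp-middleman
  ports:
    - name: sftp
      protocol: TCP
      port: 22
      targetPort: 2222
  externalIPs:
    - 192.0.2.10   # must be an address that routes to one of the cluster nodes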
QUESTION
I am using an external TCP/UDP network load balancer (Fortigate), Kubernetes 1.20.6 and Istio 1.9.4. I have set externalTrafficPolicy: Local and need to run the ingress gateway on every node (as said here, in the network load balancer tab). How do I do that?
This is my ingress gateway service:
...ANSWER
Answered 2021-May-11 at 07:16
As brgsousa mentioned in the comment, the solution was to redeploy it as a DaemonSet.
Here is the working yaml file:
QUESTION
I'm trying to wrap my head around exposing internal load balancing to the outside world on a bare-metal k8s cluster.
Let's say we have a basic cluster:
Some master nodes and some worker nodes, each with two interfaces: one public-facing (eth0) and one local (eth1) with an IP within the 192.168.0.0/16 network
Deployed MetalLB and configured the 192.168.200.200-192.168.200.254 range for its internal IPs
An ingress controller with its service of type LoadBalancer
As I currently understand it, MetalLB should now assign one of the IPs from 192.168.200.200-192.168.200.254 to the ingress service.
But I have some following questions:
On every node, could I curl the ingress controller's externalIP (as long as it is reachable on eth1) with a host header attached and get a response from the service that's configured in the corresponding ingress resource, or is that valid only on the node where the Ingress pods are currently placed?
What are my options to pass incoming external traffic arriving on eth0 to an ingress listening on the eth1 network?
Is it possible to forward requests while preserving the source IP address, or is attaching an X-Forwarded-For header the only option?
...ANSWER
Answered 2021-May-10 at 15:27
Assuming that we are talking about MetalLB using Layer2.
Addressing the following questions:
On every node, could I curl the ingress controller's externalIP (as long as it is reachable on eth1) with a host header attached and get a response from the service that's configured in the corresponding ingress resource, or is that valid only on the node where the Ingress pods are currently placed?
Is it possible to forward requests while preserving the source IP address, or is attaching an X-Forwarded-For header the only option?
Depending on whether the source IP needs to be preserved, this question can go both ways:
Preserve the source IP address
To do that you would need to set the Service of type LoadBalancer of your Ingress controller to support "Local traffic policy" by setting (in your YAML manifest):
.spec.externalTrafficPolicy: Local
This setup will be valid as long as on each Node there is a replica of your Ingress controller, as all of the networking coming to your controller will be contained in a single Node.
Citing the official docs:
With the Local traffic policy, kube-proxy on the node that received the traffic sends it only to the service's pod(s) that are on the same node. There is no "horizontal" traffic flow between nodes. Because kube-proxy doesn't need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections. The downside of this policy is that incoming traffic only goes to some pods in the service. Pods that aren't on the current leader node receive no traffic; they are just there as replicas in case a failover is needed.
Do not preserve the source IP address
If your use case does not require you to preserve the source IP address, you could go with the:
.spec.externalTrafficPolicy: Cluster
This setup won't require that the replicas of your Ingress controller be present on each Node.
Citing the official docs:
With the default Cluster traffic policy, kube-proxy on the node that received the traffic does load-balancing, and distributes the traffic to all the pods in your service. This policy results in uniform traffic distribution across all pods in the service. However, kube-proxy will obscure the source IP address of the connection when it does load-balancing, so your pod logs will show that external traffic appears to be coming from the service's leader node.
Addressing the 2nd question:
What are my options to pass incoming external traffic arriving on eth0 to an ingress listening on the eth1 network?
MetalLB listens by default on all interfaces; all you need to do is specify the address pool from this eth interface within the MetalLB config.
You can find more reference on this topic by following:
An example of such a configuration could be the following:
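As a hedged sketch (the pool name is an assumption; the address range comes from the question), a Layer2 address pool in the ConfigMap format MetalLB used at the time could look like this:

# Hypothetical MetalLB Layer2 configuration for the eth1 address range.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: eth1-pool
      protocol: layer2
      addresses:
      - 192.168.200.200-192.168.200.254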
QUESTION
For instance, I have a bare-metal cluster with 3 nodes, each with some instance exposing port 105. In order to expose it on an external IP address, I can define a service of type NodePort with "externalIPs" and it seems to work well. In the documentation it says to use a load balancer, but I didn't really understand why I have to use it, and I'm worried about making a mistake.
...ANSWER
Answered 2021-Apr-19 at 09:35
Can somebody explain why I have to use an external (MetalLB, HAProxy etc.) Load Balancer with a bare-metal Kubernetes cluster?
You don't have to use it, it's up to you to choose if you would like to use NodePort or LoadBalancer.
Let's start with the difference between NodePort and LoadBalancer.
NodePort is the most primitive way to get external traffic directly to your service. As the name implies, it opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
LoadBalancer service is the standard way to expose a service to the internet. It gives you a single IP address that will forward all traffic to your service.
You can find more about that in kubernetes documentation.
As for the question you've asked in the comment, "But NodePort with the externalIPs option is doing exactly the same. I see only one tiny difference: the IP should be owned by one of the cluster machines. So where is the benefit of using a LoadBalancer?"
Let me answer that more precisely.
Here are the advantages and disadvantages of ExternalIP.
The advantage of using ExternalIP is:
You have full control of the IP that you use. You can use an IP that belongs to your ASN instead of a cloud provider's ASN.
The disadvantages of using ExternalIP are:
The simple setup that we will go through right now is NOT highly available. That means if the node dies, the service is no longer reachable and you'll need to manually remediate the issue.
There is some manual work that needs to be done to manage the IPs. The IPs are not dynamically provisioned for you, thus requiring manual human intervention.
Summarizing the pros and cons of both, we can conclude that ExternalIP is not made for a production environment: it's not highly available, and if the node dies the service will no longer be reachable and you will have to fix that manually.
With a LoadBalancer, if a node dies the service will be recreated automatically on another node. So it's dynamically provisioned and there is no need to configure it manually as with ExternalIP.
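For reference, a hedged sketch of the NodePort-plus-externalIPs setup discussed above (selector, name and address are placeholders, not taken from the question):

# Hypothetical NodePort Service pinned to an IP owned by one of the cluster machines.
apiVersion: v1
kind: Service
metadata:
  name: my-instance
spec:
  type: NodePort
  selector:
    app: my-instance
  ports:
    - port: 105
      targetPort: 105
      nodePort: 30105
  externalIPs:
    - 203.0.113.20   # must be an IP assigned to one of the cluster nodes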
QUESTION
I am running a Kubernetes cluster in a dev environment. I executed the deployment files for the metrics server; my pod is up and running without any error message. See the output here:
...ANSWER
Answered 2021-Mar-29 at 19:24
The following container arguments work for me in our development cluster:
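As a hedged illustration (not the answerer's exact list), arguments along these lines are commonly used for metrics-server in development clusters:

# Illustrative metrics-server container arguments; values are assumptions.
args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --metric-resolution=15s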
QUESTION
I currently have this code to import values from a csv file:
...ANSWER
Answered 2021-Apr-06 at 03:20
To summarize from the comments, you just need to add an if in your code to implement the requirement. However, the code you provided in the comments does not seem correct. As hasdrubal mentioned, continue means skip the rest of the current iteration and go on to the next one, so your code should look like this:
QUESTION
I have the helm chart mongodb installed on my k8s cluster (https://github.com/bitnami/charts/tree/master/bitnami/mongodb).
I also have kube-prometheus-stack installed on my k8s cluster. (https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
I've setup a grafana dashboard for mongodb which should pull in data from a prometheus data source. (https://grafana.com/grafana/dashboards/2583 )
However, my grafana dashboard is empty with no data.
I'm wondering if I have not configured something with the helm chart properly. Please see the mongodb helm chart below.
mongodb chart.yml
...ANSWER
Answered 2021-Mar-17 at 00:40
Installing prometheus using the "prometheus-community/kube-prometheus-stack" helm chart could be quite an extensive topic in itself, considering the fact that it has a lot of configurable options.
As the helm chart comes with the "prometheus operator", we have used PodMonitor and/or ServiceMonitor CRDs as they provide far more configuration options. Here's some documentation around that.
We've installed it by setting "prometheus.prometheusSpec.serviceMonitorSelector.matchLabels" with a label value, something like this:
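As a hedged sketch of such a Helm values override (the label key and value are placeholders, not taken from the answer):

# Hypothetical kube-prometheus-stack values fragment; ServiceMonitors must then
# carry the matching label to be picked up by Prometheus.
prometheus:
  prometheusSpec:
    serviceMonitorSelector:
      matchLabels:
        release: kube-prometheus-stack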
QUESTION
I am trying to set up the Horizontal Pod Autoscaler to automatically scale my API server pods up and down based on CPU usage.
I currently have 12 pods running for my API but they are using ~0% CPU.
...ANSWER
Answered 2021-Mar-13 at 00:07
I don't see any "resources:" fields (e.g. cpu, mem, etc.) assigned, and this should be the root cause. Please be aware that having resource requests set is a requirement for an HPA (Horizontal Pod Autoscaler), as explained in the official Kubernetes documentation:
Please note that if some of the Pod's containers do not have the relevant resource request set, CPU utilization for the Pod will not be defined and the autoscaler will not take any action for that metric.
This can definitely cause the unable to read all metrics message on the target Deployment(s).
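For illustration, a minimal, hypothetical resources block on the API container (the values are placeholders); the HPA needs at least the CPU request to compute utilization:

# Hypothetical container resources; the cpu request is what the HPA reads.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi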
QUESTION
I'm looking for a way to define the externalIP range during OpenShift cluster installation (via declarations in install-config.yaml).
The OpenShift docs for 4.3 and later versions (linky) do not provide any fields for that.
The older definition (externalIPNetworkCIDR) from 3.5 (linky) doesn't seem to work either.
...ANSWER
Answered 2021-Feb-11 at 14:35
As per RH, "we can't specify externalIP parameter during the cluster installation, it should be done post-installation."
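As a hedged sketch of what that post-installation step can look like (the CIDR is a placeholder), OpenShift 4 exposes the externalIP ranges on the cluster Network configuration resource:

# Hypothetical post-installation externalIP policy on the cluster Network config.
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  externalIP:
    policy:
      allowedCIDRs:
        - 192.0.2.0/24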
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install externalip