kubernetes-ingress | NGINX and NGINX Plus Ingress Controllers for Kubernetes | HTTP library
kandi X-RAY | kubernetes-ingress Summary
The Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for applications running on Kubernetes, represented by one or more Services. Such a load balancer is necessary to deliver those applications to clients outside of the Kubernetes cluster.
Community Discussions
Trending Discussions on kubernetes-ingress
QUESTION
I am following this official k8s ingress tutorial. However, I am not able to curl the minikube IP address and access the "web" application.
ANSWER
Answered 2021-Dec-15 at 15:57
You need to set up your /etc/hosts. I suspect the ingress controller waits for requests with a Host defined before routing them, but it's pretty strange that it didn't even respond to the HTTP request with an error.
Could you show what these commands return?
QUESTION
Alright, various permutations of this question have been asked and I feel terrible asking; I'm throwing the towel in and was curious if anyone could point me in the right direction (or point out where I'm wrong). I went ahead and tried a number of examples from the docs, but to no avail (see below).
I'm trying to route traffic to the appropriate location under Kubernetes using an Ingress controller.
Server SetupI have a server, myserver.com
and three services running at:
myserver.com/services/
myserver.com/services/service_1/
myserver.com/services/service_2/
Note that I'm not doing anything (purposefully) to myserver.com/.
At each of the three locations, there's a webapp running. For example, myserver.com/services/service_2 needs to load css files at myserver.com/services/service_2/static/css, etc.
To manage the networking, I'm using a Kubernetes Ingress controller, which I've defined below. The CORS annotations aren't super relevant, but I've included them to clear up any confusion.
...ANSWER
Answered 2022-Jan-21 at 20:34
The problem you reported in your most recent comment is resolved by looking at the rewrite example in the nginx-ingress documentation.
The rewrite-target annotation configures the ingress so that matching paths are rewritten to that value. Since you've specified a static value of /, anything matching your ingress rules gets rewritten to /, which is exactly the behavior you're seeing.
The solution is to capture the portion of the path we care about, and then use that capture group in the rewrite-target annotation. For example:
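A minimal sketch of that pattern, assuming the community kubernetes/ingress-nginx controller; the host and the service-1 backend name are illustrative, not taken from the original question:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress
  annotations:
    # $2 refers to the second capture group of the path regex below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: myserver.com
    http:
      paths:
      # (/|$)(.*) captures everything after the prefix, so a request to
      # /services/service_1/static/css is rewritten to /static/css
      - path: /services/service_1(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-1   # hypothetical backend Service name
            port:
              number: 80
```

With a static rewrite-target of /, every matched path would collapse to /; the capture group preserves the suffix the backend app needs.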
QUESTION
I have a haproxy as a load balancer running in k8s with a route to a service with two running pods. I want the server naming inside haproxy to correspond to the pod names behind my service. If I'm not mistaken, the following configmap / annotation value should do exactly this: https://haproxy-ingress.github.io/docs/configuration/keys/#backend-server-naming. But for me it doesn't, and for the life of me I can't find out why. The relevant parts of my configuration look like this:
controller deployment:
...ANSWER
Answered 2021-Dec-06 at 17:07
Here are a few hints to help you solve your issue.
Be sure you know the exact version of your haproxy-ingress controller: Looking at the manifest files you shared, it's hard to tell which exact version of the haproxy-ingress-controller container you are running in your cluster (btw, it's against best practices in production envs to leave it without a tag, read more on it here). For the backend-server-naming configuration key to work, at minimum v0.8.1 is required (it was backported).
Before you move on in troubleshooting, please double-check your ingress deployment for compatibility.
My observations of backend-server-naming=pod behavior
Configuration dynamic updates: If I understand the official documentation on this configuration key correctly, setting the server naming of backends to pod names (backend-server-naming=pod) instead of sequences does support a dynamic reload of the haproxy configuration, but does NOT currently support dynamic updates of the server names in the backend section of haproxy's run-time configuration (this was explained by the haproxy-ingress author here, and here).
It means you need to restart your haproxy-ingress controller instance first to see changes in backend server names reflected in the haproxy configuration, e.g. in situations when new Pod replicas appear or a POD_IP changes due to a Pod crash (expect additions/updates of server entries based on sequence naming).
Ingress Class: I have successfully tested (see test below) the backend-server-naming=pod setting on v0.13.4 with a classified Ingress type, based on the ingressClassName field rather than the deprecated kubernetes.io/ingress.class annotation, as in your case.
I'm not claiming your configuration won't work (it should too), but it's important to know that dynamic updates to configuration (this includes changes to backend configs) won't happen on an unclassified Ingress resource or a wrongly classified one, unless you're really running v0.12 or a newer version.
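The combination described above can be sketched as follows; the ConfigMap name/namespace and the backend host/service names are illustrative and must match your own haproxy-ingress deployment:

```yaml
# ConfigMap consumed by the haproxy-ingress controller
# (name/namespace must match the controller's --configmap flag)
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
data:
  backend-server-naming: "pod"   # name backend servers after Pods, not sequences
---
# A classified Ingress, using ingressClassName instead of the
# deprecated kubernetes.io/ingress.class annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
spec:
  ingressClassName: haproxy
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echoserver
            port:
              number: 8080
```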
QUESTION
I have a service which is running in kubernetes and has a path prefix /api. Now I want to use Ingress to access it through the host address example.com/service1/, because I have multiple services. But the problem is that ingress forwards all requests from path service1/ with that service1/ prefix kept, whereas I want it to forward from example.com/service1/ to my service with just / (so if I request example.com/service1/api, it is forwarded to the service as just /api). Can I achieve something like this? I'm writing the Ingress configuration in the helm chart of the service. The Ingress configuration in the service chart file values.yaml looks like this:
ANSWER
Answered 2021-Dec-01 at 08:59
This is a community wiki answer posted for better visibility. Feel free to expand it.
Based on the solution provided in the comments (method 1, example 2 in the Medium post), a possible values.yaml file for the Ingress might look like the one below.
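A hedged sketch of such a values.yaml fragment, assuming the kubernetes/ingress-nginx controller and the common helm-create chart scaffold (the key layout depends on the chart's ingress template):

```yaml
# Hypothetical values.yaml fragment: strip the /service1 prefix
# before forwarding to the backend
ingress:
  enabled: true
  className: nginx
  annotations:
    # $2 is the second capture group of the path regex below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  hosts:
    - host: example.com
      paths:
        # a request to /service1/api reaches the service as /api
        - path: /service1(/|$)(.*)
          pathType: ImplementationSpecific
```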
QUESTION
I'm struggling to expose a service in an AWS cluster to the outside and access it via a browser. Since my previous question hasn't drawn any answers, I decided to simplify the issue in several respects.
First, I created a deployment which should work without any configuration. Based on this article, I did:
kubectl create namespace tests
created file probe-service.yaml based on paulbouwer/hello-kubernetes:1.8 and deployed it with kubectl create -f probe-service.yaml -n tests:
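The original manifest was stripped from this page; the following is a minimal sketch of what a probe-service.yaml based on paulbouwer/hello-kubernetes:1.8 typically looks like (names are illustrative):

```yaml
# probe-service.yaml - minimal Deployment + ClusterIP Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080   # the image listens on 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  selector:
    app: hello-kubernetes-first
  ports:
  - port: 80        # port the Service exposes inside the cluster
    targetPort: 8080
```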
ANSWER
Answered 2021-Nov-16 at 13:46
Well, I haven't figured this out for ArgoCD yet (edit: figured it out, but the solution is ArgoCD-specific), but for this test service it seems that path resolving is the source of the issue. It may not be the only source (to be retested on the test2 subdomain), but when I created a new subdomain in the hosted zone (test3, not used anywhere before), pointed it via an A entry to the load balancer (as an "alias" in the AWS console), and then added to the ingress a new rule with the / path, like this:
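A hedged reconstruction of such a rule (the stripped original is not recoverable; host and service names are illustrative):

```yaml
# New rule: the fresh test3 subdomain, pointed at the load balancer
# via an A/alias record, routes / to the test service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
spec:
  rules:
  - host: test3.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
```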
QUESTION
I'm trying to set up TLS for a service that's available outside a Kubernetes cluster (AWS EKS). With cert-manager, I've successfully issued a certificate and configured ingress, but I'm still getting the error NET::ERR_CERT_AUTHORITY_INVALID. Here's what I have:
namespace tests with hello-kubernetes in it (both deployment and service are named hello-kubernetes-first; the service is a ClusterIP with port 80 and targetPort 8080; the deployment is based on paulbouwer/hello-kubernetes:1.8; see details in my previous question)
DNS and ingress configured to expose the service:
...
ANSWER
Answered 2021-Nov-15 at 21:31
Your ClusterIssuer refers to the LetsEncrypt staging issuer. Remove that setting; the default should use their production setup, as pointed out in the comments: https://acme-v02.api.letsencrypt.org/directory
Deleting the previously generated secrets, or switching to new secrets, should ensure your certificates are re-generated using the right issuer.
The staging issuer can be useful for testing the LetsEncrypt integration, but it shouldn't be used otherwise.
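For illustration, a minimal cert-manager ClusterIssuer pointed at the production ACME endpoint; the issuer name, contact email, secret name, and solver class are all placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Production endpoint. The staging endpoint
    # (https://acme-staging-v02.api.letsencrypt.org/directory) issues
    # certificates from an untrusted CA, which causes
    # NET::ERR_CERT_AUTHORITY_INVALID in browsers.
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key      # Secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx
```

After switching issuers, delete the old certificate Secret so cert-manager re-issues against the production CA.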
QUESTION
I have set up a K8s cluster on AWS. I followed the Nginx Ingress setup using the link - Ingress-Setup. I then tried to deploy a coffee application using the link - demo-application, and when accessing the coffee application I am getting a 404 error. I am getting a 200 OK response when running curl http://localhost:8080/coffee from within the pod. I am not sure how to troubleshoot this issue.
ANSWER
Answered 2021-Oct-21 at 15:11
Your application is listening on port 8080. In your Service file, you need to set the targetPort to 8080.
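A sketch of such a Service (the name and selector labels are illustrative, not taken from the demo application's actual manifests):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  selector:
    app: coffee
  ports:
  - port: 80         # port the Service exposes to the Ingress
    targetPort: 8080 # port the container actually listens on
```

If targetPort is omitted it defaults to the value of port (here 80), so traffic never reaches the app listening on 8080.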
QUESTION
I am a beginner with k8s, and I followed the official k8s docs to create a hello-world ingress, but I can't make it work. First I create a service, and just like the tutorial I get:
...ANSWER
Answered 2021-Oct-12 at 13:15
There seems to be a bug in the Ingress addon with Minikube 1.23.0, as documented here, which matches the issue you are seeing: ConfigMap issues prevent the IngressClass (usually "nginx" by default) from being generated, and ingress services won't work.
This issue was fixed in 1.23.1, so updating Minikube should fix your issue.
QUESTION
I've installed kong-ingress-controller using a yaml file on a 3-node bare-metal k8s cluster (you can see the file at the bottom of the question), and everything is up and running:
...ANSWER
Answered 2021-Sep-14 at 12:40
I had the same issue. After days of looking for a solution, I came across MetalLB via the nginx ingress installation docs for bare metal:
MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.
And from their documentation I got this:
Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. The implementations of network load balancers that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you're not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the "pending" state indefinitely when created.
I didn't finalize the installation, but I hope the explanation above answers your question about the pending status of the external IP.
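For reference, a minimal MetalLB layer-2 configuration in the CRD style used since v0.13; the address range is a placeholder that must be replaced with free IPs on your own LAN:

```yaml
# Pool of addresses MetalLB may assign to LoadBalancer Services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # placeholder: unused IPs on your network
---
# Announce the pool via layer-2 (ARP/NDP) on the local network
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```

With this in place, the kong-proxy Service's external IP should move from "pending" to an address from the pool.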
QUESTION
Recently I've been evaluating different API Gateway (API GW) options for an IoT-based project. The purpose was to find a good enough solution for performing Mutual TLS (mTLS) authentication between the devices and the API GW.
Most of the solutions I've tried out seem to perform mTLS during the TLS handshake, as nicely depicted here. This is what I understand to be the OSI Layer 4 (TCP/IP) authentication method.
However, the Kong API Gateway seems to do it at OSI Layer 7 (Application): there is no client auth during the TLS handshake phase; rather, the application layer validates the peer certificate. Hence it's able to send a response with a 401 status and some payload (which is not possible if the TLS handshake fails). Example:
...ANSWER
Answered 2021-Aug-10 at 07:41
Most of the solutions I've tried out seem to perform mTLS during the TLS handshake as nicely depicted here. So this is what I understand OSI Layer 4 (TCP/IP) authentication method.
Since TLS is above OSI layer 4, the authentication is also above layer 4. But OSI layers aside (they don't sufficiently match today's reality above layer 4 anyway), you are essentially asking at what stage the mutual authentication happens.
Mutual authentication in TLS happens in two stages: requesting the client's certificate, and validating that the certificate matches the requirements. Requesting the certificate is always done inside a TLS handshake, although it does not need to be the initial TLS handshake of the connection.
Validating the certificate can be done inside the TLS handshake, outside of it, or a combination of both. Typically it is checked inside the handshake that the certificate is issued by some trusted certificate authority, but further checks for a specific subject or the like may be application-specific and will thus be done after the TLS handshake inside the application. It might also be that the full validation is done entirely inside or entirely outside the TLS handshake.
Accepting any certificate inside the TLS handshake and validating it only outside the handshake has the advantage that one can return a useful error message to the client inside the established TLS connection. Validation errors inside the TLS handshake instead result in cryptic errors like handshake alerts or simply a closed connection, which are not that helpful for debugging the problem.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install kubernetes-ingress
Configure load balancing for a simple web application: use the Ingress resource (see the Cafe example), or the VirtualServer resource (see the Basic configuration example).
See additional configuration examples.
Learn more about all available configuration and customization in the docs.
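The second option mentioned above, the VirtualServer custom resource of the NGINX Ingress Controller, can be sketched roughly along the lines of the Cafe example; the host and service names here are illustrative:

```yaml
# VirtualServer: NGINX Ingress Controller's custom resource,
# an alternative to a standard Ingress for the same routing
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea      # forward /tea to the tea upstream
  - path: /coffee
    action:
      pass: coffee   # forward /coffee to the coffee upstream
```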