kubernetes-ingress | NGINX and NGINX Plus Ingress Controllers for Kubernetes | HTTP library

by nginxinc | Go | Version: v3.1.1 | License: Apache-2.0

kandi X-RAY | kubernetes-ingress Summary

kubernetes-ingress is a Go library typically used in Networking, HTTP, and NGINX applications. It has no reported bugs or vulnerabilities, carries a permissive license, and has medium support. You can download it from GitHub.

The Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for applications running on Kubernetes, represented by one or more Services. Such a load balancer is necessary to deliver those applications to clients outside of the Kubernetes cluster.

            kandi-support Support

              kubernetes-ingress has a medium active ecosystem.
              It has 4271 star(s) with 1884 fork(s). There are 109 watchers for this library.
There was 1 major release in the last 12 months.
There are 49 open issues and 789 have been closed. On average, issues are closed in 49 days. There are 27 open pull requests and 0 closed pull requests.
It has a neutral sentiment in the developer community.
The latest version of kubernetes-ingress is v3.1.1.

            kandi-Quality Quality

              kubernetes-ingress has 0 bugs and 0 code smells.

            kandi-Security Security

              kubernetes-ingress has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              kubernetes-ingress code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              kubernetes-ingress is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              kubernetes-ingress releases are available to install and integrate.
              Installation instructions are available. Examples and code snippets are not available.
              It has 81362 lines of code, 2774 functions and 350 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi's functional review currently covers the most popular Java, JavaScript, and Python libraries, so no verified functions are available for kubernetes-ingress at this moment.

            kubernetes-ingress Key Features

            No Key Features are available at this moment for kubernetes-ingress.

            kubernetes-ingress Examples and Code Snippets

            No Code Snippets are available at this moment for kubernetes-ingress.

            Community Discussions

            QUESTION

            Ingress not working from official kubernetes tutorial
            Asked 2022-Mar-11 at 08:43

I am following this official k8s ingress tutorial. However, I am not able to curl the minikube IP address and access the "web" application.

            ...

            ANSWER

            Answered 2021-Dec-15 at 15:57

You need to set up your /etc/hosts. I guess the ingress controller waits for requests with a host defined in order to redirect them, but it's pretty strange that it didn't even respond to the HTTP request with an error.

Could you show what these commands return?

            Source https://stackoverflow.com/questions/70366074

            QUESTION

            Kubernetes Multiple Path Rewrites
            Asked 2022-Jan-21 at 20:34

Alright, various permutations of this question have been asked and I feel terrible asking; I'm throwing in the towel and was curious if anyone could point me in the right direction (or point out where I'm wrong). I went ahead and tried a number of examples from the docs, but to no avail (see below).

            I'm trying to route traffic to the appropriate location under Kubernetes using an Ingress controller.

            Server Setup

            I have a server, myserver.com and three services running at:

            myserver.com/services/

            myserver.com/services/service_1/

            myserver.com/services/service_2/

            Note that I'm not doing anything (purposefully) to myserver.com/.

At each of the three locations, there's a web app running. For example, myserver.com/services/service_2 needs to load CSS files at myserver.com/services/service_2/static/css, etc...

            Kubernetes Ingress

            To manage the networking, I'm using a Kubernetes Ingress controller, which I've defined below. The CORS annotations aren't super relevant, but I've included them to clear up any confusion.

            ...

            ANSWER

            Answered 2022-Jan-21 at 20:34

            The problem you reported in your most recent comment is resolved by looking at the rewrite example in the nginx-ingress documentation.

            The rewrite-target annotation configures the ingress such that matching paths will be rewritten to that value. Since you've specified a static value of /, anything matching your ingress rules will get rewritten to /, which is exactly the behavior you're seeing.

            The solution is to capture the portion of the path we care about, and then use that in the rewrite-target annotation. For example:
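A minimal sketch of that approach, assuming the community ingress-nginx controller's rewrite-target/use-regex annotations and hypothetical backend Service names; adapt the paths and backends to the real setup:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress                       # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 captures everything after the matched prefix and becomes the rewritten path
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: myserver.com
    http:
      paths:
      - path: /services/service_1(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-1                    # hypothetical Service name
            port:
              number: 80
      - path: /services/service_2(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-2                    # hypothetical Service name
            port:
              number: 80

With this layout, a request for /services/service_2/static/css is forwarded to service-2 as /static/css.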

            Source https://stackoverflow.com/questions/70795048

            QUESTION

            haproxy use pod name as server name
            Asked 2021-Dec-09 at 09:43

I have haproxy as a load balancer running in k8s, with a route to a service with two running pods. I want the server naming inside haproxy to correspond to the pod names behind my service. If I'm not mistaken, the following configmap / annotation value should do exactly this: https://haproxy-ingress.github.io/docs/configuration/keys/#backend-server-naming. But for me it doesn't, and for the life of me I can't figure out why. The relevant parts of my configuration look like this:

            controller deployment:

            ...

            ANSWER

            Answered 2021-Dec-06 at 17:07

Here are a few hints to help you solve your issue.

            Be sure you know the exact version of your haproxy-ingress controller:

Looking at the manifest files you shared, it's hard to tell which exact version of the haproxy-ingress-controller container you are running in your cluster (by the way, it's against best practice in production environments to leave it without a tag; read more on it here).

For the backend-server-naming configuration key to work, at least v0.8.1 is required (it was backported).

Before you move on with troubleshooting, please first double-check your ingress deployment for compatibility.

My observations of "backend-server-naming=pod" behavior

Configuration dynamic updates:

If I understand the official documentation on this configuration key correctly, setting backend server names to pod names (backend-server-naming=pod) instead of sequences does support a dynamic reload of the haproxy configuration, but does NOT currently support dynamic updates of server names in the backend section of the haproxy run-time configuration (this was explained by the haproxy-ingress author here and here).

It means you need to restart your haproxy-ingress controller instance to see changes to backend server names reflected in the haproxy configuration, e.g. when new Pod replicas appear or a Pod IP changes after a crash (otherwise expect additions/updates of server entries based on sequence naming).

            Ingress Class:

I have successfully tested (see the test below) the backend-server-naming=pod setting on v0.13.4 with a classified Ingress type, based on the ingressClassName field rather than the deprecated kubernetes.io/ingress.class annotation used in your case:

I'm not claiming your configuration won't work (it should too), but it's important to know that dynamic configuration updates (including changes to backend configs) won't happen on an unclassified or wrongly classified Ingress resource unless you're running v0.12 or a newer version.
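A minimal sketch of that kind of setup, with a classified Ingress and the backend-server-naming key; the ConfigMap name/namespace, host, and Service details here are hypothetical and depend on how haproxy-ingress was installed:

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress            # hypothetical; use the ConfigMap your controller watches
  namespace: ingress-controller
data:
  backend-server-naming: "pod"     # name backend servers after the Pods instead of sequences
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: haproxy        # classified Ingress instead of the deprecated annotation
  rules:
  - host: my-app.example.com       # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app           # hypothetical Service backed by the two Pods
            port:
              number: 80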

            Testing:

            Source https://stackoverflow.com/questions/70199796

            QUESTION

            Ingress path redirection, helm chart
            Asked 2021-Dec-01 at 08:59

I have a service running in Kubernetes with the path prefix /api. Because I have multiple services, I want to use Ingress to access it through the address example.com/service1/. The problem is that Ingress forwards requests under the service1/ path with that prefix still attached, but I want requests to example.com/service1/ to reach my service with just / (so a request to example.com/service1/api should reach the service as /api). Can I achieve something like this? I'm writing the Ingress configuration in the Helm chart of the service. The Ingress configuration in the chart's values.yaml looks like this:

            ...

            ANSWER

            Answered 2021-Dec-01 at 08:59

            This is a community wiki answer posted for better visibility. Feel free to expand it.

Based on the solution provided in the comments (method 1, example 2 in the Medium post), a possible values.yaml file for the Ingress might look like the one below.
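I don't have the exact Medium example at hand, so this is only a hedged sketch of such a values.yaml ingress block, assuming a chart layout like the one generated by helm create and the community ingress-nginx controller's rewrite annotations; the exact keys depend on the chart's ingress template:

ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2    # strip the /service1 prefix
  hosts:
    - host: example.com
      paths:
        - path: /service1(/|$)(.*)                     # example.com/service1/api -> /api
          pathType: ImplementationSpecific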

            Source https://stackoverflow.com/questions/69927448

            QUESTION

            How to expose a service to outside Kubernetes cluster via ingress?
            Asked 2021-Nov-27 at 09:36

I'm struggling to expose a service in an AWS cluster to the outside and access it via a browser. Since my previous question hasn't drawn any answers, I decided to simplify the issue in several aspects.

            First, I've created a deployment which should work without any configuration. Based on this article, I did

            1. kubectl create namespace tests

2. created the file probe-service.yaml based on paulbouwer/hello-kubernetes:1.8 and deployed it with kubectl create -f probe-service.yaml -n tests:

              ...

            ANSWER

            Answered 2021-Nov-16 at 13:46

Well, I haven't figured this out for ArgoCD yet (edit: figured it out, but the solution is ArgoCD-specific), but for this test service it seems that path resolution is the source of the issue. It may not be the only source (to be retested on the test2 subdomain), but the service became reachable when I created a new subdomain in the hosted zone (test3, not used anywhere before), pointed it to the load balancer via an A record (an "alias" in the AWS console), and then added a new rule with the / path to the ingress, like this:
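A hedged sketch of such a rule; the hostname here is a hypothetical placeholder for the test3 subdomain (the real hosted zone isn't shown), and the backend matches the hello-kubernetes service used in these questions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress    # hypothetical name
  namespace: tests
spec:
  ingressClassName: nginx
  rules:
  - host: test3.example.com         # placeholder; point this record at the load balancer
    http:
      paths:
      - path: /                     # the new root-path rule
        pathType: Prefix
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80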

            Source https://stackoverflow.com/questions/69888157

            QUESTION

            How to setup TLS correctly in Kubernetes via cert-manager?
            Asked 2021-Nov-16 at 16:54

I'm trying to set up TLS for a service that's available outside a Kubernetes cluster (AWS EKS). With cert-manager, I've successfully issued a certificate and configured ingress, but I'm still getting the error NET::ERR_CERT_AUTHORITY_INVALID. Here's what I have:

1. namespace tests with hello-kubernetes in it (both the deployment and the service have the name hello-kubernetes-first; the service is a ClusterIP with port 80 and targetPort 8080, the deployment is based on paulbouwer/hello-kubernetes:1.8, see details in my previous question)

            2. DNS and ingress configured to show the service:

              ...

            ANSWER

            Answered 2021-Nov-15 at 21:31

Your ClusterIssuer refers to the LetsEncrypt staging issuer. Remove that setting; the default should use their production setup, as pointed out in the comments: https://acme-v02.api.letsencrypt.org/directory

Deleting the previously generated secrets, or switching to new secrets, should ensure your certificates are re-generated using the right issuer.

The staging issuer can be useful for testing the LetsEncrypt integration; it shouldn't be used otherwise.
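A minimal sketch of a production ClusterIssuer pointing at the ACME URL above; the resource name, email, secret name, and solver class are hypothetical:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # production ACME endpoint instead of the staging one
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com                  # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key      # hypothetical ACME account key Secret
    solvers:
    - http01:
        ingress:
          class: nginx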

            Source https://stackoverflow.com/questions/69964611

            QUESTION

            Nginx Ingress returning 404 when accessing the services
            Asked 2021-Oct-21 at 15:11

I have set up a K8s cluster on AWS. I followed the Nginx Ingress setup using the link - Ingress-Setup. I then tried to deploy a coffee application using the link - demo-application, but when accessing the coffee application I get a 404 error. I get a 200 OK response when running curl http://localhost:8080/coffee from within the pod. I am not sure how to troubleshoot this issue.

            ...

            ANSWER

            Answered 2021-Oct-21 at 15:11

Your application is listening on port 8080, so in your Service file you need to set targetPort to 8080.
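A minimal sketch of such a Service; the names and labels are placeholders that should match the coffee Deployment from the demo manifests:

apiVersion: v1
kind: Service
metadata:
  name: coffee-svc          # placeholder; use the name referenced by the Ingress
spec:
  selector:
    app: coffee             # placeholder; must match the Deployment's Pod labels
  ports:
  - protocol: TCP
    port: 80                # port exposed by the Service
    targetPort: 8080        # port the application container actually listens on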

            Source https://stackoverflow.com/questions/69126927

            QUESTION

            Kubernetes is not assigning address to ingress
            Asked 2021-Oct-12 at 13:15

I am a beginner with k8s, and I followed the official k8s docs to create a hello-world ingress, but I can't make it work. First I create a service and, just like in the tutorial, I get:

            ...

            ANSWER

            Answered 2021-Oct-12 at 13:15

It seems there is a bug in the ingress addon in Minikube 1.23.0, as documented here, which matches the issue you are seeing: ConfigMap issues prevent the IngressClass (usually "nginx" by default) from being generated, and ingress services won't work.

            This issue was fixed in 1.23.1, so updating Minikube should fix your issue.

            Source https://stackoverflow.com/questions/69540215

            QUESTION

            kong-ingress-controller's EXTERNAL_IP is pending
            Asked 2021-Sep-17 at 08:00

I've installed kong-ingress-controller using a YAML file on a 3-node bare-metal k8s cluster (you can see the file at the bottom of the question) and everything is up and running:

            ...

            ANSWER

            Answered 2021-Sep-14 at 12:40

I had the same issue. After days of looking for a solution, I came across MetalLB via the nginx ingress installation instructions for bare metal:

            MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster

and from their documentation I got this:

            Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. The implementations of network load balancers that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.

I didn't finish the installation, but I hope the explanation above answers your question about the pending status of the external IP.
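For reference, a hedged sketch of a MetalLB layer 2 configuration in the ConfigMap format used by MetalLB v0.12 and earlier (newer releases use IPAddressPool/L2Advertisement custom resources instead); the address range is a placeholder for free IPs on the nodes' network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250    # placeholder range handed out to LoadBalancer Services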

            Source https://stackoverflow.com/questions/69158477

            QUESTION

            Is Mutual TLS supposed to be performed during TLS handshake only?
            Asked 2021-Sep-06 at 09:21

Recently I've been evaluating different API Gateway (API GW) options for an IoT-based project. The purpose of this was to find a good enough solution for performing Mutual TLS (mTLS) authentication between the devices and the API GW.

Most of the solutions I've tried out seem to perform mTLS during the TLS handshake, as nicely depicted here. This is what I understand to be an OSI Layer 4 (TCP/IP) authentication method.

However, the Kong API Gateway seems to do it at OSI Layer 7 (Application): basically, there is no client auth during the TLS handshake phase; instead, the application layer validates the peer certificate. Hence it's able to send a response with a 401 status and some payload (which is not possible if the TLS handshake fails). Example

            ...

            ANSWER

            Answered 2021-Aug-10 at 07:41

Most of the solutions I've tried out seem to perform mTLS during the TLS handshake, as nicely depicted here. This is what I understand to be an OSI Layer 4 (TCP/IP) authentication method.

Since TLS is above OSI layer 4, the authentication is also above layer 4. But OSI layers aside (they don't sufficiently match today's reality above layer 4 anyway), you are essentially asking at what stage the mutual authentication happens.

            Mutual authentication in TLS happens in two stages: requesting the clients certificate and validating that the certificate matches the requirements. Requesting the certificate is always done inside the TLS handshake, although it does not need to be the initial TLS handshake of the connection.

            Validating the certificate can be done inside the TLS handshake, outside of it or a combination of both. Typically it is checked inside the handshake that the certificate is issued by some trusted certificate authority, but further checks for a specific subject or so might be application specific and will thus be done after the TLS handshake inside the application. But it might also be that the full validation is done inside or outside the TLS handshake.

Accepting any certificate inside the TLS handshake and validating it only outside the handshake has the advantage that one can return a useful error message to the client inside the established TLS connection. Validation errors inside the TLS handshake instead result in cryptic failures like handshake error alerts or simply closed connections, which are not that helpful for debugging the problem.

            Source https://stackoverflow.com/questions/68722526

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install kubernetes-ingress

1. Install the NGINX Ingress Controller using the Kubernetes manifests or the Helm chart.
2. Configure load balancing for a simple web application, using either the Ingress resource (see the Cafe example) or the VirtualServer resource (see the Basic configuration example); a minimal Ingress sketch follows after this list.
3. See additional configuration examples.
4. Learn more about all available configuration and customization options in the docs.
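A minimal Ingress sketch in the spirit of the Cafe example, assuming the controller was installed with ingress class nginx and that tea-svc and coffee-svc Services exist; see the example manifests in the repository for the authoritative version:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80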

            Support

We’d like to hear your feedback! If you have any suggestions or experience issues with our Ingress controller, please create an issue or send a pull request on GitHub. You can contact us directly via kubernetes@nginx.com.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/nginxinc/kubernetes-ingress.git

          • CLI

            gh repo clone nginxinc/kubernetes-ingress

          • sshUrl

            git@github.com:nginxinc/kubernetes-ingress.git

