balancers | Implementation of HTTP load-balancers | Load Balancing library

by olivere | Go | Version: Current | License: MIT

kandi X-RAY | balancers Summary

balancers is a Go library typically used in Networking and Load Balancing applications. It has no reported bugs or vulnerabilities, a permissive license, and low support activity. You can download it from GitHub.

Balancers provides implementations of HTTP load-balancers.

Support

balancers has a low active ecosystem.
It has 43 stars, 10 forks, and 5 watchers.
It has had no major release in the last 6 months.
There are 2 open issues and 2 closed issues; on average, issues are closed in 317 days. There is 1 open pull request and 0 closed pull requests.
It has a neutral sentiment in the developer community.
The latest version of balancers is current.

Quality

              balancers has no bugs reported.

Security

balancers has no reported vulnerabilities, and neither do its dependent libraries.

License

balancers is licensed under the MIT License, which is a permissive license.
Permissive licenses have the fewest restrictions, and you can use them in most projects.

Reuse

balancers releases are not available; you will need to build from source and install it.
Installation instructions are not available. Examples and code snippets are available.

            Top functions reviewed by kandi - BETA

kandi has reviewed balancers and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality balancers implements and to help you decide whether it suits your requirements.
• RoundTrip implements the http.RoundTripper interface.
• NewBalancerFromURL creates a new balancer from the given URLs.
• modifyRequest modifies the request's URL.
• NewHttpConnection creates a new HTTP connection.
• NewBalancer creates a new balancer.
• cloneRequest returns a shallow copy of the request.
• NewClient returns a new HTTP client for the given balancer.
            Get all kandi verified functions for this library.
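
Judging by the function names above (NewBalancerFromURL, NewClient, RoundTrip), the library follows the familiar pattern of wrapping an http.Client with a RoundTripper that rewrites each request to point at the next backend. The sketch below is not the library's actual API; it is a minimal, self-contained illustration of that round-robin RoundTripper pattern, with all names (rrTransport, the backend URLs) invented for the example:

package main

import (
	"fmt"
	"net/http"
	"net/url"
	"sync"
)

// rrTransport is a hypothetical round-robin http.RoundTripper: each request
// is redirected to the next backend base URL before being sent.
type rrTransport struct {
	mu       sync.Mutex
	backends []*url.URL
	next     int
	base     http.RoundTripper
}

func (t *rrTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	t.mu.Lock()
	target := t.backends[t.next%len(t.backends)]
	t.next++
	t.mu.Unlock()

	// Clone the request (RoundTrippers must not modify the original) and
	// swap in the chosen backend's scheme and host.
	clone := req.Clone(req.Context())
	clone.URL.Scheme = target.Scheme
	clone.URL.Host = target.Host
	clone.Host = target.Host
	return t.base.RoundTrip(clone)
}

func main() {
	b1, _ := url.Parse("https://server1.example.com")
	b2, _ := url.Parse("https://server2.example.com")

	client := &http.Client{Transport: &rrTransport{
		backends: []*url.URL{b1, b2},
		base:     http.DefaultTransport,
	}}

	// Requests alternate between server1 and server2; the request URL's
	// host is replaced by the transport before dialing.
	resp, err := client.Get("https://placeholder/path?foo=bar")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}

In the real library, NewBalancerFromURL and NewClient presumably wire up the equivalent pieces for you; check the repository's README for the exact API.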

            balancers Key Features

            No Key Features are available at this moment for balancers.

            balancers Examples and Code Snippets

            No Code Snippets are available at this moment for balancers.

            Community Discussions

            QUESTION

            ECS - communication between tasks
            Asked 2021-Jun-15 at 09:03

I am trying to deploy 2 containers as 2 different tasks (1 container per task); one is my frontend and the other is my backend server. I am trying to figure out how to configure the communication between them.

I saw that a load balancer in a service is a good option. However, should I configure a load balancer for my frontend server and another one for my backend? Meaning, each time I have public-facing services and private services, do I need 2 load balancers?

I would like to expose only my front end to the public internet; my backend will remain private (although I make API requests to the outside world - I probably need to configure an outbound route too?).

            I would highly appreciate any information.

            ...

            ANSWER

            Answered 2021-Jun-15 at 09:03

No, you don't need a private LB for that. It is an option you can use, but ECS has since introduced the concept of Service Discovery for back-end services. The idea is that your front end is exposed to your users via a standard LB (e.g. an ALB), while services that are called by the front end and run behind the scenes can be addressed using this service discovery mechanism (based on Route 53 / Cloud Map).

You can see an example of this concept here. This CFN template gives you the details of how you can build this layout.
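
For context on the Service Discovery approach: the back-end task gets a private DNS name in a Cloud Map namespace, and the front end simply calls that name over the VPC network, with no second load balancer involved. A minimal Go sketch of the front-end side, where the hostname backend.local and port 8080 are made-up placeholders for whatever the service registry is configured with:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// "backend.local:8080" stands in for the private DNS name that ECS
	// Service Discovery (Cloud Map) registers for the backend service.
	client := &http.Client{Timeout: 5 * time.Second}

	resp, err := client.Get("http://backend.local:8080/api/health")
	if err != nil {
		fmt.Println("backend not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("backend answered %s: %s\n", resp.Status, body)
}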

            Source https://stackoverflow.com/questions/67975605

            QUESTION

            AWS Load Balancer Controller successfully creates ALB when Ingress is deployed, but unable to get DNS Name in CDK code
            Asked 2021-Jun-13 at 20:44

            I originally posted this question as an issue on the GitHub project for the AWS Load Balancer Controller here: https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2069.

I'm seeing some odd behavior that I can't trace or explain when trying to get the loadBalancerDnsName from an ALB created by the controller. I'm using v2.2.0 of the AWS Load Balancer Controller in a CDK project. The Ingress that I deploy triggers the provisioning of an ALB, and that ALB can connect to my K8s workloads running in EKS.

            Here's my problem: I'm trying to automate the creation of a Route53 A Record that points to the loadBalancerDnsName of the load balancer, but the loadBalancerDnsName that I get in my CDK script is not the same as the loadBalancerDnsName that shows up in the AWS console once my stack has finished deploying. The value in the console is correct and I can get a response from that URL. My CDK script outputs the value of the DnsName as a CfnOutput value, but that URL does not point to anything.

In CDK, I have tried to use KubernetesObjectValue to get the DNS name from the load balancer. This isn't working (see this related issue: https://github.com/aws/aws-cdk/issues/14933), so I'm trying to look up the Load Balancer with CDK's .fromLookup and a tag that I added through my Ingress annotation:

            ...

            ANSWER

            Answered 2021-Jun-13 at 20:23

            I think that the answer is to use external-dns.

            ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.

            Source https://stackoverflow.com/questions/67955013

            QUESTION

            AWS - Private VPC Multiuser access to specific servers
            Asked 2021-Jun-08 at 05:19

I need some suggestions on what would be best for practicality, security, and maintainability.

            The scenario is:

• We have a private VPC with some servers.
• Some users can access server A, and A only.
• Some users can access A and B.
• Others can access only B, and so on.

They need to access these servers from home and from the office.

The current idea is to have a multiuser OpenVPN server, with IPTables blocking access to the servers that a given user can't access.

Is there another option using AWS tools (VPCs, Security Groups, ACLs, Load Balancers, or others)?

Or are there other solutions better than this one?

            Draw of current arch:

• One boundary server that bridges from the open world to the private VPC (with OpenVPN and IPTables)
            • 5 servers inside the private VPC
            • 10 Users with different levels of access

            Thanks

            ...

            ANSWER

            Answered 2021-Jun-08 at 05:19

            Use AWS IAM to manage user access and permissions.

            For your scenario, you can create 3 groups: Server A, Server B, Server AB.

Then attach an IAM policy to each group. The policies will restrict access to specific EC2 instances only.

            Sample Policy that may work for you (via https://aws.amazon.com/premiumsupport/knowledge-center/restrict-ec2-iam/ )

            Source https://stackoverflow.com/questions/67826847

            QUESTION

            UI 404 - Vault Kubernetes
            Asked 2021-Jun-01 at 10:04

I'm testing out Vault in Kubernetes and am installing it via the Helm chart. I've created an overrides file; it's an amalgamation of a few different pages from the official docs.

The pods seem to come up OK and reach Ready status, and I can unseal Vault manually using 3 of the generated keys. I'm getting a 404 when browsing the UI, though; the UI is exposed externally on a Load Balancer in AKS. Here's my config:

            ...

            ANSWER

            Answered 2021-Jun-01 at 10:04

So, I don't think the documentation around deploying to Kubernetes from Helm is really that clear, but I was basically missing a ui = true flag from the HCL config stanza. Note that this is in addition to the value passed to the Helm chart:

            Source https://stackoverflow.com/questions/67619401

            QUESTION

Does the AWS classic load balancer keep the SNI after TLS termination?
            Asked 2021-May-31 at 10:05

I have an AWS classic load balancer. Here are my listeners:

The AWS classic load balancer does TLS termination and redirects the traffic to port 30925 of my nodes.
The process listening on port 30925 is an Istio gateway, which then routes traffic based on the SNI of the request.

However, the AWS classic load balancer doesn't seem to keep the SNI of the request after TLS termination.

Is there any documentation regarding the behavior of the load balancer in that situation?
I found a couple of links talking about SNI (here, for example), but they only talk about the load balancer itself handling routing based on the SNI.

            ...

            ANSWER

            Answered 2021-May-31 at 10:05

            Based on the comments.

If you terminate SSL on the load balancer (LB), SSL-related information is not carried over to your targets. To ensure full SSL forwarding to your targets, you have to use a TCP listener. This way your targets will be responsible for handling SSL, and will therefore be able to process it however they need.
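
To illustrate why the listener type matters: the SNI value lives in the TLS ClientHello, so only the component that terminates TLS ever sees it. With a TCP (passthrough) listener the target terminates TLS itself and can read the server name, for example via crypto/tls in Go. This is a generic sketch, not tied to the asker's Istio gateway setup, and the certificate paths are placeholders:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	cfg := &tls.Config{
		// GetConfigForClient runs during the TLS handshake, where the
		// ClientHello (and therefore the SNI server name) is available.
		GetConfigForClient: func(hello *tls.ClientHelloInfo) (*tls.Config, error) {
			fmt.Println("SNI received:", hello.ServerName)
			return nil, nil // nil keeps the server's default config
		},
	}

	srv := &http.Server{
		Addr:      ":8443",
		TLSConfig: cfg,
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// After the handshake, the negotiated server name is also
			// exposed on the connection state.
			fmt.Fprintf(w, "you asked for %q\n", r.TLS.ServerName)
		}),
	}

	// cert.pem and key.pem are placeholder paths to a certificate/key pair.
	if err := srv.ListenAndServeTLS("cert.pem", "key.pem"); err != nil {
		fmt.Println("server stopped:", err)
	}
}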

            Source https://stackoverflow.com/questions/67768997

            QUESTION

            Generating SSL Certs for Customer Domains and integrating with Python Flask
            Asked 2021-May-11 at 13:02

So, here is the problem I ran into. I am trying to build a very small-scale MVP app that I will be releasing soon. I have been able to figure out everything from deploying the Flask application with Dokku (I'll upgrade to something better later), and I have gotten most things working in the app, including S3 uploading, Stripe integration, etc. Here is the one thing I am stuck on: how do I generate SSL certs on the fly for customers and then link everything back to the Python app? Here are my thoughts:

I can use a simple script that connects to the Let's Encrypt API to generate and request certs once domains are pointed to my server(s). The problem I am running into is that once the domain is pointed, how do I know? Dokku doesn't connect all incoming requests to my container, and therefore Flask wouldn't be able to detect it unless I manually connect it with the dokku domains:add command.

Is there a better way to go about this? I know of SSL for SaaS by Cloudflare, but it seems to only be for their Enterprise customers, and I need a robust solution like this that I don't mind building out; I just need a few pointers (unless there is already a free solution out there - no need to reinvent the wheel, eh?). Another thing: in the future I plan to have my database running separately and load balancers pointing to multiple different instances of my app (this won't be a major issue as the DB is still central, but I'm just worried about the IP portion of it). To recap, though:

            Client Domain (example.io) -> dns1.example.com -> Lets Encrypt SSL Cert -> Dokku Container -> My App

            Please let me know if I need to re-explain anything, thank you!

            ...

            ANSWER

            Answered 2021-May-06 at 15:47

Your solution is a wildcard certificate, or app prefixing.

So I'm not sure why you need a cert per customer, but let's say that, for whatever reason, customer1.myapp.com routes to the customer1 backend.

Let's Encrypt lets you obtain a certificate for *.myapp.com, and therefore you can use a subdomain for each customer.

            The alternative is a customer prefix.

Say your app URL looks like www.myapp.com/api/v1/somecommand.

You could use www.myapp.com/api/v1/customerID/somecommand, then have your load balancer route based on the prefix and use a rewrite rule to strip the customerID, restoring the original URL (a sketch of this rewrite follows below).

            This is more complicated, and it is load balancer dependent but so is the first solution.

            All this being said, both solutions would most likely require a separate instance of your application per customer, which is a heavy solution, but fine if that's what you want and are using lightweight containers or deploying multiple instances per server.

            Anyway, a lot more information would be needed to give a solid solution.
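
As a concrete illustration of the prefix approach above, here is a minimal Go sketch of a tiny reverse proxy that routes on a /api/v1/{customerID}/ prefix and strips the customer ID before forwarding. The backend address, header name, and path layout are assumptions made up for the example, not part of the answer's setup:

package main

import (
	"fmt"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	// Placeholder backend; in the answer's setup this rewrite would live
	// in the load balancer rather than in a Go process.
	backend, _ := url.Parse("http://127.0.0.1:9000")
	proxy := httputil.NewSingleHostReverseProxy(backend)

	http.HandleFunc("/api/v1/", func(w http.ResponseWriter, r *http.Request) {
		// Expect /api/v1/{customerID}/rest-of-path
		parts := strings.SplitN(strings.TrimPrefix(r.URL.Path, "/api/v1/"), "/", 2)
		if len(parts) != 2 || parts[0] == "" {
			http.Error(w, "missing customer ID", http.StatusBadRequest)
			return
		}
		customerID, rest := parts[0], parts[1]

		// Record which customer this is, then strip the ID so the backend
		// sees the original URL shape (/api/v1/somecommand).
		r.Header.Set("X-Customer-ID", customerID)
		r.URL.Path = "/api/v1/" + rest

		proxy.ServeHTTP(w, r)
	})

	fmt.Println("listening on :8080")
	if err := http.ListenAndServe(":8080", nil); err != nil {
		fmt.Println(err)
	}
}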

            Source https://stackoverflow.com/questions/67352618

            QUESTION

            Unable to build Docker images through Jenkins installed on Kubernetes
            Asked 2021-May-03 at 05:16

            I used the following helm chart to install Jenkins

            https://artifacthub.io/packages/helm/jenkinsci/jenkins

The problem is it doesn't build Docker images, saying there's no docker. Docker was installed on the host with sudo apt install docker-ce docker-ce-cli containerd.io

            ...

            ANSWER

            Answered 2021-Apr-08 at 20:25

You are running Jenkins itself as a container. Therefore the docker command-line application must be present in the container, not on the host.

Easiest solution: use a Jenkins Docker image that already contains the Docker CLI, for example https://hub.docker.com/r/trion/jenkins-docker-client

            Source https://stackoverflow.com/questions/67011315

            QUESTION

            How to add Cloud CDN to GCP VM? Always no load balancer available
            Asked 2021-Apr-30 at 19:39

            I have a running Web server on Google Cloud. It's a Debian VM serving a few sites with low-ish traffic, but I don't like Cloudflare. So, Cloud CDN it is.

            I created a load balancer with static IP.

I followed all the steps from the guides I've found. But when it comes time to Add origin to Cloud CDN, no load balancer is available because it's "unhealthy", as seen by hovering over the yellow triangle on the LB status page: "1 backend service is unhealthy".

            At this point, the only option is to choose Create a Load Balancer.

I've created several load balancers with different attributes, thinking that might be the issue, but no luck. They all get the "1 backend service is unhealthy" tag and thus are unavailable.

            ---Edit below---

During LB creation, I don't see anything that would make the LB aware of the VM, except for the certificate step (see below). Nowhere does it ask for any field that would point to the VM.

            I created another LB just now, and here are those settings. It finishes, then it's marked unhealthy.

            Type HTTP(S) Load Balancing

            Internet facing or internal only? From Internet to my VMs

            (my VM is not listed in backend services, so I create one... is this the problem?)

            Create backend service

• Backend type: Instance group
            • Port numbers: 80,443
            • Enable Cloud CDN: checked
            • Health check: create new: https, check /

            Simple host and path rule: checked

            New Frontend IP and port

            • Protocol: HTTPS
            • IP: v4, static reserved and issued
            • Port: 443
            • Certificate: Create New: Create Google-managed certificate, mydomain.com and www.mydomain.com
            ...

            ANSWER

            Answered 2021-Apr-30 at 09:03

A load balancer's unhealthy state could mean that your LB's health check probe is unable to reach your backend service (your Debian VM in this case).

If your backend service looks good now, I think there is a problem with your firewall configuration.

Check whether your firewall rules allow the health check probe's IP address range.

Refer to the document below for more detailed information.

            Required firewall rule
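
For completeness, whatever serves the health check path on the VM just needs to answer quickly with a 2xx on the configured path ("/" over HTTPS in the setup above), and the firewall must allow Google's health check source ranges (130.211.0.0/22 and 35.191.0.0/16) to reach that port. A minimal, purely illustrative Go sketch of such an endpoint; the certificate paths are placeholders:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The LB health check in the question is configured as HTTPS on "/".
	// Whatever serves that path must answer quickly with a 2xx status,
	// and the VM's firewall must allow Google's health-check ranges
	// (130.211.0.0/22 and 35.191.0.0/16) to reach this port.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})

	// cert.pem/key.pem are placeholders for the VM's TLS certificate.
	if err := http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil); err != nil {
		fmt.Println("server stopped:", err)
	}
}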

            Source https://stackoverflow.com/questions/67327149

            QUESTION

            Consul load balancing north south traffic
            Asked 2021-Apr-29 at 04:50

I am trying to run some of my microservices within a Consul service mesh. As per the Consul documentation, it is clear that Consul takes care of routing, load balancing, and service discovery. But the documentation also talks about 3rd-party load balancers like NGINX, HAProxy, and F5.

            https://learn.hashicorp.com/collections/consul/load-balancing

If Consul takes care of load balancing, then what is the purpose of these load balancers?

My assumptions:

1. These load balancers are meant to replace Consul's built-in load balancing technique, but the LB still uses Consul's service discovery data. (Why would anyone need this?)

2. Consul only provides load balancing for east-west traffic (within the service mesh). To load balance north-south traffic (internet traffic), we need external load balancers.

Please let me know which of my assumptions is correct.

            ...

            ANSWER

            Answered 2021-Apr-28 at 20:03

I checked with one of my colleagues (full disclosure: I work for F5), and he mentioned that while it is not a technical requirement to use external services for load balancing, a lot of organizations already have the infrastructure in place, along with the operational requirements, policies, and procedures that come with it.

For some examples of how Consul might work with edge services like the F5 BIG-IP, here are a couple of articles you might find interesting that can provide context for your question.

            Source https://stackoverflow.com/questions/67263502

            QUESTION

            Can we point multiple domain names to the same IP address?
            Asked 2021-Apr-25 at 17:31

I have two GKE clusters (GKE-OLD and GKE-NEW) running behind two separate load balancers.

The GKE-OLD cluster runs behind an L4 global load balancer, whereas the GKE-NEW cluster runs behind an L7 load balancer.

            The services of the clusters are accessible through two separate domain names.

            www.service.company.com points to the L4 load balancer behind which the GKE-OLD cluster is running.

            www.service-1.company.com points to the L7 load balancer behind which the GKE-NEW cluster is running.

            I want to eventually get rid of the old cluster and LB associated with it. However, I want to keep the domain name (www.service.company.com) from the old cluster and eventually retire the www.service-1.company.com domain name that is associated with the new cluster.

            Before I decommission the old cluster, the current setup I want to have should look something like this:

            My questions are:

Can we have multiple domains pointing at the same IP address (LB) and the same domain pointing at multiple IP addresses (LBs) at the same time? That is, www.service.company.com and www.service-1.company.com pointing at the same load balancer, and www.service.company.com pointing at both the L4 and L7 LBs.

            ...

            ANSWER

            Answered 2021-Apr-25 at 17:31

            Can we have multiple domains pointing at same IP address (LB) and same domain pointing at multiple IP addresses (LBs)

Yes, you can have multiple names resolve to the same IP address (either directly with A and AAAA records or through CNAME records), and yes, you can have a name resolving to multiple IP addresses (again through direct A/AAAA records or through CNAME records). In the latter case, unless there is specific tooling on the client side (the application consuming those records), traffic will be spread across all addresses in a load-balancing fashion, not in a failover fashion.
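
A quick way to see this from the client's point of view is a plain DNS lookup: one name can return several addresses, and several names can return the same address. A small Go sketch using the standard resolver, with the question's domain names standing in for real ones:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Both lookups use the system resolver. A name with multiple A/AAAA
	// records returns several addresses; ordinary clients then pick among
	// them (effectively load balancing), they do not fail over.
	for _, name := range []string{"www.service.company.com", "www.service-1.company.com"} {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s resolves to %v\n", name, addrs)
	}
}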

            Source https://stackoverflow.com/questions/67252546

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install balancers

            You can download it from GitHub.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/olivere/balancers.git

          • CLI

            gh repo clone olivere/balancers

          • sshUrl

            git@github.com:olivere/balancers.git



            Consider Popular Load Balancing Libraries

            ingress-nginx

            by kubernetes

            bfe

            by bfenetworks

            metallb

            by metallb

            glb-director

            by github

            Try Top Libraries by olivere

            elastic

by olivere | Go

            iterm2-imagetools

by olivere | Go

            grpc

by olivere | Go

            esdiff

by olivere | Go

            jobqueue

by olivere | Go