balancer | Golang client side load balancer | Load Balancing library
kandi X-RAY | balancer Summary
A simple client-side load balancer for Go applications. balancer was made to provide easier access to DNS-based load balancing for Go services running in Kubernetes, and was built mainly for use with http.Client. gRPC has its own DNS load balancer; use that one.
Community Discussions
Trending Discussions on balancer
QUESTION
...Nothing to install, update or remove
Generating optimized autoload files
Class App\Helpers\Helper located in C:/wamp64/www/vuexylaravel/app\Helpers\helpers.php does not comply with psr-4 autoloading standard. Skipping.
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi
ANSWER
Answered 2022-Feb-13 at 17:35
If you are upgrading your Laravel 8 project to Laravel 9 by importing your existing application code into a totally new Laravel 9 application skeleton, you may need to update your application's "trusted proxy" middleware.
Within your app/Http/Middleware/TrustProxies.php file, update use Fideloper\Proxy\TrustProxies as Middleware to use Illuminate\Http\Middleware\TrustProxies as Middleware.
Next, within app/Http/Middleware/TrustProxies.php, you should update the $headers property definition:
// Before...
protected $headers = Request::HEADER_X_FORWARDED_ALL;

// After...
protected $headers =
    Request::HEADER_X_FORWARDED_FOR |
    Request::HEADER_X_FORWARDED_HOST |
    Request::HEADER_X_FORWARDED_PORT |
    Request::HEADER_X_FORWARDED_PROTO |
    Request::HEADER_X_FORWARDED_AWS_ELB;
QUESTION
I'm trying to deploy a simple REST API written in Golang to AWS EKS.
I created an EKS cluster on AWS using Terraform and applied the AWS load balancer controller Helm chart to it.
All resources in the cluster look like:
...ANSWER
Answered 2022-Mar-15 at 15:23
A CrashLoopBackOff means that you have a pod starting, crashing, starting again, and then crashing again.
Maybe the error comes from the application itself, e.g. it cannot connect to a database, Redis, etc.
You may find something useful here:
My kubernetes pods keep crashing with "CrashLoopBackOff" but I can't find any log
QUESTION
I am trying to deploy a PHP application through AWS CodeDeploy and am currently stuck on the AllowTraffic step in CodeDeploy. The application is on an EC2 instance behind an ALB. In the ALB, I am getting failing health checks. The PHP application code sits in the following directory on the EC2 instance: /var/www/html/src. If I curl the private IP of the EC2 instance followed by the directory where the code sits, I get a 404 Not Found error. Even though the index.php file is in that directory, I am unable to curl it. Currently, security groups are set up so that the ALB security group allows traffic from anywhere over HTTP only, and all traffic from the ALB security group is allowed to reach the EC2 instance. I am able to curl the root of the instance and see Apache's default page.
If I adjust the health check settings on the ALB target group to use the path /, I get a 403 error. I get a 404 error when specifying the path to the directory that has the PHP application code.
Any advice on how I can get the instance to a healthy state for the ALB would be appreciated.
The Application Load Balancer security group allows traffic on port 80; the EC2 instance security group allows traffic from the Application Load Balancer security group.
The PHP application should be accessible on port 80, where Apache is running. The Application Load Balancer has only one listener, set up for port 80, which forwards traffic to the target group.
...ANSWER
Answered 2022-Feb-16 at 03:33
The health check path in your TG should be a URL path, not the actual location on the EC2 instance. You can try with just /index.php.
QUESTION
I've been trying to get past this, but I'm out of ideas for now, hence I'm posting the question here.
I'm experimenting with the Oracle Cloud Infrastructure (OCI) and I wanted to create a Kubernetes cluster which exposes some service.
The goal is:
- A running managed Kubernetes cluster (OKE)
- 2 nodes at least
- 1 service that's accessible for external parties
The infra looks the following:
- A VCN for the whole thing
- A private subnet on 10.0.1.0/24
- A public subnet on 10.0.0.0/24
- NAT gateway for the private subnet
- Internet gateway for the public subnet
- Service gateway
- The corresponding security lists for both subnets, which I won't share right now unless somebody asks for them
- A Container Engine for Kubernetes (OKE) cluster in the VCN with the public Kubernetes API enabled
- A node pool for the K8S cluster with 2 availability domains and with 2 instances right now. The instances are ARM machines with 1 OCPU and 6GB RAM running Oracle-Linux-7.9-aarch64-2021.12.08-0 images.
- A namespace in the K8S cluster (call it staging for now)
- A deployment which refers to a custom NextJS application serving traffic on port 3000
And now comes the point where I want to expose the service running on port 3000.
I have 2 obvious choices:
- Create a LoadBalancer service in K8S, which will spawn a classic Load Balancer in OCI, set up its listener, and set up the backend set referring to the 2 nodes in the cluster, plus it adjusts the subnet security lists to make sure traffic can flow (see the sketch after this list)
- Create a Network Load Balancer in OCI and create a NodePort on K8S, then manually configure the NLB with roughly the same settings as the classic Load Balancer
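For reference, a minimal sketch of the first option as a Kubernetes manifest; the service name and selector label are placeholders, not taken from the question:

apiVersion: v1
kind: Service
metadata:
  name: nextjs-app      # placeholder name
  namespace: staging
spec:
  type: LoadBalancer    # OCI's cloud controller provisions a classic Load Balancer from this
  selector:
    app: nextjs-app     # assumed pod label
  ports:
    - port: 80
      targetPort: 3000  # the NextJS port from the question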
The first one works perfectly fine, but I want to use this cluster with minimal costs, so I decided to experiment with option 2, the NLB, since it's way cheaper (zero cost).
Long story short, everything works and I can access the NextJS app on the IP of the NLB most of the time, but sometimes I can't. I decided to look into what's going on, and it turned out the NodePort that I exposed in the cluster isn't working how I'd imagined.
The service behind the NodePort is only accessible on the node that's running the pod in K8S. Assume NodeA is running the service and NodeB is just there chilling. If I try to hit the service on NodeA, everything is fine. But when I try to do the same on NodeB, I don't get a response at all.
That's my problem, and I couldn't figure out what the issue could be.
What I've tried so far:
- Switching from ARM machines to AMD ones - no change
- Created a bastion host in the public subnet to test which nodes are responding to requests. It turned out that only the node running the pod responds.
- Created a regular LoadBalancer in K8S with the same config as the NodePort (in this case OCI will create a classic Load Balancer), that works perfectly
- Tried upgrading to Oracle 8.4 images for the K8S nodes, didn't fix it
- Ran the Node Doctor on the nodes, everything is fine
- Checked the logs of kube-proxy, kube-flannel, core-dns, no error
- Since the cluster consists of 2 nodes, I gave it a try and added one more node and the service was not accessible on the new node either
- Recreated the cluster from scratch
Edit: Some update. As a temporary solution, I've tried to use a DaemonSet instead of a regular Deployment for the pod, to ensure that all nodes run at least one instance of it, and surprise: the node that was previously not responding to requests on that specific port still does not respond, even though a pod is running on it.
Edit 2: Originally I was running the latest K8S version for the cluster (v1.21.5); I tried downgrading to v1.20.11, and unfortunately the issue is still present.
Edit 3: I checked whether the NodePort is open on the node that's not responding, and it is; at least kube-proxy is listening on it.
...ANSWER
Answered 2022-Jan-31 at 12:06
Might not be the ideal fix, but can you try changing the externalTrafficPolicy to Local? With this setting, the load balancer health check fails on the nodes which don't run the application, so traffic is only forwarded to the nodes where the application is running. Setting externalTrafficPolicy to Local is also a requirement to preserve the source IP of the connection. Also, can you share the health check config for both the NLB and the LB that you are using? When you change the externalTrafficPolicy, note that the health check for the LB would change, and the same needs to be applied to the NLB.
Edit: Also note that you need a security list/network security group added to your node subnet/node pool which allows traffic on all protocols from the worker node subnet.
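To make the suggestion concrete, here is a minimal sketch of a NodePort Service with externalTrafficPolicy set to Local; the service name, selector label, and nodePort value are hypothetical, since the question doesn't show the actual manifest:

apiVersion: v1
kind: Service
metadata:
  name: nextjs-app              # hypothetical name
  namespace: staging
spec:
  type: NodePort
  externalTrafficPolicy: Local  # health checks fail on nodes without a local pod, so the NLB skips them
  selector:
    app: nextjs-app             # assumed pod label
  ports:
    - port: 80
      targetPort: 3000          # the NextJS port mentioned in the question
      nodePort: 30080           # hypothetical; must fall in the 30000-32767 range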
QUESTION
We have two VMs behind a Load Balancer. We would like to make one of the VMs publicly inaccessible when we do a new deployment, so we can test the new version of the application before it becomes publicly accessible. The current plan is to block one of the two VMs by changing a Network Security Group rule via the Service Tag for Load Balancer:
It works. When we change the NSG rule for VM1 from Allow to Deny, only VM2 stays publicly accessible. Once we verify that the new release works as expected, we switch the NSG rule for VM2 and then the NSG rule for VM1, so only the VM with the newest version of the application is accessible while we update the application on the other VM.
The problem with that is that NSG rules don't take effect immediately and can take 1-3 minutes to make a VM inaccessible/accessible. Moreover, if we switch the NSGs for both VMs at the same time, we can end up in situations where both VMs with different versions of the software are publicly available (which can lead to data corruption, data loss, or both) or where neither VM is accessible. So the only way around this is to change the NSG rule for VM2, then for VM1, accepting a downtime of 2-6 minutes. Is there a better way to do this?
...ANSWER
Answered 2021-Dec-29 at 11:04
Blocking ports 80 and 443 with Windows Defender Firewall via PowerShell Remoting brought the downtime to 40 seconds total.
QUESTION
I have the following basic architecture:
For reasons I don't want to get into, I want to allow the client to fetch data from either server if they so choose. If they don't care then the load balancer will decide for them.
Is there a best practice for designing the API request?
I've come up with a few options:
- Add an optional query string parameter:
ANSWER
Answered 2021-Dec-15 at 16:48
Just create public domain names for the servers that you allow clients to call directly, and then configure DNS so that it routes requests either to them or to the load balancer, depending on the domain name of the HTTP request.
For example, you may have the following domain names for the servers:
- api.example.com for the load balancer
- api-server1.example.com for Server1
- api-server2.example.com for Server2
Then ask the clients to choose which servers to use by configuring the corresponding domain name in the API call.
A real-life example is the Mixpanel API. You can see that they have two kinds of servers and let the API client choose which one to use through different domain names.
QUESTION
Is there a way we can alter the resource names of the resources provisioned by AKS itself (screenshot below)? I know I can change the node resource group name as per the documentation, but cannot find any reference (or documentation) on whether we can change the AKS-managed resource names. The resources for which I want to have custom naming specifically are:
- Load balancer
- AKS Virtual Machine Scale Set
ANSWER
Answered 2021-Dec-14 at 07:18
You cannot change the resource names of the resources provisioned by AKS itself, because they are managed by AKS. You can give your own name to the node resource group at creation time using an IaC tool like Terraform or Bicep, but you can't change the node resource group name of an AKS cluster once it is created.
From the Portal, you cannot assign your own name to node_resource_group.
Note: node_resource_group - (Optional) The name of the Resource Group where the Kubernetes Nodes should exist. Changing this forces a new resource to be created.
Azure requires that a new, non-existent Resource Group is used, as otherwise the provisioning of the Kubernetes Service will fail.
Please refer to these documents: azurerm_kubernetes_cluster | Resources | hashicorp/azurerm | Terraform Registry
Microsoft.ContainerService/managedClusters - Bicep & ARM template reference | Microsoft Docs
QUESTION
We are having difficulties choosing a load balancing solution (Load Balancer, Application Gateway, Traffic Manager, Front Door) for IIS websites on Azure VMs. The simple use case, when there are 2 identical sites, is covered well: just use Azure Load Balancer or Application Gateway. However, in cases when we would like to update websites and test those updates, we encounter limitations of the load balancing solutions.
For example, if we would like to update IIS websites on VM1 and test those updates, the strategy would be:
- Point the load balancer to VM2.
- Update the IIS website on VM1.
- Test the changes.
- If all tests pass, then point the load balancer to VM1 only, while we update VM2.
- Point the load balancer to both VMs.
We would like to know what the best solution is for directing traffic to only one VM. So far, we only see one option: removing a VM from the backend address pool, then returning it and repeating the process for the other VM. Surely, there must be a better way to direct 100% of traffic to only one VM (or to specific VMs), right?
Update:
We ended up blocking the connection between the VMs and the Load Balancer by creating a Network Security Group rule with the Deny action on the Service Tag Load Balancer. Once we want that particular VM to be accessible again, we switch the NSG rule from Deny to Allow.
The downside of this approach is that it takes 1-3 minutes for the changes to take effect. See Continuous Delivery with Azure Load Balancer.
If anybody can think of a faster (or instantaneous) solution for this, please let me know.
...ANSWER
Answered 2021-Nov-02 at 21:22
Without any Azure specifics, the usual pattern is to point a load balancer to a /status endpoint of your process, and to design the endpoint behavior according to your needs, e.g.:
- When a service is first deployed, its status is 'pending'
- When you deem it healthy, eg all tests pass, do a POST /status to update it
- The service then returns status 'ok'
Meanwhile the load balancer polls the /status endpoint every minute and knows to mark down / exclude forwarding for any servers not in the 'ok' state.
Some load balancers / gateways may work best with HTTP status codes whereas others may be able to read response text from the status endpoint. Pretty much all of them will support this general behavior though - you should not need an expensive solution.
QUESTION
I have an ingress pod deployed with Scaleway on a Kubernetes cluster, and it exists in the kube-system namespace. I accidentally created a load balancer service in the default namespace, and I don't want to delete it and recreate a new one in the kube-system namespace, so I want my load balancer in the default namespace to have the ingress pods as endpoints:
ANSWER
Answered 2021-Dec-02 at 10:51
At least three reasons why you need to re-create it properly (two technical reasons and one piece of advice):
- ExternalName is used for accessing external services or services in other namespaces. The way it works is that when the service's name is looked up, a CNAME is returned. In other words, it works for egress connections, when requests should be directed somewhere else. See service - externalname type and use cases in the Kubernetes Tips - Part 1 blog post from Alen Komljen.
- Your use case is different. You want to get requests from outside the Kubernetes cluster to the exposed load balancer and then direct traffic from it to another service within the cluster. That's not possible in built-in Kubernetes terms, because a service can be either LoadBalancer or ExternalName. You can see in your last manifest there are two types, which will not work at all. See service types.
- Avoid unnecessary complexity. It will be hard to keep track of everything as there are more and more services and other parts.
Based on the documentation, it's generally possible to have issues using ExternalName with some protocols:
Warning: You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is different from the name that the ExternalName references.
For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a Host: header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
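To illustrate what re-creating the service properly could look like, here is a minimal sketch of a LoadBalancer Service in the kube-system namespace; the selector label and ports are assumptions, since the actual ingress pod labels aren't shown in the question:

apiVersion: v1
kind: Service
metadata:
  name: ingress-loadbalancer
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx  # assumption: replace with the actual labels on the ingress pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443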
QUESTION
I'm struggling to expose a service in an AWS cluster to the outside and access it via a browser. Since my previous question hasn't drawn any answers, I decided to simplify the issue in several aspects.
First, I've created a deployment which should work without any configuration. Based on this article, I did the following:
- kubectl create namespace tests
- created a file probe-service.yaml based on paulbouwer/hello-kubernetes:1.8 and deployed it with kubectl create -f probe-service.yaml -n tests:
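The manifest itself is not included in this excerpt; a plausible reconstruction, assuming the standard paulbouwer/hello-kubernetes:1.8 setup (the image listens on port 8080), might look like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
  namespace: tests
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
  namespace: tests
spec:
  selector:
    app: hello-kubernetes
  ports:
    - port: 80
      targetPort: 8080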
ANSWER
Answered 2021-Nov-16 at 13:46
Well, I haven't figured this out for ArgoCD yet (edit: figured it out, but the solution is ArgoCD-specific), but for this test service it seems that path resolving is the source of the issue. It may not be the only source (to be retested on the test2 subdomain), but when I created a new subdomain in the hosted zone (test3, not used anywhere before), pointed it via an A entry to the load balancer (as an "alias" in the AWS console), and then added a new rule with the / path to the ingress, like this:
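The rule itself is not shown in this excerpt; a hedged sketch of such an ingress rule, with the hostname, ingress class, and backend service name assumed rather than taken from the question, might be:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes
  namespace: tests
spec:
  ingressClassName: alb            # assumption: the AWS Load Balancer Controller class
  rules:
    - host: test3.example.com      # placeholder for the new test3 subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-kubernetes  # assumed service name
                port:
                  number: 80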
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported