loadbalancer | A sticky load balancer optimized for realtime apps
kandi X-RAY | loadbalancer Summary
A sticky load balancer optimized for realtime apps
Community Discussions
Trending Discussions on loadbalancer
QUESTION
We are installing Anthos on the VMware platform and we are now hitting an error in the admin cluster deployment procedure for the Seesaw load balancer in HA.
The two Seesaw VMs were deployed successfully, but when checking the health check we get the following 403 error:
...ANSWER
Answered 2021-Jul-29 at 12:43
Solved after recreating the admin workstation with the following parameter.
QUESTION
I have a k8s setup that looks like this:
ingress -> headless service (k8s Service with clusterIP: None) -> StatefulSet (2 pods)
The FQDN looks like this:
...ANSWER
Answered 2021-Aug-01 at 02:02
Example StatefulSet called foo with the image nginx:
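Since the manifest itself is cut off above, here is a minimal, untested sketch that matches the answer's description: the name foo, the nginx image and the 2 replicas come from the thread, while the headless service name, labels and ports are assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: foo-headless
spec:
  clusterIP: None            # headless service, no virtual IP
  selector:
    app: foo
  ports:
    - name: http
      port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo
spec:
  serviceName: foo-headless  # ties the per-pod DNS records to the headless service
  replicas: 2
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
EOF

Each pod then gets a stable DNS name of the form foo-0.foo-headless.<namespace>.svc.cluster.local, which is the kind of FQDN the question refers to.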
QUESTION
I'm experimenting with Kubernetes and a MinIO deployment. I have a 4-node k3s cluster, each node with four 50 GB disks. Following the instructions here I have done this:
First I installed krew in order to install the MinIO and DirectPV operators.
I installed those two without a problem.
I formatted every available HDD in the nodes using
kubectl directpv drives format --drives /dev/vd{b...e} --nodes k3s{1...4}
I then proceeded to make the deployment. First I created the namespace with
kubectl create namespace minio-tenant-1
and then I actually created the tenant with:
kubectl minio tenant create minio-tenant-1 --servers 4 --volumes 8 --capacity 10Gi --storage-class direct-csi-min-io --namespace minio-tenant-1
The only thing I need to do then is expose the port for access, which I do with:
kubectl port-forward service/minio 443:443
(I'm guessing there should be a better way to achieve this, as the last command isn't permanent; maybe using a LoadBalancer or NodePort type Service in the Kubernetes cluster.)
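For what it's worth, on k3s the bundled service load balancer (ServiceLB/Klipper) can make such an exposure permanent; a sketch, assuming the tenant service is the same service/minio used in the port-forward above:

kubectl -n minio-tenant-1 patch service minio \
    -p '{"spec": {"type": "LoadBalancer"}}'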
So far so good, but I'm facing some problems:
- When I try to create an alias to the server using mc, the prompt answers back with:
mc: Unable to initialize new alias from the provided credentials. Get "https://127.0.0.1/probe-bucket-sign-9aplsepjlq65/?location=": x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
I can get past this by simply adding the --insecure option, but I don't know why it throws this error; I guess it is something about how k3s manages the self-signed TLS certificates.
- Once I have created the alias of the server (I named it test) with the --insecure option, I try to create a bucket, but the server always answers back with:
mc mb test/hello
mc: Unable to make bucket `test/hello`. The specified bucket does not exist.
So I can't really use it. Any help will be appreciated; I need to know what I'm doing wrong.
...ANSWER
Answered 2022-Mar-14 at 13:32
Guided by the information in the MinIO documentation, you have to generate a public certificate. First of all, generate a private key using the command:
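The actual commands are truncated here; following the self-signed route the MinIO docs describe, a sketch would look like this (the CN and SAN values are assumptions; adjust them to the names you actually connect with):

# Generate the private key:
openssl genrsa -out private.key 2048
# Generate the matching self-signed public certificate, including an IP SAN so
# that connecting to https://127.0.0.1 no longer fails certificate validation
# (requires OpenSSL 1.1.1+ for -addext):
openssl req -new -x509 -days 365 -key private.key -out public.crt \
    -subj "/CN=minio-tenant-1" \
    -addext "subjectAltName=DNS:minio.minio-tenant-1.svc.cluster.local,IP:127.0.0.1"

The resulting private.key / public.crt pair can then be supplied to the tenant as its TLS certificate.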
QUESTION
I am following this official k8s ingress tutorial. However, I am not able to curl the minikube IP address and access the "web" application.
ANSWER
Answered 2021-Dec-15 at 15:57
You need to set up your /etc/hosts. I guess the ingress controller waits for requests with a host defined in order to redirect them, but it's pretty strange that it didn't even respond to the HTTP request with an error.
Could you show what these commands return?
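For example, a minimal sketch of that /etc/hosts setup, assuming the Ingress from the tutorial routes on a hostname such as hello-world.info (substitute whatever host value your Ingress manifest declares):

# Map the tutorial's hostname to the minikube IP:
echo "$(minikube ip) hello-world.info" | sudo tee -a /etc/hosts
curl http://hello-world.info
# Or test without touching /etc/hosts by sending the Host header explicitly:
curl -H "Host: hello-world.info" "http://$(minikube ip)"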
QUESTION
I have two Fargate tasks running in two different clusters. The first one is running on port 3000 and can receive requests from anyone; the second one is running on port 8080 and may only be accessed by the first one. Both are in the same security group and VPC.
I created an inbound rule to allow public access to the first one, then I tried to create another inbound rule to enable access to the second one through security group ingress. But when the first service tries to access the second, I receive a timeout error.
When I allow public access to the second service, the communication works properly, but I cannot leave it open forever.
Each service has a load balancer configured, but I have also tried to access the service by its task's public IP, without success.
Does anyone have any idea what I am doing wrong? The inbound rules for the security group can be checked in this image.
...ANSWER
Answered 2022-Mar-08 at 20:26
If the first service tries to access the second service by the second service's public IP, then the traffic goes out to the Internet and back, which destroys the traffic's association with the origin security group.
To keep the traffic inside the VPC, and to make sure the security group rules apply as intended, the first service needs to connect to the second service via the second service's private IP.
If you are using a load balancer for the second service, then it needs to be an internal load balancer, not an external load balancer.
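For illustration, a security-group rule of the intended kind looks like this with the AWS CLI (the group ID is a placeholder; here the source group is the same shared security group both tasks use):

# Allow port 8080 only from members of the shared security group,
# instead of from 0.0.0.0/0:
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp \
    --port 8080 \
    --source-group sg-xxxxxxxx

The first service then has to call the second one through the internal load balancer's DNS name (or the task's private IP) so the traffic never leaves the VPC.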
QUESTION
I have multiple containers/images, such as admin (Django) and nginx (HTTP server). My system is shown below:
port 80 -> nginx -> port 8011 -> admin
I want to deploy these on Fargate.
However I'm still confused.
Two images, two containers, two task definitions, two load balancers: that much is OK.
However, should only nginx get a public IP?
How can I connect between containers?
Currently my source code is as shown below.
I am familiar with docker-compose, but not with AWS Fargate.
Any help appreciated.
...ANSWER
Answered 2022-Feb-20 at 13:43
"How can I connect between containers?"
In your current configuration you can't connect directly between containers. You would have to have Nginx connect to the internal load balancer that is connected to the Django tasks.
Web Browser -> Public Nginx Load Balancer -> Nginx Container -> Private Django Load Balancer -> Django Container.
I would suggest looking into running both containers in the same ECS task. You would probably save a good bit of money by only having one load balancer and half as many Fargate instances. The traffic flow would look like this:
Web Browser -> Public Load Balancer -> Nginx Container on port 80 -> Django Container on port 8011.
In that scenario you would configure Nginx to proxy requests to 127.0.0.1:8011. All containers in the same task can connect to each other over 127.0.0.1 inside a Fargate instance. See the Fargate networking documentation here.
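A sketch of the Nginx side of that single-task setup (the file name and location are assumptions; the upstream port 8011 comes from the question):

# Nginx server block that proxies to the Django container, which shares
# localhost with Nginx inside the same Fargate task:
cat > default.conf <<'EOF'
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8011;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
# Bake it into the nginx image, e.g. COPY default.conf /etc/nginx/conf.d/default.conf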
A much more advanced setup would be to keep each container running as a separate task, and use AWS App Mesh for internal container communication, instead of internal load balancers. This is probably overkill for your situation, and much more appropriate in a large environment with many microservices deployed independently.
QUESTION
I'm trying to deploy an HA Keycloak cluster (2 nodes) on Kubernetes (GKE). So far the cluster nodes (pods) are failing to discover each other in every case, as far as I can deduce from the logs: the pods start and the service is up, but they fail to see the other nodes.
Components
- PostgreSQL DB Deployment with a ClusterIP Service on the default port.
- Keycloak Deployment of 2 nodes with the needed container ports 8080 and 8443, a relevant ClusterIP Service, and a Service of type LoadBalancer to expose the service to the internet.
Logs Snippet:
...ANSWER
Answered 2022-Feb-05 at 13:58
The way KUBE_PING works is similar to running kubectl get pods inside one Keycloak pod to find the other Keycloak pods' IPs and then trying to connect to them one by one, except that Keycloak does this by querying the Kubernetes API directly instead of running kubectl.
To do that, it needs credentials to query the API, basically an access token.
You can pass your token directly, if you have it, but it's not very secure and not very convenient (you can check other options and behavior here).
Kubernetes has a very convenient way to inject a token to be used by a pod (or software running inside that pod) to query the API. Check the documentation for a deeper look.
The mechanism is to create a service account, give it permissions to call the API using a RoleBinding and set that account in the pod configuration.
That works by mounting the token as a file at a known location, hardcoded and expected by all Kubernetes clients. When the client wants to call the API it looks for a token at that location.
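For reference, that well-known location is the same in every cluster; from inside a pod you can inspect the injected credentials directly:

# The service account token and CA bundle are mounted at a fixed path:
ls /var/run/secrets/kubernetes.io/serviceaccount/
cat /var/run/secrets/kubernetes.io/serviceaccount/token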
Although not very convenient, you may be in the even more inconvenient situation of lacking permissions to create RoleBindings (somewhat common in more strict environments).
You can then ask an admin to create the service account and RoleBinding for you, or just (very insecurely) pass your own user's token (if you are able to do a kubectl get pod on Keycloak's namespace, you have the permissions) via the SA_TOKEN_FILE environment variable.
Create the file using a Secret or ConfigMap, mount it into the pod, and set SA_TOKEN_FILE to that file location. Note that this method is specific to Keycloak.
If you do have permissions to create service accounts and RoleBindings in the cluster:
An example (not tested):
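Since the example itself is cut off above, here is a minimal, untested sketch of what it could look like; all names and the keycloak namespace are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: keycloak-kubeping
  namespace: keycloak
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: keycloak-kubeping-pod-reader
  namespace: keycloak
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: keycloak-kubeping-pod-reader
  namespace: keycloak
subjects:
  - kind: ServiceAccount
    name: keycloak-kubeping
    namespace: keycloak
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: keycloak-kubeping-pod-reader
EOF

The Keycloak Deployment then references the account via spec.template.spec.serviceAccountName: keycloak-kubeping, so KUBE_PING can list the pods in its own namespace.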
QUESTION
I've been trying to figure this out, but I'm out of ideas for now, hence I'm posting the question here.
I'm experimenting with the Oracle Cloud Infrastructure (OCI) and I wanted to create a Kubernetes cluster which exposes some service.
The goal is:
- A running managed Kubernetes cluster (OKE)
- 2 nodes at least
- 1 service that's accessible for external parties
The infra looks like the following:
- A VCN for the whole thing
- A private subnet on 10.0.1.0/24
- A public subnet on 10.0.0.0/24
- NAT gateway for the private subnet
- Internet gateway for the public subnet
- Service gateway
- The corresponding security lists for both subnets which I won't share right now unless somebody asks for it
- A Container Engine for Kubernetes (OKE) cluster in the VCN with the public Kubernetes API endpoint enabled
- A node pool for the K8S cluster with 2 availability domains and with 2 instances right now. The instances are ARM machines with 1 OCPU and 6GB RAM running Oracle-Linux-7.9-aarch64-2021.12.08-0 images.
- A namespace in the K8S cluster (call it staging for now)
- A deployment which refers to a custom NextJS application serving traffic on port 3000
And now it's the point where I want to expose the service running on port 3000.
I have 2 obvious choices:
- Create a LoadBalancer Service in K8S, which will spawn a classic Load Balancer in OCI, set up its listener, and set up the backend set referring to the 2 nodes in the cluster; it also adjusts the subnet security lists to make sure traffic can flow
- Create a Network Load Balancer in OCI and create a NodePort on K8S and manually configure the NLB to the ~same settings as the classic Load Balancer
The first one works perfectly fine, but I want to run this cluster with minimal costs, so I decided to experiment with option 2, the NLB, since it's way cheaper (zero cost).
Long story short, everything works and I can access the NextJS app on the IP of the NLB most of the time, but sometimes I can't. I decided to look into what's going on, and it turned out the NodePort that I exposed in the cluster isn't working how I'd imagined.
The service behind the NodePort is only accessible on the Node that's running the pod in K8S. Assume NodeA is running the service and NodeB is just there chilling. If I try to hit the service on NodeA, everything is fine. But when I try to do the same on NodeB, I don't get a response at all.
That's my problem and I couldn't figure out what could be the issue.
What I've tried so far:
- Switching from ARM machines to AMD ones - no change
- Created a bastion host in the public subnet to test which nodes are responding to requests. It turned out that only the node running the pod responds.
- Created a regular LoadBalancer Service in K8S with the same config as the NodePort (in this case OCI will create a classic Load Balancer); that works perfectly
- Tried upgrading to Oracle 8.4 images for the K8S nodes, didn't fix it
- Ran the Node Doctor on the nodes, everything is fine
- Checked the logs of kube-proxy, kube-flannel, core-dns, no error
- Since the cluster consists of 2 nodes, I gave it a try and added one more node and the service was not accessible on the new node either
- Recreated the cluster from scratch
Edit: Some update. As a temporary solution, I've tried to use a DaemonSet instead of a regular Deployment for the pod, to ensure that all nodes are running at least one instance of the pod, and, surprise: the node that was previously not responding to requests on that specific port still does not respond, even though a pod is running on it.
Edit2: Originally I was running the latest K8S version for the cluster (v1.21.5) and I tried downgrading to v1.20.11 and unfortunately the issue is still present.
Edit3: Checked if the NodePort is open on the node that's not responding and it is, at least kube-proxy is listening on it.
...ANSWER
Answered 2022-Jan-31 at 12:06
It might not be the ideal fix, but can you try changing the externalTrafficPolicy to Local? This causes the health check to fail on the nodes which don't run the application, so traffic is only forwarded to the nodes where the application is running. Setting externalTrafficPolicy to Local is also a requirement for preserving the source IP of the connection. Also, can you share the health check config for both the NLB and the LB that you are using? When you change the externalTrafficPolicy, note that the health check for the LB will change, and the same needs to be applied to the NLB.
Edit: Also note that you need a security list/ network security group added to your node subnet/nodepool, which allows traffic on all protocols from the worker node subnet.
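A minimal sketch of the suggested Service change (the service name, labels, and nodePort are assumptions; port 3000 is the NextJS port and staging is the namespace from the question):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nextjs-nodeport
  namespace: staging
spec:
  type: NodePort
  externalTrafficPolicy: Local   # nodes without a ready pod stop answering on the NodePort
  selector:
    app: nextjs
  ports:
    - port: 80
      targetPort: 3000
      nodePort: 30080
EOF

With this in place the NLB health checks against the NodePort only pass on nodes that actually run the pod, so traffic is only sent to those nodes.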
QUESTION
I am trying to set up a certificate for a locally running React app on a virtual host, local.example.com. This just has to work locally on a Docker setup. After going through some articles, I came up with this docker-compose.yml:
ANSWER
Answered 2022-Jan-31 at 13:05
You need to use TLS for your local setup. The host you need a certificate for is local.example.com. There is no way to obtain a certificate from Let's Encrypt for this name, because you don't control the example.com domain. One of the ways Let's Encrypt issues a certificate is a challenge: you prove that you own the domain by creating a TXT DNS record. If you own a domain you can do that, but your case is different, because you only need this for local development.
However, you can just use openssl to generate a self-signed certificate for whichever domain name you want. This is a good reference on how to do this. You can use the local.example.com domain name for the generated certificate. If you're successful, you'll end up with the certificate and its private key. Note where you save those files, as you'll need them. Keep in mind that the certificate is self-signed, so your browser will give you a warning unless you add this certificate to the trust store of your operating system.
The next step in your case is to make Traefik use those self signed certificates when serving content from your application. I think this answer has a good example of that.
After having this, you'll only need to edit your hosts file and redirect your localhost:8080 (the port on which Traefik serves your application) to local.example.com.
Also, Traefik is not the only solution for your case. You can also achieve the same thing using Nginx, for example. Choose whichever satisfies your use case. My suggestion would be to use the one that's easiest to configure, because it's for local development. Here's the first result I got when searching for nginx docker-compose self-signed certificate.
UPDATE
Here's a quick example of what I'm describing above.
First generate the certificate:
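The exact commands are cut off here; an equivalent openssl one-liner that produces a self-signed certificate and key for local.example.com would be (requires OpenSSL 1.1.1+ for -addext; file names are arbitrary):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout local.example.com.key \
    -out local.example.com.crt \
    -subj "/CN=local.example.com" \
    -addext "subjectAltName=DNS:local.example.com"

Point Traefik's TLS configuration at those two files and add a 127.0.0.1 local.example.com entry to your hosts file, as described above.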
QUESTION
If we need a static IP address for a load balancer in AWS, then we have to go for a Network Load Balancer forwarding requests to an Application Load Balancer.
Now, since the ALB only supports the HTTP and HTTPS protocols, and the NLB only supports the TCP protocol,
how does this communication actually work?
A client such as a browser will send the request over HTTP or HTTPS. How does this communication happen?
...ANSWER
Answered 2022-Jan-27 at 06:24
HTTP/HTTPS runs on top of TCP. If you check the Open Systems Interconnection (OSI) model, HTTP/HTTPS sits at the top application layer, whereas TCP is at the transport layer.
The ALB supports only the application layer (HTTP/HTTPS in this case), while the NLB works on the transport layer (TCP/UDP). Thus the NLB can load balance anything that runs on top of TCP as well; this includes HTTP, SSH, FTP, and so on.
There is no protocol conversion between TCP and HTTP, as they work on different layers, so everything happens transparently.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.