haproxy | HAProxy related stuff: scripts, configs | Proxy library
kandi X-RAY | haproxy Summary
HAProxy related stuff: scripts, configs, etc., provided by HAProxy Technologies.
Community Discussions
Trending Discussions on haproxy
QUESTION
I have two backend web servers and I need to monitor them with an HTTP check: request a URL and look for a specific string in the response. If the string is not present, switch the backend to the other server.
Status:
- Server1 - Active
- Server2 - Backup
Configuration Details:
- Health Check Method : HTTP
- HTTP Check Method : GET
- Url used by http check requests:
/jsonp/FreeForm&maxrecords=10&format=XML&ff=223
- HTTP check version : HTTP/1.0\r\nAccept:\ XS01
Result of the HTTP request is:
...ANSWER
Answered 2022-Mar-02 at 18:12
This can be done under Advanced Settings -> Backend Pass thru using the expect string:
http-check expect string XS01
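For reference, a rough haproxy.cfg equivalent of that setting might look like the sketch below; the backend name and server addresses are placeholders, and only the check URL and expect string come from the question.

    backend webapp_back
        mode http
        # GET the check URL from the question and require the string XS01 in the response
        option httpchk GET /jsonp/FreeForm&maxrecords=10&format=XML&ff=223
        http-check expect string XS01
        # Server2 only takes traffic when Server1 fails its health check
        server server1 192.0.2.11:80 check
        server server2 192.0.2.12:80 check backup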
QUESTION
I would like to print some log messages from the external check script of HAProxy to rsyslog. For now, I use "echo" in my external-check.sh script, but the echo messages do not show up; only the HAProxy log messages appear. Is that possible?
The content of haproxy.cfg:
ANSWER
Answered 2022-Feb-18 at 12:09
I found the answer: currently I am using echo, but I should use logger to log messages from the external script to the rsyslog socket via 127.0.0.1. By default, HAProxy does not do this for us; it only forwards the log messages for the haproxy.cfg events, not the external script messages. The trick is to replace every echo call with logger.
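For context, a minimal sketch of the haproxy.cfg side of an external check is shown below; the script path and server address are assumptions, and inside the script each echo would be replaced with a logger call (for example logger -t external-check "message") so the output reaches rsyslog via the local syslog socket.

    global
        log 127.0.0.1:514 local0
        # Recent HAProxy versions refuse external checks unless explicitly enabled here
        external-check

    backend app_back
        mode http
        option external-check
        # Placeholder path; the script itself should log with logger, not echo
        external-check command /usr/local/bin/external-check.sh
        server app1 192.0.2.21:8080 check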
QUESTION
We have deployed a Django server (nginx/gunicorn/Django), but to scale it there are multiple instances of the same Django application running.
Here is the diagram (architecture):
Each blue rectangle is a Virtual Machine.
HAProxy sends all requests to example.com/admin to Server 3; other requests are load-balanced between Server 1 and Server 2.
Old Problem:
Each machine has a media folder, and when the admin uploads something the uploaded media ends up only on Server 3 (normal users can't upload anything).
We solved this by sending all requests for example.com/media/* to Server 3, where nginx serves all static files and media (see the routing sketch just below).
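A rough haproxy.cfg sketch of the routing described above; backend names and server addresses are placeholders rather than values from the actual deployment.

    frontend www
        bind :80
        mode http
        acl is_admin path_beg /admin
        acl is_media path_beg /media
        # /admin and /media/* both go to Server 3; everything else is load-balanced
        use_backend server3_back if is_admin or is_media
        default_backend app_back

    backend server3_back
        mode http
        server server3 192.0.2.13:80 check

    backend app_back
        mode http
        balance roundrobin
        server server1 192.0.2.11:80 check
        server server2 192.0.2.12:80 check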
Problem right now:
We are also using sorl-thumbnail.
When a request comes in for example.com/, sorl-thumbnail tries to access the media file, but it doesn't exist on this machine because it's on Server 3.
So all requests to that machine (Server 1 or 2) get a 404 for that media file.
One solution that comes to mind is a shared partition between all three machines, used for media. Another solution is to sync all media folders after each upload, but with almost 2000 requests per second the sync might not be fast enough, and sorl-thumbnail would create a database record for an empty file and the 404 would still happen.
Thanks in advance, and sorry for the long question.
...ANSWER
Answered 2021-Dec-26 at 19:53
You should use an object store to save and serve your user-uploaded files. django-storages makes the implementation really simple.
If you don't want to use cloud-based AWS S3 or an equivalent, you can host your own on-prem S3-compatible object store with MinIO.
On your current setup I don't see any easy fix when the number of VMs is dynamic depending on load.
If you have deployment automation, then maybe try rsync so that each VM takes care of syncing files with the other VMs.
QUESTION
I managed to install Kubernetes 1.22, Longhorn, Kiali, Prometheus and Istio 1.12 (profile=minimal) on a dedicated server at a hosting provider (Hetzner).
I then went on to test httpbin with an Istio ingress gateway from the Istio tutorial. I had some problems making this accessible from the internet (I set up HAProxy to forward local port 80 to the dynamic port that was assigned in Kubernetes, port 31701/TCP in my case).
How can I make Kubernetes directly available on the bare-metal interface on port 80 (and 443)?
I thought I had found the solution with MetalLB, but I cannot make it work, so I think it's not intended for that use case. (I tried to set EXTERNAL-IP to the IP of the bare-metal interface, but that doesn't seem to work.)
My HAProxy setup is not working right now for my SSL traffic (with cert-manager on Kubernetes), but before I continue looking into that I want to make sure: is this really how you are supposed to route traffic into Kubernetes with an Istio gateway configuration on bare metal?
I came across this, but I don't have an external load balancer, nor does my hosting provider offer one for me to use.
...ANSWER
Answered 2021-Dec-14 at 09:31
Posted community wiki answer for better visibility based on the comment. Feel free to expand it.
The solution for the issue is:
I set up HAProxy in combination with the Istio gateway and now it's working.
The reason:
I think the reason SSL was not working is that the istio.io/latest/docs/setup/additional-setup/gateway guide creates the ingress gateway in a different namespace (istio-ingress) from the rest of the tutorials (istio-system).
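To illustrate the setup being described, a HAProxy forward from the host's ports 80/443 to the Istio ingress gateway NodePorts might look roughly like the sketch below; port 31701 comes from the question, while the HTTPS NodePort and the loopback node address are assumptions. TCP passthrough on 443 keeps TLS termination inside the cluster (cert-manager/Istio).

    frontend http_in
        bind :80
        mode tcp
        default_backend istio_http

    frontend https_in
        bind :443
        mode tcp
        default_backend istio_https

    backend istio_http
        mode tcp
        # NodePort assigned to the gateway's port 80 (from the question)
        server node1 127.0.0.1:31701 check

    backend istio_https
        mode tcp
        # Placeholder: substitute the NodePort assigned to the gateway's port 443
        server node1 127.0.0.1:31702 check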
QUESTION
I have HAProxy as a load balancer running in k8s, with a route to a service with two running pods. I want the server naming inside HAProxy to correspond to the pod names behind my service. If I'm not mistaken, the following configmap/annotation value should do exactly this: https://haproxy-ingress.github.io/docs/configuration/keys/#backend-server-naming
But for me it doesn't, and for the life of me I can't find out why. The relevant parts of my configuration look like this:
controller deployment:
...ANSWER
Answered 2021-Dec-06 at 17:07
Here are a few hints to help you solve your issue.
Be sure you know the exact version of your haproxy-ingress controller:
Looking at the manifest files you shared, it's hard to tell which exact version of the haproxy-ingress-controller container you are running in your cluster (by the way, leaving it without a tag is against best practice in production environments; read more on it here). For the backend-server-naming configuration key to work, at minimum v0.8.1 is required (it was backported). Before you move on in troubleshooting, please double-check your ingress deployment for compatibility.
My observations of "backend-server-naming=pod" behavior
Configuration dynamic updates:
If I understand the official documentation on this configuration key correctly, setting the server naming of backends to pod names (backend-server-naming=pod) instead of sequences does support a dynamic reload of the HAProxy configuration, but does NOT, as of now, support dynamic run-time updates to the server names in the backend section (this was explained by the haproxy-ingress author here, and here).
It means you need to restart your haproxy-ingress controller instance first to see changes in backend server names reflected in the HAProxy configuration, e.g. in situations when new Pod replicas appear or a POD_IP changes due to a Pod crash (expect additions/updates of server entries based on sequence naming instead).
Ingress Class:
I have tested the backend-server-naming=pod setting successfully (see test below) on v0.13.4 with a classified Ingress, based on the ingressClassName field rather than the deprecated kubernetes.io/ingress.class annotation used in your case.
I'm not claiming your configuration won't work (it should too), but it's important to know that dynamic updates to the configuration (and this includes changes to backend configs) won't happen on an unclassified or wrongly classified Ingress resource unless you're really running v0.12 or a newer version.
QUESTION
I am a newbie in software architecture and I have some questions:
I don't understand which requests are sent to the HAProxy in this image.
I mean: if one "Application" server (backend) wants to save data in the Galera Cluster, what request will be sent to the HAProxy?
Is it an SQL query "request"?
If it is an SQL query, does the HAProxy server need a mysql-server to "handle" the connection?
Does the application server need to be configured to make an SQL connection to the HAProxy?
from: https://fromdual.com/making-haproxy-high-available-for-mysql-galera-cluster
Thanks!
...ANSWER
Answered 2021-Dec-08 at 16:08
The application only needs to know the IP address of the VIP in this architecture. The app connects to that VIP using a MySQL connector, as if it were a MySQL server.
The "requests" are then stateful TCP/IP connections using the MySQL protocol, just as if the app were connected directly to a MySQL node.
This is not a series of stateless HTTP requests. You might be assuming that HAProxy is only for load-balancing HTTP requests; in fact, HAProxy can be used for protocols other than HTTP.
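As a concrete illustration (a sketch, not taken from the linked article), the HAProxy side of such a setup is typically a plain TCP listener on the MySQL port; the VIP, node addresses and balancing choices below are assumptions.

    listen galera_cluster
        # The VIP the application's MySQL connector points at
        bind 10.0.0.100:3306
        mode tcp
        balance leastconn
        option tcpka
        # Plain TCP checks shown here; mysql-check or an external clustercheck
        # script are common alternatives for Galera-aware health checks
        server galera1 10.0.0.11:3306 check
        server galera2 10.0.0.12:3306 check backup
        server galera3 10.0.0.13:3306 check backup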
QUESTION
I have installed the ingress controller via Helm as a DaemonSet. I have configured the Ingress as follows:
...ANSWER
Answered 2021-Dec-07 at 10:41
I installed the latest haproxy-ingress, which is version 0.13.4, using Helm. By default it's installed with the LoadBalancer service type:
QUESTION
I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that is configurable to listen on one or more interfaces (external IPs).
I'm not even sure whether that is easily possible or whether I should just switch to an external solution, such as HAProxy or nginx.
Required behavior:
...ANSWER
Answered 2021-Sep-13 at 04:48
"I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that is configurable to listen on one or more interfaces (external IPs)."
For the above scenario, you have to deploy multiple Nginx ingress controllers and keep a different class name for each.
Official documentation: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
In this scenario, you create a Kubernetes Service with a LoadBalancer IP for each controller; each Service points to the respective deployment, and the class is used in the Ingress object.
If you are looking to use multiple domains with a single ingress controller, you can easily do it by specifying the hosts in the Ingress.
Example for two domains:
- bar.foo.dev
- foo.bar.dev
YAML example
QUESTION
Currently I have a system where HAProxy is installed on one machine, three other machines serve the web apps, and a fourth machine is for the database. Now I need to add another load balancer to my system so that either load balancer can pick up a request and process it.
But I don't understand how exactly we are going to configure a second load balancer when my domain, say example.com, currently points to the IP address of the existing load balancer. When I add a second load balancer:
Will there be a third machine where something needs to be installed so that it can redirect requests to one of my load balancers? If so, that is again a single point of failure and creates a bottleneck.
If I am going to have two machines running load balancers, how exactly do requests come in, given that both machines have different IPs?
ANSWER
Answered 2021-Oct-21 at 07:36
This sort of thing is generally achieved either by putting both load balancers in DNS ("round-robin DNS"), so a lookup for app.example.com might resolve to either lb1.example.com or lb2.example.com, or by having an anycast IP address that can route to any individual load balancer (where the one chosen depends on the network topology between a client and the load balancer).
QUESTION
Does someone have clear instructions for upgrading HAProxy to the latest stable release?
We're presently using 1.8.19 on a Debian VM and need to upgrade it to 2.1.3.
I came across the following instructions: https://blog.geralexgr.com/linux/upgrade-haproxy-to-2-1-3-red-hat-enterprise-linux-server-centos
However, they really aren't clear for someone who's never done this before. I don't want to be compiling source code unless I absolutely have to.
Running apt-get install haproxy says I'm on the latest version. Why then do I see 2.1.3 as the latest stable release?
Any help would be appreciated, as always!
...ANSWER
Answered 2021-Oct-12 at 14:48
Right now the current stable release is version 2.4; you can get the binaries to install it via apt from haproxy.debian.net.
Also, you can install previous versions of HAProxy, such as 2.1, 2.2 and 2.3.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.