marathon-lb | service discovery & load balancing tool | Load Balancing library
kandi X-RAY | marathon-lb Summary
Marathon-lb is a service discovery & load balancing tool for DC/OS
Top functions reviewed by kandi - BETA
- Load configuration
- Get global default options
- Argument parser
- Set logging options
- Set the arguments for Marathon authentication
- Return list of available HAProxy templates
- Try to reset the event
- Validate the haproxy config file
- Reload the configuration
- Return the set of haproxy pids
- Setup logging
- Get marathon auth params
- Process command line arguments
- Fetch events from the SSE stream
- Iterate over response chunks
- Check status code
- Return received data
- Set the ports
- Get an event stream
- Called when a signal is received
- Reload existing configuration
- Reset all pending tasks
- Start the thread
- Set request retries
- Stop the thread
Community Discussions
QUESTION
I've been unable to find an answer to this after days of Googling. I have a service running in Marathon/Mesos. I have a Prometheus cluster scraping metrics. My Marathon metrics port config looks like this:
...ANSWER
Answered 2018-Dec-27 at 00:32
It is a known bug in Prometheus: it uses the servicePort property from the Marathon app definition instead of the hostPort one. It was fixed in v2.6.0.
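To illustrate the distinction, here is a minimal Marathon port mapping (ports and labels are hypothetical, not from the original question). Before v2.6.0, Prometheus' Marathon service discovery would target the fixed servicePort (10004 here) rather than the dynamically assigned hostPort on the agent, so scrapes failed:

```json
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "portMappings": [
        {
          "containerPort": 9100,
          "hostPort": 0,
          "servicePort": 10004,
          "protocol": "tcp"
        }
      ]
    }
  }
}
```

With hostPort set to 0, Marathon assigns an arbitrary host port at deploy time; the servicePort stays fixed and is only meaningful to the load balancer.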
QUESTION
We can deploy the package with dcos commands like
...ANSWER
Answered 2018-Aug-28 at 11:54
I installed Marathon-LB from the Universe package, then accessed http://azurehost.azure.com/marathon/v2/apps, where I got the correct Marathon application definition for marathon-lb. Now I am deploying that same definition through a curl command and it is working fine.
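The commands referred to above might look like the following sketch (the second host name is a placeholder for your own Marathon endpoint; it is not from the original answer):

```shell
# Fetch the existing marathon-lb app definition from the running cluster
curl -s http://azurehost.azure.com/marathon/v2/apps/marathon-lb | jq .app > marathon-lb.json

# Deploy the same definition via the Marathon REST API
curl -X POST -H "Content-Type: application/json" \
  http://your-marathon-endpoint/marathon/v2/apps -d @marathon-lb.json
```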
QUESTION
I have DC/OS up and running. I created a service and I am able to access it through ip:port, but when I try to do the same with marathon-lb I just can't reach it. I tried curl http://marathon-lb.marathon.mesos:10000/ (10000 being the service port) and I still get connection refused.
Here is my json for service:
...ANSWER
Answered 2017-Sep-17 at 18:52
Both accessing it from outside the cluster using public-ip:10000 (see here for finding the public IP) and from inside the cluster using curl http://marathon-lb.marathon.mesos:10000/ worked fine. Note that you need marathon-lb installed (dcos package install marathon-lb), and marathon-lb.marathon.mesos can only be resolved from inside the cluster.
To debug marathon-lb issues, I usually check the HAProxy stats first, from inside the cluster: https://dcos.io/docs/1.9/networking/marathon-lb/marathon-lb-advanced-tutorial/#deploy-an-external-load-balancer-with-marathon-lb
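As a further checklist: for marathon-lb to pick a service up at all, the app definition generally needs the HAPROXY_GROUP label and a servicePort matching the port you curl. A minimal sketch (the app id, image, and ports here are hypothetical):

```json
{
  "id": "/my-service",
  "labels": { "HAPROXY_GROUP": "external" },
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx:alpine",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 10000 }
      ]
    }
  }
}
```

Without the HAPROXY_GROUP label, marathon-lb ignores the app entirely and nothing listens on the service port, which produces exactly a connection-refused error.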
QUESTION
I find myself in a situation where I need to scale down container instances based on their actual lifetime. It looks like fresh instances are removed first when scaling down through Marathon's API. Is there any configuration I'm not aware of to implement this kind of strategy or policy when scaling down instances on Apache Marathon?
As of right now I'm using marathon-lb-autoscale to automatically adjust the number of running instances. What actually happens under the hood is that marathon-lb-autoscale performs a PUT request that updates the instances property of the application whenever requests per second increase or decrease.
ANSWER
Answered 2017-Aug-25 at 12:28
One can specify a killSelection directly inside the application's config: YoungestFirst kills the youngest tasks first, while OldestFirst kills the oldest ones first.
Reference: https://mesosphere.github.io/marathon/docs/configure-task-handling.html
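In an app definition this is a single top-level field; a minimal sketch (the app id and instance count are hypothetical):

```json
{
  "id": "/my-app",
  "instances": 10,
  "killSelection": "OLDEST_FIRST"
}
```

With this in place, a scale-down PUT from marathon-lb-autoscale removes the longest-running tasks first instead of the freshest ones. Note the exact accepted value strings vary by Marathon version, so check the linked docs for your release.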
QUESTION
My marathon-lb configuration:
...ANSWER
Answered 2017-Aug-03 at 08:58
As you probably know, marathon-lb is HAProxy plus some wrappers. You can add a redirect to the HAProxy configuration by using the HAPROXY_0_BACKEND_HTTP_OPTIONS label. There is a legacy reqrep statement which you may find convenient, or you can go for a 301 redirect. For example:
"HAPROXY_0_BACKEND_HTTP_OPTIONS": " reqrep ^/([\w\-]+?)$ /results?q=\\1 \n",
or
"HAPROXY_0_BACKEND_HTTP_OPTIONS": " acl is_foo path -i /foo \n redirect code 301 location /bar if is_foo\n",
Note the double spaces for the indent, and note that you will have to play with escapes to make it work.
QUESTION
I've installed my application on the Azure DC/OS container service. I use marathon-lb to map Docker containers to an external endpoint. The config looks like:
...ANSWER
Answered 2017-Jul-15 at 15:07
Nothing prevents you from doing that: add a CNAME alias for api1.my-site.com pointing to api1.mycustomename-devagents.westeurope.cloudapp.azure.com and you are done.
QUESTION
I know that 'servicePort' is used by marathon-lb to identify an app. Is there any other user of this setting besides marathon-lb?
If the answer is no, why is it mandatory (omitting it will generate one for me)? I have many Marathon apps which are not managed by marathon-lb, and they all take up service ports by default.
ANSWER
Answered 2017-Apr-28 at 15:39
From the documentation: "servicePort is a helper port intended for doing service discovery using a well-known port per service. The assigned servicePort value is not used/interpreted by Marathon itself but supposed to be used by the load balancer infrastructure."
So service ports seem to have no use other than for marathon-lb. When you don't specify a servicePort, it's as if you put in "servicePort": 0. See the closed issue here.
There is also a discussion about the re-architected networking API. If you look at the Jira ticket, you will see that the new API model lets you define services without servicePorts at all.
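For an app that marathon-lb should ignore, you can make the "servicePort": 0 behavior explicit in the port definitions; a sketch using the Marathon app schema (the id and port name are hypothetical):

```json
{
  "id": "/internal-only-app",
  "portDefinitions": [
    { "port": 0, "protocol": "tcp", "name": "http" }
  ]
}
```

A port of 0 asks Marathon to assign an arbitrary service port from its configured range; the app still consumes a port from that range, which is the behavior the question observes.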
QUESTION
I have a cluster managed by DC/OS and a dockerized service that I want to deploy through Marathon. I already have a marathon-lb that is used for service discovery and load balancing of other existing services. All of these services are deployed using BRIDGE networking.
The new service exposes more than one port: port A is for communication between instances of the service, and port B is for accepting requests from the outside world. I want to use HOST (not BRIDGE) networking for this service.
I would like to know how to configure the service's JSON so that marathon-lb load-balances and externally exposes port B.
I have already tried various scenarios and configurations, but none worked. The JSON that I have constructed is below.
...ANSWER
Answered 2017-Jan-11 at 17:51
It looks like you have requirePorts in the wrong section of the app definition. It should be at the top level, like this:
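The example that followed was not preserved on this page; a sketch of the intended placement under the Marathon app schema (the id, ports, and names are hypothetical) puts requirePorts as a sibling of id rather than inside the container section:

```json
{
  "id": "/my-host-mode-service",
  "requirePorts": true,
  "portDefinitions": [
    { "port": 10101, "protocol": "tcp", "name": "portA" },
    { "port": 10102, "protocol": "tcp", "name": "portB" }
  ],
  "labels": { "HAPROXY_GROUP": "external" }
}
```

In HOST networking, requirePorts: true makes Marathon schedule the task only on agents where the listed ports are free, so the service actually binds the ports that marathon-lb expects.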
QUESTION
I have successfully created host and bridge mode marathon apps without issue, and used l4lb and marathon-lb to host them. That all works without a problem.
I'm now trying to use USER mode networking, using the default "dcos" 9.0.0.0/8 network. In this mode my apps can only talk to other containers on the same agent. Each host OS can only talk to containers hosted on itself. It appears that nodes can't route traffic between each other on the virtual network.
For testing I'm using the docker "nginx:alpine" container, with 2 instances, on different hosts. Their IPs are 9.0.6.130 and 9.0.3.130. No L4LB or Marathon-LB config, no service endpoints, no ports exposed on the host network. Basically:
...ANSWER
Answered 2017-Feb-25 at 01:29
Figured this out with help from the DC/OS community mailing list.
RHEL7 installs firewalld by default, which DC/OS needs disabled. I had done that, but that still leaves the FORWARD policy as DROP until the node is rebooted. DC/OS's firewall manipulation only changes the rules, not the default policy.
This fixes it:
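The original commands were not preserved on this page; based on the explanation above (the default FORWARD policy is left as DROP), the fix would be along these lines, run on each affected agent node:

```shell
# Reset the default FORWARD policy that firewalld left behind
# (likely form of the fix; the original commands are not preserved here)
sudo iptables -P FORWARD ACCEPT
```

Alternatively, rebooting the node after disabling firewalld clears the leftover policy, as noted above.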
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install marathon-lb
Using marathon-lb
Securing your service with TLS/SSL (blog post)