round-robin | faster round-robin balancing algorithm written in golang
kandi X-RAY | round-robin Summary
faster round-robin balancing algorithm written in golang
Top functions reviewed by kandi - BETA
- New returns a new round-robin balancer
- Next returns the next URL.
Community Discussions
Trending Discussions on round-robin
QUESTION
We have a 2-node K3s cluster with one master and one worker node and would like "reasonable availability": if one or the other node goes down, the cluster should still work, i.e. ingress should reach the services and pods, which we have replicated across both nodes. We have an external load balancer (F5) which does active health checks on each node and only sends traffic to nodes that are up.
Unfortunately, if the master goes down the worker will not serve any traffic (ingress).
This is strange because all the service pods (which ingress feeds) on the worker node are running.
We suspect the reason is that key services such as the traefik ingress controller and coredns are only running on the master.
Indeed, when we simulated a master failure (restoring it from a backup), none of the pods on the worker could do any DNS resolution. Only a reboot of the worker solved this.
We've tried to increase the number of replicas of the traefik and coredns deployments, which helps a bit, BUT:
- This gets lost on the next reboot
- The worker still functions when the master is down but every 2nd ingress request fails
- It seems the worker still blindly (round-robin) sends traffic to a non-existent master
We would appreciate some advice and explanation:
- Should not key services such as traefik and coredns be DaemonSets by default?
- How can we change the service description (e.g. replica count) in a persistent way that does not get lost?
- How can we get intelligent traffic routing with ingress to only "up" nodes?
- Would it make sense to make this a 2-master cluster?
UPDATE: Ingress Description:
...ANSWER
Answered 2022-Mar-18 at 09:50

Running a single-master or two-master k8s cluster is not recommended, as it doesn't tolerate failure of the master components. Consider running 3 masters in your Kubernetes cluster.
The following link would be helpful: https://netapp-trident.readthedocs.io/en/stable-v19.01/dag/kubernetes/kubernetes_cluster_architecture_considerations.html
QUESTION
In my problem, RSS does not achieve a good load balance between CPU cores because the RX packets have been modified: tags are inserted between the MAC and IP headers, so DPDK cannot recognize the packets. Assume I want to load balance by way of round-robin; multiple RX queues have been set up. The answer to this question: How to disable RSS but still using multiple RX queues in DPDK? says it is possible to load balance in round-robin fashion by using RTE_FLOW. What is the right way to do it programmatically? I would like to know the API or structs for setting up the round-robin method. Here is my runtime environment information:
1) DPDK version: 19.11.9
2) NIC PMD: ixgbe
3) firmware: 825999 and XXV710
4) OS version: Ubuntu 16.04, kernel 4.4.0-186
...ANSWER
Answered 2022-Mar-26 at 02:47

As per the question:
RSS did not have a good load balance between CPU cores because the RX packets have been modified by inserting tags between MAC and IP
There are a couple of items which need to be clarified, so let me explain:
- Certain physical and virtual NICs expose RSS via DPDK RX offload for fixed tuples like IP, protocol, and TCP|UDP|SCTP port number.
- Certain NICs allow configuring the hash RETA algorithm to better suit one's needs (for example, when the source and destination IP addresses are fixed, we can skip them and use other fields).
- As I recollect, from DPDK 18.11 RTE_FLOW has been introduced to support RSS on selected RX queues (for example, Q1,Q2,Q3 can be RSS for TCP packets while Q4,Q5 can be used for UDP). But again, this is based on either the inner or outer IP + port number.
- From DPDK version 19.11 onwards, RTE_FLOW has been enhanced to support the RAW pattern. The intent of this feature is to support special protocols which the NIC does not understand by default (VXLAN, GENEVE, RTP and other protocols).
- NICs like Fortville and Columbiaville (from Intel) allow loading special firmware via DDP (Dynamic Device Personalization) to configure special fabric headers or MPLS-like headers (between Ethernet and IP) to be parsed, looked up, and used as a seed for RSS (allowing better distribution).
- There are NICs which do support the L2 layer, but these would be limited to SMAC, DMAC, VLAN1, VLAN2, and MPLS only, and not a custom header.
Hence, depending upon the NIC, the vendor, RSS support for L2, and the firmware, the ability to calculate RSS on fields between Ethernet and IP varies, both in port init and in RTE_FLOW-specific configuration. For example, RSS on ETH fields is supported on:
- I40E: I40E_INSET_DMAC | I40E_INSET_SMAC
- DPAA2: NH_FLD_ETH_TYPE and NET_PROT_ETH
- CNXK: RSS_DMAC_INDEX
- OCTEONX2: FLOW_KEY_TYPE_VLAN and FLOW_KEY_TYPE_CH_LEN_90B
Hence, for the NICs ixgbe and XXV710, there is no ready support for a custom header between Ethernet and IP.
Alternatives:
- Use a smart NIC or FPGA that is programmed to parse and RSS on your specific headers onto multiple RX queues.
- Work with Intel using XXV710 (Fortville) to create a DDP profile which can parse your specific headers and RSS onto multiple RX queues.
- Identify a DPDK NIC which can parse the RAW header as defined in 12.2.6.2. Once the support is added by the vendor, you can create a simple traffic-spread tool which will ensure traffic distribution across the desired RX queues in round-robin fashion.
- Use SW to make up for the missing hardware support.
Note: I am not recommending the use of HW-based static round-robin, as it creates a two-fold problem:
- If it is pure DPDK BOND round-robin, you will not have any flow pinning.
- If you use hash-based pinning, there are chances an elephant flow can be pushed onto one or a few queues, causing performance drops in CPU processing.
- My recommendation is to use the EVENTDEV model with atomic mode, which ensures better cache locality (at a given instance, the same flows will fall onto the same worker thread) and almost linear performance (see the sample app).
For option 4 (Software Model):
- Disable RSS in the port_init function.
- Use a single RX queue to receive all packets, optionally in a custom RX thread.
- Calculate the hash based on the desired header and update the mbuf hash field.
- Use the rte_distributor library to spread traffic based on the custom hash,
- or use rte_eventdev with the atomic model to spread the workload onto multiple workers (a rough sketch of this model follows below).
[Clarification from Comments]:
- I have asked relevant practitioners, and they said modifying the PMD driver can solve my problem. Is that the only way?
[ANSWER] Since you are using a custom header and not generic VLAN|IP|Port, this suggestion is not correct. As you have clarified in the question and comments, you want RSS-like distribution for a custom header.
- I haven't written any code for rte_flow distribution yet. I read the rte_flow example and don't see the code to configure round-robin.
[ANSWER] As explained above, not all NICs support RSS and RAW. Since your current NICs are ixgbe and i40e, parsing and executing RSS for a custom header is unlikely to work. You can try option 2 (work with Intel to create a new DDP profile) for i40e to achieve the same, or implement it in SW as suggested in option 4.
- I'm not asking for a solution; I just want to know how to set up round-robin via RTE_FLOW. Can you give me a few APIs?
[ANSWER] Normally one updates the question with code snippets or the steps used to reproduce the error, but the current question is more of a clarification. Please refer to the explanation above.
QUESTION
I am using ActiveMQ Artemis 2.19.1. I created producer and consumer apps using Spring Boot. I need multiple instances of the consumer to receive all the messages (multicast). I configured a Last Value Queue like this (broker.xml):
ANSWER
Answered 2022-Mar-16 at 19:17

It sounds to me like everything is working as designed. I believe your expectations are being thwarted because you're using pub/sub (i.e. JMS topics).
Let me provide a bit of background. When a JMS client creates a subscription on a topic the broker responds by creating a multicast queue on the address with the same name. The queue is named according to the kind of subscription it is. If it is a non-durable subscription then the queue is named with a UUID. If it is a durable subscription then the queue is named according to the subscription name provided by the client and the client ID (if available). When a message is sent to the address it is put in all the multicast queues bound to that address.
Therefore, when a new non-durable subscription is created a new queue for that subscription is also created which means that the subscriber will receive none of the messages sent to the topic prior to the creation of the subscription. This is the expected behavior for JMS topics (i.e. normal pub/sub semantics). Also, since the queue for a non-durable subscription is only available while the subscriber is connected that means there's no way to enforce LVQ semantics since any message which arrives in the queue will be immediately dispatched to the consumer. In short, LVQ with JMS topics doesn't make a lot of sense.
The behavior changes when you use a JMS queue because the queue is always there to receive messages. Consumers can come and go as they please while the broker enforces LVQ semantics.
One possible solution would be to create a special "initialization" queue where consumers could initially connect to get the latest information, and after that they could subscribe to the JMS topic to get the pub/sub semantics you need. You could use a divert to make this transparent for the applications sending the messages so they can continue to just send to the JMS topic. Here's a sample configuration:
QUESTION
I am learning about the headless service of Kubernetes.
I understand the following without question (please correct me if I am wrong):
- A headless service doesn't have a cluster IP.
- It is used for communicating with stateful apps.
- When a client app container/pod communicates with a database pod via a headless service, the pod's IP address is returned instead of the service's.
What I'm not quite sure about:
- Many articles on the internet explaining headless services are vague, in my opinion, because all I found only directly state something like:
If you don't need load balancing but want to directly connect to the pod (e.g. database) you can use headless service
But what does it mean exactly?
So, the following are my thoughts on headless services in k8s, and two questions with an example.
Let's say I have 3 replicas of a PostgreSQL database instance behind a service. If it is a regular service, I know that by default requests to the database would be routed in a round-robin fashion to one of the three database pods. That's indeed load balancing.
Question 1:
If using a headless service instead, does the above-quoted statement mean the headless service will stick with one of the three database pods and never change until that pod dies? I ask this because otherwise it would still be doing load balancing if it didn't stick with one of the three pods. Could someone please clarify this?
Question 2:
I feel that no matter whether it is a regular service or a headless service, the client application just needs to know the DNS name of the service to communicate with the database in the k8s cluster. Isn't that so? I mean, what's the point of using a headless service then? To me, a headless service only makes sense if the client application code really needs to know the IP address of the pod it connects to. So, as long as the client application doesn't need to know the IP address, it can always communicate with the database via the service's DNS name in the cluster, whether with a regular service or a headless service. Am I right here?
...ANSWER
Answered 2022-Mar-18 at 22:06

A headless service will return all Pod IPs that are associated through the selector. The order is not stable, so if a client is making repeated DNS queries and uses only the first returned IP, this will result in some kind of load balancing as well.
Regarding your second question: That is correct. In general, if a client does not need to know all instances - and handle the unstable IPs - a regular service provides more benefits.
QUESTION
I am using Databricks Labs Data Generator to send synthetic data to Event Hub.
Everything appears to be working fine for about two minutes, but then the streaming stops and provides the following error:
...ANSWER
Answered 2022-Mar-18 at 18:57

This is due to the usual traffic throttling from Event Hubs. Take a look at the limits for 1 TU: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-quotas. You can increase the number of TUs to 2 and then go from there. If you think this is unexpected throttling, then open a support ticket for the issue.
QUESTION
We have our cluster running locally (for now) and everything seems to be configured correctly. Our prime calculation messages are distributed over our seednodes. However, we are intermittently losing messages. You can see the behaviour of two runs in the screenshot. Which messages are marked as dead letters isn't consistent at all.
Our messages are always sent the same way; they look like this. The last parameter means the nth prime to find.
...ANSWER
Answered 2022-Mar-03 at 20:00

I think I know what the issue is here - you don't have any akka.cluster.roles defined, nor is your /commander router configured with the use-role setting - so as a result, every Nth message is being dropped because it's trying to route a message to itself and does not have a /user/cluster actor present to receive it.
To fix this properly, we should do the following:
- Have all nodes that can process the PrimeCalculationEntry declare akka.cluster.roles=[prime]
- Have the node with the /commander router change its HOCON to:
QUESTION
I need to use 3 goroutines named g1, g2, and g3, and distribute the numbers 1-10 among these 3 goroutines in a round-robin fashion. They will do some hypothetical work based on the provided number, and the program should print output in the following manner:
g1-1
g2-2
g3-3
g1-4
g2-5
g3-6
...
Tasks must be performed concurrently but the output must be in sequential order.
I have implemented the below code, which distributes the numbers and prints them, but the output order is not guaranteed as required above.
I need some help to fix the below code, or suggestions on another approach to get the desired output.
Approach 1:
...ANSWER
Answered 2022-Feb-17 at 13:54

If you don't want to use a slice, then I think something like this will work (playground):
QUESTION
I have a table called loop_msg with msg_id and content. I have a second table called loop_msg_status with channel and msg_id. This is used to post messages round-robin in different channels, so I need to keep track of which msg_id has been posted last in each channel.
ANSWER
Answered 2022-Feb-14 at 05:22

Final SQL with the schema (and expected result) given in the question:
QUESTION
I am wondering how compose implements services. To my understanding, each thing that compose does could be done with the docker CLI. For example, creating container, binding volumes, exposing ports and joining them on networks.
The one thing that is a black box in my understanding is how Compose achieves the concept of a service as a unit, such that when you specify replicas under the deploy key, you get DNS round-robin style load balancing, similar to when you specify --endpoint-mode dnsrr with swarm.
Can this actually be achieved with CLI commands, or does compose do some tricks with the SDK? In both cases, my question would be what exactly happens there?
...ANSWER
Answered 2022-Jan-28 at 23:18

So the key here is the network alias: Compose attaches every replica of a service to the network under the same alias (the service name), so Docker's embedded DNS server returns one container IP per replica for that name, which is what produces the dnsrr-style round-robin behavior.
QUESTION
I want to allow a client to send a task to some server at a fixed address. The server may take that task and perform it at some arbitrary point in the future, but may still take requests from other clients before then. After performing the task, the server will reply to the client, which may have been running a blocking wait on the reply. The work and clients come dynamically, so there can't be a fixed initial number. The work is done in a non-thread-safe context, so workers can't exist on different threads; all work should take place in a single thread.

Implementation

The following example is not a complete implementation of the server, only a compilable section of the sequence that should be able to take place (but is in reality hanging). Two clients send an integer each, and the server takes one request, then the next request, echo-replies to the first request, then echo-replies to the second request. The intention isn't to get the responses ordered, only to allow the server to hold multiple requests simultaneously. What actually happens here is that the second worker hangs waiting on the request - this is what confuses me, as DEALER sockets should route outgoing messages in a round-robin strategy.
...ANSWER
Answered 2022-Jan-23 at 12:51

Let me share a view on how ZeroMQ could meet the above-defined intention.
Let's rather use the ZeroMQ Scalable Formal Communication Pattern archetypes as they actually are today, not as we may wish them to be at some unsure point in a potentially-happening future evolution.
We need not hesitate to use many more ZeroMQ-based connections among a herd of coming/leaving client-instance(s) and the server.
For example:
- Client .connect()-s a REQ-socket to Server-address:port-A to ask for "job"-ticket processing over this connection.
- Client .connect()-s a SUB-socket to Server-address:port-B to listen (if present) for published announcements about already completed "job"-tickets whose results the Server is ready to deliver.
- Client exposes another REQ-socket to request, upon an already broadcast "job"-ticket completion announcement it has just heard about over the SUB-socket, that the "job"-ticket results finally be delivered, proving itself by providing a proper / matching job-ticket-AUTH-key to prove its right to receive the publicly announced results, and using this same socket to deliver a POSACK-message to the Server once the client has correctly received the "job"-ticket results "in hands".
- Server exposes a REP-socket to respond to each client ad hoc upon a "job"-ticket request, notifying it this way about the "accepted" job-ticket and also delivering a job-ticket-AUTH-key for the later pickup of results.
- Server exposes a PUB-socket to announce any and all not-yet-picked-up "finished" job-tickets.
- Server exposes another REP-socket to receive any possible attempt to request delivery of "job"-ticket results. Upon verifying the delivered job-ticket-AUTH-key, the Server decides whether the respective REQ-message had a matching job-ticket-AUTH-key, in which case it indeed delivers a proper message with the results, or whether a match did not happen, in which case the message will carry some other payload data (the logic is left for further thought, so as to prevent potential brute-forcing or eavesdropping and similar, less primitive attacks on stealing the results).
- Clients need not stay waiting for results live/online and/or can survive certain amounts of LoS, L2/L3 errors, or network-storm stresses.
- Clients need just keep some kind of job-ticket-ID and job-ticket-AUTH-key for later retrieval of the Server-processed/maintained/auth-ed results.
- Server will keep listening for new jobs.
- Server will accept new job-tickets, providing a privately added job-ticket-AUTH-key.
- Server will process job-tickets as it decides to do so.
- Server will maintain a circular-buffer of completed job-tickets to be announced.
- Server will announce in public, in due time and repeated as decided, the job-tickets that are ready for client-initiated retrieval.
- Server will accept new retrieval requests.
- Server will verify client requests for a match against any announced job-ticket-ID, and test whether the job-ticket-AUTH-key matches as well.
- Server will respond to matching / non-matching job-ticket-ID results-retrieval request(s) accordingly.
- Server will remove a job-ticket-ID from the circular-buffer only upon both a POSACK-ed AUTH-match before retrieval and a POSACK-message re-confirming delivery to the client.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported