Random exchange behavior on node failure

I was actually trying to avoid the performance impact of replicated queues.

I have two types of requests for my specific problem:

The first type is mission critical: speed/latency is key, errors are uncommon, and clients will retry (the mandatory flag is set). These are sent to the random exchange and “balanced” across one queue per node. Each machine has consumers listening to all of the queues bound to the exchange, so if a host is down (or even two!), at least one consumer is still processing requests.
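
For illustration, a minimal pika sketch of that layout, assuming the rabbitmq_random_exchange plugin is enabled; the exchange and queue names here are made up:

```python
# Sketch only: one classic queue per node, all bound to an x-random exchange,
# published to with the mandatory flag so unroutable messages come back to the client.
# Exchange/queue names are invented; requires the rabbitmq_random_exchange plugin.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# The random exchange routes each message to one of its bound queues.
channel.exchange_declare(exchange="requests.random", exchange_type="x-random", durable=True)

for node in ("node1", "node2", "node3"):
    queue = f"requests.{node}"
    channel.queue_declare(queue=queue, durable=True)
    channel.queue_bind(queue=queue, exchange="requests.random", routing_key="")

# Publisher confirms + mandatory=True: an unroutable message raises UnroutableError,
# which matches the "clients will retry" behaviour described above.
channel.confirm_delivery()
try:
    channel.basic_publish(
        exchange="requests.random",
        routing_key="",
        body=b"mission-critical request",
        mandatory=True,
    )
except pika.exceptions.UnroutableError:
    pass  # client-side retry / failover goes here
```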

The other type of request goes to quorum queues and is typically published asynchronously while the first type is being processed. I use these for batching, auditing, etc. I don’t want to lose these messages, but they can be processed later if that’s unavoidable.
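
For reference, declaring such a queue as a quorum queue is just an extra argument; a minimal sketch with an invented queue name:

```python
# Sketch only: the second request type goes to a quorum queue; the name is invented.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.queue_declare(
    queue="audit.batch",
    durable=True,                          # quorum queues must be declared durable
    arguments={"x-queue-type": "quorum"},  # replicated across cluster nodes via Raft
)
```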

If I used quorum queues for the first type, even if I could bear the performance penalty, they don’t survive two nodes going down.

This is pertinent because my infrastructure spans two physical server rooms, so if one room is “down” I can lose two of the nodes at once, and quorum queues don’t survive that. I would like to keep processing the mission-critical requests in that case.

Regarding alternate exchanges, the problem is deciding where to send the requests to be processed.

Something like asynchronously checking every N seconds whether a queue is available, to determine if a binding is “active”, would solve my case; after all, the cluster knows (?) whether a queue is available. But I guess something like this would require some custom code.
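
A minimal sketch of that custom code, assuming a periodic passive declare is an acceptable way to probe availability (the management HTTP API would be an alternative); the queue names and interval are hypothetical:

```python
# Sketch only: probe queue availability every N seconds with passive declares and use
# the result to decide which bindings to treat as "active".
import time
import pika

QUEUES = ["requests.node1", "requests.node2", "requests.node3"]  # invented names
CHECK_INTERVAL = 5  # "every N seconds"

def available_queues(connection: pika.BlockingConnection) -> set:
    alive = set()
    for name in QUEUES:
        channel = connection.channel()  # fresh channel: a failed passive declare closes it
        try:
            channel.queue_declare(queue=name, passive=True)  # errors if the queue is unreachable
            alive.add(name)
            channel.close()
        except pika.exceptions.AMQPChannelError:
            pass  # queue missing or its home node down: treat its binding as inactive
    return alive

if __name__ == "__main__":
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    while True:
        print("available:", available_queues(connection))
        time.sleep(CHECK_INTERVAL)
```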

Am I overthinking this?
