I'll be upfront: I'm primarily a Kafka and Solace person. But I've run RabbitMQ in production on a few projects, and I've come to appreciate it for what it is - a messaging broker that does traditional messaging really well, without pretending to be a distributed log or an event store. If your use case is "send messages between services with routing flexibility and good delivery guarantees," RabbitMQ is a genuinely solid choice.

Cluster Setup: Getting It Right From the Start

A single RabbitMQ node is fine for development. For production, you need a cluster, and there are decisions to make.

RabbitMQ clusters share metadata (exchanges, queue definitions, bindings) across all nodes, but queue contents live on a single node by default. If that node goes down, the queue is unavailable until it comes back. This surprises people who assume "cluster" means "replicated."

For high availability, you want quorum queues (the successor to classic mirrored queues, which were deprecated for years and removed entirely in RabbitMQ 4.0):

@Bean
public Queue ordersQueue() {
    return QueueBuilder.durable("orders")
        .withArgument("x-queue-type", "quorum")
        .build();
}

Quorum queues use the Raft consensus protocol. Data is replicated to a majority of nodes before a publish is confirmed. It's slower than classic queues, but you don't lose messages when a node dies.

The cluster topology that's worked for me: three nodes, quorum queues for anything important, classic queues for transient/high-throughput stuff where message loss is acceptable (like metrics aggregation). Put the nodes in different availability zones if your cloud provider supports it.
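Once the cluster is up, verify that queues actually have the membership you expect. A quick sanity check from any node, using the standard RabbitMQ CLI tools (exact output shape varies by version):

```shell
# Show which nodes are in the cluster and whether any are down or partitioned
rabbitmqctl cluster_status

# For a quorum queue, show its Raft members and the current leader
rabbitmq-queues quorum_status orders

# List queues with their type and leader node - handy for spotting
# something accidentally declared as a classic queue
rabbitmqctl list_queues name type leader members
```

If "orders" shows only one member, the queue was declared before the cluster formed or without the quorum argument, and it won't survive that node dying.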

Spring AMQP Integration

Spring AMQP is the library that makes RabbitMQ feel natural in a Spring Boot application. Add the starter:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>

And Spring Boot auto-configures a ConnectionFactory, RabbitTemplate, and RabbitAdmin for you. Configuration in application.yml:

spring:
  rabbitmq:
    host: rabbit-cluster.internal
    port: 5672
    username: ${RABBIT_USER}
    password: ${RABBIT_PASS}
    virtual-host: /production
    connection-timeout: 5000
    template:
      retry:
        enabled: true
        initial-interval: 1000
        max-attempts: 3

The retry configuration on the template is something I always enable. Without it, a brief network blip causes a publish failure that propagates as an exception to your business logic. With it, the template retries transparently. Three attempts with backoff covers the vast majority of transient issues.
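Template retries cover failures on the client side of the connection; they don't tell you whether the broker actually accepted a message. If you need that guarantee too, publisher confirms can be enabled alongside the retry settings - a minimal sketch using Spring Boot's standard RabbitMQ properties:

```yaml
spring:
  rabbitmq:
    # Broker acks each publish asynchronously, correlated per message
    publisher-confirm-type: correlated
    # Hand unroutable messages back to the publisher instead of dropping them
    publisher-returns: true
```

With correlated confirms you register a ConfirmCallback on the RabbitTemplate to learn, per message, whether the broker took responsibility for it.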

Publishing

@Service
public class OrderPublisher {

    private final RabbitTemplate rabbitTemplate;

    public OrderPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void publishOrder(OrderEvent event) {
        rabbitTemplate.convertAndSend(
            "orders.exchange",    // exchange
            "order.created",      // routing key
            event                 // payload (serialized by the configured MessageConverter)
        );
    }
}
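One caveat: Spring AMQP's default converter is SimpleMessageConverter, which handles Strings, byte arrays, and Java-serializable objects - not JSON. To get Jackson-based JSON payloads like the OrderEvent above, you need to register a converter bean yourself. A minimal sketch (bean name and config class are up to you):

```java
import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter;
import org.springframework.amqp.support.converter.MessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MessagingConfig {

    // Spring Boot's auto-configuration picks up this bean and applies it
    // to both RabbitTemplate publishing and @RabbitListener payload binding.
    @Bean
    public MessageConverter jsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }
}
```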

Consuming

@RabbitListener(bindings = @QueueBinding(
    value = @Queue(value = "inventory.orders", durable = "true",
        arguments = @Argument(name = "x-queue-type", value = "quorum")),
    exchange = @Exchange(value = "orders.exchange", type = "topic"),
    key = "order.*"
))
public void handleOrderEvent(OrderEvent event) {
    inventoryService.processOrder(event);
}

The @RabbitListener annotation with @QueueBinding declares the queue, exchange, and binding in one place. If they don't exist, RabbitAdmin creates them on startup. This is convenient for development but makes me nervous in production - I prefer declaring infrastructure separately, disabling Spring's automatic declaration with spring.rabbitmq.dynamic=false, and managing declarations through Terraform or Ansible.
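The listener above also has no failure story: if processOrder throws, Spring AMQP's default is to requeue and redeliver, which can hot-loop a poison message forever. The usual fix is a dead-letter setup. A sketch of the same quorum queue with dead-lettering - exchange and routing-key names here are illustrative:

```java
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class InventoryQueueConfig {

    @Bean
    public Queue inventoryOrdersQueue() {
        return QueueBuilder.durable("inventory.orders")
            .withArgument("x-queue-type", "quorum")
            // Rejected messages are republished to this exchange
            // instead of being requeued indefinitely
            .withArgument("x-dead-letter-exchange", "orders.dlx")
            .withArgument("x-dead-letter-routing-key", "inventory.orders.dead")
            .build();
    }
}
```

For dead-lettering to actually trigger, rejected messages must not be requeued - either set spring.rabbitmq.listener.simple.default-requeue-rejected=false or throw AmqpRejectAndDontRequeueException from the listener.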

Monitoring on Kubernetes

RabbitMQ on Kubernetes is a common deployment, and the RabbitMQ Cluster Operator makes it reasonably painless. But monitoring is where you need to invest time.

The built-in management plugin gives you a web UI on port 15672 and a Prometheus metrics endpoint on 15692. The metrics you actually want to alert on:

Queue depth growth rate: A queue that's steadily growing means consumers can't keep up. Alert on the rate of change, not the absolute size - some queues are legitimately deep.

Unacknowledged messages: Messages delivered to consumers but not yet acked. If this number is high, your consumers are either slow or stuck. This metric has caught stuck consumers for me more than once.

Memory and disk alarms: RabbitMQ stops accepting publishes when it hits memory or disk watermarks. This is a hard stop - publishers block. Alert well before the watermarks.

Connection churn: Lots of connections opening and closing is a sign of misconfigured clients. Each connection has overhead. A stable system should have relatively stable connection counts.
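The four signals above translate fairly directly into Prometheus alerting rules. A sketch using metric names from the rabbitmq_prometheus plugin - verify the names against your plugin version, and treat every threshold as a placeholder to tune for your workload:

```yaml
groups:
  - name: rabbitmq
    rules:
      # Backlog trending up for 15 minutes - consumers falling behind
      - alert: QueueBacklogGrowing
        expr: deriv(rabbitmq_queue_messages_ready[15m]) > 0 and rabbitmq_queue_messages_ready > 1000
        for: 15m
      # Many delivered-but-unacked messages - consumers slow or stuck
      - alert: HighUnackedMessages
        expr: rabbitmq_queue_messages_unacked > 500
        for: 10m
      # Memory alarm fired - publishers are already being blocked
      - alert: MemoryAlarm
        expr: rabbitmq_alarms_memory_used_watermark > 0
      # Connection churn - clients reconnecting instead of reusing connections
      - alert: ConnectionChurn
        expr: rate(rabbitmq_connections_opened_total[5m]) > 5
        for: 10m
```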

A minimal Prometheus scrape config:

- job_name: 'rabbitmq'
  metrics_path: '/metrics'
  static_configs:
    - targets: ['rabbitmq-0:15692', 'rabbitmq-1:15692', 'rabbitmq-2:15692']

On Kubernetes, use the PodMonitor or ServiceMonitor CRD if you're running the Prometheus Operator. The RabbitMQ Cluster Operator can create these for you.
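If you'd rather declare the scrape target yourself, a minimal PodMonitor looks like this - the label and port name assume the defaults the Cluster Operator applies to its pods, so check them against your deployment:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: rabbitmq
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: rabbitmq   # label set by the Cluster Operator
  podMetricsEndpoints:
    - port: prometheus                   # named port for 15692 on the pod
```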

Event-Driven Microservices with Spring Cloud Stream

Spring Cloud Stream abstracts the messaging broker behind a binding layer. Your application code doesn't know or care whether it's talking to RabbitMQ, Kafka, or something else. In practice, the abstraction leaks (it always does), but for straightforward event-driven flows it's genuinely productive.

@Bean
public Function<OrderEvent, InventoryCommand> processOrder() {
    return order -> {
        var reservation = inventoryService.reserve(order);
        return new InventoryCommand(reservation.getId(), "RESERVE");
    };
}

The matching binding configuration:

spring:
  cloud:
    stream:
      bindings:
        processOrder-in-0:
          destination: orders
          group: inventory-service
        processOrder-out-0:
          destination: inventory-commands
      rabbit:
        bindings:
          processOrder-in-0:
            consumer:
              quorum.enabled: true

The function processOrder reads from the orders exchange and writes to the inventory-commands exchange. The group property creates a competing consumers setup - multiple instances of your service share the work.

Where Spring Cloud Stream shines is in simple pipelines: event in, process, event out. Where it falls apart is when you need fine-grained control over acknowledgment, batching, or error handling. The abstraction layer means you're fighting the framework when you need broker-specific behavior.
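To be fair, the common cases are covered before you hit that wall. The Rabbit binder can wire up in-memory retries and a dead-letter queue declaratively - a sketch using binder properties worth double-checking against your Spring Cloud Stream version:

```yaml
spring:
  cloud:
    stream:
      bindings:
        processOrder-in-0:
          consumer:
            max-attempts: 3          # in-memory retries before giving up
      rabbit:
        bindings:
          processOrder-in-0:
            consumer:
              auto-bind-dlq: true    # declare and bind a DLQ automatically
              republish-to-dlq: true # dead-letter with exception headers attached
```

It's once you need per-message ack decisions or custom batching that these knobs run out and dropping to Spring AMQP makes sense.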

My rule of thumb: use Spring Cloud Stream for new greenfield services with simple messaging patterns. Use Spring AMQP directly for anything that requires precise control over how messages are consumed and acknowledged.

When to Pick RabbitMQ Over Kafka

This is the question everyone asks, and the answer is annoyingly nuanced. But here's my take:

Pick RabbitMQ when:

  • You need flexible routing (topic exchanges, headers exchanges, fanout broadcasts)
  • Your messages are commands, not events (process this, then it's done)
  • You want per-message acknowledgment and dead-letter queues out of the box
  • Your throughput is measured in thousands per second, not millions
  • You need request/reply patterns (RabbitMQ has built-in RPC support)

Pick Kafka when:

  • You need event replay (consumers can re-read history)
  • You need strict ordering within a partition
  • Your throughput requirements are massive
  • Multiple consumers need independent views of the same event stream
  • You want events as the source of truth, not just a transport

Pick neither when:

  • You're building a simple REST-based system and messaging adds complexity you don't need. Seriously, not every system needs a broker. Sometimes a direct HTTP call with a retry policy is the right answer.

The worst architecture decisions I've seen were driven by "we should use X because it's the industry standard" rather than "X solves the specific problem we have." RabbitMQ is excellent at traditional messaging. It's not trying to be Kafka, and that's fine.