High Availability of ActiveMQ with Shared Storage

Posted by Amar Singh, Nikita Saroha

In a production environment, there are multiple disaster scenarios that need to be planned for: network failures, hardware failures, software failures, and power outages. ActiveMQ can be configured to prevent such failures from taking your application down. Typically, you will want to run multiple ActiveMQ brokers so that if one fails, another can take over and the whole system stays online and highly available. In ActiveMQ terminology, such a deployment of multiple brokers is called master/slave. In a master/slave configuration, one broker takes the role of master (the primary broker), and all other brokers wait for the master to fail, at which point one of them takes over and becomes the new master. This ensures high availability with minimal downtime.

ActiveMQ currently provides two types of master/slave configurations: shared nothing, where each broker has its own message storage, and shared storage, where multiple brokers connect to a shared store but only one broker is active at a time. This blog discusses only the shared storage master/slave configuration.

When to use shared nothing master/slave        

You should use a shared nothing master/slave configuration in production when some downtime on failure is acceptable. Manual intervention by an administrator is necessary after a master fails, as it is advisable to set up and configure a new slave for the new master once the old master has failed.

Shared storage master/slave

Brokers in shared nothing master/slave remain independent of each other, whereas shared storage master/slave allows multiple brokers to share the storage mechanism, with only one broker live at a time. Also, no manual effort is required when the master broker fails, because a slave automatically takes over and becomes the new master. Another benefit is that there is no limit on the number of slave brokers that can be run in a shared storage master/slave topology.

The diagram above depicts the master/slave configuration using shared storage. All the brokers, master and slaves, are connected over the network. Below is the network configuration required to connect the brokers.
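As a minimal sketch (the broker name, bind address, and port below are illustrative assumptions, not taken from the original post), each broker exposes its own transport connector, while the persistence adapter points every broker at the same shared store, as described in the sections below:

```xml
<!-- activemq.xml on each broker; only brokerName and the bind
     address/port differ per host. The persistence adapter must be
     identical on every broker so they compete for the same store lock. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
    <transportConnectors>
        <!-- Clients reach the group through a failover URI that lists
             each broker's connector. -->
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
    </transportConnectors>
</broker>
```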

Shared database master/slave

When an ActiveMQ broker uses a relational database, it acquires an exclusive lock on a table to ensure that no other ActiveMQ broker can access the database at the same time. This is because the state of a broker is held in the storage mechanism, which is designed to be used by only a single broker at a time. In a master/slave configuration, the master acquires the lock and all the slave brokers keep polling until they can acquire it. While in this polling state, a broker assumes it is a slave and does not start any transport or network connections.

If the master broker fails, a slave broker grabs the lock on the database and takes over as the master. Since all the ActiveMQ brokers use the same shared database, no additional manual intervention is required to introduce new brokers or to remove existing ones. The failed broker can be inspected and restarted to rejoin the topology as a slave.

The following configuration is required:
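A minimal sketch of the JDBC persistence adapter configuration; the datasource bean id, MySQL driver, URL, and credentials here are illustrative assumptions:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
    <persistenceAdapter>
        <!-- createTablesOnStartup should be true only for the very
             first startup; change it to false once the tables exist. -->
        <jdbcPersistenceAdapter dataSource="#mysql-ds"
                                createTablesOnStartup="true"
                                useDatabaseLock="true"
                                lockAcquireSleepInterval="5000"/>
    </persistenceAdapter>
</broker>

<!-- Example datasource; the URL and credentials are placeholders. -->
<bean id="mysql-ds" class="org.apache.commons.dbcp2.BasicDataSource"
      destroy-method="close">
    <property name="driverClassName" value="com.mysql.cj.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://dbhost:3306/activemq?relaxAutoCommit=true"/>
    <property name="username" value="activemq"/>
    <property name="password" value="activemq"/>
</bean>
```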

Properties available:

  • createTablesOnStartup – set to true so the tables are created on first startup; change to false once the tables exist.
  • useDatabaseLock – enables the exclusive lock on the database.
  • lockAcquireSleepInterval (optional) – the interval, in milliseconds, at which a slave polls the lock to check whether it can become the master broker.

When to use shared database master/slave

Shared database master/slave is an ideal configuration if you are already using an enterprise relational database. Although generally slower than a shared nothing configuration, it requires no additional configuration, and there are no limitations on the number of slave brokers that can be run or when they can be run.

Shared file system master/slave

An alternative to using a shared database is a shared file system. The setup is similar to the shared database master/slave configuration. Likewise, there is no need for manual intervention and no limit on the number of brokers in the topology. It is recommended that you use the KahaDB message store, backed by an underlying shared file system for the message storage.

The first broker that starts grabs the file lock and becomes the master. This prevents all other brokers from acquiring the lock and accessing the file-based message store at the same time. The remaining brokers automatically become slaves: since they do not hold the lock on the message store, their network and transport connections are not started.
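A minimal sketch of the persistence configuration for this setup (the mount path is an illustrative assumption); every broker in the group points at the same directory on the shared file system:

```xml
<persistenceAdapter>
    <!-- Identical on every broker: the directory must live on a shared
         file system that supports distributed locking (e.g. a SAN or
         NFSv4 mount). The broker holding the file lock is the master. -->
    <kahaDB directory="/sharedFileSystem/activemq/kahadb"/>
</persistenceAdapter>
```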

When to use shared file system master/slave

Using a shared file system is probably the best solution for providing high availability for ActiveMQ to date. It combines the high throughput of KahaDB with the simplicity of using a shared resource. KahaDB is limited only by the performance of the underlying shared file system. The only caveat is that you are restricted to environments that support distributed locking on a shared file system.

Client connection

A client connecting to this master/slave configuration should use the failover transport, as shown below:
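A sketch of the connection URI (the broker hostnames and ports are illustrative assumptions); the failover transport tries each broker in the list and transparently reconnects to whichever one is the current master:

```
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
```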


So ActiveMQ provides features that make it resilient to failures, and you can choose whichever master/slave configuration suits your requirements.

