### 10.3 Deploying ActiveMQ for large numbers of concurrent applications

Scaling your applications that make use of ActiveMQ can take some time and require  some diligence. In this section, we examine three techniques to help you with this  task. We’ll start with vertical scaling, where a single broker is used for thousands of  connections and queues. Then we’ll look at scaling to tens of thousands of connections  by horizontally scaling your applications using networks. Finally we’ll examine  traffic partitioning, which will balance scaling and performance, but will add more  complexity to your ActiveMQ application.

#### 10.3.1 Vertical scaling

Vertical scaling is a technique used to increase the number of connections (and therefore  load) that a single ActiveMQ broker can handle. By default, the ActiveMQ broker  is designed to move messages as efficiently as possible to ensure low latency and good  performance. But you can make some configuration decisions to ensure that the  ActiveMQ broker can handle both a large number of concurrent connections and a  large number of queues.

By default, ActiveMQ will use blocking I/O to handle transport connections. This  results in a thread being used per connection. You can use nonblocking I/O on the  ActiveMQ broker (and still use the default transport on the client) to reduce the number  of threads used. Nonblocking I/O can be configured via the transport connector  in the ActiveMQ configuration file. An example of this is shown next.
Listing 10.10 Configure the NIO transport connector

<broker>
  <transportConnectors>
    <transportConnector name="nio" uri="nio://localhost:61616"/>
  </transportConnectors>
</broker>
In addition to using a thread per connection for blocking I/O, the ActiveMQ broker can use a thread for dispatching messages per client connection. You can tell ActiveMQ to use a thread pool instead by setting the system property named org.apache.activemq.UseDedicatedTaskRunner to false. Here's an example:

ACTIVEMQ_OPTS="-Dorg.apache.activemq.UseDedicatedTaskRunner=false"
Ensuring that the ActiveMQ broker has enough memory to handle lots of concurrent connections is a two-step process. First, you need to ensure that the JVM in which the ActiveMQ broker runs is configured with enough memory. This can be achieved using the -Xmx JVM option as shown:
ACTIVEMQ_OPTS="-Xmx1024M -Dorg.apache.activemq.UseDedicatedTaskRunner=false"
Second, be sure to configure an appropriate amount of the memory available to the JVM specifically for the ActiveMQ broker. This adjustment is made via the <systemUsage> element's limit attributes. A good rule of thumb is to begin at 512 MB as the minimum for an ActiveMQ broker with more than a few hundred active connections. If your testing proves that this isn't enough, bump it up from there. You can configure the memory limit in the ActiveMQ configuration file as shown in the following listing.
Listing 10.11 Setting the memory limit for the ActiveMQ broker

<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="512 mb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="10 gb" name="foo"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="1 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
It's also advisable to reduce the CPU load per connection. If you're using the OpenWire wire format, disable tight encoding, which can be CPU intensive. Tight encoding can be disabled on a client-by-client basis using URI parameters. Here's an example:
String uri = "failover://(tcp://localhost:61616?" + "wireFormat.tightEncodingEnabled=false)";
ConnectionFactory cf = new ActiveMQConnectionFactory(uri);
We've looked at some tuning aspects for scaling an ActiveMQ broker to handle thousands of connections, so now we can look at tuning the broker to handle thousands of queues.
The default queue configuration uses a separate thread for paging messages from the message store into the queue to be dispatched to interested message consumers. For a large number of queues, it's advisable to disable this by enabling the optimizedDispatch property for all queues, as shown next.
Listing 10.12 Setting the optimizeDispatch property

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" optimizedDispatch="true"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
Note the use of the wildcard > character in listing 10.12, which denotes all queues recursively.

To ensure you can scale not only to thousands of connections, but also to tens of  thousands of queues, use either a JDBC message store or the newer and much faster  KahaDB message store. KahaDB is enabled by default in ActiveMQ.

So far we’ve looked at scaling connections, reducing thread usage, and selecting  the right message store. An example configuration for ActiveMQ, tuned for scaling, is  shown in the following listing.
Listing 10.13 Configuration for scaling

<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="amq-broker" dataDirectory="${activemq.base}/data">

  <persistenceAdapter>
    <kahaDB directory="${activemq.base}/data"
            journalMaxFileLength="32mb"/>
  </persistenceAdapter>

  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <policyEntry queue=">" optimizedDispatch="true"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>

  <systemUsage>
    <systemUsage>
      <memoryUsage>
        <memoryUsage limit="512 mb"/>
      </memoryUsage>
      <storeUsage>
        <storeUsage limit="10 gb" name="foo"/>
      </storeUsage>
      <tempUsage>
        <tempUsage limit="1 gb"/>
      </tempUsage>
    </systemUsage>
  </systemUsage>

  <transportConnectors>
    <transportConnector name="openwire" uri="nio://localhost:61616"/>
  </transportConnectors>
</broker>

Note the use of all the suggested items for tuning ActiveMQ. Such tuning isn’t enabled  in the default configuration file, so be sure to give yours some attention.

Having looked at how to scale an ActiveMQ broker, now it's time to look at using networks of brokers to increase horizontal scaling.

#### 10.3.2 Horizontal scaling

In addition to scaling a single broker, you can use networks to increase the number of  ActiveMQ brokers available for your applications. As networks automatically pass messages  to connected brokers that have interested consumers, you can configure your  clients to connect to a cluster of brokers, selecting one at random to connect to. This  can be configured using a URI parameter as shown:

failover://(tcp://broker1:61616,tcp://broker2:61616)?randomize=true
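To illustrate what the randomize=true parameter does, here is a minimal, self-contained Java sketch of the selection behavior: each connection attempt picks one URI at random from the cluster list. This is an illustrative model only, not ActiveMQ's actual failover transport code; the class and broker hostnames are hypothetical.

```java
import java.util.List;
import java.util.Random;

// Illustrative sketch (not ActiveMQ's failover implementation): with
// randomize=true, the failover transport picks one of the listed broker
// URIs at random, spreading client connections across the cluster.
public class RandomBrokerPicker {
    private final List<String> brokerUris;
    private final Random random = new Random();

    public RandomBrokerPicker(List<String> brokerUris) {
        this.brokerUris = brokerUris;
    }

    // Each call models one connection attempt.
    public String pick() {
        return brokerUris.get(random.nextInt(brokerUris.size()));
    }

    public static void main(String[] args) {
        RandomBrokerPicker picker = new RandomBrokerPicker(
            List.of("tcp://broker1:61616", "tcp://broker2:61616"));
        // May print either broker URI, depending on the random choice.
        System.out.println(picker.pick());
    }
}
```

With randomize=false, by contrast, the failover transport tries the URIs in the order listed, so all clients would pile onto the first reachable broker.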
In order to make sure that messages for queues or durable topic subscribers aren't orphaned on a broker, configure the networks to use dynamicOnly and a low network prefetchSize. Here's an example:
<networkConnector uri="static://(tcp://remotehost:61617)"
    name="bridge" dynamicOnly="true" prefetchSize="1">
</networkConnector>
Using networks for horizontal scaling does introduce more latency, because messages potentially have to pass through multiple brokers before being delivered to a consumer.
Another alternative deployment provides great scalability and performance, but requires more application planning. This hybrid solution, called traffic partitioning, combines vertical scaling of a broker with application-level splitting of destinations across different brokers.

#### 10.3.3 Traffic partitioning

Client-side traffic partitioning is a hybrid of vertical and horizontal partitioning. Networks  are typically not used, as the client application decides what traffic should go to  which broker(s). The client application has to maintain multiple JMS connections,  and decide which JMS connection should be used for which destinations.
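One way the application-level decision described above could be made is with a stable hash of the destination name, so every producer and consumer agrees on which broker owns a given queue. The following is a hypothetical sketch under that assumption; the class, the hashing rule, and the broker URIs are illustrative and not part of any ActiveMQ API.

```java
import java.util.List;

// Hypothetical client-side traffic partitioning: the application keeps
// one JMS connection per broker and routes each destination to a broker
// chosen by a stable hash of the destination name.
public class DestinationPartitioner {
    private final List<String> brokerUris;

    public DestinationPartitioner(List<String> brokerUris) {
        this.brokerUris = brokerUris;
    }

    // The same destination name always maps to the same broker, so all
    // clients agree on where a given queue lives.
    public String brokerFor(String destinationName) {
        int index = Math.floorMod(destinationName.hashCode(), brokerUris.size());
        return brokerUris.get(index);
    }

    public static void main(String[] args) {
        DestinationPartitioner p = new DestinationPartitioner(
            List.of("tcp://broker1:61616", "tcp://broker2:61616"));
        // The application would create producers and consumers for this
        // destination on the JMS connection for the returned broker.
        System.out.println("orders.queue -> " + p.brokerFor("orders.queue"));
    }
}
```

The key design point is that the mapping is deterministic: if producers and consumers computed it differently, messages for a queue would accumulate on a broker with no consumers, which is exactly the orphaning problem networks of brokers solve automatically.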

The advantage of not directly using network connections is that you reduce the overhead of forwarding messages between brokers. You do need to balance that with the additional complexity that results in a typical application. A representation of using traffic partitioning can be seen in figure 10.8.
We've covered both vertical and horizontal scaling, as well as traffic partitioning. You should now have a good understanding of how to use ActiveMQ to provide connectivity for thousands of concurrent connections and tens of thousands of destinations.
