
10.3 Deploying ActiveMQ for large numbers of concurrent applications

Scaling your applications that make use of ActiveMQ can take some time and requires some diligence. In this section, we examine three techniques to help you with this task. We'll start with vertical scaling, where a single broker is used for thousands of connections and queues. Then we'll look at scaling to tens of thousands of connections by horizontally scaling your applications using networks of brokers. Finally, we'll examine traffic partitioning, which balances scaling and performance but adds more complexity to your ActiveMQ application.

10.3.1 Vertical scaling

Vertical scaling is a technique used to increase the number of connections (and therefore load) that a single ActiveMQ broker can handle. By default, the ActiveMQ broker is designed to move messages as efficiently as possible to ensure low latency and good performance. But you can make some configuration decisions to ensure that the ActiveMQ broker can handle both a large number of concurrent connections and a large number of queues.

By default, ActiveMQ will use blocking I/O to handle transport connections. This results in a thread being used per connection. You can use nonblocking I/O on the ActiveMQ broker (and still use the default transport on the client) to reduce the number of threads used. Nonblocking I/O can be configured via the transport connector in the ActiveMQ configuration file. An example of this is shown next.
Listing 10.10 Configure the NIO transport connector
<broker>
  <transportConnectors>
    <transportConnector name="nio" uri="nio://localhost:61616"/>
  </transportConnectors>
</broker>
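Nothing needs to change on the client side when the broker switches to NIO. As a minimal sketch (assuming the broker from listing 10.10 is running locally; the class name is illustrative), a client still connects over the default TCP transport:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;

public class NioBrokerClient {
    public static void main(String[] args) throws Exception {
        // NIO is a broker-side choice, so the client URI scheme stays "tcp"
        // and points at the port the nio transport connector listens on.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers, and consumers as usual ...
        connection.close();
    }
}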
In addition to using a thread per connection for blocking I/O, the ActiveMQ broker can use a thread for dispatching messages per client connection. You can tell ActiveMQ to use a thread pool instead by setting the system property named org.apache.activemq.UseDedicatedTaskRunner to false. Here's an example:
ACTIVEMQ_OPTS="-Dorg.apache.activemq.UseDedicatedTaskRunner=false"
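If you can't modify the startup options, the same property can, as a rough sketch, also be set programmatically, provided it runs before any ActiveMQ connections are created:

// Equivalent to passing -Dorg.apache.activemq.UseDedicatedTaskRunner=false
// on the command line; must execute before the first connection is opened.
System.setProperty("org.apache.activemq.UseDedicatedTaskRunner", "false");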
Ensuring that the ActiveMQ broker has enough memory to handle lots of concurrent connections is a two-step process. First, you need to ensure that the JVM in which the ActiveMQ broker is started is configured with enough memory. This can be achieved using the -Xmx JVM option as shown:
ACTIVEMQ_OPTS="-Xmx1024M -Dorg.apache.activemq.UseDedicatedTaskRunner=false"
Second, be sure to configure an appropriate amount of the memory available to the JVM specifically for the ActiveMQ broker. This adjustment is made via the limit attribute of the <memoryUsage> element inside <systemUsage>. A good rule of thumb is to begin at 512 MB as the minimum for an ActiveMQ broker with more than a few hundred active connections. If your testing proves that this isn't enough, bump it up from there. You can configure the memory limit in the ActiveMQ configuration file as shown in the following listing.
Listing 10.11 Setting the memory limit for the ActiveMQ broker
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="512 mb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="10 gb" name="foo"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="1 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
It's also advisable to reduce the CPU load per connection. If you're using the OpenWire wire format, disable tight encoding, which can be CPU intensive. Tight encoding can be disabled on a client-by-client basis using URI parameters. Here's an example:
String uri = "failover://(tcp://localhost:61616?" +
    "wireFormat.tightEncodingEnabled=false)";
ConnectionFactory cf = new ActiveMQConnectionFactory(uri);
We've looked at some tuning aspects for scaling an ActiveMQ broker to handle thousands of connections. So now we can look at tuning the broker to handle thousands of queues.
The default queue configuration uses a separate thread for paging messages from the message store into the queue to be dispatched to interested message consumers. For a large number of queues, it's advisable to disable this by enabling the optimizedDispatch property for all queues, as shown next.
Listing 10.12 Setting the optimizedDispatch property
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" optimizedDispatch="true"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
Note the use of the wildcard > character in listing 10.12, which denotes all queues recursively.

To ensure you can scale not only to thousands of connections, but also to tens of thousands of queues, use either a JDBC message store or the newer and much faster KahaDB message store. KahaDB is enabled by default in ActiveMQ.

So far we've looked at scaling connections, reducing thread usage, and selecting the right message store. An example configuration for ActiveMQ, tuned for scaling, is shown in the following listing.
Listing 10.13 Configuration for scaling
<broker xmlns="http://activemq.apache.org/schema/core"
    brokerName="amq-broker" dataDirectory="${activemq.base}/data">

  <persistenceAdapter>
    <kahaDB directory="${activemq.base}/data"
        journalMaxFileLength="32mb"/>
  </persistenceAdapter>

  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <policyEntry queue=">" optimizedDispatch="true"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>

  <systemUsage>
    <systemUsage>
      <memoryUsage>
        <memoryUsage limit="512 mb"/>
      </memoryUsage>
      <storeUsage>
        <storeUsage limit="10 gb" name="foo"/>
      </storeUsage>
      <tempUsage>
        <tempUsage limit="1 gb"/>
      </tempUsage>
    </systemUsage>
  </systemUsage>

  <transportConnectors>
    <transportConnector name="openwire" uri="nio://localhost:61616"/>
  </transportConnectors>
</broker>

Note the use of all the suggested items for tuning ActiveMQ. Such tuning isn't enabled in the default configuration file, so be sure to give yours some attention.

Having looked at how to scale an ActiveMQ broker, now it's time to look at using networks to increase horizontal scaling.

10.3.2 Horizontal scaling

In addition to scaling a single broker, you can use networks to increase the number of ActiveMQ brokers available for your applications. As networks automatically pass messages to connected brokers that have interested consumers, you can configure your clients to connect to a cluster of brokers, selecting one at random to connect to. This can be configured using a URI parameter as shown:

failover://(tcp://broker1:61616,tcp://broker2:61616)?randomize=true
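On the client side this URI simply replaces the broker address. A minimal sketch (broker hostnames taken from the URI above) looks like this:

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

// randomize=true spreads clients across broker1 and broker2 at random;
// the failover transport reconnects to a surviving broker on failure.
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
        "failover://(tcp://broker1:61616,tcp://broker2:61616)?randomize=true");
Connection connection = factory.createConnection();
connection.start();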
In order to make sure that messages for queues or durable topic subscribers aren't orphaned on a broker, configure the networks to use dynamicOnly and a low network prefetchSize. Here's an example:
<networkConnector uri="static://(tcp://remotehost:61617)"
    name="bridge" dynamicOnly="true" prefetchSize="1">
</networkConnector>
Using networks for horizontal scaling does introduce more latency, because messages potentially have to pass through multiple brokers before being delivered to a consumer. Another alternative deployment provides great scalability and performance, but requires more application planning. This hybrid solution, called traffic partitioning, combines vertical scaling of a broker with application-level splitting of destinations across different brokers.

10.3.3 Traffic partitioning

Client-side traffic partitioning is a hybrid of vertical and horizontal partitioning. Networks are typically not used, as the client application decides what traffic should go to which broker(s). The client application has to maintain multiple JMS connections, and decide which JMS connection should be used for which destinations.

The advantage of not directly using network connections is that you reduce the overhead of forwarding messages between brokers. You do need to balance that against the additional complexity it introduces into a typical application. A representation of traffic partitioning can be seen in figure 10.8.
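The text leaves the partitioning scheme itself to the application. As one illustrative sketch (class and method names are hypothetical, not from the book), a client can hold one JMS session per broker and hash each queue name onto a broker, so a given queue is always served by the same broker:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PartitionedSender {
    private final Session[] sessions;

    public PartitionedSender(String... brokerUris) throws Exception {
        sessions = new Session[brokerUris.length];
        for (int i = 0; i < brokerUris.length; i++) {
            // One dedicated JMS connection (and session) per broker.
            ConnectionFactory factory = new ActiveMQConnectionFactory(brokerUris[i]);
            Connection connection = factory.createConnection();
            connection.start();
            sessions[i] = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        }
    }

    public void send(String queueName, String text) throws Exception {
        // Hash the destination name so the same queue always maps to the
        // same broker; no broker-to-broker forwarding is needed.
        Session session = sessions[Math.floorMod(queueName.hashCode(), sessions.length)];
        Destination queue = session.createQueue(queueName);
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage(text));
        producer.close();
    }
}

Consumers would apply the same hash function so that they subscribe on the broker that owns a given queue.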
We've covered both vertical and horizontal scaling, as well as traffic partitioning. You should now have a good understanding of how to use ActiveMQ to provide connectivity for thousands of concurrent connections and tens of thousands of destinations.