Terracotta Multi-Server Configuration and Console Operation

 

Step 1: Write tc-config.xml:

<?xml version="1.0" encoding="UTF-8"?>

<con:tc-config xmlns:con="http://www.terracotta.org/config">

<servers> <!-- configure the locations of the active and passive Terracotta servers -->

<server host="192.168.7.73" name="73">

<dso-port bind="192.168.7.73">9510</dso-port>

<jmx-port bind="192.168.7.73">9520</jmx-port>

<data>terracotta/server-data</data>

<logs>terracotta/server-logs</logs>

<statistics>terracotta/cluster-statistics</statistics>

</server>

<server host="192.168.7.39" name="39">

<dso-port bind="192.168.7.39">9510</dso-port>

<jmx-port bind="192.168.7.39">9520</jmx-port>

<data>terracotta/server-data</data>

<logs>terracotta/server-logs</logs>

<statistics>terracotta/cluster-statistics</statistics>

</server>

</servers>

<clients>

<logs>terracotta/client-logs</logs>

</clients>

<application>

<dso>

<instrumented-classes>

<include>

<class-expression>package.ClassA</class-expression>

</include>

<include>

<class-expression>package.ClassB</class-expression>

</include>

</instrumented-classes>

<locks>

<autolock auto-synchronized="false">

<method-expression>* package.ClassA.MethodA(java.lang.String)</method-expression> <!-- the expression begins with the return type; * matches any return type -->

<lock-level>write</lock-level>

</autolock>

<autolock auto-synchronized="false">

<method-expression>* package.ClassB.*(..)</method-expression> <!-- wildcard: matches every method of ClassB -->

<lock-level>write</lock-level>

</autolock>

</locks>

<roots>

<root>

<field-name>package.ClassA.FieldA</field-name>

</root>

<root>

<field-name>package.ClassA.FieldB</field-name>

</root>

 

</roots>

</dso>

</application>

</con:tc-config>
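
To make the configuration concrete, the application classes it refers to might look like the sketch below. This is a hypothetical illustration, not code from the original application: com.example stands in for the literal "package" placeholder above, and the main method mirrors the classA %config% startup invocation used in the batch script of Step 2.

package com.example; // stands in for the "package" placeholder in tc-config.xml

import java.util.HashMap;
import java.util.Map;

public class ClassA {

    // Declared as <root> fields in tc-config.xml: once the class is
    // instrumented, these maps become cluster-wide shared state.
    private static final Map<String, Object> FieldA = new HashMap<String, Object>();
    private static final Map<String, Object> FieldB = new HashMap<String, Object>();

    // Matched by the autolock <method-expression>; Terracotta promotes the
    // synchronized block inside into a clustered write lock.
    public void MethodA(String key) {
        synchronized (FieldA) {
            FieldA.put(key, new Object());
        }
    }

    // Entry point matching the "classA %config% startup" invocation in the
    // batch script of Step 2 (this argument handling is hypothetical).
    public static void main(String[] args) {
        String configDir = args.length > 0 ? args[0] : ".";
        String command = args.length > 1 ? args[1] : "startup";
        System.out.println("starting with config=" + configDir + ", command=" + command);
        new ClassA().MethodA("example-key");
    }
}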

 

Step 2: Write the launch script for cmd. Note: tc-config.xml must sit in the same directory as this .bat file.

@echo off

cls

 

REM set JAVA_HOME=java_path

if "%JAVA_HOME%" == "" goto noJavaHome

 

set pwd=%cd%

set lib=%pwd%\lib

set classpath=.

set config=%pwd%\config

 

 

setlocal enabledelayedexpansion

REM Prepend every JAR under %lib% to the classpath; delayed expansion (!classpath!) is required inside the loop
FOR /R "%lib%" %%f IN (*.jar) DO set classpath=%%f;!classpath!

set classpath=%classpath%;%pwd%\config

F:\LearnByMyself\Terracotta\install3.5\platform\bin\dso-java.bat -classpath %classpath% classA %config% startup

endlocal

REM Use dso-java.bat (dso-java.sh on Unix) in place of the original java command

goto exit

 

:noJavaHome

echo The JAVA_HOME environment variable is not defined correctly

echo This environment variable is needed to run this program

goto exit

 

:exit

pause

 

Step 3: Start the server on both distributed cache machines.

The startup command line is shown below; pay attention to the parameters. If no startup parameter is given, the server loads the default configuration file, /com/tc/config/schema/setup/default-config.xml inside tc.jar under the lib directory.

F:\LearnByMyself\Terracotta\install3.5\bin>start-tc-server.bat -f tc-config.xml
(the tc-config.xml here is the one written in Step 1)

2011-08-31 10:05:35,515 INFO - Terracotta 3.5.1, as of 20110415-160445 (Revision 17477 by cruise@su10mo4 from 3.5.1)
2011-08-31 10:05:36,031 INFO - Successfully loaded base configuration from file at 'F:\LearnByMyself\Terracotta\install3.5\bin\tc-config.xml'.
2011-08-31 10:05:36,109 INFO - Log file: 'F:\LearnByMyself\Terracotta\install3.5\bin\terracotta\server-logs\terracotta-server.log'.
2011-08-31 10:05:37,531 INFO - Available Max Runtime Memory: 512MB
2011-08-31 10:05:37,890 INFO - JMX Server started. Available at URL[service:jmx:jmxmp://192.168.7.73:9520]
2011-08-31 10:05:43,468 INFO - Becoming State[ ACTIVE-COORDINATOR ]
2011-08-31 10:05:43,484 INFO - Terracotta Server instance has started up as ACTIVE node on 192.168.7.73:9510 successfully, and is now ready for work.
2011-08-31 10:08:41,484 INFO - NodeID[192.168.7.39:9510] joined the cluster

Similarly, start the other server:

F:\learnbymyself\terracotta\install3.5\bin>start-tc-server.bat -f tc-config.xml

2011-08-31 10:08:16,826 INFO - Terracotta 3.5.1, as of 20110415-160445 (Revision 17477 by cruise@su10mo4 from 3.5.1)
2011-08-31 10:08:17,504 INFO - Successfully loaded base configuration from file at 'F:\learnbymyself\terracotta\install3.5\bin\tc-config.xml'.
2011-08-31 10:08:17,580 INFO - Log file: 'F:\learnbymyself\terracotta\install3.5\bin\terracotta\server-logs\terracotta-server.log'.
2011-08-31 10:08:17,891 INFO - Available Max Runtime Memory: 490MB
2011-08-31 10:08:18,322 INFO - JMX Server started. Available at URL[service:jmx:jmxmp://192.168.7.39:9520]
2011-08-31 10:08:19,399 INFO - Moved to State[ PASSIVE-UNINITIALIZED ]
2011-08-31 10:08:19,403 INFO - NodeID[192.168.7.73:9510] joined the cluster
2011-08-31 10:08:19,689 INFO - Moved to State[ PASSIVE-STANDBY ]

After completing the steps above, the monitoring console looks like the attached screenshot.

 

PS: The boot JAR is generated automatically.

 

Other tuning methods for this distributed cache


What are the main components of tuning Terracotta?

For Scale and Latency, one typically needs to look at:

  • Instrumentation Scope
  • Distributed Locking
    • Lock Type
    • Lock Striping
    • Lock Scoping/Batching
    • Injection of Synchronization within your application.
    • Be aware of Terracotta Limitations.
  • Memory Management
    • Garbage Collection
      • on Client JVM
      • on Terracotta Server JVM
    • Virtual Memory Manager
      • on client JVM
      • on Terracotta Server JVM
    • Distributed Garbage Collection on Terracotta Server JVM
  • Application Code
    • Choice of Data Structures that offer more concurrency and play well with the Terracotta Virtual Memory Manager.
    • Synchronizing on the right objects
  • Thrash/Faulting Behavior
    • Fault Count in tc-config.xml
    • Fault Threads on the L2.
    • Locality of Reference
  • Transaction Batching
    • l1.transactionmanager properties in tc.properties.
    • Commit Threads on the L2.

For HA, one needs to look at:

  • Choice of HA strategy
  • Infrastructure Design
  • L1 <-> L2 HeartBeating
  • L2 <-> L2 HeartBeating

How do I tune Instrumentation Scope?

How do I tune distributed Locks?

  • Lock Type.
    • In order from most pessimistic to most optimistic, the lock types supported by Terracotta are:
      • Synchronous Write
      • Write
      • Read
      • Concurrent
    • So make sure you are using the right lock type, e.g.:
      • If a method is getXXX() and only reads shared state, make sure it is not marked in tc-config.xml with a lock-type of write; a method that only performs reads should use lock type read.

      • Synchronous-write gives you stronger guarantees, but comes at a high cost in terms of scale/latency - so use it with caution.

      • Concurrent locks only demarcate transaction boundaries - so be sure that your application can tolerate the weakened semantic correctness.

 

  • Lock Striping:
    • If you notice too much contention on a lock, you may be able to work around it via striping the lock.
    • The most popular example: when executing HashMap.put and HashMap.remove, you are presumably locking on the whole HashMap. By replacing the HashMap with a ConcurrentHashMap (which by default supports 16 buckets, overridable by passing the concurrency level as a constructor parameter), you stripe the single HashMap lock 16 ways (see the first sketch after this list). See http://unserializableone.blogspot.com/2007/04/performance-comparision-between.html
  • Lock Scoping/Batching:
    • Batching implies executing a bunch of state mutations in the context of a single distributed lock as against multiple distributed locks. It is usually a trade-off between concurrency (the more fine-grained the locking, the more concurrent the application) and latency (the more coarse-grained the lock, the more mutations can be "batched" as a transaction, reducing overheads around lock acquisition and "Terracotta transaction" committing).
    • As an example, consider for (int i = 0; i < 1000; i++) { synchronized (foo) { hashmap.put(i, new Object()); } }. Executing the whole for loop in a single transaction (i.e. synchronizing outside the for loop) is better than synchronizing within it.
  • Terracotta limitations:
    • Of course, one needs to be careful not to create too large a "Terracotta transaction", since as of now Terracotta does not support transparent partial transaction commits or transaction streaming; a giant "Terracotta transaction" could therefore OOME the L1.
    • Also be careful about which data structures you use, since Terracotta is a little inconsistent today - e.g. ConcurrentHashMaps are implicitly locked, but Vectors/Hashtables and other thread-safe Collection implementations are not implicitly locked.
    • Volatiles are not supported.
  • Injection of Synchronization into the Application:
    • Use ReentrantReadWriteLock.readLock() and ReentrantReadWriteLock.writeLock() instead of synchronization (see the second sketch after this list).
    • Use a ThreadSafe DataStructure that Terracotta implicitly locks (e.g. ConcurrentHashMap) or a data structure that Terracotta provides a TIM for (e.g. Hashtable) - if you want locking at the datastructure level (as against higher in the application)
    • auto-synchronized="true" on an autolock in the locks section of tc-config.xml implies that Terracotta will inject a ReentrantReadWriteLock.readLock or .writeLock based on your lock-type definition.
  • Others:
    • You will essentially need to analyze your code to identify what level of locking coarse-ness works best.
    • Please use the Lock Profiler (in the Admin Console) to gain visibility into the distributed locking behavior of your integrated application. See the Lock Profiler Guide for details.
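
A minimal Java sketch of the striping and batching points above (class and field names such as sharedCache and foo are hypothetical, not from the original application):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LockTuningSketch {

    // Striping: a ConcurrentHashMap with a concurrency level of 32 replaces a
    // fully synchronized HashMap, so up to 32 threads can write to different
    // stripes at once instead of contending on a single lock.
    private final Map<Integer, Object> sharedCache =
            new ConcurrentHashMap<Integer, Object>(1024, 0.75f, 32);

    // Batching: this plain map is guarded by one coarse lock.
    private final Map<Integer, Object> batchedMap = new HashMap<Integer, Object>();
    private final Object foo = new Object();

    // One lock acquisition and one "Terracotta transaction" for all 1000 puts,
    // instead of 1000 separate lock/commit round trips.
    public void putBatch() {
        synchronized (foo) {
            for (int i = 0; i < 1000; i++) {
                batchedMap.put(i, new Object());
            }
        }
    }
}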
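
And a sketch of injecting read/write locks by hand, as the "Injection of Synchronization" bullet suggests (again with hypothetical names; lock-type read corresponds to the read lock, write to the write lock):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockSketch {

    private final Map<String, String> state = new HashMap<String, String>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Readers share the read lock - equivalent to lock-level "read".
    public String get(String key) {
        lock.readLock().lock();
        try {
            return state.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Writers take the exclusive write lock - equivalent to lock-level "write".
    public void put(String key, String value) {
        lock.writeLock().lock();
        try {
            state.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}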

MEMORY-MANAGEMENT:

Can I get details of Garbage Collection Tuning with Terracotta?

  • It is no different from GC tuning on any standard JVM. On both the client JVM and the Terracotta server JVM, the methodology is the same.
    • Profile GC characteristics - several tools are available (a programmatic option is sketched after this list).
      • jstat (e.g. $JAVA_HOME/bin/jstat -gcutil -h 10 -t <pid> 1s, which prints GC stats every 1s)
      • Visual-GC/ JConsole / Visual-VM etc.
      • Parameters passed to java startup will result in more GC output in logs.
        • -verbose:gc
        • -XX:+PrintGCDetails
        • -XX:+PrintGCTimeStamps
        • -XX:+HeapDumpOnOutOfMemoryError (to get a heap dump when the JVM throws an OutOfMemoryError)
    • Characterise the observations.
      • Are there frequent full GCs?
      • Are some of the full GC pauses unacceptably long (> double-digit seconds for example)?
      • Are Eden and Old appropriately sized - or is Old predominantly empty, while Eden is constantly getting filled up?
    • Improvements to GC characteristics typically entail:
      • Heap Sizing (e.g. bump -Xms, -Xmx from 256m to 2048m)
      • Ratio of New/Young/Eden to Old/Tenured - e.g. modify via -XX:NewRatio (NewRatio=2 means Young is 1/3 of the heap and Old is 2/3), or specify new sizes directly: -XX:NewSize=512m -XX:MaxNewSize=512m fixes the young generation at 512m.
      • Choice of collector (e.g. use -XX:+UseConcMarkSweepGC for the concurrent collector; you could collect the young generation in parallel via -XX:+UseParNewGC -XX:ParallelGCThreads=2, assuming you have at least 2 cores on the box).
      • PermSize calibration (e.g. -XX:PermSize=256m -XX:MaxPermSize=256m)
      • As a general rule, avoid a young generation that is half or more of the full heap. The default NewRatio is 3; you might try 2 at most, but anything beyond that is not recommended. A larger young space may result in frequent, long full GCs, which can create problems on both Terracotta servers and clients.
    • For a more detailed article, that discusses GC tuning, see http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html
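
As a simple programmatic alternative to jstat, the standard java.lang.management API reports per-collector counts and accumulated pause times. A minimal sketch (the one-second polling interval is an arbitrary example):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) throws InterruptedException {
        // Poll every second, similar in spirit to "jstat -gcutil <pid> 1s".
        while (true) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: collections=%d, totalPauseMs=%d%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(1000);
        }
    }
}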
How do I tune the Virtual Memory Manager on the client JVM and the Terracotta server JVM?
  • What is Virtual Memory - see http://www.terracotta.org/confluence/display/docs1/Concept+and+Architecture+Guide#ConceptandArchitectureGuide-VirtualHeap and http://javamuse.blogspot.com/2007/10/why-is-rat-better-mouse-trap.html
  • It is also important to understand that you have to choose data structures that play well with the Virtual Memory Manager implementation - e.g. know the difference between partial and non-partial collections. Partial collections are currently only implemented for HashMap, Hashtable, ConcurrentHashMap, LinkedBlockingQueue, and arrays (a sketch follows this list).
  • Typical parameters to tune on the client JVM are (the following make the VMM run more aggressively):
    • l1.cachemanager.threshold (Decrease it - e.g. to 50 from default)
    • l1.cachemanager.criticalThreshold (Decrease it - e.g. to 50 from default)
    • l1.cachemanager.percentageToEvict (Increase it - e.g. to 20 from default)
    • l1.cachemanager.enabled, l1.cachemanager.logging.enabled (i.e. is Virtual Memory manager enabled at all and can I log its activity)
  • Typical parameters to tune on the Terracotta server are
    • l2.cachemanager.threshold
    • l2.cachemanager.criticalThreshold
    • l2.cachemanager.percentageToEvict
    • l2.cachemanager.enabled, l2.cachemanager.logging.enabled. (i.e. is Virtual Memory manager enabled at all and can I log its activity).
  • How do I tune my application for faster startup times? When the Terracotta client starts, it begins faulting in objects, and the application blocks until sufficient data has been faulted in from the L2.
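
A sketch of the data-structure point above (the names are hypothetical; the point is simply that a partial-collection-capable root lets the VMM fault and flush individual entries rather than materializing the whole graph):

import java.util.concurrent.ConcurrentHashMap;

public class VmmFriendlyRoot {

    // Assume this field is declared as a Terracotta root. Because
    // ConcurrentHashMap supports partial collections, the client VMM can
    // fault in only the entries it needs and flush cold ones, instead of
    // pulling the entire map into local heap.
    private static final ConcurrentHashMap<String, byte[]> cache =
            new ConcurrentHashMap<String, byte[]>();

    public static void put(String key, byte[] value) {
        cache.put(key, value);
    }

    public static byte[] get(String key) {
        return cache.get(key); // a local miss faults this entry from the L2
    }
}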
What is DGC and why should I tune it? And if I need to, how should I tune it?
  • How do you know that you have to tune DGC?
    • If left untuned, cluster wide garbage will remain in the distributed system for longer than needed.
    • This will manifest itself as
      • increases in the Live Managed Object Count (on the DGC Tab in the Admin Console) which will result in performance/scale degradation.
      • It will also manifest itself as increased disk consumption on the Terracotta Server.
    • For the period that DGC runs, resources on the L2 host are taxed, since all of the clustered objects have to be evaluated as candidates for garbage collection. One stage of the multi-phase DGC process (the paused stage) is stop-the-world.
    • Hence DGC tuning is an important step in the Terracotta integration SDLC, especially for applications that create and expire many objects.
  • To tune DGC, you need to tune:
    • DGC Interval
      • In the tc-config.xml, you could tune DGC to run more frequently than the default 60 minutes.
      • We have seen applications (those with high churn - e.g. caches with low TTLs, VoIP sessions) that need DGC to run as frequently as every 3 minutes.
    • DGC Collector Choice
      • In some usages, it may be valuable to additionally turn on Young-Gen DGC. Specifically if there is a lot of churn i.e. short-lived cluster objects e.g. cache-elements with low-TTL, sessions with low TimeOuts. As an example, Young-Gen DGC may run every 5 minutes (via setting tc.properties), whereas Full-DGC may run every 60 minutes (via the frequency specified in tc-config.xml).
      • To do so, set "l2.objectmanager.dgc.young.enabled" to true and "l2.objectmanager.dgc.young.frequencyInMillis" to the desired frequency.
      • Note that these are tc-properties that you can set in the tc-config.xml.
    • GC on the L1
      • For the Terracotta Server to consider a reference as garbage, no L1 in the cluster can have a reference to the object. To be sure of it, the reference has to get GC'ed on the L1.
      • To ensure that GC runs aggressively on the L1 and informs the L2 in a timely fashion, you have to tune the occupancy fraction (the assumption here is that the L1 is running the concurrent collector, i.e. -XX:+UseConcMarkSweepGC).
      • Set -XX:CMSInitiatingOccupancyFraction in the Java options. E.g. if you set -XX:+UseCMSInitiatingOccupancyOnly and -XX:CMSInitiatingOccupancyFraction=50, then collections start when Old reaches 50% occupancy. Otherwise, the trigger threshold is calculated by the formula initiatingOccupancy = 100 - MinHeapFreeRatio + MinHeapFreeRatio * (CMSTriggerRatio / 100). We have tuned it as low as 10%, i.e. CMSInitiatingOccupancyFraction=10, to ensure that when a reference becomes garbage the L2 learns of it very quickly.
    • BerkeleyDB
      • Sometimes, DGC is fine in terms of keeping up with Garbage churn, but BerkeleyDB Cleaner can't keep up. Hence, there may be a need to tune je.properties or tc.properties.
      • je.properties
        • l2.berkeleydb.je.cleaner.bytesInterval=100000000 (decrease - e.g. 20Million) - more aggressive cleaning.
        • l2.berkeleydb.je.checkpointer.bytesInterval=100000000 (decrease it e.g. ~ 20MB) - this forces more frequent checkpoints.
        • l2.berkeleydb.je.cleaner.lookAheadCacheSize=32768 (increase it - e.g. 65536) - lookahead cachesize for cleaning. This will reduce #Btree lookups.
        • l2.berkeleydb.je.cleaner.minAge=5 (decrease it - e.g. 1) - files get considered for cleaning sooner.
        • l2.berkeleydb.je.cleaner.threads=4 (increase this e.g. 8) - more threads cleaning.
      • tc.properties
        • l2.objectmanager.deleteBatchSize = 5000 (increase it - e.g. 40000) - more batched deletes
        • l2.objectmanager.loadObjectID.checkpoint.maxlimit = 1000 (increase it - e.g. 4Million) - product will default to this value in the next release.
  • Also review the application and the Shared-Graph-depiction on the Roots Tab of the Dev-Console, to ensure that you are not accidentally clustering more than you mean to - too many short-lived objects might indicate that there may be objects joining the shared graph that you did not intend to cluster.

THRASH:

How do I tune Faulting Behavior?

What if I see many cache-misses on the Admin Console?
  • If the bottleneck is on the L2 (you see this as cache misses on the Admin Console), increase l2.seda.faultstage.threads=4 (e.g. to 8).
What is the impact of request distribution?
  • Performance suffers if there is no locality of reference, since that necessitates round trips to the Terracotta server to fault data in.
  • Good locality of reference is needed (e.g. sticky-load-balancing for sessions) to minimize thrash.
  • If data does not fit in a given JVM, consider partitioning, with a router intelligent enough to route to the right partition.
I see fault and flush rate as almost identical on the Terracotta Admin Console. How do I fix it?
  • Flush means that pre-fetched objects that were pulled in as raw-DNA from the Terracotta server are not being requested (which is when they get converted from raw-DNA to Object references) - in such a case, we "flush" those DNAs, after a certain time-out, to reclaim memory on the L1.
  • In such a case you probably need to tune the fault count (reduce it) and l1.objectmanager.remote.maxDNALRUSize in tc.properties (increase it, e.g. to 120 from the default of 60).

IMPACT OF PAYLOAD:

Write-Heavy Payload - The bottleneck is in committing "Terracotta transactions" to the Terracotta server. What can I do?

  • Terracotta Transactions are written to a commit-buffer on the L1 from where a Terracotta thread writes to the Terracotta server and waits for an ACK from the server.
  • So one could increase the size of this commit buffer and "batch" Transaction writes to the Terracotta Server. One can do so via the following tc.properties:
    • l1.transactionmanager.logging.enabled=false (set it to true to get more details)
    • l1.transactionmanager.maxOutstandingBatchSize=4 (increase it e.g. 8)
    • l1.transactionmanager.maxBatchSizeInKiloBytes=128 (increase it e.g. 256)
    • l1.transactionmanager.maxPendingBatches=88 (increase it e.g. 176)
  • If the bottleneck is on the L2 committing transactions, consider:
    • l2.seda.commitstage.threads=4 (increase it e.g. 8)

FAILOVER:

How do I tune Network Active/Passive for faster failover from Active to Passive?

By default with Network Active/Passive, failover will occur within 45s. You will have to tune the following properties to bring it lower (we have seen sub-5s failover in certain cases):

  • l2.healthcheck.l2.ping.enabled = true
  • l2.healthcheck.l2.ping.idletime = 5000
  • l2.healthcheck.l2.ping.interval = 1000
  • l2.healthcheck.l2.ping.probes = 3
  • l2.healthcheck.l2.socketConnect = true
  • l2.healthcheck.l2.socketConnectTimeout = 2
  • l2.healthcheck.l2.socketConnectCount = 10

What are the tolerances between client JVMs and Terracotta server JVMs?

Governed by the following. (The Runbooks in the paid-version documentation have additional detail on a more top-down, SLA-driven methodology for tuning these in conjunction with the l2.l1reconnect* properties to meet SLAs.)

  • # L2 -> L1 : These settings will detect a network disconnect (like a cable pull) in 100 seconds and will allow a 90 second GC in the L1
  • l2.healthcheck.l1.ping.enabled = true
  • l2.healthcheck.l1.ping.idletime = 30000
  • l2.healthcheck.l1.ping.interval = 10000
  • l2.healthcheck.l1.ping.probes = 6
  • l2.healthcheck.l1.socketConnect = true (to enable)
  • l2.healthcheck.l1.socketConnectTimeout = 2
  • l2.healthcheck.l1.socketConnectCount = 2
  • # L1 -> L2 : These health-check settings will detect a network disconnect (like a cable pull) in 10 seconds but will allow a 40 second GC in the L2
  • l1.healthcheck.l2.ping.enabled = true
  • l1.healthcheck.l2.ping.idletime = 5000
  • l1.healthcheck.l2.ping.interval = 1000
  • l1.healthcheck.l2.ping.probes = 3
  • l1.healthcheck.l2.socketConnect = true
  • l1.healthcheck.l2.socketConnectTimeout = 2
  • l1.healthcheck.l2.socketConnectCount = 10
  • l1.healthcheck.l2.bindAddress = 0.0.0.0
  • l1.healthcheck.l2.bindPort = [valid-port] (0 for system assigned port)

TERRACOTTA INTEGRATION MODULES (TIMs) Tuning:

EHCache - I am using the EHCache TIM, but performance is poor.

  • Tune following tc.properties
    • Read and Write Concurrency: ehcache.concurrency = 1 (Increase it - e.g. to a prime number like 47). Higher concurrency allows multiple threads to perform operations against ehcache.
    • Eviction properties: ehcache.global.eviction* properties
    • Lock levels: change ehcache.lock.readLevel = READ to something more optimistic if the application allows it. If the application can afford to read slightly stale data, set ehcache.lock.readLevel = NO_LOCK. Reads are then performed without acquiring any lock, though the application runs the risk of not seeing the latest data/changes.
    • Based on application needs, try setting only one of timeToIdleSeconds and timeToLiveSeconds inside ehcache.xml. Having both set may have an adverse impact on application throughput. If the application needs both, consider the following options:
      • keep both only if timeToIdleSeconds is much smaller than timeToLiveSeconds
      • if timeToIdleSeconds is close to timeToLiveSeconds, set only timeToLiveSeconds
      • if all cache entries are to be expired together at some interval, have the application call removeAll() on the Ehcache instead of setting timeToLiveSeconds (see the sketch below).
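
A minimal sketch of that last option, assuming the classic net.sf.ehcache API (the cache name "sharedCache" and the 10-minute interval are arbitrary examples, not from the original post):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class PeriodicFlush {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();
        // name, maxElementsInMemory, overflowToDisk, eternal, timeToLiveSeconds, timeToIdleSeconds
        // eternal=true: no per-entry TTL/TTI - expiry is handled by the scheduled removeAll below.
        final Cache cache = new Cache("sharedCache", 10000, false, true, 0, 0);
        manager.addCache(cache);

        cache.put(new Element("key", "value"));

        // Clear all entries every 10 minutes instead of setting timeToLiveSeconds.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                cache.removeAll();
            }
        }, 10, 10, TimeUnit.MINUTES);
    }
}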