Tuning FRP for Maximum Throughput

5/10/2011 10:13 AM
Resource Manager can either limit or increase the number of CPU- and memory-intensive tasks running concurrently in FRP.  For details on how to limit resource usage on older machines, or on machines running other tasks that compete for resources, see this article.

The remainder of this article covers how to increase CPU and resource usage to raise throughput with FRP.

Please note that in most cases you will also need to allocate more memory to FRP.  For details on increasing the memory allocation, see the article on Memory Allocated to FRP.

You may also benefit from some of the tips in the article on Advanced Replication Tuning.

Two adjustments to your s2s.properties file on all of your machines should be considered.  If you do not need real-time replication, turning off this feature can increase throughput.

On ALL FRP servers, open the file \FileReplicationPro\etc\s2s.properties with a plain text editor.

Change the following settings:

s2s.buffer.size=64000 (increase to 2x or 3x this value)
s2s.realtime=true (change to false)

Save changes
Stop and restart the FRPrep service on each server.

Resource Manager Overview:
The parameters can be set in the FRP s2s.properties file. The file is located in the /FileReplicationPro/etc directory.  By default, the value for each of the parameters is 20.  Please note the s2s.properties file does not include the parameters listed below; they should be added to the file to customize resource management for your environment.  It is our recommendation to first test FRP with the default settings before changing them.

The parameters that can be set are as follows:

s2s.max_concurrent_jobs: Defines the number of jobs allowed to run concurrently. Our recommended starting point when testing new parameters in limited environments is 40 for each of the values below.

s2s.max_concurrent_compressions:   Sets the number of simultaneous compression/decompression threads.

s2s.max_concurrent_transfers: Defines the number of files that will be replicated concurrently. This parameter does not limit the number of replications within a single job; it limits the total number of concurrent replications occurring at one time across different jobs.

s2s.max_concurrent_dsync:  Sets the max number of concurrent dsync computations (for byte level differential replication).
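These limits act as global caps shared by all jobs, not per-job quotas. The following is a hypothetical Python sketch of that throttling behavior (FRP itself is a Java application; nothing here reflects FRP internals, only the idea of a shared concurrency slot pool):

```python
import threading
import time

# Hypothetical model of a global cap such as s2s.max_concurrent_transfers:
# every transfer thread, no matter which job it belongs to, must acquire
# the same shared semaphore before running.
MAX_CONCURRENT_TRANSFERS = 3
transfer_slots = threading.BoundedSemaphore(MAX_CONCURRENT_TRANSFERS)

active = 0   # transfers currently running
peak = 0     # highest number observed running at once
counter_lock = threading.Lock()

def transfer(job, filename):
    """One file transfer; blocks until a global slot is free."""
    global active, peak
    with transfer_slots:                  # slot pool shared across ALL jobs
        with counter_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)                  # stand-in for copying bytes
        with counter_lock:
            active -= 1

# Two jobs with five files each: no more than 3 transfers ever run at once.
threads = [threading.Thread(target=transfer, args=(job, f"file{i}"))
           for job in ("jobA", "jobB") for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrent transfers:", peak)
```

Raising a parameter such as s2s.max_concurrent_transfers corresponds to widening this slot pool, which is why it also raises CPU, memory, and I/O demand.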

Regarding bandwidth under-use issues - it is possible that FRP is sharing bandwidth with other network applications that consume a large amount of it. Another limiting factor may be system load and/or Java I/O operations.

Below is a sample you can cut and paste into your s2s.properties file as a starting point; you may need to adjust these values up or down to get the best performance.   To restore the default settings, remove these lines from your s2s.properties files.

These properties should be added on any machine that is involved in a particularly intensive workload.  The FRPrep service will need to be restarted after each change.

#--Try these settings and if replication is too slow make small changes to the items and retest until a balance of performance and stability is found.

s2s.max_concurrent_jobs=40
s2s.max_concurrent_compressions=40
s2s.max_concurrent_transfers=40
s2s.max_concurrent_dsync=40
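When experimenting with these values across several servers, it can help to sanity-check a file before restarting the service. The sketch below uses the key names from this article; the warning threshold of 100 is an arbitrary illustrative bound, not an FRP limit.

```python
# Sketch: sanity-check the resource-manager values in s2s.properties text.
# Key names come from this article; warn_above is an example threshold only.

RESOURCE_KEYS = ("s2s.max_concurrent_jobs", "s2s.max_concurrent_compressions",
                 "s2s.max_concurrent_transfers", "s2s.max_concurrent_dsync")

def check_resource_settings(text, warn_above=100):
    """Return ({key: int_value}, [warning strings]) for the resource keys."""
    settings, warnings = {}, []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#") or "=" not in line:
            continue  # skip comments and non-setting lines
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if key in RESOURCE_KEYS:
            if not value.isdigit():
                warnings.append(f"{key}: non-numeric value {value!r}")
            elif int(value) > warn_above:
                warnings.append(f"{key}: {value} may overload older servers")
            else:
                settings[key] = int(value)
    return settings, warnings
```

Running this against the sample above should report all four values as 40 with no warnings.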


-Yitzi 3-24-2009