There are a few reasons a file may turn out to be corrupt after replication.
1. In some systems, using encryption or compression during replication can result in corrupted files. A sign of this problem is finding files with modified names such as $FRP.... in the replicated folders. In this case, open the advanced features on every server and uncheck the options for compression and encryption. Then delete all files with the $FRP... names and allow the job to replace them.
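The cleanup step above can be scripted. This is a minimal sketch, assuming the leftover partial-transfer files carry a literal $FRP name prefix (as described above); the folder path and function name are illustrative, not part of FRP itself.

```python
from pathlib import Path

def remove_partial_files(folder: str) -> list[str]:
    """Delete leftover $FRP* partial-transfer files so the next
    replication run can resend clean copies. Returns the paths removed."""
    removed = []
    # rglob walks the folder tree; "$" is a literal character in glob patterns
    for path in Path(folder).rglob("$FRP*"):
        if path.is_file():
            path.unlink()  # delete the partial file
            removed.append(str(path))
    return removed
```

Run it against the affected replication folder after disabling compression and encryption, then let the job replace the deleted files.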
2. If you are running OFM and replicating open files or databases, be aware of a potential downside: OFM permits the replication of open files and databases, but FRP may replicate a file or database while it is in an inconsistent state, producing a corrupt copy. The file may appear complete on the destination but will be corrupt if you attempt to use it. You have a couple of options. You can ignore it; the file will eventually be replicated fully once the application closes it and its timestamp is updated. Or you can exclude the file from replication by entering a properly formatted exclude rule in the advanced features of the job. In the case of a database, you can either take it offline with a script and replicate it, or use your database utilities or other means to export a consistent copy for replication. See the article on OFM for more information.
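The right way to export a consistent database copy depends on your database product; the step above would use its own utilities. As one concrete illustration, assuming a SQLite database, Python's standard sqlite3 backup API produces a transactionally consistent snapshot that is safe to place in the replication source folder. The function name and paths here are illustrative.

```python
import sqlite3

def export_consistent_copy(db_path: str, export_path: str) -> None:
    """Write a transactionally consistent snapshot of a live SQLite
    database to a path inside the replication source folder."""
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(export_path)
    try:
        # Connection.backup copies the database page by page while
        # honoring SQLite's locking, so the copy is never mid-transaction.
        src.backup(dst)
    finally:
        dst.close()
        src.close()
```

Schedule an export like this shortly before each replication run, and replicate the snapshot file rather than the live database.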
3. A communication failure during transfer can corrupt a file. Check the stability of the network connection.
4. If the FRP application runs low on Java heap memory, jobs may become unstable and files may be corrupted at the point where a job crashes. Correct this by increasing the Java heap allocation (typically the JVM -Xmx setting) or by reducing the load on the application, scheduling jobs to run at alternating times. See Memory.