This error can occur when the spool folder on the host in question is filling up and RHA begins processing a large journal file in a subfolder of the spool folder. If the journal being processed is large enough, it can run the spool drive out of space. Once the spool drive runs out of space, RHA can no longer write to any journal files, which is why 'write to journal file failed' starts to show up over and over.
RHA by default monitors the size of the spool folder location: it alerts you when 80% or more of the spool space is used, and it stops the scenario when the minimum disk free setting is reached (1 GB by default). If the spool folder is filling up but has not yet consumed all but that last 1 GB, and RHA then starts processing a large journal file in the xomf folder (a subfolder of the spool folder), this can run the drive out of space and cause the error in question to appear over and over.
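To make the two thresholds concrete, here is a minimal sketch of that monitoring logic in Python. This is not RHA's code; the function name, the path, and the exact decision order are assumptions for illustration, using RHA's documented defaults (80% usage alert, 1 GB minimum free).

```python
import shutil

# Thresholds mirroring RHA's defaults (illustrative assumptions, not RHA code)
ALERT_PERCENT_USED = 80          # RHA warns at 80% or more spool usage
MIN_FREE_BYTES = 1 * 1024**3     # default 'minimum disk free' is 1 GB

def spool_status(path):
    """Classify a spool drive the way RHA's defaults would:
    'stop'  - below minimum disk free, scenario would be stopped
    'alert' - 80% or more of the drive is used, warning raised
    'ok'    - neither threshold reached
    """
    usage = shutil.disk_usage(path)
    percent_used = usage.used / usage.total * 100
    if usage.free < MIN_FREE_BYTES:
        return "stop"
    if percent_used >= ALERT_PERCENT_USED:
        return "alert"
    return "ok"

# Point this at the spool drive, e.g. r"E:\\" on Windows; "/" used here only as a runnable example
print(spool_status("/"))
```

Note the gap the article describes: a drive can be 'ok' by both checks and still fill up mid-journal, because a single large journal file in the xomf folder can consume more than the remaining headroom before the next check fires.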
To resolve this error/issue, I would start by raising the minimum disk free setting for the server that is getting this error. When I see this error occurring alongside high spool growth, the customer has usually lowered the minimum disk free setting below the 1 GB default. Even if you have not lowered the default, I would still recommend raising it above 1 GB, to say 5 GB. This setting can be found on the properties tab for the server in question, under the 'spool' section.
I would also recommend making sure you have excluded the spool folder from antivirus scans; if your antivirus has the ability, also exclude the engine process from scans. For more on how to exclude the spool folder and our engine from scans, please see the following article.