We have an enterprise, versioned, file-based rules repository on an AIX server (Blaze Advisor 6.1). The repository contains seven projects, each with its own JVM / RMA and each containing 1 to 29 decision tables. All executions are initiated via a web service that invokes a function, which kicks off a ruleflow containing one or more decision tables. All decision tables use sequential execution.
Recently we have been running out of memory on our JVMs while using the RMA (the execution environment is not affected by this error). The only way to get back into the RMA is to recycle the JVM, which causes the business user to lose whatever RMA work was in progress.
What factors affect the heap size needed on the JVM: the number of rules, the space taken by the table instances, the number of versions, the number of tables in the project, the number of concurrent RMA users, etc.? Would tuning garbage collection help? If so, what changes are needed in the Blaze project versus the Java code? We use the out-of-the-box RMA, so where would we put the garbage-collection code?
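For context, this is a sketch of the kind of JVM startup settings we could adjust while investigating; the script fragment, variable name, heap values, and log path are all hypothetical, and `-Xverbosegclog` is specific to the IBM JVM typically used on AIX:

```shell
# Hypothetical startup fragment for the RMA's JVM (the actual script
# and option names will differ per install/app server).
# Raise the heap and enable verbose GC logging so we can see whether
# the heap is genuinely exhausted before the next out-of-memory error.
JAVA_OPTS="-Xms512m -Xmx1024m \
  -verbose:gc \
  -Xverbosegclog:/tmp/rma_gc.log"   # IBM (AIX) JVM GC log option
export JAVA_OPTS
```

The GC log would at least tell us whether heap use grows with the number of concurrent RMA sessions or with the size of the projects being edited.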