How can computers be instructed to emulate the decision-making process of a human expert? Production systems have existed since the early 1970s to attempt to answer this question: collections of if-then rules whose conditions, when met, trigger an associated action. Depending on the scope of the problem such an expert system solves, however, it can grow into an extensively complex network of rules acting upon an even larger database of facts, and sifting through all of this information to determine which rules should fire, and when, can be a daunting task. Before Rete, existing methods for this process were extraordinarily inefficient: on every cycle, they looped through the entire fact base and compared the system's current state against the conditions of every rule. This iterative approach fails to exploit the temporal redundancy of expert systems, the fact that, from cycle to cycle, the majority of the fact base remains unchanged.
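The pre-Rete iterative approach described above can be sketched in a few lines of Python. This is an illustrative simplification, not any historical system's actual code: rules are reduced to predicates over single facts, and every fact is re-tested against every rule on every cycle.

```python
# Illustrative sketch of the pre-Rete iterative approach: each cycle,
# every rule's conditions are re-tested against the entire fact base,
# even though most facts have not changed since the last cycle.

def naive_match_cycle(rules, fact_base):
    """rules: {rule_name: predicate}. Returns the rules ready to fire."""
    satisfied = []
    for name, condition in rules.items():
        for fact in fact_base:          # full scan, repeated every cycle
            if condition(fact):
                satisfied.append((name, fact))
    return satisfied
```

The cost of this full scan on every cycle is exactly the waste that Rete's incremental matching avoids.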
In his 1979 doctoral thesis, Dr. Charles Forgy of Carnegie Mellon University presented an innovative solution: the Rete algorithm. Rete, Latin for "net," organizes the rules into a tree-like discrimination network sorted according to the conditions of each rule's left-hand side (LHS), which must be met before its right-hand side (RHS) can be executed. Five types of nodes represent this filtration process: the root node, kind nodes, one-input nodes, two-input nodes, and terminal nodes. During rule compilation, each individual pattern to be matched is placed into a one-input alpha node; collectively, these form the alpha network. The alpha nodes are connected to the single overarching root node by preliminary kind nodes, which separate the patterns to be matched according to the type of token entering the system. Two-input beta nodes join pieces of information computed in one-input nodes and record partial matches, in which some but not all of the conditions of a LHS are met. This web of one- and two-input nodes converges at the terminal nodes, each of which represents a rule whose LHS is fully satisfied: any token reaching a terminal node has started at the root node and passed every test along the way. By compiling all of the tests into one decision tree, the Rete algorithm prevents the same test from being performed twice. Beyond sharing tests of the same condition across rules, Rete's use of state memory also prevents tests from being repeated on facts that have not been altered, which is typically the majority of the expert system's fact base.
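The five node types can be sketched as small Python classes. All names, method signatures, and data shapes below are my own illustration of the structure described above, not Forgy's original implementation; tokens are modeled as dictionaries with a "kind" field, and successors are stored as callables so that a beta node's two inputs stay distinct.

```python
class Node:
    """Base class: a node forwards tokens to its successor callables."""
    def __init__(self):
        self.successors = []          # callables that receive a token

    def propagate(self, token):
        for receive in self.successors:
            receive(token)


class RootNode(Node):
    """Entry point: every fact token enters the network here."""
    def activate(self, token):
        self.propagate(token)


class KindNode(Node):
    """Routes tokens by their declared type (e.g. 'age' vs. 'income')."""
    def __init__(self, kind):
        super().__init__()
        self.kind = kind

    def activate(self, token):
        if token["kind"] == self.kind:
            self.propagate(token)


class AlphaNode(Node):
    """One-input node: a single test from a rule's LHS, with state memory."""
    def __init__(self, test):
        super().__init__()
        self.test = test
        self.memory = []              # facts that have passed this test

    def activate(self, token):
        if self.test(token):
            self.memory.append(token)
            self.propagate(token)


class BetaNode(Node):
    """Two-input node: joins partial matches from a left and a right input."""
    def __init__(self):
        super().__init__()
        self.left_memory = []
        self.right_memory = []

    def activate_left(self, token):
        self.left_memory.append(token)
        for right in self.right_memory:
            self.propagate((token, right))

    def activate_right(self, token):
        self.right_memory.append(token)
        for left in self.left_memory:
            self.propagate((left, token))


class TerminalNode(Node):
    """Reached only when an entire LHS is satisfied; records the rule to fire."""
    def __init__(self, rule_name, conflict_set):
        super().__init__()
        self.rule_name = rule_name
        self.conflict_set = conflict_set

    def activate(self, token):
        self.conflict_set.append((self.rule_name, token))
```

A network is wired by appending each node's `activate` method (or, for beta nodes, `activate_left`/`activate_right`) to its parent's successor list, so tokens flow from the root down to the terminals.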
The Rete algorithm exploits this temporal redundancy by remembering only the facts that have been added to or removed from the database from cycle to cycle, and only these select units of information are passed through the discrimination network, since only they can change which rules are eligible to fire. As tokens pass through the tree-like structure and reach terminal nodes, the corresponding rules are added to a conflict set: the set of all rules whose LHSs are satisfied and that are to be fired in some as-yet-undetermined order.
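The incremental idea can be shown in miniature. This is a hedged sketch, with names and data shapes of my own choosing: each rule is simplified to a predicate over a single fact (a real LHS joins several facts), and between cycles only the delta of added and removed facts is re-tested, while matches for unchanged facts are simply remembered.

```python
# Sketch of temporal redundancy: only the cycle's delta is re-matched.
# matches: {rule_name: set of facts currently satisfying that rule}
# tests:   {rule_name: predicate over a single fact} (a simplification)

def update_matches(matches, tests, added, removed):
    """Incrementally maintain the match state across one cycle."""
    for fact in removed:                  # retracted facts leave every match set
        for facts in matches.values():
            facts.discard(fact)
    for fact in added:                    # only new facts run through the tests
        for rule, test in tests.items():
            if test(fact):
                matches[rule].add(fact)
    return matches
```

Unchanged facts never touch `tests` again, which is the whole point: work per cycle is proportional to the size of the delta, not the size of the fact base.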
As Figure 1 illustrates with a simple rule based on the age and income of an individual, all fact tokens enter the network at the root node. From there, kind nodes separate the tokens by type: one handles the user's age, the other the individual's income. One-input alpha nodes each perform a single test on a condition of the rule's LHS. Once these patterns have been matched individually, beta nodes group and join them so that all three conditions are verified to hold at once. Note the Beta 1 node, which combines the two one-input nodes testing whether the individual's age is less than five or greater than sixty. With the partial match at Beta 1, the results of Alpha 1 and Alpha 2 can be joined and stored before all three conditions are necessarily met. Finally, once a token has been verified to pass both the age and income conditions, the matches are joined at Beta 2 and passed to the terminal node, which fires the RHS of the rule.
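The Figure 1 rule can be written out directly, mirroring the node layout just described. The income threshold below is an assumption for illustration only; the text does not give the figure's actual value.

```python
# Direct evaluation of the Figure 1 rule, following the node layout above.
# The income threshold (20000) is an assumed value, not from the figure.

def alpha1(person):
    return person["age"] < 5           # Alpha 1: age test (under five)

def alpha2(person):
    return person["age"] > 60          # Alpha 2: age test (over sixty)

def alpha3(person):
    return person["income"] < 20000    # Alpha 3: income test (assumed)

def beta1(person):
    # Beta 1 joins Alpha 1 and Alpha 2: a partial match on the age disjunction.
    return alpha1(person) or alpha2(person)

def beta2(person):
    # Beta 2 joins the age partial match with the income test: the full LHS.
    return beta1(person) and alpha3(person)

def rule_fires(person):
    return beta2(person)               # terminal node: the RHS would run here
```

In a real Rete network the beta nodes would store partial matches in memory rather than recompute them, as the class sketch of state memory above suggests; this version only shows how the tests compose.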
The Rete algorithm was first implemented in the OPS5 system and has since been used in ART, CLIPS, Jess, and other expert system shells. While Rete vastly improved the rule selection process, the space-time tradeoff is a constant dilemma developers face: algorithms that are time-efficient, like Rete, can be memory-expensive. Since the advent of the Rete algorithm, more targeted and optimized alternatives have been designed, although many lack Rete's adaptability because they are specialized to a particular scenario. Rete II and Rete III are commercial, closed-source successors with substantial advancements in optimization. Other variations include TREAT and LEAPS, each of which makes a different space-time tradeoff, improving Rete's space efficiency at the cost of some of the information and machinery that Rete maintains. TREAT, for example, speeds up the deletion process when facts are retracted by operating directly on the conflict set, removing any rule that depends on the no-longer-existent token. While this adaptation is more time-efficient on deletion, the disadvantage is that partial matches can no longer be usefully maintained, as deleted tokens are not accounted for within the tree structure itself. This method of applying changes directly to the conflict set is called conflict set support. LEAPS, on the other hand, improves both time and space efficiency by eliminating the conflict set and conflict resolution altogether: instead of passing all facts through the Rete network as tokens and afterwards firing the collected rules in an order defined by conflict resolution mechanisms, any rule that a token activates at a terminal node is fired immediately.
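TREAT's conflict set support can be sketched as follows. The data shapes are my simplification, not TREAT's actual structures: the conflict set is a list of (rule name, supporting facts) pairs, and retraction simply prunes every entry that depends on the retracted fact, with no delete token propagated through node memories.

```python
# Hedged sketch of TREAT's "conflict set support": on retraction, prune
# the conflict set directly rather than undoing stored partial matches.
# The (rule_name, supporting_facts) pair shape is illustrative only.

def treat_retract(conflict_set, retracted_fact):
    """Remove every instantiation that depends on the retracted fact."""
    return [(rule, facts) for rule, facts in conflict_set
            if retracted_fact not in facts]
```

The speed comes from touching only the conflict set; the cost, as noted above, is that no node memories exist to reuse when related facts are later re-asserted.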
Of course, this method limits the ability to control the order in which rules are fired through more intricate means or based on other factors. Yet regardless of these alternatives, Forgy's Rete algorithm remains a valuable tool, and understanding its logic and process gives any expert system developer a crucial sense of how the rule and fact bases are sorted and integrated.
Forgy, Charles L. "Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem." Artificial Intelligence. Carnegie-Mellon University, 1979. Web. 23 Mar. 2016. <mwfong/Technical/RETE%20Match%20Algorithm%20-%20Forgy%20OCR.pdf>.
Friedman-Hill, Ernest J. "The Rete Algorithm." Jess. 5 Nov. 2008. Web. 23 Mar. 2016. Link.
Niola, Marino. Human Net Image. Diritti Globali. N.p., 15 Dec. 2015. Web. 15 June 2016. Link.
Proctor, Mark. "1.4. Rete Algorithm." JBoss Drools. Web. 23 Mar. 2016. Link.