
The Need for States and Models in Event Processing

Blog Post created by georgevanecek@fico.com on Nov 4, 2016

In my last blog, Stateful Event Processing, I outlined how descriptive and contextual information can enable organizations to make better business decisions using event processing that is stateful. In this blog, I take a closer look at why and how events, states and models make this happen.

 

Let’s look at what states are and why they are needed. From a system’s point of view, a state is all the stored information available to the system. From a solution’s point of view, states are the means by which the solution understands the real world. An event processing system that implements a solution will almost certainly make decisions that depend on the states of the entities identified in the ingested events. It is the recognition of entities and their states that is needed to make decisions; the ingested events are what drive the process, as shown in this diagram.

[Figure: Events update states for related entities.]

 

Events do two things. First, they update the entity states that keep the system up to date on what entities have done, are doing, or will or should do next. Second, they trigger the analysis of those state changes, the decisions to be made, and the actions to be taken.

 

The entities known to a solution are what matter to a decisioning system. The system needs to keep track of entities and the states related to them. Entities are the people, places, things, and groups of these that are of interest to a given solution. Here, I do not differentiate between a real-world entity and the system’s internal abstraction of it. From the system’s point of view, the internal representation stands in for the real-world one, albeit with far less information. A state for an entity is therefore the set of tracked conditions and situations that exist for the real-world entity at a given time or period and that the solution needs in order to understand the entity. Since entities change state outside the system, the only way for the system to track those states is through the events that carry the notable information. When an entity or group of entities changes state in a notable way, events generated and communicated to the system record that state change.

 

Furthermore, note that ingested events typically cause state changes in more than one entity. When a person buys a product with a credit card, the descriptive information about that person does not change, but the state of that person’s list of possessions and their credit card balance do. Likewise, the merchant’s revenue and the store’s inventory change as well. A single purchase event may change the states of many entities simultaneously. Thus, in receiving events, an event processing system must be able to identify the related entities and update their states.
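As a rough illustration, here is a minimal sketch of how a single purchase event fans out into state updates on several related entities. The entity records, ids, and field names are hypothetical, not any FICO API:

```python
from collections import defaultdict

# Hypothetical in-memory entity store keyed by entity id.
entities = defaultdict(dict)
entities["person:alice"] = {"possessions": [], "card_balance": 0.0}
entities["merchant:galleria"] = {"revenue": 0.0}
entities["store:galleria-downtown"] = {"inventory": {"umbrella": 12}}

def on_purchase(event: dict) -> None:
    """One purchase event updates the buyer, the merchant, and the store."""
    buyer = entities[event["buyer"]]
    buyer["possessions"].append(event["item"])   # what the person now owns
    buyer["card_balance"] += event["price"]      # what the person now owes

    entities[event["merchant"]]["revenue"] += event["price"]
    entities[event["store"]]["inventory"][event["item"]] -= 1

on_purchase({"buyer": "person:alice", "merchant": "merchant:galleria",
             "store": "store:galleria-downtown", "item": "umbrella", "price": 19.99})
print(entities["person:alice"], entities["store:galleria-downtown"])
```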

Yet, the states of related entities do not all change at once. Some states change often and in unison, while others change infrequently or not at all. Some states change only when a pattern of several events occurs within a period of time. How a state changes depends on the relationships between the entities. For instance, the price of a painting in a gallery may be related to the number of days the painting has been displayed, the number of customers who did not buy the painting, and the bids offered.
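To make the painting example concrete, here is a small sketch of a state that is derived from a pattern of events accumulated over time. The pricing rule, class, and field names are illustrative assumptions only:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PaintingState:
    listed_on: date
    base_price: float
    viewings_without_sale: int = 0
    bids: list = field(default_factory=list)

    def current_price(self, today: date) -> float:
        """Derive the asking price from the accumulated event history."""
        days_displayed = (today - self.listed_on).days
        # Assumed rule: discount slowly for age and lack of interest,
        # but never drop below the highest bid received so far.
        discounted = (self.base_price
                      * 0.999 ** days_displayed
                      * 0.995 ** self.viewings_without_sale)
        return max(discounted, max(self.bids, default=0.0))

def on_gallery_event(state: PaintingState, event: dict) -> None:
    """Fold a single gallery event into the painting's state."""
    if event["type"] == "viewed_not_bought":
        state.viewings_without_sale += 1
    elif event["type"] == "bid":
        state.bids.append(event["amount"])

painting = PaintingState(listed_on=date(2016, 10, 1), base_price=5000.0)
on_gallery_event(painting, {"type": "viewed_not_bought"})
on_gallery_event(painting, {"type": "bid", "amount": 3500.0})
print(painting.current_price(date(2016, 11, 4)))
```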

 

Processing events with analytics to make decisions is thus fundamentally stateful. Without stateful entities known to the system, ingested events could only be cleaned, filtered, or understood in terms of their membership in some set or cluster over some period of time. They would be decoupled from specific entities, and their interpretation would be limited to a global perspective in which the related entities are abstracted away and seen only through their membership in some group or cluster. Solutions that take this approach tend to adopt an algorithmic methodology that formulates a model to make sense of those sets and clusters from the ingested events.

 

Interestingly, even in this seemingly stateless processing mode, the solutions are actually stateful. To understand this, consider how and when a system changes its states, and how those changes are noted by the system. States are either maintained explicitly as data objects or derived on demand computationally through the use of models. Those models need to be trained and configured, and in this seemingly stateless processing mode the models are themselves the stateful part.

 

Stateful event processing can be implemented effectively using a data-flow paradigm, which can itself be realized in a pipelined architecture. A pipelined architecture processes many events simultaneously through a sequence of processing stations. Think of it as the system of interconnected conveyor belts at an airport that moves luggage from check-in counters to airplanes. There may be multiple sources from which luggage enters the conveyor belts, and belts may merge or a tag-checking station may split a flow in two. Along the way, information about each bag is collected and its state is tracked, along with aggregates such as the total payload for each airplane.

 

Close to the ingestion points, events are typically converted into sets of variables, called tuples. The tuples are sent to the processing tasks along the pipeline. Each task consumes incoming tuples from its upstream tasks and emits new tuples to its downstream tasks. In this way, tuples carry the cumulative information from the original events, the descriptive information enriched from related entities, and the information about their current and future states. As tuples pass through the tasks, they undergo transformations and their information changes. Most often, information from the tuples updates the related states, and conversely, information gained from existing states enriches the tuples with needed details.
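The following sketch shows this idea of tasks consuming and emitting tuples while exchanging information with an entity state store. It assumes a simple in-memory dictionary for state and uses illustrative task and field names; it is not DMP Streaming’s actual API:

```python
# A minimal sketch of the tuple pipeline, assuming an in-memory state store
# keyed by entity id.
from typing import Dict, Iterable

state_store: Dict[str, dict] = {}   # entity id -> persisted entity state

def ingest(raw_events: Iterable[dict]) -> Iterable[dict]:
    """Convert raw events into tuples (flat sets of variables)."""
    for ev in raw_events:
        yield {"customer_id": ev["customer"], "amount": ev["amount"]}

def enrich(tuples: Iterable[dict]) -> Iterable[dict]:
    """Enrich each tuple with details drawn from the customer's current state."""
    for t in tuples:
        state = state_store.setdefault(t["customer_id"], {"balance": 0.0})
        yield {**t, "prior_balance": state["balance"]}

def update_state(tuples: Iterable[dict]) -> Iterable[dict]:
    """Fold tuple information back into the related entity state."""
    for t in tuples:
        state_store[t["customer_id"]]["balance"] = t["prior_balance"] + t["amount"]
        yield t

# Tasks compose like conveyor-belt stations along the pipeline.
events = [{"customer": "c1", "amount": 25.0}, {"customer": "c1", "amount": 10.0}]
for out in update_state(enrich(ingest(events))):
    print(out, state_store["c1"])
```

Each function plays the role of one station: ingest turns events into tuples, enrich pulls details out of existing states, and update_state folds tuple information back into them.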

 

This is where states get interesting. Some states are created and persisted, while others are ephemeral and computed on the fly using models. Where a state is kept, or obtained from, depends on the number of entities related to it and on the computational complexity of deriving it. This is the classic space-time tradeoff in computer science. As a result, not all stateful information can be stored explicitly, whether internally or externally. Let’s consider two cases.

 

The first case occurs when it is important to determine the state of a group member in relation to its larger group; for example, estimating a member’s propensity from their actions in response to a given group event. While the members are individually known and these states could be stored, it is not practical to compute and store the states for all the group members and then continuously update them. Computationally, this would be grossly expensive and ultimately unnecessary.

 

The second case occurs when there are too many states to store due to a possible state-space explosion. Consider asking, “Where are all the places a person can go next given certain conditions?” That is the same as asking, “In what state, related to a location and a condition, can a person be in the next hour?” Only some of these states may need to be known in order to make a decision. It is both computationally too expensive and too storage intensive to pre-compute all possible states for all the tracked individuals.

 

In either case, there is no need to compute and store every bit of information for every state. This is where models come in. A model is a trained algorithm that can approximate with high confidence, if not determine precisely, a current or future state for a member of a group based on the member’s or group’s past states. The model must be configured and trained to be effective, and this configuration is itself a state of the model, which in effect still makes the processing stateful. The difference is that this state change has been decoupled from the event processing flow to make the decisioning process more computationally efficient.
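As a sketch of this idea, the model below derives a member’s propensity state on demand from a few behavioral features instead of persisting a state for every member. The logistic form, feature names, and weights are illustrative assumptions, not FICO’s actual analytics; the trained weights are the model’s own, slowly changing state:

```python
import math

class PropensityModel:
    """Approximates a member's 'likely to respond' state from recent behavior.

    The trained weights are the model's own state; they are updated offline,
    decoupled from the event-processing flow.
    """
    def __init__(self, weights: dict):
        self.weights = weights                     # configuration = model state

    def score(self, features: dict) -> float:
        z = sum(self.weights.get(name, 0.0) * value
                for name, value in features.items())
        return 1.0 / (1.0 + math.exp(-z))          # probability in [0, 1]

model = PropensityModel({"recent_purchases": 0.8, "days_since_visit": -0.1})

# Derive the member's state only when a decision actually needs it.
features = {"recent_purchases": 3, "days_since_visit": 5}
print(model.score(features))   # roughly 0.87 -> treat as high propensity
```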

 

What the software industry realized a while back is that it pays to invest in enabling platforms that simplify customers’ efforts to design and create their solutions. Nowhere is this more relevant than in an event processing system that has to deal with states, especially if the system tries to increase throughput by using a distributed architecture. Such an enabling, distributed platform must keep a fairly high water line for its API. This pushes much of the functional complexity of state consistency, performance, system availability, event time-order, processing requirements, and more below the water line, and removes the need for solution architects and developers to become experts in these areas.

 

At FICO, the DMS (Decision Management Suite) has been evolving to address all aspects of analyzing data, scoring events, and creating and retraining models, in addition to enhancing the analytic methods needed to build business solutions. As part of DMS, the DMP Streaming platform leverages the models and analytics in DMS to process events and make use of states across all operational and quality-of-service modes. The platform is flexible in how a solution chooses to create, update, and utilize states in order to drive its operational analytics and decision-making.

 

In my following blogs, I will expand on how a stateful event processing platform can deal with the issues of event and state consistency, processing guarantees, latency and throughput requirements, time-order, and high availability.
