Eric Torkia, MASc
Perception is reality. An executive at SAP said that to me many years ago, and having pondered the statement since, I find it holds true in most situations. In a world of constant information, we become aware of events far more quickly and vividly than ever before.
Risk perception is a highly personal process of decision-making, based on an individual’s frame of reference developed over a lifetime, among many other factors.
According to Makridakis (Makridakis, Hogarth, & Gaba, 2009), a good example of misperceived risk occurred after 9/11: roughly 1,700 more people died on the road in 2002 than in 2001 because they elected to drive instead of taking a plane. Terrorism or not, the risk of death is much lower on a plane than in a car – in fact, driving is about 100 times more dangerous – so the more people who took cars to avoid terrorists, the more people were exposed to the greater risk. The motivation is obvious: the news media and the 24-hour news cycle made a highly tragic but rare event like 9/11 feel imminent, even though the numbers don’t bear that fear out. That is the point of simulation: building a model can help you figure out whether a risk is real or merely distorted by our personal bias.
What exactly is a model?
When we think about reality, what are we really doing? We are creating models in our minds. Models are visual, written, or mathematical abstractions of reality used to explain and analyze a problem or phenomenon. The simplest example of an abstract mathematical model that we can all relate to is: PROFIT = REVENUE – EXPENSES. To demonstrate that profit is an abstract concept, consider that even though you know exactly what it [profit] means, you will never find a pile of profit in nature, nor can you step in it on a spring day in the park. Below is an illustration depicting the relationship between the “real” world and simulation.
Types and flavors of models
Various model types and flavors exist. We shall cover several key classifications, all important to the disciplines of decision and risk analysis.
Deterministic vs. Stochastic (Probabilistic)
A deterministic model is one where you can calculate the output precisely given a specific set of inputs. For example, if Revenue is $3 million and Expenses are $2 million, then per our model [Profit = Revenue – Expenses] our Profit is $1 million – no uncertainty about that. A stochastic (probabilistic) model, by contrast, treats one or more inputs as uncertain, describing them with probability distributions; its output is then not a single number but a distribution of possible outcomes.
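To make the contrast concrete, here is a minimal Python sketch. The deterministic version reproduces the profit calculation above; the stochastic version replaces the fixed inputs with assumed normal distributions (the spreads are illustrative, not from the article) and turns profit into a distribution we can summarize.

```python
import random
import statistics

# Deterministic model: fixed inputs give one exact output.
def profit(revenue, expenses):
    return revenue - expenses

# $3M revenue, $2M expenses -> exactly $1M profit, no uncertainty.
point_estimate = profit(3_000_000, 2_000_000)

# Stochastic version (a sketch with assumed spreads): revenue and
# expenses are uncertain, so profit becomes a distribution of outcomes.
random.seed(42)
trials = [
    profit(random.gauss(3_000_000, 400_000),    # assumed revenue spread
           random.gauss(2_000_000, 250_000))    # assumed expense spread
    for _ in range(10_000)
]

mean_profit = statistics.mean(trials)               # close to $1M on average
loss_risk = sum(t < 0 for t in trials) / len(trials)  # chance of a loss
```

Note what the stochastic run buys you: the deterministic model can never tell you the probability of losing money, while the simulated distribution makes that risk explicit.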
Quantitative vs. Qualitative
As the names imply, Quantitative Analysis looks at quantifiable values (hard numbers), while qualitative models seek to take abstract concepts, such as experiential data, and translate them into numbers we can monitor or manage, e.g. a customer satisfaction index. The primary tools for qualitative data are surveys, interviews, and testimonies, while historical data is the basis for most Quantitative Analysis.
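As a sketch of the qualitative-to-quantitative translation, the snippet below converts survey answers into a satisfaction index on a 0–100 scale. The response labels and the 1–5 scoring are assumptions for illustration, not a standard.

```python
# Map qualitative survey answers to an assumed 1-5 scale.
RESPONSE_SCORES = {"very unsatisfied": 1, "unsatisfied": 2,
                   "neutral": 3, "satisfied": 4, "very satisfied": 5}

def satisfaction_index(responses):
    """Average the scores, then rescale to a 0-100 index."""
    scores = [RESPONSE_SCORES[r] for r in responses]
    return 100 * (sum(scores) / len(scores) - 1) / 4

survey = ["satisfied", "very satisfied", "neutral", "satisfied"]
index = satisfaction_index(survey)   # a number we can track over time
```

Once the abstract concept is a number, it can be trended, targeted, and fed into the quantitative models discussed below.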
Symbolic vs. Numerical
This is perhaps an old-school distinction to make, but here it is: symbolic models are based on algebraic modeling in symbolic form. That means you distill the situation into equations and manipulate them algebraically to solve for x (say). Numerical models also require an equation, but it can be far simpler: you put numbers or scenarios in on one side and record the output on the other, using the brute force of modern computers. Then, by looking at the aggregate outputs, you can see which variables are correlated, and obtain the “area under the curve” at specific confidence levels. Trying to do this with symbolic methods typically yields pages of algebra that we cannot easily understand.
Descriptive vs. Predictive vs. Prescriptive
Descriptive Models are the basis for all further modeling. They are simply a description in mathematical form, usually in deterministic terms: they describe what was. A good example is an accounting report, such as an income statement or perhaps a sales report, which gives you the state of affairs at a specific point in time using clear, well-understood calculations.
Predictive Models come in several flavors, such as forecasting, data mining, and machine learning models. Machine learning usually requires lots of historical data: starting from a standard model form, such as a linear or logistic regression, a tree, or a neural network, the algorithm adjusts and “fits” the model parameters to the observed data until the model’s output on the historical data closely matches the historical outcomes. If the model structure is unknown, machine learning can give the modeler clues about unobserved underlying relationships. For example, when analyzing credit risk, you may consider age, annual income, and marital status to be good predictors of whether a loan will be collected. Certain combinations of these predictors will indicate that the loan has a good chance of being fully collected, while others will predict default.
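The “fitting” step can be sketched with a tiny logistic regression trained by gradient descent. Everything here is assumed for illustration: the data are synthetic, income is the lone predictor (in $10k units), and the true relationship is planted so the fit has something to recover.

```python
import math
import random

random.seed(0)

# Synthetic loan history: x = annual income in $10k units,
# y = 1 if the loan was fully collected (generated from an assumed rule).
data = [(x, 1 if random.random() < 1 / (1 + math.exp(-(0.8 * x - 4))) else 0)
        for x in [random.uniform(0, 12) for _ in range(2_000)]]

# Fit w and b by gradient descent: nudge the parameters until the
# model's predictions line up with the historical outcomes.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(300):
    gw = gb = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))   # predicted P(collected)
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def predict_collect(income_10k):
    """Fitted probability that a loan at this income is collected."""
    return 1 / (1 + math.exp(-(w * income_10k + b)))
```

In practice you would use a library and many predictors, but the mechanics are the same: a standard model form whose parameters are bent toward the historical data.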
Simulation models lie on the boundary between predictive models and prescriptive models, and include some elements of both. Simulation is used when we don’t have historical data on the whole process – for example, we haven’t built the new assembly line yet – but we may have data on some elements of the process, such as processing times at assembly stages and varying demand for end products. Through simulation, you can incorporate uncertainty in the inputs and assess their impacts on the outcomes, also providing insight into key influencers on the target outcome.
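A minimal sketch of the assembly-line example: we have no history for the whole line, but we can sample the pieces we do know, stage processing times and daily demand. All distributions and figures below are assumed for illustration.

```python
import random

random.seed(7)

def simulate_day():
    """One simulated day: sampled demand vs. sampled line capacity."""
    demand = max(0, round(random.gauss(100, 20)))        # units ordered
    minutes_available = 8 * 60                           # one shift
    minutes_per_unit = (random.triangular(1.5, 3.0, 2.0)   # stage 1 time
                        + random.triangular(2.0, 4.0, 2.5))  # stage 2 time
    capacity = int(minutes_available / minutes_per_unit)
    return min(demand, capacity), demand                 # built, demanded

days = [simulate_day() for _ in range(10_000)]
fill_rate = sum(built for built, _ in days) / sum(d for _, d in days)
```

Running the model many times answers questions no single-point estimate can, such as how often the not-yet-built line would fail to cover demand, and sensitivity runs on the stage times reveal which stage most influences that outcome.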
Prescriptive Models also come in several flavors, but unlike predictive models, their output is a decision or action to be taken. Prescriptive modeling also starts with a descriptive model that calculates certain results, but includes further logic to reach a decision. For example, decision trees, multi-attribute decision matrices, or business rule systems may be used to “compute” a decision. For situations involving many resource allocation decisions, mathematical optimization is used. The model includes decision variables that may specify yes/no outcomes such as “build or don’t build the new plant,” or amounts of resources such as “we need x square feet of floor space and y production line workers.”
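A toy business-rule system makes the “compute a decision” idea concrete. The inputs come from a descriptive model (NPV, payback, utilization), and the thresholds below are assumptions for illustration only.

```python
# A sketch of a rule-based prescriptive model: descriptive results in,
# decision out. Thresholds are assumed, not recommendations.
def plant_decision(npv, payback_years, capacity_utilization):
    if npv <= 0:
        return "don't build"          # destroys value outright
    if payback_years > 5:
        return "don't build"          # assumed payback policy limit
    if capacity_utilization > 0.7:
        return "build"                # existing capacity is strained
    return "defer"                    # viable, but no urgency yet
```

Unlike the predictive models above, the output is not a forecast but an action; swap the rules for a decision tree or an optimizer and the same input/output contract holds.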
Most optimization models at present are deterministic. For example, they might assume constant demand for products, and focus on allocating resources to efficiently produce them. But it’s possible to combine simulation modeling and optimization modeling, to find the best decisions in the presence of uncertainty. Depending on the form of the resulting model, we may be able to apply fast methods that yield “known optimal” outcomes, such as stochastic linear programming, or we might have to fall back to brute-force methods that yield only “better, but not proven optimal” outcomes, such as simulation optimization. This approach allows us to take uncertainty into account and optimize our desired measure across the full range of possible outcomes.
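The brute-force end of that spectrum, simulation optimization, can be sketched as a search over a decision variable where every candidate is scored by simulation. This is a newsvendor-style toy with assumed prices and demand; it yields a “better, but not proven optimal” answer, exactly as described above.

```python
import random

random.seed(3)

PRICE, COST = 10.0, 6.0   # assumed unit sell price and unit cost

def expected_profit(order_qty, trials=5_000):
    """Score one candidate decision by simulating uncertain demand."""
    total = 0.0
    for _ in range(trials):
        demand = max(0.0, random.gauss(100, 25))   # assumed demand spread
        total += PRICE * min(order_qty, demand) - COST * order_qty
    return total / trials

# Brute-force search over candidate order quantities: keep the one whose
# simulated average profit across the full range of outcomes is highest.
best_qty = max(range(50, 151, 5), key=expected_profit)
```

A stochastic linear program could solve a structured version of this exactly; the simulation-optimization loop trades that guarantee for the ability to handle whatever model the simulation can express.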
Software to do all of this is not only available, but increasingly easy to use. What industry needs is more business analysts who have learned how to use these powerful methods, and can apply them to practical business problems. I hope you are one of them – or you soon will be!
Eric Torkia, MASc, is Executive Partner for Analytics Practice at Technology Partnerz Ltd. St-Lambert, Quebec, Canada. Technology Partnerz is an established reseller for analytics tools and services.