Prior to the roll, we know the number of outcomes, we know what the outcomes are,

and we know that the probability of each outcome is one sixth.

In the second type of uncertainty,

we don't know the probabilities of the outcomes.

And in fact, we might not even know what the possible outcomes are.

In other words,

there is total uncertainty, in the sense that nothing is measurable.

For example, it is very difficult to assess the probability that

an idea becomes a successful business.

This is a very complex problem that depends on way too many factors and

sources of uncertainty.

And this is why you often hear that successful entrepreneurs are not the ones

that have one great idea, but actually the ones that have lots of ideas.

In risk analysis, we are interested in the first type of uncertainty.

That is, the one for which we either know, or can estimate, the probability of each outcome.

The reason is simple: this type of uncertainty is the one that we can measure, and therefore we're able to create models to assess risk.

These models take into consideration that some

elements are not known with certainty.

For instance, let's consider profit,

which is probably the most fundamental mathematical model in business.

Profit is calculated as the difference between revenue and cost.

Let's suppose that we know exactly how much everything is going to cost

to make a unit of a product that we want to sell.

It's not a very realistic assumption, but let's just go with it.

To make it simple, we're also going to assume that the single source of revenue for this business is what we collect from selling this product.

If we're the ones determining the price of the product,

then all we need to do to calculate profit is to multiply the number of units sold by the price per unit, and then subtract all the costs.

Calculating profit is very simple once we know how many units we have sold.
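The calculation described above can be sketched in a few lines of code; all the numbers below are hypothetical, chosen only for illustration.

```python
# A minimal sketch of the profit model: profit = revenue - cost,
# with one product as the only source of revenue.

def profit(units_sold, price_per_unit, cost_per_unit, fixed_cost=0.0):
    """Units sold times price per unit, minus all the costs."""
    revenue = units_sold * price_per_unit
    total_cost = units_sold * cost_per_unit + fixed_cost
    return revenue - total_cost

# 1000 * 12 - (1000 * 7 + 2000) = 3000
print(profit(units_sold=1000, price_per_unit=12.0, cost_per_unit=7.0, fixed_cost=2000.0))
```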

But if we want to do some planning based on future revenues, then things get a bit

more complicated, because we need to be able to estimate demand.

If we take a very simplistic approach, we might just say that demand

is going to stay steady, and therefore we will estimate

future demand with just the average demand that we have observed so far.
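This naive forecast is just the average of the history; the demand numbers below are invented for illustration.

```python
# Naive forecast: estimate next period's demand as the average
# of the demand observed so far.
demand_history = [102, 95, 110, 98, 105]  # hypothetical past demand
forecast = sum(demand_history) / len(demand_history)
print(forecast)  # 102.0
```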

And this could work well in environments where things don't change much.

The reality is that in most situations,

things don't stay exactly the same from one period to another.

We must consider variability, and it is precisely this variability that produces uncertainty.

What we want the models to be able to capture is uncertainty

that is related to elements that include some level of randomness.

We want to extract historical patterns that can help us assess risk.

As we mentioned earlier, the key is to be able to determine all outcomes and

the probabilities.

For example, would you invest in a stock that is expected to double its

price within a year?

Well, this seems like a pretty good opportunity.

But the key word here is expected.

Expected values don't tell you anything about the range of outcomes and

their probabilities.

With this limited information we simply don't know what the risk is.
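A small, made-up example makes the point: two investments with identical expected payoffs can carry completely different risk. The probabilities and payoffs below are invented.

```python
# Two hypothetical lotteries, each a list of (probability, payoff) pairs.
safe  = [(0.5, 1.8), (0.5, 2.2)]   # payoff is always close to 2
risky = [(0.9, 0.0), (0.1, 20.0)]  # usually nothing, occasionally 20

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def variance(lottery):
    mu = expected_value(lottery)
    return sum(p * (x - mu) ** 2 for p, x in lottery)

print(expected_value(safe), expected_value(risky))  # both are about 2.0
print(variance(safe), variance(risky))              # about 0.04 vs about 36
```

The expected values match, but the spread of outcomes, and hence the risk, does not.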

Decisions involve risk and

attitudes toward risk vary from one person to another.

Some can tolerate more, and some less.

But we all need to have a way of measuring the amount of risk involved.

Risk analysis can be done at several levels of complexity.

At the very basic level, we could do a best case, worst case analysis.

Let's suppose that we have a spreadsheet model in which some cells represent data with uncertainty.

For example, we could have some cells that represent uncertain demand.

For the best case, we plug in the most optimistic values in each of these cells.

We can then see what happens to the outcome cell.

In the worst case scenario, we plug in the most pessimistic values for

the uncertain cells.

And once again, we check what happens to the outcome cell.

This is easy to do, but it doesn't give us information about the distribution of

all possible outcomes between the best and the worst cases.
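Here is a minimal sketch of that best-case, worst-case analysis, using the profit model from before; the uncertain inputs are demand and unit cost, and all the bounds are hypothetical.

```python
# Best-case / worst-case analysis on a small profit model.

def profit(demand, price, unit_cost, fixed_cost):
    return demand * price - (demand * unit_cost + fixed_cost)

price, fixed_cost = 12.0, 2000.0  # assumed to be known

best  = profit(demand=1500, price=price, unit_cost=6.0, fixed_cost=fixed_cost)  # optimistic inputs
worst = profit(demand=600,  price=price, unit_cost=8.5, fixed_cost=fixed_cost)  # pessimistic inputs

# Two numbers only -- nothing about what lies in between.
print(worst, best)  # 100.0 7000.0
```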

Take a look at these distributions of values between the best and the worst.

All of these distributions have the same best and worst outcomes, and

even the same average outcome, but they certainly don't look alike.

The bell-shaped distribution tells us that the average is the most likely outcome, and that values deviate above and below the average with the same probability.

The distribution in the form of a U tells us that the average almost never happens

and that it's as likely to get the worst possible outcome as it is to get the best.

Most people would associate this distribution with a risky situation.

The other two distributions have long tails indicating that extreme

values do not happen often, but they could happen with a small probability.

These distributions illustrate that examining only the best and

the worst possible outcomes is a limited analysis that ignores

a lot of valuable information.
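We can generate shapes like these with the standard library's beta distribution; the parameter choices below are hypothetical ones that happen to produce the shapes just described, all on the same range and with the same average.

```python
import random
import statistics

# Three distributions on the same [0, 1] range, all with mean 0.5,
# but with very different shapes (beta parameters chosen for illustration).
random.seed(1)
N = 100_000
bell    = [random.betavariate(5, 5) for _ in range(N)]      # bell-shaped: average most likely
u_shape = [random.betavariate(0.5, 0.5) for _ in range(N)]  # U-shaped: extremes most likely
flat    = [random.random() for _ in range(N)]               # uniform, for comparison

for name, xs in [("bell", bell), ("U", u_shape), ("uniform", flat)]:
    print(f"{name:8s} mean={statistics.mean(xs):.2f} stdev={statistics.stdev(xs):.2f}")
```

Same best case, same worst case, same average, yet the spreads, and therefore the risk, differ substantially.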

You now might be wondering, well, then how can we get the values in between the best and the worst?

And the answer is Monte Carlo Simulation, a predictive analytics tool.

The name of this methodology comes from the famous casino in Monaco, and it has a very interesting history that dates back to the experiments that took place during the development of the atomic bomb.

Here of course,

we are going to focus on a more peaceful application of this technique.

An easy way of describing how Monte Carlo Simulation works is by assuming that we have a spreadsheet model in which some of the cells contain uncertain values.

For example, suppose that the green cells are the ones with the uncertain values,

and that the orange cell is the outcome.

For most models, if we use expected values for

the green cells, we obtain an expected value for the orange cell.

We can also plug in the worst and the best estimates for the green cells, and

observe the best and the worst estimates for the orange cell.

In general, any set of values for the green cells generates a value for

the orange cell.

This is known as what-if analysis.

If we plug in a lot of values for the green cells, and

then store all the resulting values for

the orange cell, we could create a picture of all possible outcomes.

This picture is what we have been calling a distribution.

Plugging in a lot of numbers to create a distribution is very tedious,

and not only that, if we are the ones choosing these input values,

we're bound to introduce our own biases in the process.

And this is why we use Monte Carlo Simulation.

The method requires that we make assumptions about

how the values of the uncertain cells behave.

For instance, do the uncertain values follow a uniform distribution?

Or a normal distribution?

We will see that these assumptions are typically based on both experience and

historical data.

The quality of the model will depend on how reasonable these assumptions are.

But once we feel comfortable with the assumptions about the uncertain values,

the method will produce valuable information about the outcomes.
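Putting the pieces together, here is a sketch of a Monte Carlo simulation of the earlier profit model. The input distributions are assumptions made for illustration: demand is taken as roughly normal and the unit cost as uniform over a range, while the price and the fixed cost are treated as known.

```python
import random
import statistics

# Monte Carlo simulation of the profit model.
# Assumed (hypothetical) inputs: demand ~ Normal(1000, 150),
# unit cost ~ Uniform(6.5, 8.0); price and fixed cost are known.
random.seed(42)
price, fixed_cost = 12.0, 3500.0

profits = []
for _ in range(10_000):
    demand = max(0.0, random.gauss(1000, 150))  # sample the uncertain "green cells"
    unit_cost = random.uniform(6.5, 8.0)
    profits.append(demand * price - (demand * unit_cost + fixed_cost))  # the "orange cell"

# The stored results approximate the full distribution of outcomes,
# not just the best case, the worst case, and the average.
print(f"mean profit ~ {statistics.mean(profits):,.0f}")
print(f"stdev       ~ {statistics.stdev(profits):,.0f}")
print(f"P(loss)     ~ {sum(p < 0 for p in profits) / len(profits):.1%}")
```

From the stored values we can read off not only an average but also the spread and the probability of a loss, which is exactly the information a best-case, worst-case analysis leaves out.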

It is really exciting that, thanks to advances in personal computing and

software, we get to use a tool that, not long ago,

was only available to those with advanced programming skills and powerful computers.