From the previous video

we know how the double type works.

But let's compare it with a 64-bit integer of the same size.

Of course, not everything can be done with integers,

but there are cases where they turn out to be much better than doubles.

First, double has fewer bits for the actual digits, 53 versus 64,

because the exponent has to be stored separately.

Second, doubles can be a bit slower than integers,

by up to about one and a half times.

The good thing is that doubles are not

susceptible to overflow, because the fraction part

always stores only the first digits.

The exponent could overflow, but that

will only happen at magnitudes of about 10 to the power of 300.

So in practice, this is really not a problem.

But the bad thing is that with doubles there are always errors.

We've already seen how values like the square root of two,

or two thirds, are stored with errors.

But even decimal fractions

will also have errors.

For example, if you add 0.1 and 0.2,

you won't get exactly 0.3.
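
As a quick illustration (a sketch, not from the slides), here is how this looks in Python:

```python
# Adding 0.1 and 0.2 in binary floating point does not give exactly 0.3.
a = 0.1 + 0.2
print(a)         # prints 0.30000000000000004
print(a == 0.3)  # prints False
```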

Of course, it's much nicer with integers, where all values are exact.

So you should use floating point numbers only when they are really necessary.

And when integers suffice,

use integers, because with them you don't need to deal with errors.

The first example where doubles are not needed is with rational numbers.

As we've seen, we could store the numerator and denominator as integers,

though there is a potential problem with overflow.

Second, if you need to work with

decimal fractions where the number of digits after the decimal point is fixed,

you could just think of them as integers

multiplied by the corresponding power of 10.

For example, if you work with prices like $2.49, you can just think of it as 249 cents.
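
A small sketch of the cents idea in Python (the variable names are mine, not from the lecture):

```python
# Store prices as integer cents so all arithmetic stays exact.
price_cents = 249                  # $2.49
total_cents = 3 * price_cents      # three items, still exact
dollars, cents = divmod(total_cents, 100)
print(f"${dollars}.{cents:02d}")   # prints $7.47
```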

There are situations where square roots are necessary, but you

don't always need to explicitly compute them, especially if you just need to compare them.

For example, if you want to iterate

over all integers which are less than the square root of n,

in a loop you could replace the condition

i is less than square root of n by the equivalent condition

i squared is less than n.

If the numbers are non-negative,

then the two conditions

are equivalent, but the second uses only integers.
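
Sketched in Python (assuming a non-negative n):

```python
# Iterate over all positive integers i with i < sqrt(n),
# written with the equivalent all-integer condition i*i < n.
n = 1000
below_root = []
i = 1
while i * i < n:       # same as i < sqrt(n), but exact
    below_root.append(i)
    i += 1
print(below_root[-1])  # prints 31, since 31*31 = 961 < 1000 <= 32*32
```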

Another example:

if you need to compare lengths and the coordinates are integers,

you might just compare the squared lengths.

The length is the square root of x squared plus y squared,

and its square, x squared plus y squared, is an integer.

And as lengths are non-negative,

square root of a is less than square root of b

exactly when a is less than b.
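
A sketch of comparing lengths by their squares (the function name is mine):

```python
# Compare two vectors' lengths without any square roots:
# for non-negative values, sqrt(a) < sqrt(b) exactly when a < b.
def is_shorter(p, q):
    px, py = p
    qx, qy = q
    return px * px + py * py < qx * qx + qy * qy  # exact integer comparison

print(is_shorter((3, 4), (5, 1)))  # prints True: 25 < 26
```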

However, sometimes you cannot get by with integers and are forced to use doubles.

The most common case is when you need to output some value which is not an integer.

The problem statement usually specifies

how accurately you should output the value.

That is because with doubles there are always errors.

Usually it sounds like: the absolute or relative error

should not be greater than 10 to the minus 6.

It means that if you output a value whose absolute difference from

the actual value, or relative difference, is not bigger than 10 to the minus 6,

then this value will be considered a correct answer.

One of the pitfalls here is that your number may have enough precision,

but on output it could be automatically rounded and so have a larger error than allowed.

So it's safer to always output some fixed number of digits after the decimal point.

On the slide, you can see how to do this in C++, Java, and Python.
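
The slide itself is not in the transcript, but as a sketch, one common way in Python is formatted output with fixed precision:

```python
# Print a value with a fixed number of digits after the point,
# so output rounding cannot introduce more error than allowed.
x = 1.0 / 3.0
print(f"{x:.9f}")  # prints 0.333333333 -- nine digits, safe for a 1e-6 tolerance
```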

However, even when doubles are absolutely necessary,

you should still try to stay in integers for as long as possible,

because the fewer floating point operations, the fewer errors.

First example: say you need to sum three fractions.

The straightforward way is just to convert each to a double and then sum them up;

that will be five floating point operations.

But there is another way: you could sum the fractions exactly over a common denominator in integers,

and only at the very end divide the numerator by the denominator as doubles,

and that will be one floating point operation.
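
A sketch of the one-division approach in Python, for three fractions a/b + c/d + e/f (the function name is mine):

```python
# Sum three fractions exactly in integers over a common denominator,
# then perform the single floating point division at the end.
def sum_three(a, b, c, d, e, f):
    num = a * d * f + c * b * f + e * b * d  # exact integer numerator
    den = b * d * f                          # exact integer denominator
    return num / den                         # the only floating point operation

print(sum_three(1, 2, 1, 3, 1, 6))  # prints 1.0, since 1/2 + 1/3 + 1/6 = 1
```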

A similar example: say you need to do three divisions in a row.

You could just divide in doubles, and so there will be

three divisions, each a floating point operation.

But you could also multiply all the divisors first and then divide by this product.

Because the divisors are integers,

only one operation will be floating point.
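
Sketched in Python:

```python
# n / a / b / c with a single floating point division:
# multiply the integer divisors exactly, then divide once.
def divide_by_all(n, a, b, c):
    return n / (a * b * c)  # a*b*c is exact in integers; one floating point operation

print(divide_by_all(100, 2, 5, 10))  # prints 1.0
```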

Now, imagine you need to compute the square root of 1000,

multiplied by some coefficients.

You could just take the root as a double, then multiply by the first coefficient, then by the second.

That will be three floating point operations.

But you could also take everything inside the square root:

multiply it all as an integer, and only after that

take the square root, and it will be only one floating point operation.
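
A sketch in Python, assuming non-negative integer coefficients a and b:

```python
import math

# sqrt(1000) * a * b computed as one square root of an exact integer:
# sqrt(1000) * a * b == sqrt(1000 * a*a * b*b) when a, b >= 0.
def scaled_root(a, b):
    return math.sqrt(1000 * a * a * b * b)  # the only floating point operation

# Compare with the three-operation version:
print(abs(scaled_root(2, 3) - math.sqrt(1000) * 2 * 3) < 1e-9)  # prints True
```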

On the other hand, with integers there could also be overflow.

So you need to watch for that and switch to doubles if it could happen.

So, if you decide to use doubles, you need to know how to do it correctly

without losing to the errors.

Some usual things do not work,

for example, the equality operator.

It checks that the values are exactly equal,

and even small errors render it practically useless.

The solution is: let's consider a and b equal

if their absolute difference is no greater than some small value epsilon.

This way, even with small errors,

we would still consider equal values as equal.

The other comparisons also need to change.

For example, a is strictly less than b should become a is less than b minus epsilon,

because the numbers from b minus epsilon to b are now considered equal to b.

A is less than or equal to b should become a is less than b plus

epsilon, because the numbers from b to b plus epsilon are also considered equal to b now.

So far we've defined equal to, strictly less than,

and less than or equal to; greater than is defined symmetrically.
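
These comparisons can be sketched as small Python helpers (the names and the epsilon value are my choices):

```python
EPS = 1e-9  # tolerance for all comparisons

def eq(a, b):
    return abs(a - b) <= EPS  # equal up to errors

def lt(a, b):
    return a < b - EPS        # strictly less: [b - EPS, b] already counts as b

def le(a, b):
    return a < b + EPS        # less or equal: [b, b + EPS] also counts as b

print(eq(0.1 + 0.2, 0.3))  # prints True, despite the representation error
print(lt(0.1 + 0.2, 0.3))  # prints False
```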

Using an epsilon to compare is necessary, but it also introduces its own inaccuracy.

So you might want to just use plain a is less than b when it's good enough.

For example, if you are sorting values and it doesn't matter how close values are ordered,

you might just use the standard less-than operator.

There are other operations which are affected by errors.

For example, floor(a), which rounds

the value down to the nearest integer,

and ceil, which rounds up to the nearest integer.

The problem here is that

even a small change of the argument could change the result by one.

The floor of one is one,

but the floor of one minus a tiny amount is zero.

So, if our value should be an integer but it has some small negative error,

we would get our value minus one if we use floor, and that is bad.

So, you may write instead of floor(a) just floor of a plus epsilon,

and if the epsilon is big enough, it will overcome all the errors.

And if it's sufficiently small, it will never happen

that a is less than some integer but a plus epsilon is greater than it.
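
A sketch of the safer floor in Python:

```python
import math

EPS = 1e-9

def safe_floor(a):
    # Compensate a possible small negative error in a before flooring.
    return math.floor(a + EPS)

x = 1.0 - 1e-15        # "should be" 1, but carries a tiny negative error
print(math.floor(x))   # prints 0 -- off by one
print(safe_floor(x))   # prints 1 -- the intended result
```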

So, we need the epsilon constant for comparisons now.

But how to choose that constant?

In fact, we could take each operation in

our program and carefully bound the error which arises there.

Then we could take the highest of the bounds as the value of epsilon.

Then all our errors will be no greater than

epsilon, and it is likely to be small enough.

But this would be tedious and time consuming,

so it's hardly ever done on a contest.

Instead, people just take some feasible value like 10 to the minus 8 or

10 to the minus 9 and hope that it works.

What if your chosen value of epsilon doesn't work?

Using debugging, you could find the first place where the error appears.

It may be that the bug is completely unrelated to the epsilon.

But if it is related, it has to do with

comparing two values which are computed correctly but compared incorrectly.

There are two cases.

Either unequal values are treated as equal, and that

means our epsilon is too big, and we should make it smaller.

For example, we could take the next power of 10 and see if it's small enough.

Or the errors got too high and equal values are treated as unequal.

That means our epsilon is too small, and we should try to increase it.

We could take the next power of 10 and see if it works.

So, we've learned to use doubles only when they're really needed,

and we've learned to overcome errors by using an epsilon.

In the next video, we will cover some more practical issues.