As I said in the previous unit, we decided to open the module on operating systems with a discussion of some efficiency matters. So, once again, here is the Jack operating system, and let us focus on the top-left class, Math. Math is a collection, or a library, of several widely used mathematical operations. And the first thing that I would like to claim is that when it comes to operating systems, efficiency really matters. You see, every one of the operations that you see here is going to be used by numerous application programs, and perhaps by other classes in the operating system as well. So, the lower a service sits in the software hierarchy, the more efficient you want it to be, because it is going to support many more services on top of it. Now think about things like multiplication, division, square root. They'd better be efficient.

So, to illustrate, let us focus on the multiplication method, and to begin with, let's take a look at the client perspective. We can write some Jack program, and at some point we say x times y. And if you want, by the way, you can also multiply numbers using this syntax. In the first example, the top multiplication, the compiler is going to be smart enough to understand that the asterisk stands for invoking the method that you see at the bottom, and it will automatically replace 419 times 5003 with Math.multiply applied to these two numbers. So, you can do it either way; it doesn't matter. This is just a syntactic comment that I wanted to put aside. Someone has to deliver this functionality, and this agent is, of course, the Math class in our Jack operating system.

So, here is an example of a possible solution to multiplication. We're going to do what is known as repeated addition: basically, we add x to some accumulator, y times. Now, before we continue, I'd like to say a few words about the syntax that we're going to use in this unit, and in all subsequent units from now until the end of this module. We no longer use the Jack syntax of curly brackets, semicolons, and so on. Instead, we do something similar to Python: we use indentation to stand for blocks of code. And we do this because we want you to focus strictly on the algorithm, and ignore all sorts of syntactic clutter, which is not terribly important when we discuss the operating system.

So, with that in mind, what is the running time of this algorithm? Well, I'm not sure what the exact running time is, but I see from the code that it is completely dominated by y. If y is a large number, then we're going to have many iterations; in fact, we're going to have y iterations in this particular algorithm. And if you ask me what the running time of this algorithm is, then in computer science, when we're asked to give such assessments, we always take a very honest and conservative standpoint, and we try to give what is known as the worst-case answer. So, let us assume that n is the largest number that we may be asked to multiply, that is, the largest possible value of y. With that in mind, we observe that the algorithm's running time is proportional to n, because we have y iterations, which in the worst case means n iterations. And in every one of these iterations, let us assume that we have to perform something like ten machine-level operations after compilation and so on. So, overall, we have to perform ten times n machine operations.
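To make the repeated-addition idea concrete, here is a minimal sketch in Python-style code, in the spirit of the indentation-based pseudocode we just adopted (the name multiply_naive is my own, and I assume y is non-negative):

    def multiply_naive(x, y):
        # add x to an accumulator, y times
        total = 0
        for _ in range(y):          # y iterations, so the running time is proportional to y
            total = total + x
        return total

    # for example, multiply_naive(419, 5003) performs 5003 additions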
So whether it's ten, or 20, or 40, it doesn't really matter, because n dominates the picture here. And therefore, the running time is simply proportional to n. If n is a huge number, this algorithm will run extremely slowly. So, let's remember this algorithm, put it aside, and consider now another solution. And this solution is based on what we learned in elementary school. So, for example, if I ask you to multiply 27 times nine in binary, then I can do exactly what we learned in grade two or three, I don't remember which. Basically, I take the first number, also known as the multiplicand. That's a big word, so I'll simply use x instead. So we take x, and we multiply it by the rightmost digit of the multiplier, which happens to be one, and so we get the same number, the same x, and we write it down. Then we take the same x, and we multiply it by the next digit up the significance ladder, so to speak, which happens to be zero, so we get a bunch of zeros. Then we do the same with the next bit, and once again we get a bunch of zeros. Then we do the same thing with the next bit, which happens to be one. And finally, we add up all these rows, column by column, and we get the result, which happens to be 243.

All right, now let me present to you a more systematic way of doing the same thing. Here, I take into consideration the fact that this number does not exist in a vacuum. It sits within some word in memory, and the width of this word in Jack is 16 bits, but in a modern computer it's typically 64 bits, or maybe even 128 bits. So, I write down the number, and then I pad it with zeros until the end of the word. I do the same with y, and here is the algorithm. I take x and write it down as is, then I shift x one position to the left and write it again, shift it another position to the left and write it again, and so on and so forth. In this particular case, I do it four times. Why four? Because the second number, the multiplier, has four significant bits, 1 0 0 1, so it's enough to carry out this shifting four times. And then I have to add the rows up.

Now, how do I add them up? Well, I take the multiplier, and I write it vertically, just for my own record keeping. And then I'm going to add up only those rows for which the corresponding bit, the i-th bit of the multiplier, is one. So, it follows from this example that I have to add up only the first and the last row. And indeed, if I do it, I will get the same result that I got before. So, these are two very similar procedures, yet the second procedure is easier to implement in a computer program, and indeed, here is the program that implements it. We start with some accumulated sum, which is initialized to zero, and then we use a variable that we call shiftedX. At the beginning, shiftedX is the same as x; that's the first row below the line, so to speak. And then we go through the following loop: for each i going from 0 to w - 1, where w is the number of bits in the representation (w can be 16, or 32, or 64; it is a relatively small number), if the i-th bit of the multiplier is one, I add shiftedX to the sum. And then, irrespective of this test, I do another shift, and I go back to the for loop and do the same thing over and over again, until I exhaust all the bits in the representation.
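Here is a minimal sketch of this shift-and-add procedure in the same Python-style code (the names multiply and shifted_x, the width parameter w, and the bit test written with Python's shift operator are my own choices, not the lecture's exact pseudocode):

    def multiply(x, y, w=16):
        # shift-and-add multiplication: exactly w iterations, using only additions
        total = 0
        shifted_x = x
        for i in range(w):                       # i = 0 .. w-1
            if (y >> i) & 1 == 1:                # is the i-th bit of the multiplier one?
                total = total + shifted_x
            shifted_x = shifted_x + shifted_x    # a left shift is the same as doubling
        return total

    # for example, multiply(27, 9) == 243, computed in 16 iterations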
So, what I described here is just a sequential way of describing what I did, waving my hands, when I gave you the example up there. You can stop the tape and verify that it's exactly the same thing. And at the end of this process, we return the sum at which we arrived.

All right, now there are some very nice properties of this algorithm. Once again, let us assume that n is the largest number that we may be asked to multiply. It may be a million, a billion, a trillion, whatever number you can represent given w bits. And yet, irrespective of the magnitude of this number, the running time of this algorithm is proportional not to n, but rather to w, the number of bits, which is fixed, right? Because we work with a fixed representation of either 16, 32, or 64 bits, and so on. And this constant, 16 or 64 and so on, is what determines how long this algorithm is going to run. Now, it also happens that the number of bits it takes to represent a number n is log base two of n, which is a very small number. I mean, even when n is very large, log two of n is still very small.

Now, there's another nice property of this algorithm. If you look at it, it involves only addition operations and shifts. Now, shift is a trivial operation in the world of binary numbers, and also note that a left shift is simply the same as adding a number to itself: shifted x becomes shifted x plus shifted x. That's another way to think about shift. So, any way you look at it, what we have here is an algorithm that runs just a few iterations, and in each iteration it carries out only addition operations, which is lovely. It means that we can easily implement this algorithm either in software, as we did here, or in hardware. We can easily build a logic gate, or a chip, that carries out multiplication using this very logic.

All right, now, if you're not convinced that logarithmic running time is good, I'd like to give you some examples, and in particular, take a look at this table here. What I have in the left column is some representative examples of the magnitude of n, the input. Now, if I have an algorithm whose running time is proportional to n, and let's say that the proportionality constant, or coefficient, is ten, then the running time of this algorithm will be as shown here in red. And you see that as the input becomes large, the running time grows proportionally. And what about log? Well, here is the log function applied to the very same inputs, and you see that irrespective of how the input grows, the log remains very small. We can see the dramatic difference between these two functions very vividly if we plot them one next to the other. On the x-axis we have the size of the input n. Here is the linear function, the red one, and you see that as n goes up, this function goes through the roof, so to speak. And yet, at the same time, the log grows extremely slowly. It will also reach infinity at some point, but it will take a lot of time; that happens only with astronomically large input sizes. And so, the attractive thing about log of n is that as you double the input, log of n increases by one unit only. In other words, as the size of the input doubles, an algorithm that has logarithmic running time requires only one additional iteration. Now, this statement is quite remarkable, and it has far-reaching practical implications.
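To check this doubling property numerically, here is a tiny Python-style illustration of my own (not part of the lecture's table):

    import math

    # a linear algorithm with coefficient 10 versus a logarithmic one
    for n in [1_000, 1_000_000, 1_000_000_000]:
        print(n, 10 * n, round(math.log2(n)))

    # doubling the input adds exactly one to the log:
    # log2(2 * n) = 1 + log2(n)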
To illustrate this, let's take an example which is close to home. All of you are searching the Internet, and you are doing it using search engines. And every one of these search engines uses some version of an algorithm called binary search (a small sketch of it appears at the end of this unit). And binary search has this nice property of having a running time which is logarithmic in the size of the input. What does this mean? It means, for example, suppose that the size of the Internet is 1 billion items, and you want to search every one of these items and find what you need as fast as you can. Well, according to this table, if you want to find one item out of a billion using a logarithmic running-time algorithm, it will take about 300 iterations. So far, so good. Suppose now that tomorrow the size of the Internet doubles; it is now going to be 2 billion items. How long will it take now to find any particular item in this huge universe of information? Well, according to the logarithmic rationale, it will take 310 steps, because the constant here equals ten. So, from 300 we went up to 310. And suppose that a few days later, or a few months later, the size of the Internet doubles again, and it is now 4 billion items. How long will it take the search engine to find any one item in this huge pile of information? It will take the algorithm 320 iterations. So, you see that as the size of the Internet shoots through the roof, the search time is logarithmically bounded, and it never becomes too large. And this is, I think, a dramatic example of the incredible power of logarithmic running time. That's why in computer science, whether it's theoretical computer science or applied computer science, we're always delighted when we manage to come up with logarithmic running-time algorithms.

All right, so, to recap, this has been a unit about efficiency, but as a side effect of talking about efficiency, we also talked about fast implementations of multiplication. And what we'll do in the next unit is continue from this point onward and discuss the implementation of some other mathematical operations.
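And as promised, here is a minimal sketch of binary search over a sorted list, in the same Python-style code (my own illustration; real search engines use far more elaborate index structures, the point is only the logarithmic number of iterations):

    def binary_search(sorted_items, target):
        # each iteration halves the search range,
        # so roughly log2(len(sorted_items)) iterations in total
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid               # found: return the index
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1                        # not found

    # doubling the size of the list adds only about one more iteration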