We're discussing further material on data types. Probably the most important data type in the language is int. So int gives us a basic way of manipulating integer data, and that data type includes variations on it, which will be explained shortly. There is a short form, a long form, and an unsigned form. So let me give you my understanding of how to approach learning a difficult subject like programming. In programming, you have lots and lots of stuff. Much of it is obscure. But if you concentrate on the things that are typical, and an int is a typical data type, then you can get 90 percent of the value of your study or your work by only looking at 10 percent of the entire language. So this is one of the places where the 90/10 rule shows up. Really make an effort to understand the data type int, and that will help you with all the other and more complicated data types, such as the floating types.

On your typical modern machine, an int is stored in 32 bits. That's 32 zeros and ones, and it's worthwhile, especially if you're going to go into computer science, to learn how to work in binary. That's base two, as opposed to base 10, which is how we write most things. It's also worthwhile to learn how to write in octal, which is base eight, and hexadecimal, which is base 16. We may show some of this later on. Now, when you're working with an int stored in 32 bits, what you can represent is limited, and it's limited to roughly plus or minus two billion; I've written it out here explicitly. You can go positively up to 2,147,483,647, and your smallest number, your largest negative number, another way to put it, is -2,147,483,648. It's not symmetric because of the fancy way people represent integers on the machine, called a two's complement representation. Typically, the first bit is in fact the bit that determines how you interpret the sign.

So you also have these other forms of int: short, long, and unsigned. Short means you use less data; long means you use more, a bigger range. So short can mean a smaller range, because typically you're using fewer bytes. You'll see in a second, when I write an example, that on my machine short is two bytes, an ordinary int is four bytes as we've already said, and long will be eight bytes. Then there's a further type which is called unsigned. With unsigned, you want the data to be interpreted only as positive. So in fact, if you have four bytes and you interpret them only as positive, you now have a range from zero to a little more than four billion. So again, this can be very helpful, and it also means that in a domain where you don't need negative integers, you have the ability to make the machine work only with that type.

Now, for each of these types there's a way to write a constant, and I'm just showing you here: 35 is an ordinary int, 35L is a long (the suffix can also be a lowercase l), 35U is unsigned (the suffix can be a lowercase u or an uppercase U), and 35UL is unsigned long. So you should have all of these available on your machine. It is quite important to know which type you're using, both for input and output, because in input and output the different types take different formats. You may find yourself mixing the types, and if you mix the types you have to know what domain you're in and you have to know how the operations affect values taken from that domain.
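As a minimal sketch of what was just said, and assuming a machine where int really is 32 bits, the standard header <limits.h> gives you the two limits written out above, and the suffixed constants pair up with their own printf formats. The variable names here are just for illustration, not from the lecture's code:

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* The int range on a machine with 32-bit ints:
       2,147,483,647 down to -2,147,483,648.        */
    printf("INT_MAX = %d, INT_MIN = %d\n", INT_MAX, INT_MIN);

    /* Constant suffixes select the type of the literal. */
    int           a   = 35;    /* plain int                 */
    long          al  = 35L;   /* long (35l also works)     */
    unsigned      au  = 35U;   /* unsigned (35u also works) */
    unsigned long aul = 35UL;  /* unsigned long             */

    /* Each type takes its own printf format. */
    printf("%d %ld %u %lu\n", a, al, au, aul);
    return 0;
}
```

On such a machine this prints the two limits and then 35 four times, each through the format that matches its type.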
So a classic problem in programming with a language like C is the fact that the divide operator can be either an integer operator or a floating-point operator. If you have two integers, then 2 divided by 3 is 0, with the remainder discarded. If it were 2.0 divided by 3, or 2.0 divided by 3.0, or 2 divided by 3.0, where one or both of the arguments are double, then indeed the divide would be the floating-point kind: the result would have type double and its value would be 0.666.... So this mistake gets made often. You're thinking you want to do something like average over some integer data, you forget that dividing can be either an integer operation or a floating-point operation, and you get a mistake.

Let's look at some code. So here's some code that I wrote to illustrate some of these ideas, and you can play with code like this. You should play with code like this if these ideas are novel to you, and you should extend it. You can see in this code I have a short; short is a keyword and it really means short int. I could say short int, but by just saying short I'm typing less. So here's my variable name, and I assign it 5. Here's a normal int, normal_a, and I've assigned it 67. Here's an unsigned int, and in that literal I write 67 with a U. This is a long, and again, in order to make sure the literal is in the right domain, I say 67L. Then here I'm doing some things to show you what the effects of using these are. So here's that divide-by-two problem, where first I divide by the integer 2 and then I divide by the floating-point 2.0, and you'll see a difference show up. You'll also see that here, for short, I use the format %hd, whereas normally I could have just used %d. Here's another place where you'll see that the format type affects how something is represented: 67, which is the value of normal_a, when printed with the format %c shows up as some kind of character; you'll see it in the ASCII table. Here is where we can print sizes on my machine: sizeof looks at the type, so for short_a, which is of type short, it reports what size it's stored in on this machine under this system, and we'll see those values as well.

Let me run that. There we are. short_a was 5; divided by the integer 2 we get not 2.5 but 2. But short_a divided by the float 2.0 gives, as it should, 2.5. Sixty-seven interpreted as a character from the ASCII table is a capital C. Then the sizes in bytes on my machine are two, four, four, and eight; the second four is for unsigned, and then the biggest is long, which all makes sense.
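Here is a small program in the spirit of the one being described. The variable names and the exact printf lines are my reconstruction rather than the lecture's own file, but on a typical 64-bit system it produces the results discussed above: 2 and 2.5 for the two divides, C for the character, and sizes 2, 4, 4, and 8.

```c
#include <stdio.h>

int main(void) {
    short         short_a    = 5;    /* short is shorthand for short int    */
    int           normal_a   = 67;   /* plain int                           */
    unsigned int  unsigned_a = 67U;  /* U suffix keeps the literal unsigned */
    long          long_a     = 67L;  /* L suffix makes the literal a long   */

    printf("short_a = %hd\n", short_a);                  /* %hd is the format for short */
    printf("short_a / 2   = %d\n", short_a / 2);         /* integer divide: 2           */
    printf("short_a / 2.0 = %g\n", short_a / 2.0);       /* floating divide: 2.5        */
    printf("normal_a as a character = %c\n", normal_a);  /* 67 is 'C' in the ASCII table */

    /* sizeof reports how many bytes each type occupies on this machine. */
    printf("sizes in bytes: %zu %zu %zu %zu\n",
           sizeof(short_a), sizeof(normal_a),
           sizeof(unsigned_a), sizeof(long_a));
    return 0;
}
```

The exact sizes are not fixed by the language, only ordered (short is no bigger than int, which is no bigger than long), so extending this program and rerunning it on your own machine is a good exercise.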