This is Lecture 1 for Lesson 3. So, where are we now? Locating things used to be really difficult, and it was done using time-intensive methods that weren't very accurate. You probably associate location technology today with GPS. GPS stands for the Global Positioning System, a network of satellites built by the US military beginning in the 1970s, and extended and enhanced a great deal since then. It was not invented by Apple or Google, believe it or not. GPS is just one example of a global navigation satellite system, or GNSS; others include the Russian GLONASS system, which has the worst acronym ever, and the EU's Galileo system. It's already common for consumer devices that you and I own to use a GNSS to derive locations, and most of them use GPS. The location signal from space is often augmented by WiFi hotspot and cellphone tower signals, which together provide much better accuracy and coverage inside buildings and in urban environments, for example. Consumer-grade devices can locate positions to within a few meters, but they can be off by hundreds of meters in poor conditions. There are also professional systems that use fancier hardware and software; they're used for surveying property lines and other serious geo-tasks, and they can get you down to just a few centimeters, which is pretty good.

A GPS-enabled device can give you a point location defined by latitude and longitude coordinates. Right now, where I'm sitting, I'm at 40.77 degrees north and 77.89 degrees west. If I walked around my yard collecting multiple points, I could create a polygon that represents the footprint of the property I own; I could define that entire space. If I collected points in a row between my couch and my fridge, I would have a line feature, and I would have a snack. So points, lines, and polygons are the primary forms of spatial vector data, and vector data is one of the two primary spatial data types.

The second data type I want you to know about is raster data. When you think about raster data, I want you to think first about virtual globes. You've probably used something like Google Earth a whole bunch; that's a virtual globe, and you might even have taken this class because of an experience using one. Virtual globes like Google Earth have made images of the Earth easily accessible and fascinating to millions of people. Most geographic image data comes from satellites and airborne sensors; that's what populates something like Google Earth. You can even use your own DIY drone now: strap a phone onto a quadcopter, fly it around, and take pictures of your neighbors tanning if you want. Geographic image data like this is raster data, the second major data type I want you to know about. Raster data captures information by assigning values to cells in a grid, and the size of those grid cells determines how much resolution you have in the image. Most of you have digital cameras with a sensor rated at a certain number of megapixels; that's exactly the same principle.

Here you can see a graphic showing a made-up example of vector data versus raster data for the same place. I've concocted a fake geography with Fancy City and Less Fancy City, and Slinkyhead Lake, and Big Green Forest, and Wheat Fields, and all that kind of stuff. On the left, you'll see rather large, regularly sized grid cells with values assigned to them. That's what you would get if you took an image, in this case a very coarse-resolution image, of the fake landscape I've created. On the right, you've got much more detailed vector geometry. Vectors allow you to be much more precise about the boundaries of certain features, because you're not limited to the resolution of a sensor. There are trade-offs to using both of these data models, but they're both extremely important to geography.
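If it helps to see those two data models concretely, here is a minimal sketch in plain Python. Everything in it, the coordinates, the cell size, and the six-by-six grid, is invented for illustration and isn't tied to any particular GIS package: a point, a line, and a polygon stored as vector geometry, and then the same polygon burned into a coarse raster grid by checking which cell centers fall inside it.

```python
# A minimal sketch of the two spatial data models in plain Python.
# All coordinates, the cell size, and the grid extent are invented for illustration.

# Vector data: features stored as coordinate geometry (longitude, latitude).
point = (-77.89, 40.77)                                   # a single GPS fix
line = [(-77.890, 40.770), (-77.889, 40.770),
        (-77.889, 40.771)]                                # couch-to-fridge track
polygon = [(-77.891, 40.769), (-77.888, 40.769),
           (-77.888, 40.772), (-77.891, 40.772)]          # property footprint

def point_in_polygon(x, y, poly):
    """Ray-casting test: is the location (x, y) inside the polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                          # edge crosses this latitude
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Raster data: the same polygon "burned" into a grid of cells.
# Each cell holds a value (1 = inside the property, 0 = outside);
# the cell size sets the resolution, just like pixels on a camera sensor.
cell_size = 0.001                                         # degrees per cell, deliberately coarse
x0, y0 = -77.892, 40.768                                  # lower-left corner of the grid
cols, rows = 6, 6

raster = []
for r in range(rows):
    row = []
    for c in range(cols):
        cx = x0 + (c + 0.5) * cell_size                   # sample at each cell center
        cy = y0 + (r + 0.5) * cell_size
        row.append(1 if point_in_polygon(cx, cy, polygon) else 0)
    raster.append(row)

for row in reversed(raster):                              # print north-up
    print(" ".join(str(v) for v in row))
```

If you shrink cell_size (and add cells to cover the same area), the raster version hugs the polygon boundary more and more closely, which is exactly the resolution trade-off the graphic is getting at.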
Now, the science and technology associated with imaging the Earth from above is called remote sensing. It's a huge discipline that involves a lot of engineering, geography, and analytical methods, and a lot of math too. And it's not just photographs; it can also involve lasers and infrared sensors, forms of light that are not visible to the human eye.

Here's an example of an infrared image, a pretty interesting one, I think. It was taken in 2011 after a major tornado hit Tuscaloosa, Alabama. Because infrared light is not visible to the human eye, we have to add false colors to the image to make it visible to us. In this case, the red areas on the map are vegetated areas that are alive, and the blue areas include impervious surfaces and dead areas: urban places, water bodies, things that are not alive, essentially. And you can see the huge swath across the map where the tornado caused an enormous amount of damage. This is a good example of an infrared detection method for imaging the Earth that provides different information than we would get from a photograph alone.

And here are a couple of examples using lidar, which is a laser-based method for measuring elevation on the ground. It's extremely precise; you can get down to just a couple of inches of accuracy with this stuff. What I'm showing you here are two images, before and after Hurricane Sandy in 2012, of part of the New Jersey coastline. The top image shows the elevation along that coastline before the storm hit, and you can see the little footprints of houses sticking up above the sandy base, and the roads, and things like that. In the second image, on the bottom, you can see a brand new channel that has been gouged out where there wasn't anything before, and a lot of the beach has eroded away. So laser detection through lidar is a great way to map these very, very precise differences in the actual landscape topography.

Here is a map showing the difference between the pre- and post-storm lidar images. The red areas show where the elevation has actually decreased from the first image to the second, and there are some places on the north side of the image where sand has actually been deposited and added since the hurricane hit.

So, images like these show that it's not only visible light, just straight-up normal aerial photography and satellite imagery, that we can work with. We can also make invisible things visible, using lidar and infrared sensors, for example. Radar is another option for making the environment visible to us and using it to make maps.
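Going back to that lidar difference map for a second: the underlying arithmetic is just cell-by-cell subtraction of two elevation rasters. Here is a toy sketch in plain Python; the little grids and the elevation values are invented for illustration, and a real analysis would difference full-size lidar-derived rasters in GIS software rather than hand-typed lists.

```python
# A toy sketch of the lidar change-detection idea: subtract the pre-storm
# elevation grid from the post-storm grid, cell by cell. The grids and the
# elevation values are invented for illustration.

pre_storm = [              # elevations in meters before the storm
    [2.1, 2.3, 2.4, 2.2],
    [1.8, 2.0, 2.1, 1.9],
    [1.5, 1.6, 1.7, 1.6],
]
post_storm = [             # elevations in meters after the storm
    [2.2, 2.4, 2.4, 2.3],  # a little sand deposited on the north side
    [1.8, 0.4, 0.3, 1.9],  # a new channel gouged through the middle
    [1.4, 1.3, 1.2, 1.3],  # beach eroded along the south side
]

# Cell-by-cell difference: negative = elevation lost (erosion, the red areas
# on the map), positive = elevation gained (deposition).
change = [
    [post - pre for post, pre in zip(post_row, pre_row)]
    for post_row, pre_row in zip(post_storm, pre_storm)
]

for row in change:
    print(["{:+.1f}".format(v) for v in row])
```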