Now we're going to move from the Edge to the Cloud to learn about AWS's object, block, and file storage services. Amazon S3 is an object storage architecture; S3 stands for Simple Storage Service. It lets you drop static objects into a bucket. In this graphic, object storage is the one where someone is dropping objects into a volume without any organization. If your application needs to pull up an object, for example, to show a user a photo of an Airbnb, it gives S3 the object's identifying key and receives the photo back. This unstructured approach to storage is less expensive than block or file storage architectures.

I'm going to go through three common use cases for S3. The first is hosting many static images. For example, Teespring allows artists to sell their designs on clothing, mugs, mouse pads, stickers, and other merchandise. An average of 2,400 creators sign up to Teespring every day, and any given listing commonly needs more than 500 unique images to show the buyer an accurate view of the product. Teespring uses S3 to store and serve all of those unique images for creators and buyers. Pinterest is a global social network built around pinning images to boards. It uses S3 to help more than 450 million users share images, animated GIFs, and videos by pinning 240 billion of them onto more than five billion boards.

Another common use case for S3 is hosting a static website. You'll get an AWS walkthrough on how to do this at the end of the lecture if you're interested, but remember that this is a common use case that may appear on the exam, namely using S3 to host a static website.

A final common use for S3 is storing logs and reports from other AWS services. For example, an AWS service called CloudTrail logs all API activity: who pushed what buttons and who ran which commands. You can configure CloudTrail to deliver those logs to an S3 bucket where you can review them later.
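The key-based lookup described above can be sketched in a few lines of Python. This is a toy in-memory stand-in for an S3 bucket, not the real AWS SDK; the names (`ToyBucket`, `put`, `get`) are invented for illustration.

```python
# Toy in-memory stand-in for an S3 bucket: a flat mapping from
# object keys to object bytes, with no real hierarchy.
class ToyBucket:
    def __init__(self, name):
        self.name = name          # bucket names must be globally unique in real S3
        self._objects = {}        # key -> bytes

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        # Retrieval is by exact key, just like fetching an object from S3.
        return self._objects[key]

bucket = ToyBucket("my-listing-photos")
bucket.put("listings/1234/front-door.jpg", b"<jpeg bytes>")
photo = bucket.get("listings/1234/front-door.jpg")
```

Note that the "folder" in the key (`listings/1234/`) is just part of the key string; the store itself is flat, which is exactly the unstructured property that makes object storage cheap.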
You should keep the following limitations of object storage architectures in mind when working with S3; these may also appear on your exam. Buckets exist in a specific region but require a globally unique name. Even if you have two buckets in different AWS regions that have nothing to do with each other, you need to give both of them unique names. The unique name comes in handy when you use S3 buckets to host static websites. Buckets have no hierarchy, so if you need a true directory tree with subfolders and sub-subfolders, you would want a storage service other than S3, which we'll cover later in this lesson. You can add or delete objects, but you cannot modify an object in place; you can only replace it with a new copy. This means S3 is not the right choice for anything that requires routine configuration or updates, such as operating systems or applications. Otherwise, instead of regularly patching an operating system, you'd have to constantly re-install the same operating system to keep it at its current version.

We're going to deep dive now into several concepts that apply across many AWS services, but we're going to use S3 to illustrate them. The first concept is folders. I'm mentioning folders specifically because S3 has no hierarchy, so a logical assumption would be that there are no folders, but S3 does support them. A folder in S3 is really a shared prefix on object keys, a naming convenience rather than a true file-system hierarchy. The second concept is versions. You can keep multiple variants of an object in the same bucket by using different versions. For example, I want to keep version 1 of my Airbnb photo, but I have a better, more recent photograph, so I make that version 2 of my Airbnb. The third concept is object lock. This prevents the editing or deletion of an object version, either for a fixed retention period or indefinitely.
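The versioning and object lock concepts above can be sketched together. This is a toy model with invented names (`VersionedBucket`, `lock`), not boto3; it only illustrates the behavior: each put of the same key adds a version, and a locked object's versions can't be deleted.

```python
# Toy sketch of S3 versioning and object lock (invented names, not the AWS SDK).
class VersionedBucket:
    def __init__(self):
        self._versions = {}   # key -> list of object bytes (version 1 first)
        self._locked = set()  # keys whose versions may not be deleted

    def put(self, key, data):
        # Putting to an existing key adds a new version instead of overwriting.
        self._versions.setdefault(key, []).append(data)
        return len(self._versions[key])  # the new version number

    def get(self, key, version=None):
        # Latest version by default, or a specific version on request.
        versions = self._versions[key]
        return versions[-1] if version is None else versions[version - 1]

    def lock(self, key):
        self._locked.add(key)

    def delete(self, key):
        if key in self._locked:
            raise PermissionError("object lock prevents deletion")
        del self._versions[key]

b = VersionedBucket()
b.put("airbnb.jpg", b"old photo")     # version 1
b.put("airbnb.jpg", b"better photo")  # version 2
```

A plain `get` now returns the newer photo, while version 1 remains retrievable, which is the Airbnb-photo scenario from the lecture.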
This is useful for complying with US securities laws that require regulated entities to never modify records, so that errors or wrongdoing are always preserved and can't be edited away. Our final concept is cross-region replication. Each bucket lives in a single region, which means two buckets in different regions can drift out of sync. If you want two buckets in two regions to contain exactly the same information, you can automatically replicate data between them using cross-region replication.

Next, I'm going to talk about two services that work with S3 and may appear on the exam. Since you can drop millions of items into S3 buckets, there's a service called Amazon Athena that helps you search the data in a bucket using Structured Query Language (SQL) queries. Amazon S3 Glacier provides archival storage classes that cost less than S3 but are slower when retrieving your objects. A use case for archival is when you're required by law to retain something for a set number of years, but you're confident you won't need to access the file during the usual course of business. It might take 12 hours to retrieve your objects in some cases. You can use lifecycle rules to archive objects automatically for cost savings. I just used the term lifecycle rules; what does that mean? Well, you can set lifecycle rules in S3 to transition objects from S3 to Glacier. For example, three months after you upload a new version of an object, you can move the earlier version to Glacier, then delete it from S3. But the point of Glacier is not only lifecycling; it's also retrieving objects when you need them. Retrieval takes two steps. First, you choose a retrieval option: Expedited (typically 1-5 minutes), Standard (typically 3-5 hours), or Bulk, which is the slowest (5-12 hours). Then, to retrieve the object, you give Glacier the identifying key for the archive you want restored back to S3.
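The lifecycle rule described above, "transition objects older than a threshold to Glacier," can be sketched as simple date arithmetic. This is an illustrative toy, not the real lifecycle configuration API; the function name and the object dictionary shape are invented for the example.

```python
from datetime import date, timedelta

# Toy sketch of an S3 lifecycle rule: objects older than the transition
# threshold move from the "standard" tier to the "glacier" tier.
TRANSITION_AFTER = timedelta(days=90)  # roughly the "three months" from the lecture

def apply_lifecycle(objects, today):
    """objects: {key: {"uploaded": date, "tier": str}} -- mutated in place."""
    for obj in objects.values():
        if obj["tier"] == "standard" and today - obj["uploaded"] >= TRANSITION_AFTER:
            obj["tier"] = "glacier"

objects = {
    "report-v1.pdf": {"uploaded": date(2024, 1, 1), "tier": "standard"},
    "report-v2.pdf": {"uploaded": date(2024, 5, 1), "tier": "standard"},
}
apply_lifecycle(objects, today=date(2024, 6, 1))
```

After running the rule, the January upload (older than 90 days) has moved to the archival tier while the May upload stays in standard storage, which is exactly the old-version-to-Glacier pattern from the lecture.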
At the end of this module, I've included a link to an official AWS tutorial on how to host a static website on S3. Remember, the hands-on walkthroughs in these tutorials aren't tested on the Cloud Practitioner exam, so if you are preparing for the exam, you can always do them after you pass.