Welcome back. I'm still here with Phil Kragnes as we talk about universal design, looking at cognitive impairments. How are we going to support cognitive accessibility? I'll remind you that we started by identifying two families of cognitive impairment, and it's probably a stretch to call them two families, they really belong together as part of a broad family, but we talked about Attention Deficit Hyperactivity Disorder, the challenge of people who have trouble avoiding distraction, of maintaining focus of attention for a sustained period of time, and we talked about the general collection of learning disabilities that include challenges with acquisition, storage, and retrieval. How do you get information, store it away, and get it back when you need it? And what we're going to learn from how we design for all of these cognitive impairments will, as it turns out, show us how to design good interfaces to begin with. So let's dig in. When we're talking about ADHD, we're talking about people who are facing challenges of distraction, lack of focus; it may be harder for them to identify relationships among items on a screen because something nearby will catch their attention, their eyes will dart over to it and look at the various things that are going on. What do we do to design for this type of user? >> Well, keeping it simple. I mean, it sounds pretty basic but it's true: the less information, or the less happening on the screen to be distracted by, the less trouble the person with ADHD is going to have with the interface, and that's true for all of us. Whether it be something on the screen or something in the environment, when we look away from main content we want to be able to return to it quickly. We want to identify where we left off and what our next steps are if we're moving through an application or a process. Maybe you have an infant or a young child vying for your attention, or a pet, or a spouse, a friend, whatever. You want to be able to go back to your screen and pick up right where you left off. You don't want to spend all this time getting back to where you were before you were distracted. Well, with ADHD, the information on the screen can be the distraction as well as the environment, and regardless of whether the distraction is on screen or environmental, we want to get people back to where they started or left off. And so we want to avoid really dense content. I mean, a handheld device is not a book; if you want dense prose, get a book, whether it's digital or a print copy, and you'll get plenty of dense prose. But in an app we're looking to perform a process, a function, acquire information, and we want it as simple as it can be. >> So, we actually covered an example in the first course in the sequence, TurboTax. And one of the things that's particularly nice about TurboTax is that it takes a form that is two sides, 130 lines of data, and it breaks it up into step by step. And it says, okay, right now we're going to answer the question, did you pay real estate taxes in 2015? Yes or no? If you say yes, it'll take you to another screen. If you say no, it'll take you to a different screen, but in the process of that, it keeps your focus on something very simple. It sounds like that kind of a design, which combines minimizing what's on screen, clearly labeling things, and then having a persistent status display so you know what you're doing right now. It says, we're in the middle of determining your deductions, we're starting by looking at taxes paid, did you pay this? That feels like it would be a pretty good design for somebody with ADHD.
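As a rough illustration of that pattern, here is a minimal HTML sketch of a one-question-per-screen step with a persistent status display; the markup, URL, and field names are hypothetical placeholders, not TurboTax's actual code.

```html
<!-- Hypothetical one-question-per-screen step with a persistent status display -->
<main>
  <!-- Persistent status display: tells the user where they are in the overall process -->
  <nav aria-label="Progress">
    <p>Deductions &ndash; Step 2 of 5: Taxes paid</p>
  </nav>

  <!-- One clearly labeled question at a time -->
  <form action="/deductions/real-estate-taxes" method="post">
    <fieldset>
      <legend>Did you pay real estate taxes in 2015?</legend>
      <label><input type="radio" name="paidRealEstateTaxes" value="yes"> Yes</label>
      <label><input type="radio" name="paidRealEstateTaxes" value="no"> No</label>
    </fieldset>
    <button type="submit">Continue</button>
  </form>
</main>
```

Because the status line persists on every screen, someone who is interrupted can glance at it and pick up right where they left off.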
>> Yeah, I mean, you don't want to make them try to form this big picture from the information on the screen. You want to say, hey, you are here: you are on this step of this process. And if we even back up a level, if we're on a screen where we can choose the different forms that can be filled out through this application, there's a little check mark, or a check mark and highlighted text, to identify that this was the last form you were working on. They don't have to hunt, they don't have to try to remember; those cues or those clues are already built in. >> Fantastic. So, when we look at learning disabilities more generally, the challenge being this complexity, this overload: how do I identify what's connected to what? How do I distinguish these alternatives? You've talked about the challenge that if all my alternatives start with the same language, I may have trouble remembering what's different; if they were clearly labeled differently, that would be easier. Again, it seems to start with the very same basic principle of, keep your design simple and straightforward. >> And I think many of the things we've talked about in the other modules for other types of disabilities hold true when we talk about learning disabilities. Individuals with learning disabilities often will use a text-to-speech application, much like a screen reader, and so providing those alternative text labels for images and representing information in multiple formats, whether it be sound, color, text, etc., is important so that the information is conveyed. We can use background color to a certain extent to say this block of information is related to this block, but if there was another chunk of text, maybe we want to change not only the background color, we want to change the font style. And we want to make use of clear headings, so we can say this is a heading, and then in another block say another heading, and maybe identify it as a subcomponent of the previous heading. And this is easy to do in HTML with heading tags: we can say this is a level-two heading, a main block of text, and this chunk is a level-three heading, meaning it's a sub-section. So there are many, many attributes that can be used, programmatic structures, things that can be ascertained visually as well as auditorily in those cases when a text-to-speech application is employed. >> And so again, we're reducing our information density, more bullets, less text, or at least we're reducing our information clutter, and we're using redundant coding. We can use color but also proximity, borders, labels, all of these different things to make things as visually, and where supported auditorily, distinct and identifiable as possible. >> Yeah, for example, moving away from the digital environment briefly, let's say you have a big tray full of nuts and bolts and you need to find a bolt of a specific size and the nut to fit it. Well, if they're all just jumbled together, have fun, you're going to be there awhile. But if we have a nice tray that's sorted into compartments, and the compartments holding the quarter-inch bolts and the quarter-inch nuts are both labeled quarter inch and are both in blue, you're going to get that job done a lot faster. Same thing holds for the digital interface. >> I may have to invite you over to organize my shop. >> [LAUGH] >> It doesn't hold for my physical interface either.
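To make the heading and redundant-coding ideas from this exchange concrete, here is a small, hypothetical HTML sketch; the colors, file names, and content are illustrative only. The heading levels convey the structure programmatically for text-to-speech users, while background color, font style, and alt text provide the same grouping cues in redundant forms.

```html
<!-- Hypothetical example: programmatic headings plus redundant visual grouping -->
<section style="background-color: #eef4ff;">
  <h2>Deductions</h2>                         <!-- level-two heading: a main block -->
  <img src="taxes-paid.png" alt="Receipt marked 'taxes paid'">  <!-- alt text for text-to-speech users -->

  <h3>Real estate taxes</h3>                  <!-- level-three heading: a sub-section of Deductions -->
  <p>Enter the property taxes you paid in 2015.</p>
</section>

<section style="background-color: #fff4e6; font-family: Georgia, serif;">
  <h2>Income</h2>   <!-- a different background color and font style mark this as a separate block -->
  <p>Wages, interest, and other income are reported here.</p>
</section>
```

Like the labeled, color-coded compartments in the tray, the grouping is signaled in more than one way, so no single cue carries all the weight.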
But as we think about this with universal design benefits, the beauty of designing for cognitively impaired users is that typically you're just doing good design. It's simpler, and it requires less attention, which matters because, as we all know today, you may have a really brilliant application that's sitting on the desktop, but that doesn't mean the person isn't at the same time carrying on a conversation with a live person, chatting with somebody on Facebook, and Snapchatting on their phone, so less attention is good. It's easier to resume, because we all get interrupted, and it's easier to identify the next steps. One of the things I thought of as we were talking is that in so many cases, particularly with web interfaces, they call the simple-to-use interface the mobile interface. They have a really complicated desktop web interface with 25 things on the screen at once, but if you're willing to trick it into going mobile, you have one thing on the screen at a time and you can just go and make your choices sequentially. And it may be that part of the answer is to think about what you would do if you had a lot more constraints on what you can show at a time, because that might be the best interface for a lot of folks. >> It's not just screen real estate, it's cognitive real estate that you have to be thinking about. And it's interesting, with that illustration: in its early days the Facebook site was completely inaccessible to screen reader users, but we could go to m.facebook.com and use the application. And so the question is, well, why create this inaccessible and, in many cases, unusable or nearly unusable interface for many, when you have a model in the mobile interface? So keeping it simple and easy to use just works for everyone. >> Well, fantastic. With that, we're going to wrap up our lecture on cognitive impairments and universal design. We'll see you again at the next lecture.