One way to select among alternatives is to measure the risk involved. Risk management is iterative, since some countermeasures may introduce new risks. We therefore repeatedly identify, assess, and control risks throughout the requirements engineering process, and into development if necessary. Poor risk management is a major cause of software failure. As we write requirements, we have a natural inclination to conceive of over-ideal systems: we don't think about the bad stuff, we just assume that nothing can go wrong. This is where security and safety concerns also tend to get left by the wayside. Avoid doing that. Failure also often occurs because of unrecognized or underestimated risks, which lead to incomplete and inadequate requirements that don't give developers enough information.

A risk is an uncertain factor whose occurrence may result in a loss of satisfaction of a corresponding objective. For example, a risk might be the chance that a passenger forces the train doors open while the train is moving. In a scheduling system, one risk would be that a meeting participant doesn't check their email frequently, and thus won't respond with their constraints in a timely manner. A risk has a likelihood of occurrence and one or more undesirable consequences. Each consequence can be further expanded into its own likelihood, that is, the chance of that bad consequence if the risk occurs, and its severity. For example, if the risk is that a passenger forces the doors open while the train is moving, one consequence is that passengers fall out of the moving train through the open doors. The consequence likelihood, which is not to be confused with the risk likelihood, is fairly small here. Severity is the degree of loss of satisfaction of the objective; in this case, a passenger falling out of the train is high severity. Evaluating severity depends heavily on the system and often involves security goals; security risks can have consequences ranging from low to high severity. Ask yourself: how likely is it that this risk could actually occur? If the risk occurs, what is the likelihood of detrimental effects? And what is the severity of those effects?

Another way to analyze risk is to examine the system and create risk checklists for it, covering product-related risks as well as process-related risks. Product-related risks lead to failure to deliver services, or failure to deliver the required quality of service. These often include security threats and safety hazards, and they can have a negative impact on the functional or non-functional objectives of the system. With functional risks, the product is unable to deliver the required services; with non-functional risks, the product cannot deliver required qualities of service such as security. Process-related risks are ones we often don't think about, but they can lead to delayed delivery of the product, cost overruns, deterioration of product quality, low team morale, and so on. For example, if your development company is facing a lot of turnover, the product may not be developed as quickly as expected; that is a process risk. The Software Engineering Institute has published process-oriented risk taxonomies and lists of questions to help spot project-specific risks. Some of these are included in the readings for your reference.
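To make the structure of a risk concrete, here is a minimal sketch in Python, assuming probabilities on a 0-to-1 scale and severity on a 1-to-10 scale. The class names, the numbers, and the exposure calculation are illustrative assumptions for this lecture's train-door example, not a standard from the readings.

```python
from dataclasses import dataclass, field

@dataclass
class Consequence:
    description: str
    likelihood: float  # P(consequence | risk occurs), on a 0-1 scale (assumed)
    severity: int      # degree of loss of objective satisfaction, 1-10 (assumed)

@dataclass
class Risk:
    description: str
    likelihood: float  # P(risk occurs), on a 0-1 scale (assumed)
    consequences: list[Consequence] = field(default_factory=list)

    def exposure(self) -> float:
        # Weight each consequence by how likely it is, given the risk,
        # and how severe it would be.
        return self.likelihood * sum(
            c.likelihood * c.severity for c in self.consequences
        )

# The train-door example: the risk itself is moderately likely, but the
# worst consequence is rare and catastrophic. Values are invented.
doors_forced = Risk(
    "Passenger forces doors open while the train is moving",
    likelihood=0.05,
    consequences=[
        Consequence("Passenger falls out of the moving train",
                    likelihood=0.01, severity=10),
    ],
)
print(f"Exposure: {doors_forced.exposure():.4f}")
```

Separating the risk likelihood from each consequence's likelihood, as the lecture does, keeps the two from being confused and lets one risk carry several consequences of differing severity.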
As you move through your requirements, start by performing component inspection to find product-related risks. Review each component of the system-to-be, including humans, devices, and software components. In your review, ask: can this component fail? If so, how? Why? And what are the possible consequences? The finer-grained your components, the more accurate your analysis will be. For example, in our train system the components include an on-board train controller, a station controller, a tracking system, a communication infrastructure, and probably more. At a finer level, the components would include things like the acceleration controller, the door controller, track sensors, et cetera. Take the door controller: yes, it could potentially fail. The doors could remain open when the train starts moving, or someone could force them open trying to get on at the last second. The risk is that people could then fall out, especially if the train is full. So ask: in what situations could this occur? If people fall out, how severe is that consequence? Given the value of human life and the potential for injury, that one is catastrophic. As another example, suppose there is an inaccuracy in train position or speed information. This creates the risk that accelerations are computed from inaccurate estimates of the locations and speeds of following trains. Risks like these often stem from missing or inadequate functionality and from wrong assumptions about the environment. Do remember to consider safety hazards, security threats, vulnerabilities, information inaccuracy, poor performance, and reliability gaps within your requirements.
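One way such a component inspection might be recorded is as a simple worksheet, sketched below in Python. The component names, failure modes, and all numbers are invented for illustration, using the same assumed 0-to-1 probability and 1-to-10 severity scales as the earlier sketch; this is not a definitive hazard analysis of the train system.

```python
# Illustrative component-inspection worksheet for product-related risks.
# For each component we record: how it can fail, how likely that is,
# the consequence, and the consequence's likelihood and severity.
components = {
    "door controller": {
        "failure_mode": "doors open (or forced open) while the train moves",
        "risk_likelihood": 0.05,
        "consequence": "passengers fall out, especially if the train is full",
        "consequence_likelihood": 0.01,
        "severity": 10,
    },
    "tracking system": {
        "failure_mode": "inaccurate train position or speed information",
        "risk_likelihood": 0.10,
        "consequence": "accelerations computed from bad estimates of "
                       "following trains' locations and speeds",
        "consequence_likelihood": 0.50,
        "severity": 8,
    },
}

def exposure(entry: dict) -> float:
    # Same weighting as before: risk likelihood times consequence
    # likelihood times severity.
    return (entry["risk_likelihood"]
            * entry["consequence_likelihood"]
            * entry["severity"])

# Rank components so the riskiest failure modes get attention first.
for name, entry in sorted(components.items(),
                          key=lambda kv: exposure(kv[1]), reverse=True):
    print(f"{name}: {entry['failure_mode']} "
          f"(exposure {exposure(entry):.3f})")
```

Note how the ranking surfaces a non-obvious result: under these invented numbers, the tracking inaccuracy carries more exposure than the catastrophic but rare door failure, which is exactly why inspecting every component, not just the dramatic ones, matters.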