There's nothing more important to the success of any analytics strategy, plan, initiative, or project than support: the support of executives, business leaders, and the IT organization. Executives need to be on board with encouraging or mandating that the organization become more data-driven. They set the tone that gut instinct will no longer be the driving force of corporate strategy. Business leaders need to be committed as well, both to providing the necessary resources, including budget, for particular projects, and to ensuring that the analytic output actually gets acted upon. There's nothing worse than spending hundreds of thousands or millions of dollars on a data warehouse and analytic applications that don't get used, or whose results aren't heeded. Of course, the IT organization needs to lead the acquisition, implementation, and support of all the technologies we've discussed.

Executive and business leader support is one thing, but business culture is a more subtle factor in analytics success. Business culture may be a function of strong leadership, but in its absence, one needs to understand the inner dynamics at play. These include influencers, detractors, and saboteurs: folks who do and don't play well with others. Knowing who these individuals are, and navigating and appealing to each, can be critical for the success of any project, and especially analytics projects. Why? Because analytics, more than almost any other kind of enterprise initiative, has the potential to radically change the way decisions are made, the way the business operates, and the kinds of products and services offered, and it can significantly affect labor needs at both operational and strategic levels.

Speaking of not using analytic results, organizations claim that the number one reason for this is that the data isn't trusted. It suffers from, or is believed to suffer from, any number of data quality issues, including accuracy, completeness, timeliness, integrity, scale, and precision. The process of identifying data quality issues and fixing them can be laborious, and there are few technologies for automating these activities, so it's best to deal with them as far upstream as possible, in the systems and processes where the data originates. Data governance, as we discussed, is an enterprise program for dealing with the systemic issues of data quality through established principles, policies, practices, and a system of carrots and sticks, that is, rewards and penalties, all related to the collection, definition, handling, and usage of data.

Data literacy, as we discussed, is an emerging concept of getting business people to speak data, and getting data people to speak business. A data-illiterate organization, and data-illiterate individuals, are ill-prepared to develop, let alone leverage, analytic results. Often this starts with helping people develop an understanding of what data exists, what it means, where it is, how to get it, and how to use it. You can see how this links closely with data governance.

At the intersection of data literacy and data science is the creation of meaningful and useful analytic models, from simple queries to complex algorithms. Bad analytic models will lead to bad decisions, diminishing trust in, and support for, analytics overall. Remember when we talked about correlation versus causality: don't assume that because events are correlated, one necessarily causes the other.
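To make that reminder concrete, here is a minimal, hypothetical sketch (the variables and numbers are invented for illustration, not taken from the course) showing two measures that correlate strongly only because both are driven by a third factor, not because one causes the other.

```python
# A minimal, hypothetical sketch: two measures that are strongly correlated
# because both are driven by a shared factor, not because one causes the other.
import random
import statistics

random.seed(42)

# Hypothetical confounder: daily store foot traffic.
foot_traffic = [random.gauss(1000, 200) for _ in range(365)]

# Ice-cream sales and sunglasses sales both rise with foot traffic (plus noise),
# yet neither causes the other.
ice_cream = [0.05 * t + random.gauss(0, 5) for t in foot_traffic]
sunglasses = [0.02 * t + random.gauss(0, 5) for t in foot_traffic]

r = statistics.correlation(ice_cream, sunglasses)  # Pearson's r (Python 3.10+)
print(f"correlation between ice cream and sunglasses sales: {r:.2f}")
# A high r here says nothing about causation; the shared driver is foot traffic.
```

Running this prints a correlation well above 0.9, which is exactly the kind of number that tempts people into a causal story where none exists.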
Also, as we discussed, good analytic models are forward-looking, trying to identify and predict patterns or suggest actions, not simply stating what happened in the past. Part of good analytic modeling involves developing useful metrics and key performance indicators, or KPIs. Business leaders and executives often place too much emphasis on lagging versus leading indicators in measuring enterprise performance, and where leading indicators are utilized, senior executives sometimes don't do a good job of articulating or propagating them. Our research finds that almost 80 percent of enterprises use lagging indicators to assess their performance, even when considering new digital business opportunities. Without leading indicators to provide insight into the progress and performance of new business models, enterprises will often have difficulty measuring, managing, and justifying their initiatives, and showing a path to value. Without leading indicators, businesses lack the early warning signals necessary to validate the efficacy of their assumptions regarding specific digital strategies and execution, and to make any necessary course corrections more rapidly.

There are indicators of good indicators. These include that each metric has a pedigree ultimately linking it to the company's mission. Measures need to represent reality, not just estimates, interpolated data, or subjective input. Metrics need to be used by individuals on a regular basis to gauge their process performance. Metrics need to be tied to incentives and effect desired behavioral change; good metrics are not just actionable but actually result in action. Some metrics can be expensive to produce, so ensure that they're cost-effective. Finally, metrics change with the times, situations, and available data. They're not like fine wine; they don't get better with age.

As well, there are indicators of bad indicators, such as disagreement about or mistrust of the underlying data used to compute them; a lack of transparency, where users don't understand how the metrics are computed; conflicting metrics, which are common in a lot of organizations, especially large ones; metrics tied to punishments but not also to rewards; and metrics that are over-aggregated into fabricated ratios and indices with arbitrary weightings. Also, be aware of the potential, and the reality, of unintended consequences from the metrics. Too often, organizations default to a vendor's packaged metrics that come as part of an application; it's a good idea to think beyond those. Finally, too much reliance on metrics can quell innovation, so you want to be careful and aware of that as well.

One way organizations ensure good alignment of metrics, KPIs, and other indicators with one another, and with the corporate mission, is by using an established metrics framework, such as the balanced scorecard, stakeholder framework, strategy map, program logic model, cascading framework, or enterprise performance framework. We won't be able to get into each of these in this course, but if you find yourself involved in helping to define metrics, you might want to ask which framework is being used, or help select an overall framework to do so.

A general approach to defining metrics is to first define the perspective, that is, the overall scope or focus area of the metrics, such as financial or customer. Next is to establish the critical success factor, a general statement of performance for that perspective, such as "we want to increase annual revenue."
Third is to define an objective for the critical success factor, such as to grow the average selling price of product X. Fourth, define a goal, a specific target for the objective, such as "we want to increase our average selling price by 10 percent within the next quarter." Fifth is to define a way to measure the objective and goal that uses data and math, such as fiscal-year product revenue divided by the number of units sold. Also, don't forget to define thresholds. Metrics shouldn't be published or used without some ranges of acceptability or non-acceptability, for example bad, fair, good, and excellent, each of which should bring about a particular action. Finally, define the metric itself, including details of how it will be calculated from available data and how it will be reported and published. (A brief sketch further below pulls these steps together.)

Now, formal metrics and indicators are wonderful, but many great business advances come from experimenting with data: looking at it from different angles, combining it in new ways, and generating and testing hypotheses. Again, great data science has a bit of research and development, or R&D, to it. Oddly, most organizations have an R&D function for developing new products and services, but not an R&D function for developing new opportunities with data. Many organizations don't have a culture of experimentation, especially when it comes to testing predictive models, say, against customers. Others, like Google, Facebook, Yahoo, and Amazon, run thousands and thousands of experiments each day to determine how to generate more revenue. One of the keys to a majority of high-value analytics use cases, especially those that involve experimentation, is the incorporation of data from outside the organization. Move beyond the echo chamber of your own data to gather data from partners, open data sources, syndicated data providers, or data harvested from others' websites. If you don't, then don't expect to have any insight into customers, products, or services other than your own.

But some people just don't respond well to numbers. They're scared of them, or don't trust them. You have to be able to appeal to people's trust in authority, their emotions, and their logic, what Aristotle called the modes of persuasion: ethos, pathos, and logos. Only with storytelling, not just charts and numbers, can you appeal to each of these. Combine anecdotes and metaphors with numbers to produce the most convincing arguments. Storytelling in analytics is one of the topics heating up the quickest; organizations are looking for people who can turn data and analyses into compelling narratives. As we've seen, certain analytic technologies can even create stories automatically.

But even with stories, it's increasingly difficult to convince anyone of numbers or analyses that have been created in the dark by a single person, even if their analytic models and underlying data are sound. Business cultures as a rule are becoming much more collaborative, and so must analytics. Plugging away at your own spreadsheet for days or weeks and then presenting it at a meeting is so very 1990s. Fortunately, analytic technologies, even some spreadsheet offerings, include the means for people to develop models collaboratively, each working on or tweaking a different part, testing ideas, and validating one another's logic.
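Before moving on, here is the sketch promised above: a minimal, hypothetical example (the class, field names, figures, and threshold bands are all invented for illustration, not from the course) that pulls together the metric-definition steps, from perspective down to a measurable metric with threshold bands that can trigger action.

```python
# A minimal, hypothetical sketch pulling together the metric-definition steps:
# perspective, critical success factor, objective, goal, measure, thresholds,
# and the metric itself. All names and numbers are made up.
from dataclasses import dataclass

@dataclass
class Kpi:
    perspective: str               # e.g., financial or customer
    critical_success_factor: str   # general statement of performance
    objective: str                 # what we want to move
    goal: str                      # specific target for the objective
    thresholds: list               # (upper_bound, label) pairs, checked in order

    def measure(self, revenue: float, units_sold: int) -> float:
        """Average selling price: revenue divided by units sold."""
        return revenue / units_sold

    def rate(self, value: float) -> str:
        """Map a measured value to a threshold band such as bad/fair/good/excellent."""
        for upper_bound, label in self.thresholds:
            if value < upper_bound:
                return label
        return self.thresholds[-1][1]

asp_kpi = Kpi(
    perspective="Financial",
    critical_success_factor="Increase annual revenue",
    objective="Grow the average selling price of product X",
    goal="Increase average selling price by 10% within the next quarter",
    thresholds=[(90.0, "bad"), (100.0, "fair"), (110.0, "good"), (float("inf"), "excellent")],
)

value = asp_kpi.measure(revenue=1_050_000, units_sold=10_000)  # 105.0
print(value, asp_kpi.rate(value))                              # 105.0 good
```

The point of the threshold bands is the last step discussed earlier: each band should be tied to a particular action, not just published as a number.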
Related to analytic collaboration is analytic model reuse. If, for example, someone creates a model for predicting the reaction of suppliers to an increase of capacity or to moving a manufacturing plant overseas, why couldn't the same model be adapted to predict the reaction of employees? Employees are like suppliers, right? Unfortunately, few technologies exist for capturing, searching, and reusing analytic models. For now, it's mostly a manual exercise in most companies, involving a manual effort of cataloging models.

Last but certainly not least is ethics. Again, just because you can do it doesn't mean you should. With data and analytics, you will undoubtedly uncover opportunities to change the way your employer does business, to increase sales, or reduce costs, or enter new markets, or identify fraud. The economy is littered with examples of businesses that failed to consider the consequences, or as economists call them, the externalities, of using data in untoward ways: from Amazon recommending drug-making paraphernalia to those who purchased a particular kind of scale, to Target's analytics recognizing through purchase history that a woman was pregnant and then sending her baby-related promotions, only to get an uncomfortable call from a man asking, "Why are you sending my teenage daughter these ads?" Especially with analytics, you need to consider the unintended consequences that may lie at the edge. If your organization doesn't yet have a digital ethicist on staff or on call, it likely will in the coming years.