Hello, I'm Professor Brian Bushee. Welcome back. In this video, we're going to take a look at a fraud prediction model. So this is going to be a very different approach from what we've been looking at so far. So far, we've been trying to look at individual line items, like revenues or expenses, or components of earnings, like accruals or cash earnings, to try to see whether they look different or unusual based on what we would expect. The approach today is: let's just look at firms that have committed fraud, look at some firms that didn't commit fraud, and try to figure out how they're different. So, throw in as many ratios as we can think of and use statistical techniques to see which ratios best explain which firms end up being fraud firms versus which firms end up being non-fraud firms. So it's a different approach to detecting earnings management, and let's see how it works. Let's get started.

In general, the goal of fraud prediction models is to examine companies that have been caught committing fraud, to try to model how they differ from companies that are not caught committing fraud. And they'll use various statistical techniques to try to choose a small set of variables or ratios that provide the best explanatory power, that do the best job of identifying the fraud firms versus the non-fraud firms. The advantage of this approach is that the models are specifically tailored to characteristics of fraud firms. So we look at who we actually know is guilty to see how they differ from other companies. And the model parameters are fixed, which means they don't have to be re-estimated for each company. So forget the time-series and cross-sectional industry approaches; there's one set of model parameters that's used for every company in every year. Disadvantages: the models are based on companies that were actually caught committing huge frauds, which means they're gonna tend towards more extreme forms of earnings management, and they may not pick up more subtle forms of earnings management. And the models return a large number of false positives.

>> False positive? What the diva are you talking about?

>> Excellent question. So a positive result would be where the measure indicates something's a fraud; a negative result would be where the measure indicates it's not a fraud. So a false positive would be where the measure says it's a fraud, but it's really not. A false negative would be where the measure says it's not a fraud, but it really is. Any kind of model is gonna trade off these false positives and false negatives. And these models tend to try to reduce the false negatives, to reduce the situations where the model says it's not a fraud but it actually is, because you really wanna detect these frauds. But the cost is that you're gonna flag more companies as frauds that really aren't. But I guess better safe than sorry.
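To make that trade-off concrete, here's a minimal sketch, not from the lecture, using made-up risk scores and fraud outcomes for five hypothetical firms. It just counts the two error types at two different cutoffs:

```python
# Hypothetical illustration of the false positive / false negative trade-off.
# "score" is any fraud-risk score (higher = riskier); the booleans are the
# made-up truth about whether each firm actually committed fraud.
firms = [
    (0.9, True), (0.7, False), (0.6, True), (0.3, False), (0.2, False),
]

def count_errors(cutoff):
    false_positives = sum(s > cutoff and not fraud for s, fraud in firms)
    false_negatives = sum(s <= cutoff and fraud for s, fraud in firms)
    return false_positives, false_negatives

print(count_errors(0.8))  # strict cutoff: (0, 1) -> misses one real fraud
print(count_errors(0.5))  # looser cutoff: (1, 0) -> one extra false alarm
```

Lowering the cutoff catches the fraud scored 0.6 but also drags in the innocent firm scored 0.7, which is exactly the trade-off these models make when they prioritize reducing false negatives.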
We're gonna focus on just one of these models: the Beneish M-Score, which has been found to have performed the best over the last 20 years. It was developed by Daniel Beneish in 1999 and has successfully flagged 12 of 17 high-profile frauds since that point. The Beneish M-Score is based on eight ratios. A higher M-Score means a higher likelihood of manipulation; it's called the M-Score because it's a manipulation score. And the ratios we're gonna look at use comparisons between the current year and the prior year.

So the first ratio is the Days Sales in Receivables Index, or DSRI. This is gonna be days receivable divided by prior days receivable, where days receivable is defined a little bit differently than we did it in the past: it's receivables divided by sales, times 365. An increase in this could suggest revenue manipulation. The Gross Margin Index, or GMI, is prior gross margin divided by gross margin, where gross margin again is sales minus cost of goods sold, all divided by sales. This is gonna flag deteriorating earnings prospects, which may create an incentive to manipulate earnings during a period. Next is the Asset Quality Index, AQI. This is asset quality divided by prior asset quality, where asset quality is total assets minus the sum of current assets and PP&E, all divided by total assets. This is intended to measure soft assets or intangible assets, for which the realization of benefits is uncertain. For example, they could suggest excessive capitalization of costs, as we saw in the expense recognition videos. The Sales Growth Index, or SGI, is gonna be sales over prior sales. Growth companies often face pressure to meet earnings targets and have high capital needs, providing higher incentives to manipulate earnings. The Depreciation Index, or DEPI, is gonna be the prior depreciation rate over the current depreciation rate, where the depreciation rate is depreciation divided by the sum of depreciation and PP&E. A ratio greater than one could indicate the depreciation rate has slowed, which could reflect income-increasing depreciation policy changes.

>> Zut alors! Could you make this any more boring? At least this one is named for another citoyen français!

>> Yeah, I'm sorry I can't crunch through these eight ratios in a more exciting fashion, but at least we only have three more to go. And by the way, although Daniel Beneish does speak French, I think he's a Canadian citizen and not a French citizen. Next we have the SG&A Index, or SGAI. This is the SG&A ratio divided by the prior SG&A ratio, where the SG&A ratio is SG&A expense over sales. A ratio greater than one would indicate decreasing SG&A efficiency, which may predispose companies to manipulate earnings. Then we have Total Accruals to Total Assets, the TATA ratio. This is accruals divided by total assets, where accruals are defined similarly to what we've done before: net income before extraordinary items minus cash from operations. This is the proxy for non-cash earnings, which, as we've seen, are easier to manipulate than cash earnings. Finally, there's the Leverage Index, LVGI, which is leverage divided by prior leverage, where leverage is long-term debt plus current liabilities, all divided by total assets. This captures incentives to avoid debt covenants: if you're highly levered, you have more incentives to manipulate your earnings to avoid violating debt covenants and ending up in default.

So the M-Score is going to be an intercept, plus the sum of the weights on the variables times the variables themselves, where in this case the weights are gonna be the same for all companies. And here are the weights that we're gonna use for each of the variables.

>> Dude, like what do you mean by variable? And dude, it is like not cool to talk about a company's weight.

>> In this case, weight actually refers to a coefficient or parameter estimate, not to the actual heaviness of the company. So although the terminology's different, it's exactly the same approach as we took with discretionary accruals and discretionary expenditures. To get the normal levels of accruals or normal levels of expenditures, we took an intercept, and then we took the estimated b and c parameters times the values of the variables for the company. In this case, the variables are just the values of the ratios for each company.
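To make the "intercept plus weights times variables" idea concrete, here's a minimal sketch in Python. The eight ratio definitions follow the descriptions above; the weights are the published Beneish (1999) coefficients; the input field names (sales, cogs, ppe, and so on) are my own invention for illustration, not a standard API:

```python
def beneish_ratios(cur, pri):
    """Compute the eight Beneish ratios from current-year (cur) and
    prior-year (pri) financials, passed as dicts of raw line items."""
    def gross_margin(y):
        return (y["sales"] - y["cogs"]) / y["sales"]
    def asset_quality(y):  # "soft" assets as a share of total assets
        return (y["total_assets"] - y["current_assets"] - y["ppe"]) / y["total_assets"]
    def dep_rate(y):
        return y["depreciation"] / (y["depreciation"] + y["ppe"])
    def leverage(y):  # (long-term debt + current liabilities) / total assets
        return (y["ltd"] + y["current_liabilities"]) / y["total_assets"]
    return {
        # days receivable = receivables / sales * 365; the 365s cancel in the ratio
        "DSRI": (cur["receivables"] / cur["sales"]) / (pri["receivables"] / pri["sales"]),
        "GMI":  gross_margin(pri) / gross_margin(cur),   # prior over current
        "AQI":  asset_quality(cur) / asset_quality(pri),
        "SGI":  cur["sales"] / pri["sales"],
        "DEPI": dep_rate(pri) / dep_rate(cur),           # prior over current
        "SGAI": (cur["sga"] / cur["sales"]) / (pri["sga"] / pri["sales"]),
        # accruals = net income before extraordinary items - cash from operations
        "TATA": (cur["net_income"] - cur["cfo"]) / cur["total_assets"],
        "LVGI": leverage(cur) / leverage(pri),
    }

# Published Beneish (1999) weights: one intercept, one coefficient per ratio.
INTERCEPT = -4.84
WEIGHTS = {"DSRI": 0.920, "GMI": 0.528, "AQI": 0.404, "SGI": 0.892,
           "DEPI": 0.115, "SGAI": -0.172, "TATA": 4.679, "LVGI": -0.327}

def m_score(ratios):
    """M-Score = intercept + sum of (weight * ratio)."""
    return INTERCEPT + sum(WEIGHTS[k] * ratios[k] for k in WEIGHTS)
```

The key design point is the last function: because the coefficients are fixed, the same m_score() call works for any company in any year, with no re-estimation.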
And what I'll do now is I'll go through an example of how to put together this M-Score. So as an example, let's bring in the values of all these variables for company X in the year 2015. So I've filled those in in this column. One note at the bottom: sometimes it's hard to get the data to calculate SGAI, AQI, or DEPI. If you can't, Beneish has found that you can just set them equal to one and the model will still work pretty well. So now that we have the variables, we can take each weight times its variable. The intercept just carries over, but then for DSRI, we would take the weight on it times the value of DSRI for this company in 2015, and so on. Once we have the products of the weights and the variables, we add up this last column and we get an M-Score of negative 0.442. And what Beneish has found is that an M-Score that's greater than -1.78 is a red flag that indicates a potential manipulator, a potential fraud firm.

>> Dude, where did y'all come up with -1.78?

>> So the -1.78 was chosen to try to balance off these false positives and false negatives. It was the point in the original research where Daniel Beneish found that you got an acceptably small level of false negatives without getting too many false positives. Now, he probably could have rescaled the score so that the cutoff was zero instead of -1.78, but hey, we're accounting professors. Accounting professors are sort of geeky, and so we like things where the cutoff is -1.78 as opposed to something more user-friendly like zero.
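Continuing the sketch from above, here's how the missing-data fallback and the -1.78 cutoff might look in code. The ratio values below are made up for illustration; they are not company X's actual numbers, which the transcript doesn't reproduce:

```python
INTERCEPT = -4.84
WEIGHTS = {"DSRI": 0.920, "GMI": 0.528, "AQI": 0.404, "SGI": 0.892,
           "DEPI": 0.115, "SGAI": -0.172, "TATA": 4.679, "LVGI": -0.327}
CUTOFF = -1.78  # scores above this are red flags for potential manipulation

def m_score_with_fallback(ratios):
    """Per Beneish, if SGAI, AQI, or DEPI can't be computed from the
    available data, setting them to 1 ("no change from last year")
    still leaves the model working pretty well."""
    filled = dict(ratios)
    for k in ("SGAI", "AQI", "DEPI"):
        filled.setdefault(k, 1.0)
    return INTERCEPT + sum(WEIGHTS[k] * filled[k] for k in WEIGHTS)

# Made-up ratio values; SGAI, AQI, and DEPI are left out to show the fallback.
example = {"DSRI": 1.3, "GMI": 1.1, "SGI": 1.4, "TATA": 0.25, "LVGI": 1.0}
score = m_score_with_fallback(example)
print(round(score, 2), "red flag" if score > CUTOFF else "no flag")
# prints: -0.62 red flag  (above -1.78, so this firm gets flagged)
```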
Let's take a look at how to calculate the M-Score for a company. The case we're gonna do is Dogron. Dogron is one of the world's major electricity, natural gas, commodities, communications, and pulp and paper companies for dogs.

>> Really? Dogron? Really? Dogron? Really?

>> Yeah, so this one is a little hard to disguise. Dogron is really Enron. Enron, of course, was one of the biggest financial reporting scandals of the last 50 years or so. A huge financial statement fraud, resulting in the biggest bankruptcy in US history at the time. I guess since the company has gone bankrupt, there probably are not any company lawyers still around to sue me for saying bad things about the company, so why don't we go ahead and just admit this is Enron and call it Enron. So anyway, Enron was one of the world's major electricity, natural gas, commodities, communications, and pulp and paper companies. In October of 2001, Enron was found to have committed fraud in the reporting of its financial statements. It used special purpose entities, mark-to-market accounting, and other tricks to manipulate its financial statements. It ended up declaring bankruptcy within one month of the news of the fraud, so it collapsed very quickly. And its auditor, Arthur Andersen, was forced to close after an obstruction of justice charge. Andersen was one of the Big Five auditors, with a long history; this scandal brought the company down, and they went out of business.

So let's see, how does the M-Score work for a company like Enron, one that we know ended up committing a big fraud? Here is the spreadsheet for Enron, where I've pulled in all the raw data I need to calculate these ratios, and you can see that it's a lot of raw data that you need. The columns in purple are the ratios that go into the M-Score. Then what I do is I have an M-Score column where I have the formula built in. So if we zoom in on the formula, you can see I've got the intercept, plus the weights times the cells where we calculate each ratio. And because the weights are the same for every firm in every year, we can just leave this formula, put any company in there, and it should work. So what we see is that in 2000, the year before the fraud got revealed, there was this -0.44 M-Score. Now remember, an M-Score that's less than -1.78 is an indication that it's not a fraud; if the M-Score is greater than -1.78, then it's a potential fraud. Since -0.44 is greater than -1.78, in this case the model would have flagged Enron as a potential fraud. Now what you'd have to do, though, is check the company's earnings management incentives and benchmark against competitors. Even so, remember these models have a lot of false positives, so if you saw something like this, then you might wanna go look at some of the revenue and expense ratios we've talked about, discretionary accruals, and discretionary expenditures, to see if you can find more evidence. So this is a good first step to red-flag a company where there are potential concerns, but you're gonna wanna gather a lot of other measures before you have confidence that you really have picked up a fraud.

So that wraps up our look at the Beneish M-Score fraud prediction model. As you can tell, I'm a bit skeptical about it. Anything that's really easy to implement is also not gonna be as powerful as the more complicated models. It certainly detected fraud for Enron very well, but then again, [LAUGH] any fraud prediction model had better detect fraud for Enron, otherwise it would never get published. I do think it's a good, easy first step to identify firms that you should be suspicious of, and then you can dig in further with the other tools we've talked about. So it's another tool for your toolkit, and we've got one more to add. And I will see you next video.

>> See you next video.