Then we get to the Wilcoxon signed-rank test, so

that is different from the Mann-Whitney U.

This is signed rank.

It's also sometimes called the Wilcoxon T test, after its test statistic, not to be confused with Student's t-test.

And that is really analogous to a normal t-test where we're looking at paired data.

So it's the same set of patients with measurements before and after an event, or

identical twins, as we've discussed.

And remember, this is going to combine signs and ranks.

First we calculate the differences between each pair of values and rank their absolute values, since some of the differences will be negative.

Then we attach the sign of each difference back to its rank, which is where the name signed-rank comes from.
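The steps just described can be sketched with SciPy. The paired measurements below are hypothetical, purely for illustration of the before-and-after design.

```python
# A minimal sketch, assuming made-up paired data: the same patients
# measured before and after an event.
from scipy import stats

before = [142, 138, 150, 145, 160, 155, 148, 152]
after = [138, 136, 147, 146, 151, 149, 145, 147]

# wilcoxon() works on the paired differences: it ranks the absolute
# values of the differences, then attaches the signs back to the ranks.
statistic, p_value = stats.wilcoxon(before, after)
print(statistic, p_value)
```

If the p-value falls below your chosen alpha, you would reject the null hypothesis that the paired differences are symmetric around zero.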

Spearman's rank correlation: there's an article by Bello et al that you can look at.

They looked at the knowledge of pregnant women about birth defects in 2013.

So they used Spearman's rank correlation, which is a form of correlation, really the nonparametric counterpart to the correlation we use alongside linear regression.

The parametric version is used when both sets of numerical values are normally distributed; if one or both of them do not come from a population with a normal distribution, we use Spearman's rank.

More specifically, it is analogous to the Pearson's product moment correlation.

Remember that it ranged from negative one to positive one, and we're going to find exactly the same range for Spearman's rank.
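A short sketch of that range with SciPy; the two lists of values are hypothetical, and here they happen to be perfectly monotone, so the coefficient comes out at exactly one.

```python
# A minimal sketch, assuming made-up data for two rankable variables.
from scipy import stats

x = [3, 7, 12, 18, 25, 40, 55]    # hypothetical variable, e.g. ages
y = [10, 14, 20, 31, 45, 70, 90]  # hypothetical paired scores

# rho lies between -1 and +1, exactly like Pearson's r; because every
# increase in x is matched by an increase in y here, rho is 1.0.
rho, p_value = stats.spearmanr(x, y)
print(rho, p_value)
```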

The very last one, Kendall's rank correlation.

You can read the article by Paul and colleagues.

They looked at platelet aggregation in the Journal of Applied Basic Medical Research.

Just comparing the platelets there, and they used Kendall's rank correlation, which is a bit different from Spearman's rank.

Spearman's rank is very accurate if you end up not rejecting the null hypothesis.

If the p-value is more than the alpha value, say, 0.05, then Spearman's rank correlation is very accurate.

As soon as you drop below that and the result becomes significant, it loses a bit of its accuracy, if I can put it that way.

And there is a more sophisticated way to do the ranking, which is what Kendall's rank correlation proper uses.

It is perhaps the more proper way to do your correlation if your numerical sets of values are not normally distributed, specifically if your Spearman's rank does find a significant correlation.
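Kendall's tau can be sketched the same way; the values below are made up for illustration, and like Spearman's rho, tau ranges from negative one to positive one.

```python
# A minimal sketch, assuming made-up data for two variables.
from scipy import stats

x = [12, 2, 1, 12, 2]
y = [1, 4, 7, 1, 0]

# kendalltau() compares every pair of observations and counts
# concordant versus discordant pairs; tau lies between -1 and +1.
tau, p_value = stats.kendalltau(x, y)
print(tau, p_value)
```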

So, those would be the most common nonparametric tests; actually, quite a bit of fun, and quite useful.

And without open access to the data, so that we can all look at the data ourselves, you've got to wonder how many times in your life you have read the results of a t-test where it was not proper to use a parametric test.

Nonparametric tests are very good tests; as we mentioned in the beginning, they do lose a bit of power when it comes to detecting small differences between groups.

Really not that much, though; they are quite a clever, safe way of looking at the data.

And as soon as the values that we are looking at do not come from a population

with a normal distribution, we have to use nonparametric tests.

And of course those are going to be the only tests we can use if we're talking

about ordinal categorical data.

As long as you can order the set of values from smallest to largest, you can use nonparametric tests.
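As an illustration of that point, the Mann-Whitney U test mentioned earlier can be run on ordinal data; the Likert-style responses below are hypothetical, coded 1 (strongly disagree) through 5 (strongly agree).

```python
# A minimal sketch, assuming made-up ordinal (Likert) responses from
# two independent groups; only the ordering of the codes matters.
from scipy import stats

group_a = [1, 2, 2, 3, 3, 2, 4, 1]
group_b = [3, 4, 4, 5, 3, 5, 4, 4]

u_statistic, p_value = stats.mannwhitneyu(group_a, group_b)
print(u_statistic, p_value)
```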

Look out for them in the literature.

They are quite an interesting type of test.