This is a five-section course, part of a two-course sequence in Research Methods in Psychology. This course deals with experimental methods, whereas the other course dealt with descriptive methods.


From the course by Georgia Institute of Technology

Experimental Research Methods in Psychology


From the lesson

Evaluating Causal Claims

- Dr. Anderson D. Smith, Regents’ Professor Emeritus

School of Psychology

I'm Anderson Smith, and we're talking about validity.

Let's spend a little more time talking about internal validity and

using statistics to tell us what is valid and what isn't.

So internal validity, which we've talked about a lot in this

course, means that we are careful in making causal inferences: we design the experiment

so that we don't have confounded variables.

That is, the experimental manipulations are controlled.

We have to make sure that there are no confounding variables.

And if there are mediating variables, we have to be aware of what they are.

We have to understand that they occur when we are designing an experiment.

We also have to make sure, as we talked about,

that our measurements are well-defined and precise:

that they're reliable and they're valid.

For internal validity, we discussed how to control for confounding variables,

and one way to do that is by matching, as long as we know what the confounding variable is.

We simply match the subjects in the groups on that variable.

So now, it can't confound what it is that we're manipulating.
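As a rough sketch of matching, with made-up data and a hypothetical confound such as age: sort the subjects on the confound, then assign alternating subjects to the two groups, so each neighbouring pair is split between them and the groups end up nearly equal on the confound.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical confounding variable (e.g. age) for 40 subjects.
ages = rng.normal(35, 8, 40)

# Match on the confound: sort subjects by age, then assign
# alternating subjects to the two groups, pairing neighbours.
order = np.argsort(ages)
group_a = ages[order[0::2]]
group_b = ages[order[1::2]]

# After matching, the groups barely differ on the confound.
diff = abs(group_a.mean() - group_b.mean())
print(f"group difference on the confound: {diff:.3f}")
```

In a real study the matching variable would be measured before assignment, and the dependent variable would then be compared between the matched groups.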

We can also include the variable in a multi-variable experiment.

That is, if we have a confounding variable,

we simply make it another factor in the experiment.

Then we can determine the relative effects of that confounding variable

and the variable that we're interested in.

That's probably one of the best ways to control an experiment.
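A sketch of folding the confound in as a factor, assuming made-up effect sizes: a 2×2 layout crosses the manipulated treatment with the former confound, and the marginal means give each factor's relative effect.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2x2 design: the manipulated variable (treatment)
# crossed with the former confound, now a measured factor.
n = 100  # subjects per cell
cells = {}
for treatment in (0, 1):
    for confound in (0, 1):
        # Assumed true effects: treatment adds 5 points, confound adds 2.
        mean = 50 + 5 * treatment + 2 * confound
        cells[(treatment, confound)] = rng.normal(mean, 4, n)

# Main effect of each factor: difference of its marginal means.
treat_effect = (np.mean([cells[1, c].mean() for c in (0, 1)])
                - np.mean([cells[0, c].mean() for c in (0, 1)]))
conf_effect = (np.mean([cells[t, 1].mean() for t in (0, 1)])
               - np.mean([cells[t, 0].mean() for t in (0, 1)]))
print(f"treatment effect: {treat_effect:.2f}, confound effect: {conf_effect:.2f}")
```

Because the confound is now a factor rather than an uncontrolled nuisance, its contribution is estimated separately instead of contaminating the treatment effect.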

And if there's a measure that might confound,

then we can actually come up with a statistical control called

a covariance design that partials it out.

In an analysis of covariance, as it's called,

we use that confounding variable as a covariate, so we have to measure it.

And then we actually assess it statistically and remove it, so

we can look at the relationship between the independent variable and

the dependent variable after controlling for the covariate's variance,

having removed it from the analysis.

That's a way of looking at the effects of this confounding variable.
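A minimal sketch of the partialling-out idea, with made-up data where age confounds group membership. This is a simplified residualization, not a full analysis of covariance (which fits group and covariate simultaneously), but it shows how removing the covariate's variance shrinks the confound-inflated group difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two groups where the covariate (age) differs
# between groups and also affects the dependent variable.
n = 200
group = np.repeat([0, 1], n)
age = rng.normal(40, 10, 2 * n) + 5 * group          # confounded with group
y = 2.0 * group + 0.3 * age + rng.normal(0, 1, 2 * n)

# Raw group difference, inflated by the age confound.
raw_diff = y[group == 1].mean() - y[group == 0].mean()

# Partial the covariate out: regress y on age, keep the residuals.
slope, intercept = np.polyfit(age, y, 1)
residuals = y - (slope * age + intercept)

# Group difference after removing variance explained by age.
adjusted_diff = residuals[group == 1].mean() - residuals[group == 0].mean()
print(f"raw: {raw_diff:.2f}, adjusted: {adjusted_diff:.2f}")
```

The raw difference mixes the treatment effect with the age difference between groups; the adjusted difference is closer to the effect of group alone.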

Now in using analysis of covariance, we're really talking

about this important step after manipulating the independent variable and

collecting our measures of the dependent variable.

In the experimental design, we know we first have a rationale for

making a hypothesis.

We have a hypothesis.

We do the empirical study where we manipulate the independent variable and

measure the dependent variable, and then we do the data analysis.

Only after we do the data analysis, the statistical analysis,

do we know whether or not the difference we're looking at is reliable and valid.

If it's a good one,

that leads us back to conclusions and back to the research literature itself.

So, let's talk a little bit more about statistical analysis.

Now when we analyze an experiment to look at a causal effect,

we use what are called inferential statistics.

Is there a difference in a dependent variable that is caused by

the manipulation of an independent variable?

Or, as we say, is the difference due to chance?

So, is that difference we're looking at a reliable and valid difference?

Is it a good difference that is actually statistically significant, or

is it a difference that is due to chance?

Is the dependent variable a function of the independent variable?

That's the goal of the experiment.

Is there a causal effect between manipulating the independent variable and

measuring the dependent variable?

So, different kinds of statistics.

There are descriptive statistics that are used in descriptive studies and

then there are inferential statistics that are used in experimental studies where

we're looking for causal effects.

So in descriptive studies, we collect data.

We describe the data in a meaningful way.

We organize the data.

We summarize the data.

We correlate the data.

We're simply describing what's there.

With inferential statistics, however, we're doing hypothesis testing.

We're actually making a hypothesis and then testing it.

We're making inferences about whether or

not the effect we see is a meaningful effect, or not.

We're determining whether there's a relationship between the independent

variable and the dependent variable.

Very different kinds of outcomes from just describing the data,

which we'd use in descriptive studies.

So with inferential statistics, or hypothesis testing, we try to draw

conclusions about the population based on the sample we used in an experiment.

The conclusion now is not guaranteed to be correct.

What the statistics will tell us is to what extent is it a correct conclusion?

Is the difference we observe something we can assume is a good difference,

one that shows the effect, so that the conclusion is a good one?

But remember, it's not guaranteed; we might be making an error.

There's something called the null hypothesis, and

the null hypothesis states that there is no difference due to the manipulation.

Now testing the null hypothesis is very difficult,

because, as I'll show you,

it's difficult to prove or disprove no difference.

So, inferential statistics allow us to at least test it and

come up with a probability that we are correct or incorrect.

That's what they do.

So with hypothesis testing, we have two kinds of errors.

The true state of the world might be that the null hypothesis is false, or

that the null hypothesis is true.

So that means if the null hypothesis is false, there is a real difference.

And if the null hypothesis is true, there is no difference.

And so if we reject the null hypothesis when, in fact,

the null hypothesis is false, that's a correct decision.

A type II error would be that the null hypothesis is false and

we say, the null hypothesis is true.

We say, there's no difference when there's really a difference.

That's kind of rare.

More likely, however is the null hypothesis is true and

we say that the null hypothesis is false.

That is a type I error.

We reject the null hypothesis.

We say that there is a difference when, in fact, there's no difference and

that's a type I and that's really what we want to avoid.

We want to find out whether or

not we have a difference.

And so we want to avoid a type I error, where

there's really no difference but we say there is one.

A type I error basically means

that sampling error produced an extreme result, and

a type II error means the research lacked statistical power, for example from a low sample size.

So, we use inferential statistics to inform us about

the probability of making a type I error.

This one we want to really avoid.

We do not want to say our hypothesis is true when it is, in fact,

not true and our difference is really due to chance; that's a type I error.

So we have to have some idea of the probability that our difference is due to

chance, and we can get that through inferential statistics,

because what they tell us is the probability of making that error.
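A small simulation of this idea, under assumed normally distributed data: when the null hypothesis is true by construction, a 5% criterion produces "significant" results, all of them type I errors, about 5% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
n_experiments = 2000
false_positives = 0

# Both groups are drawn from the SAME population, so the null
# hypothesis is true and every significant result is a type I error.
for _ in range(n_experiments):
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

type_i_rate = false_positives / n_experiments
print(f"observed type I error rate: {type_i_rate:.3f}")
```

The observed rate hovers around the chosen alpha of .05, which is exactly what the criterion promises: it caps how often chance alone is called a real effect.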

So what we do is adopt a chance level that is truly arbitrary

but collectively agreed upon.

Typically, what is used is a chance level of 5%.

So in an experiment, we want to show that the probability of making a type I error,

that is, of saying that we have a difference when, in fact, it's only chance,

is less than 5%, and that's the standard used in inferential statistics.

We might adopt a more stringent criterion like 1% or

0.01% or 0.001%, but

most researchers would agree that 5% is about

the maximum commonly used, and this is how we do it.

In hypothesis testing,

we have this probability that the null hypothesis is true, and we say

we'll only accept a difference that meets this p value of less than 5%.

If the probability that H0 is true is less than 5%, then we'll say, okay,

we've got a difference.

It's not the null hypothesis;

it's a difference based on our hypothesis of a difference.
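As a sketch of that decision rule, with made-up scores where the treatment is assumed to genuinely shift the mean: an independent-samples t test gives the probability that the observed difference is due to chance, which is then compared to the 5% cutoff.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical scores: the treatment is assumed to raise the mean by 10.
control = rng.normal(50, 10, 50)
treatment = rng.normal(60, 10, 50)

t_stat, p_value = stats.ttest_ind(treatment, control)

# Decision rule: reject the null hypothesis when p < .05.
if p_value < 0.05:
    print(f"reject H0 (t = {t_stat:.2f}, p = {p_value:.4f})")
else:
    print(f"fail to reject H0 (p = {p_value:.4f})")
```

With a real effect this large, the test will usually land well under the 5% criterion; with no real effect, it would do so only about 5% of the time, as in the earlier simulation.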

Thank you.
