This course covers the analysis of Functional Magnetic Resonance Imaging (fMRI) data. It is a continuation of the course “Principles of fMRI, Part 1”.


From the course by partner Johns Hopkins University

Principles of fMRI 2

90 ratings


From the lesson

Week 2

This week we will continue with advanced experimental design, and also discuss advanced GLM modeling.

- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University

- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

[SOUND] Welcome back again.

In this module, we're going to talk about optimizing designs with the genetic algorithm.

Before, we talked about eight principles of experimental design, including sample size and scan time, the number and grouping of events and conditions, temporal frequencies, randomization, the effects of nonlinearity, and some basic optimization principles.

There were many considerations and rules in the previous lecture, and computer-aided designs can help to put all those rules into one picture and essentially give you an optimized design under many circumstances.

The computer-aided designs search through a set of random designs and

identify the best ones.

And there are several ways to do this.

The simplest way is to generate a series of random designs and test all of them and

pick the best ones.

So Doug Greve's OptSeq program does this, and that's available.

A second way is to generate a population of random designs and

do a smart search to optimize the choice of the best ones, and

that's the principle behind the genetic algorithm.

So, we have a version that I'll talk to you about and

then there are other versions that have been developed since then as well.

And finally, there are mathematical sequences with some optimal properties for design under some circumstances.

And M-sequences are one type that we'll talk a little bit about later, and that's another option.

So genetic algorithms are an evolutionarily

inspired set of optimization routines, and they're good for

problems where the fitness landscape is rough, or the solution is not convex.

And you can see what that means here with this diagram.

This is a simple convex fitness landscape, and there's a parameter space with

two parameters, and on the vertical axis is the fitness or

goodness of the design given those parameters.

And as you can see there's a smooth curve with one global optimum value.

So that's what it means to be convex, and standard optimization approaches like hill-climbing algorithms and gradient-based algorithms that work on the derivatives of the model functions can solve those problems.

But often the fitness landscape doesn't look like this and there are local minima.

In this case, genetic algorithms and similar methods, like simulated annealing algorithms, can work well to find solutions.

We like to think of a design as parameterized by a sequence of parameters,

and that's the kind of DNA of the design in a genetic sense.

So in our case the DNA might consist of a series of numbers that indicate which of four event types to present in a particular time slot. The numbers one through four can indicate four different trial types in an fMRI design.

So given this list of design parameters, then we can generate a design matrix and

we can test its efficiency.
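As a rough sketch of that step, here is one way to build a design matrix from the event sequence and score it. The HRF parameters and the efficiency formula below are illustrative, not the exact ones our toolbox uses:

```python
import numpy as np
from math import gamma as gamma_fn

def hrf(t, a1=6.0, a2=16.0, ratio=1.0 / 6.0):
    """Crude double-gamma HRF; parameter values are illustrative."""
    g = lambda t, a: t ** (a - 1) * np.exp(-t) / gamma_fn(a)
    return g(t, a1) - ratio * g(t, a2)

def design_matrix(seq, tr=2.0, n_types=4, hrf_len=30.0):
    """Convolve the event sequence (codes 1..n_types, one per TR slot)
    with the HRF to get one regressor column per event type."""
    h = hrf(np.arange(0.0, hrf_len, tr))
    n = len(seq)
    X = np.zeros((n + len(h), n_types))
    for i, ev in enumerate(seq):
        X[i:i + len(h), ev - 1] += h
    return X[:n]                         # truncate to the run length

def efficiency(X, C):
    """A-optimality-style efficiency: 1 / trace(C (X'X)^+ C')."""
    XtX_pinv = np.linalg.pinv(X.T @ X)
    return 1.0 / np.trace(C @ XtX_pinv @ C.T)

# Score one random candidate design for the contrast (type 1 - type 2)
rng = np.random.default_rng(0)
seq = rng.integers(1, 5, size=240)
X = design_matrix(seq)
e = efficiency(X, np.array([[1.0, -1.0, 0.0, 0.0]]))
```

Higher `e` means smaller variance on the contrast estimate; the genetic algorithm just needs some such scalar score per design.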

Now if you think about it, a generation in the genetic algorithm is a population of designs. In practice we might use at least, say, 500 or a thousand designs in a population, and they're all different sequences of numbers. The idea is to test the efficiency of each of those.

And now we can get an efficiency number for each, and we can take the best half of them, these two for example.

And then, we can combine their parameters into a new design.

So this is called crossover, and it's inspired again by genetics, in which chromosomes actually pair together.

And they crossover at random points and they exchange material.

So now, our designs are going to be paired up, the best designs, and

they're going to crossover.

So we'll take half of each design with a random crossover point,

and we'll generate a new baby design.
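The crossover step can be sketched in a few lines (a minimal single-point version; real implementations vary):

```python
import numpy as np

def crossover(parent_a, parent_b, rng):
    """Single-point crossover: the child takes the head of one parent
    and the tail of the other, with a random crossover point."""
    point = int(rng.integers(1, len(parent_a)))  # at least one gene from each
    return np.concatenate([parent_a[:point], parent_b[point:]])
```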

[LAUGH] And that's going to fill out the population for the next round of testing, the next generation.

So what this procedure does, essentially, in a rough fitness landscape, is combine two designs to take jumps across the fitness landscape and parameter space.

So we don't have to search through every possible set of parameters,

we're making jumps.

And what that means is that we can jump over local optima, valleys, if you will, so that we can end up higher on the hill at the end and find the global optimum.

So pretty cool [LAUGH].

Perhaps random search would do just as well and in some cases that's true, but

in this case consider the sequence of designs for one run.

There are many choices.

So if I present a stimulus every two seconds, that's four choices every two seconds and 240 stimuli in an eight-minute run. There are 3 times 10 to the 144th possible designs.

I looked it up.

That's way more than there are stars in the known universe, and

it would potentially take forever to optimize even one run completely.
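A quick sanity check of that count, with 4 choices per slot and 240 slots:

```python
import math

# Every 2-second slot independently takes one of 4 event types,
# so the number of distinct designs for one run is 4**240.
n_designs = 4 ** 240
exponent = math.floor(math.log10(n_designs))  # order of magnitude
leading = n_designs / 10 ** exponent          # leading digits
# exponent is 144 and leading is about 3.1, matching the lecture's figure
```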

So the genetic algorithm really is an appealing solution here, because it's taking leaps, cutting off computation time, and searching through designs efficiently.

Here's a simple example, where we're optimizing to match a target face, and there are 17 parameters that specify what that face should look like: its color, the position of the eyes and nose, et cetera.

We're going to start with a random population of faces,

just a few in this case.

And we're going to allow the top 50% of each generation to breed together.

And, we'll see what happens.

So, there's the population random faces at the left.

On the bottom right is the face it's supposed to match; that's the target face.

And on the top right,

you can see the best example in each generation as it's picking out winners.

And as you can see, the population converges as a whole on the good face, the right kind of face, and it becomes more homogeneous.

So what we can do is also introduce some randomness every once in a while,

which we'll see popping up in order to keep the design fresh.

And that's another way in which this algorithm parallels evolution. So some random mutation is good.
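A minimal sketch of such a mutation step (the mutation rate here is arbitrary):

```python
import numpy as np

def mutate(seq, rate, n_types, rng):
    """Reassign a random fraction of time slots to random event types,
    which keeps some diversity in the population."""
    out = seq.copy()
    flip = rng.random(len(seq)) < rate
    out[flip] = rng.integers(1, n_types + 1, size=int(flip.sum()))
    return out
```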

Another principle is that diversity is really good and important, because if you have a population that's too homogeneous, it's easy to get stuck in a local minimum.

So this has some design features that I really like as general principles for optimization, and analogs of what might be important in real life.

So there we pretty much converge on the optimum design.
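The same idea can be shown in a toy version of this demo, evolving integer strings toward a target instead of faces. All the parameter values here are illustrative, and fitness is simply the number of matching positions:

```python
import numpy as np

def toy_ga(target, pop_size=60, generations=100, mut_rate=0.02, seed=0):
    """Toy genetic algorithm: selection of the best half, single-point
    crossover, and occasional random mutation."""
    rng = np.random.default_rng(seed)
    n = len(target)
    pop = rng.integers(0, 4, size=(pop_size, n))
    for _ in range(generations):
        fit = (pop == target).sum(axis=1)
        parents = pop[np.argsort(fit)[::-1][: pop_size // 2]]  # best half breeds
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(0, len(parents), size=2)]
            pt = int(rng.integers(1, n))                       # crossover point
            child = np.concatenate([a[:pt], b[pt:]])
            flip = rng.random(n) < mut_rate                    # rare mutation
            child[flip] = rng.integers(0, 4, size=int(flip.sum()))
            kids.append(child)
        pop = np.vstack([parents, kids])
    fit = (pop == target).sum(axis=1)
    return pop[fit.argmax()], int(fit.max())

target = np.arange(24) % 4
best, score = toy_ga(target)
```

Random guessing would match about a quarter of the positions; the population converges far beyond that within a modest number of generations.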

So the optimized design program that we have, the genetic algorithm code, will search through random designs and identify the best ones in this smart way, by crossing over across generations.

Some of its features are,

it provides rapid convergence on optimal designs, much faster than random search.

It can optimize across multiple contrasts that you care about, and

you can specify the relative importance of each contrast, or the weights.

So, let's say in an ANOVA design, you can say I care about the main effect of factor 1, the main effect of factor 2, and the interaction between them.

And you can specify those contrasts and their importance in your overall design and experiment.
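One way to sketch that weighted multi-contrast scoring, with a hypothetical rule; the actual program may combine contrasts differently:

```python
import numpy as np

def weighted_efficiency(X, contrasts, weights):
    """Score a design across several contrasts at once; the weights
    encode how much you care about each one."""
    XtX_pinv = np.linalg.pinv(X.T @ X)
    effs = [1.0 / float(c @ XtX_pinv @ c) for c in contrasts]
    return float(np.dot(weights, effs))

# 2x2 factorial example: a main effect for each factor plus the interaction
contrasts = [np.array([1.0, 1.0, -1.0, -1.0]),   # main effect, factor 1
             np.array([1.0, -1.0, 1.0, -1.0]),   # main effect, factor 2
             np.array([1.0, -1.0, -1.0, 1.0])]   # interaction
```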

It accounts for high-pass filtering that you specify, and a simple kind of autocorrelation matrix that's a pretty reasonable form.

It also has a simple model that accounts for nonlinearity in the design, in the BOLD response, which is also quite important.

And finally, you can optimize for a combination of detection power, HRF shape estimation efficiency with an FIR model, and other factors, including the counterbalancing of stimuli, which you might want to do in certain cases.
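For reference, an FIR design matrix for a single event type can be sketched like this (the bin count is illustrative):

```python
import numpy as np

def fir_matrix(onsets, n_scans, n_bins=12):
    """FIR basis for one event type: one free column per post-stimulus
    time bin, so the HRF shape is estimated rather than assumed."""
    X = np.zeros((n_scans, n_bins))
    for onset in onsets:
        for b in range(n_bins):
            if onset + b < n_scans:
                X[onset + b, b] = 1.0
    return X
```

Each fitted coefficient is then an estimate of the response at one time bin after the event, which is what "shape estimation" buys you.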

And now let's take a little bit closer look at detection power versus HRF shape estimation efficiency.

So as I mentioned before, there is a fundamental trade-off between your ability to detect the difference A versus B and to estimate the shapes of the responses that are linked to each of those event types.

And so, as a rule of thumb, blocks of the same trial type, the block design, have greater power to detect differences among conditions.

And you can think of this as being the case because you build up a giant sort of

hump of activity when you present the same trial over and over and over again.

But it's not very specific to activity that's locked to the events themselves.

Block designs can pick up things that are happening between trials or

other sorts of global effects.

And so they're not able to link activity

to particular event onsets at particular times following the event onset.

Another way of thinking about this is that block designs are very robust to misspecification of the HRF, because they lump everything together into a big hump of activity.

Anyway.

But at the same time, they're very poor at recovering what the shape actually is.

Now unpredictable sequences of trials have the complementary issues and strengths.

So they have greater power to estimate the shape of the hemodynamic response, but

they're poor at contrast detection.

And an M-sequence is [INAUDIBLE], but an M-sequence is a design that's orthogonal to itself shifted over in time. So for single event types at least, it's optimal for estimating the HRF shape with the FIR model, but very poor at contrast detection.
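That shift property can be checked directly on a small m-sequence. This is a degree-5 example; the polynomial is one standard primitive choice:

```python
import numpy as np

def m_sequence(length=31):
    """Binary m-sequence from the primitive polynomial x^5 + x^2 + 1
    over GF(2): a[n] = a[n-3] XOR a[n-5], period 2^5 - 1 = 31."""
    a = [1, 0, 0, 0, 0]               # any nonzero seed works
    while len(a) < length:
        a.append(a[-3] ^ a[-5])
    return np.array(a[:length])

s = 2 * m_sequence() - 1              # map {0, 1} -> {-1, +1}
# "Orthogonal to itself shifted in time": the circular autocorrelation
# at every nonzero lag is -1, as close to zero as an odd-length
# plus/minus-one sequence can get.
corrs = [int(s @ np.roll(s, lag)) for lag in range(1, 31)]
```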

So what I've got here is two runs of the genetic algorithm running, with a movie showing the designs evolving over time.

And on the left side, we're optimizing for contrast detection power for an A-minus-B design.

And as you'll see, across the different generations, the best of each generation is being saved.

And the design is converging on a block design, with about 18 second periodicity.

On the right, we're optimizing for HRF shape estimation power with the FIR model.

And what we're converging on here is a very different kind of design.

You'll see much sparser events that are much more spread out in time, and it's converging on essentially what would be like an M-sequence.

So I'll let it run for a couple minutes and we can see how rapidly we converge

on these two different solutions and actually how different the solutions are.

Okay.

So moving on, the idea is you might care about both of those things, and it makes sense to be able to detect differences across conditions.

But also have some ability to infer that activity belongs

to particular trial types and particular moments in time.

So we might want to optimize for a combination of contrast detection power and HRF shape estimation efficiency.

And this slide is one of my favorites here,

because it shows you a bunch of different designs in the space of that tradeoff.

So on the y-axis is the detection power for A minus B, two event types. And on the x-axis is HRF shape estimation power.

And what you can see is that a block design with 16 seconds on/off is in the top left corner.

And that's maximal in terms of contrast detection power but

very poor in terms of HRF shape estimation power.

So that's the best for detection.

On the other side of the graph,

you see a series of M-sequences, truncated M-sequences.

And they're all different because they're truncated and because we have two event

types, and there's cross correlations across the event types.

And what you see here is the M-sequences are optimal in terms of HRF shape estimation power, but they're the poorest in terms of contrast detection power.

Now let's look at random event-related designs, which are blue, and those are so-so on both, sort of in the middle on both.

And then what we see in the white circles is runs of an optimized design with the genetic algorithm.

And what I've done is I've optimized the design to be weighted more towards contrast detection in some cases, and more towards HRF shape estimation in other cases.

So it traces this trade-off, pushing outward towards the theoretical optimal limit, but weighted more towards one or the other.

And what you can see is that the optimized designs can be substantially better than random event-related designs on both the contrast detection and the shape estimation.

So that's really appealing if you're working with event-related designs and you'd like to maximize a combination of these factors.

So that's the end of this module. I hope you enjoyed the design optimization sequence, and tune in for more soon.

[LAUGH]

[INAUDIBLE]
