This course covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data. A book related to the class can be found here: https://leanpub.com/principlesoffmri


From the course by Johns Hopkins University

Principles of fMRI 1


From the lesson

Week 3

This week we will discuss the General Linear Model (GLM).

- Martin Lindquist, PhD, MSc, Professor of Biostatistics, Bloomberg School of Public Health | Johns Hopkins University
- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

Hi, in this module we'll continue with model building. This is model building part 3, and we'll talk about filtering and nuisance covariates.

So to recap where we are: we're working with the standard GLM, which can be written in the following way. We have Y, the fMRI data from a single voxel strung out over time; the design matrix X; the regression coefficients beta; and the noise vector epsilon. Epsilon is assumed to follow a normal distribution with mean 0 and variance-covariance matrix V, whose form depends on the noise model.
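As a concrete sketch, here is the model Y = X beta + epsilon fit by ordinary least squares for a single synthetic voxel. The design, noise level, and series length are illustrative assumptions, not values from the course:

```python
import numpy as np

# Minimal GLM sketch for one voxel: Y = X @ beta + epsilon.
# All values below are synthetic and illustrative.
rng = np.random.default_rng(0)
T = 100                                         # number of scans
task = np.tile(np.repeat([0.0, 1.0], 10), 5)    # boxcar task regressor
X = np.column_stack([np.ones(T), task])         # design: baseline + task
beta_true = np.array([5.0, 2.0])
Y = X @ beta_true + rng.normal(0.0, 0.5, T)     # epsilon ~ N(0, 0.25 I)

# Ordinary least squares: beta_hat = (X'X)^{-1} X'Y
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

With a noise model other than independent Gaussian errors, the same design would be fit with generalized least squares using the covariance matrix V.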

So what we've been talking about is how to build a good design matrix. Often, factors associated with known sources of variability that are not directly related to the task or the experimental hypothesis also need to be included in the GLM. Examples of such nuisance regressors include signal drift, physiological artifacts such as respiration, and head motion. For motion, we sometimes include six regressors, comprising three translations and three rotations, which are estimated during the preprocessing stage. Sometimes transformations of these six regressors are also included.

So to start, let's talk a little bit about how to include drift in our model. To recap what we talked about a few modules ago, drift consists of slow changes in voxel intensity over time, low-frequency noise that is often present in the fMRI signal. Scanner instabilities, not motion or physiological noise, are the main cause of drift; drift has also been seen in phantoms and cadavers. So we need to include drift parameters in our models, and we often model drift using splines, polynomial basis sets, or discrete cosine basis sets.

So here's an example of a GLM with drift components included; here it's a discrete cosine basis. The design matrix has 11 columns. The first has a boxcar shape that corresponds to the task, the second column is a baseline, and columns 3 through 9 correspond to the discrete cosine basis set, which is supposed to model the drift component present in the data.
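A discrete cosine drift basis like the one in this design can be sketched as follows. The series length and number of columns are illustrative choices, and this is a generic DCT-II construction rather than code from the course:

```python
import numpy as np

def dct_drift_basis(T, K):
    """Return a T x K matrix of low-frequency discrete cosine drift regressors."""
    n = np.arange(T)
    # k-th column: cos(pi * k * (2n + 1) / (2T)), for k = 1..K (DCT-II shape)
    return np.column_stack(
        [np.cos(np.pi * k * (2 * n + 1) / (2 * T)) for k in range(1, K + 1)]
    )

drift = dct_drift_basis(T=200, K=7)   # 7 drift columns, as in the example design
```

These columns would be appended to the task and baseline regressors; because the basis functions are mutually orthogonal, they soak up low-frequency variance without duplicating each other.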

So here we see an illustration of the relative contribution of each of these columns. The true data is in blue; you can see that it has this boxcar shape but is drifting across time. If we fit using all the columns of the design matrix, we get the green predicted response, which takes the low-frequency drift into account, and you can see that the green curve fits the data quite well. Now, what are the relative contributions of the drift and the boxcar? The red curve is the predicted response with the low-frequency drift explained away, so it shows the size of the activation after controlling for the effects of drift. The black curve, on the other hand, shows the low-frequency drift. That's a nuisance component we want to remove: it doesn't tell us anything about the task at hand, it's just instabilities in the scanner. We want to remove the black line and get to the red line, which, after controlling for the drift, is the signal of interest.

Another type of artifact that we should control for is transient gradient artifacts. We talked a little bit about this in the artifact module: we often get spikes in the data due to artifacts, and here we see examples of a few such spikes. We want to control for these spikes in our subsequent models. So here's a way of modeling transient gradient artifacts; there are a number of ways to check for them. This little movie shows how we can do outlier detection.

There are two curves that we're looking at here. The top curve is the global mean, and here we don't really spot very much. But if you look at the middle one, the successive differences, actually the root mean square of the successive differences, this allows us to see transient gradient artifacts very nicely. Every time there's a spike, you get a funny-looking image that appears to contain artifacts. These are the types of images that we want to control for and include as covariates in our design matrix.
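The root-mean-square successive-difference check described here can be sketched as follows. The data are synthetic, with one artifactual image injected, and the threshold rule is an illustrative choice rather than the one used in the lecture:

```python
import numpy as np

# Synthetic (time x voxels) data with one injected artifactual image.
rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=(100, 500))
data[50] += 8.0                               # inject a spike at image 50

# Root mean square of the successive differences between images.
diffs = np.diff(data, axis=0)
rmssd = np.sqrt((diffs ** 2).mean(axis=1))

# Flag differences far above the median (threshold is illustrative).
outlier_diffs = np.where(rmssd > np.median(rmssd) + 4 * rmssd.std())[0]
# A spike at image t inflates the differences at indices t-1 and t.
```

A single bad image stands out clearly in the RMSSD trace even when it is invisible in the global mean, which is why the middle curve in the movie is the useful one.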

So we want to include one regressor per bad image. Here is what example nuisance regressors in X might look like. First, the first four images or so are usually removed, or not included in the analysis, because of equilibrium issues, so we treat them as nuisance regressors. Then we include a nuisance regressor that is just a spike, indicating the image where we had an artifact. That uses one degree of freedom to mop up the variation due to that spike, and that's a way we often analyze data in practice.
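The one-regressor-per-bad-image idea can be sketched as an indicator matrix; the outlier indices and series length below are hypothetical:

```python
import numpy as np

def spike_regressors(T, outlier_idx):
    """Return a T x len(outlier_idx) matrix with one indicator column per bad image."""
    S = np.zeros((T, len(outlier_idx)))
    for col, t in enumerate(outlier_idx):
        S[t, col] = 1.0          # each column soaks up one image's variation
    return S

# Hypothetical example: images 3, 42, and 87 were flagged as artifacts.
S = spike_regressors(T=100, outlier_idx=[3, 42, 87])
```

Each column costs one degree of freedom and forces the model's residual at that time point to zero, which is exactly the "mopping up" described above.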

Physiological noise such as respiration and heart rate, as we talked about earlier, gives rise to periodic noise, which is often aliased into the task frequencies. It can potentially be modeled if the temporal resolution of the study is high enough, but if the sampling rate is too low there will always be problems with aliasing: according to the Nyquist criterion, the sampling rate must be at least twice the frequency of the signal we seek to model. For these reasons, this type of noise is often difficult to remove and is often left in the data, giving rise to temporal autocorrelation.
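The Nyquist arithmetic can be made concrete. The TR and respiration frequency below are typical illustrative values, not numbers given in the lecture:

```python
# Illustrative aliasing arithmetic (values are assumptions, not course data).
TR = 3.0                 # seconds per volume
fs = 1.0 / TR            # sampling rate: ~0.333 Hz
nyquist = fs / 2.0       # highest resolvable frequency: ~0.167 Hz

f_resp = 0.3             # a typical respiration frequency, in Hz
# f_resp exceeds the Nyquist frequency, so it folds back (aliases)
# to the nearest frequency within the resolvable band:
f_alias = abs(f_resp - round(f_resp / fs) * fs)   # ~0.033 Hz
```

At this TR the respiratory cycle masquerades as a very slow oscillation, right in the range of typical task frequencies, which is why it is so hard to separate from the signal of interest.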

However, there are ways to monitor physiological artifacts and thereafter include them in your model. There are two main approaches, RETROICOR and RVHRCOR. They work in slightly different ways, taking into consideration factors such as neuronal activation, the respiration cycle, the cardiac cycle, respiration volume, and heart rate.

Here's a slide showing differences in activation maps with and without RETROICOR. We see more activation when using RETROICOR in areas that we expect to be active during this particular task.

>> Head movement presents one of the biggest challenges in the analysis and correction of artifacts.

What you're seeing here are the head movement parameter estimates from the realignment for one person. As you can see, everybody moves their head, some people more than others, and sometimes more than others. Often people will exclude participants who move their head more than a certain amount within a run, more than one millimeter, for example. But this can also present its own challenges.

Head movement can give rise to serious problems. Basic motion correction, or image realignment, is performed in the preprocessing stages of the analysis, and for the most part this takes care of the gross alignment differences across the images. However, motion also induces complex artifacts due to the spin history and to changes in the magnetic field introduced by the motion, and these cannot be removed by realignment alone.

At least two important recent papers have highlighted the influence of head motion and how it can be a confound in a number of analyses. For example, if you're comparing functional connectivity across old and young subjects, and the young subjects move their heads more, you can end up with a systematic bias towards increased local functional connectivity in the young subjects, because you're essentially blurring the brain locally. That's just one example of many kinds of head-movement-related artifacts we might run into, so we have to be very careful.

There are now two basic approaches for dealing with head movement. One is to include nuisance regressors in your design matrix that model movement, and we'll see an example of that later. People also sometimes include measurements of global cerebrospinal fluid or ventricle activity as covariates, to account for movement and various other kinds of physiological noise or junk. The second approach is called scrubbing, which refers to the practice of dropping images with high estimated movement. Essentially, you remove a number of images from the time series and treat them as missing data.
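Scrubbing can be sketched as below. The framewise-displacement summary (sum of absolute parameter differences) and the threshold are common illustrative choices, not specifics from the lecture:

```python
import numpy as np

def scrub(data, params, fd_thresh=0.5):
    """Drop images whose framewise displacement exceeds fd_thresh.

    data:   T x voxels image time series
    params: T x 6 realignment estimates
    """
    # Simple framewise displacement: sum of absolute successive differences.
    fd = np.abs(np.diff(params, axis=0)).sum(axis=1)
    fd = np.concatenate([[0.0], fd])        # first image has no predecessor
    keep = fd <= fd_thresh
    return data[keep], keep

# Hypothetical example: a 2 mm jump at image 10.
params = np.zeros((20, 6))
params[10:, 0] = 2.0
clean, keep = scrub(np.ones((20, 4)), params)
```

Only the image at the jump itself exceeds the threshold here; in practice people sometimes also drop the neighboring images, since spin-history effects can spill over.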

This is an example of what it would look like to model movement with additional nuisance covariates. On the left you see a design matrix, or part of one. Each of the blocks includes some task-related regressors and a number of regressors that we've added to capture head movement. There are quite a few of them, because we're modeling not just the linear movement parameter estimates that you saw on the previous slide, but also their squares, their successive differences (which are related to the derivative), and their squared successive differences. So for every run, we include 24 additional movement parameter covariates.
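The 24-parameter expansion just described (the parameters, their squares, their successive differences, and the squared differences) can be sketched as:

```python
import numpy as np

def motion_24(params):
    """Expand T x 6 realignment parameters into T x 24 motion covariates:
    the parameters, their squares, their successive differences, and the
    squared successive differences."""
    diffs = np.vstack([np.zeros((1, params.shape[1])),
                       np.diff(params, axis=0)])   # pad first frame with zeros
    return np.hstack([params, params ** 2, diffs, diffs ** 2])

# Synthetic realignment estimates for one run of 120 scans.
rng = np.random.default_rng(2)
R = motion_24(rng.normal(0.0, 0.1, size=(120, 6)))
```

Padding the first frame with zeros is one common convention for the difference terms; each run gets its own block of 24 columns in the design matrix.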

Let's look at an example of how movement can be a problem, and how this practice of introducing additional covariates might help. This is a group analysis of 25 people who performed a fear conditioning task. We're looking at activity related to the CS+, cues that predict shock, versus the CS-, cues that don't predict shock, the safe cues. You might think that in a group analysis a lot of the problems with individual images and artifacts would average out. But in this case they don't: we see significant results in the group analysis in many areas of the brain that are physiologically implausible, in the ventricles, for example.

Now let's take a closer look at the images that went into that group analysis; this can help give us a clue about where some of the problems might lie. What you're seeing here is one histogram for every participant, showing the contrast values across the brain for that participant. For the subject in the top left, for instance, you can see the distribution across the entire brain. This should be roughly mean zero, unless there is whole-brain activation or deactivation, and it should be on the same scale for all participants. Those are among the basic assumptions that go into doing a group analysis in the first place.
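This per-subject sanity check (contrast values roughly mean zero and on a common scale) can be sketched on synthetic data; the size of the injected shift and the flagging threshold are illustrative assumptions:

```python
import numpy as np

# Synthetic contrast values: 25 subjects x 10,000 brain voxels.
rng = np.random.default_rng(3)
contrasts = rng.normal(0.0, 1.0, size=(25, 10_000))
contrasts[3] -= 5.0          # subject 4 (index 3): implausible global shift

# Each subject's whole-brain mean contrast should sit near zero.
means = contrasts.mean(axis=1)
suspect = np.where(np.abs(means) > 1.0)[0]   # illustrative absolute threshold
```

In real data one would look at the full histograms rather than just the means, but a grossly shifted whole-brain distribution shows up immediately either way.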

So what do we see here? Well, we see a lot of problems. Look at subject number 4: physiologically implausible whole-brain deactivation. The entire set of contrast values across the brain has shifted towards deactivation, massively so compared to the range in most of the subjects. And the subject down here is an example of one that shows physiologically implausible whole-brain activation. It is possible to get some diffuse modulatory effects that induce global shifts in the contrast values, but changes on this scale, and changes this inconsistent across participants, are way outside the range of what's physiologically plausible.

So now let's adjust our design matrix by adding nuisance covariates. On the left you see the previous design, where we've modeled the various kinds of events involved. We're interested in just the CS+ versus CS- comparison here, so it's a contrast across those regressors that we care about. Now we've added a number of motion covariates, the 24 per run that I told you about earlier, shown in green. We've also done some outlier detection, estimated where we might have spikes in the data, and modeled those as well.

So now let's see what happens afterwards. If we look at the histograms of the contrast values, there are still some problems; not every subject looks the same. But they're much better. Almost all of them are centered very closely on zero, which means there's no whole-brain activation or deactivation with the CS+, and the distributions look more similar in scale. As we said, it's still not perfect, but that's the noise we have to live with.

Now let's look at what happens in our group analysis. This was before, and this is after. Things look much more physiologically plausible: the implausible ventricle deactivation is gone, and we see an expected pattern based on previous studies. There are dorsal anterior cingulate increases and PAG increases, which you see in yellow, among other regions, and deactivation of the so-called default mode network in the ventromedial prefrontal cortex and posterior cingulate cortex. So this looks like a very plausible map.

That's the end of this module on artifacts and noise.

Â Coursera provides universal access to the worldâ€™s best education, partnering with top universities and organizations to offer courses online.