This course covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data. A book related to the class can be found here: https://leanpub.com/principlesoffmri


From the course by Johns Hopkins University

Principles of fMRI 1

392 ratings


From the lesson

Week 1

This week we will introduce fMRI, and talk about data acquisition and reconstruction.

- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University

- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

So in the last module we talked about how we could take this

signal that was acquired from an MR scanner and create an image.

And what we talked about was that the image was not actually created

in image space, but rather in something called k-space.

So in this module, I want to talk a little more about k-space

to gain a little bit more understanding of that concept.

So here's the little cartoon I used last time.

So, data is acquired in k-space, and

so here I show it being acquired in a grid like fashion.

And so, once you've acquired that data, you apply the inverse Fourier

transform, and you get this beautiful image in image space.

So it's important to note that there's not a one-to-one relationship

between image space and k-space.

So it's not as if there's a single measurement in k-space

that gives you all the information about a single voxel of the brain.

But rather, what happens is that

every point in k-space contains a little information about every voxel.

So by removing a k-space point,

we lose a little information about every voxel of the brain.

So each individual point in image space depends on all the points contained

in k-space.
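This dependence of every image point on every k-space point can be checked directly. Here is a minimal NumPy sketch (the toy 8x8 image and the particular k-space index are my own choices, not from the lecture): deleting a single k-space sample changes the reconstruction everywhere, and the lost component has the same magnitude at every pixel.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))              # a toy 8x8 "image"

kspace = np.fft.fft2(img)             # data as the scanner records it
kspace[2, 3] = 0                      # remove one k-space measurement
recon = np.fft.ifft2(kspace)          # reconstruct without that point

delta = img - recon                   # what the image lost
mags = np.abs(delta)

# The lost component is a single complex wave, so its magnitude
# is constant across the entire image: no voxel is spared.
print(np.allclose(mags, mags[0, 0]))
```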

So to illustrate the meaning of each k-space point,

I like to think about this in one dimension.

So let's say that you make three sinusoidal curves as follows.

These are three sinusoidal curves with different frequencies, and

let's say that we take the linear combination of them.

So we take the top sinusoidal curve, and we multiply that by 0.5.

We take the second one and we multiply it by two, then we take the third one and

we multiply it by one, and then we add these curves up, and

then we get the following curve.

So this is a linear combination of three sinusoids.

So if I asked you what three sinusoidal functions went into making this curve,

by looking at it, it might be hard to tell.

However, by taking the Fourier transform of this time series

we get the following information.

We get three different spikes in the frequency domain, and so basically,

in the frequency domain, the x axis here is frequency,

which is one over the periodicity.

So if we have a sinusoid with a long period,

we're going to get a spike in the low frequency portions.

So this first spike to the far left here, the low one,

which has a magnitude of 0.5, represents the curve with a long period,

the top one here, and the 0.5 represents its relative contribution to the signal.

The second peak we have represents the middle curve, and

its amplitude is two, and the last one, at the highest

frequency, is the one that oscillates the most, and that has a peak of one.

So by looking at the Fourier transform of this time series we were not only able to

reconstruct the periodicity of the three functions that went into it, but

also their relative contributions to the time series.
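The 1-D example above can be sketched in a few lines of NumPy. The weights 0.5, 2, and 1 follow the narration; the sample grid and the specific frequencies (4, 10, and 25 cycles over the window) are my own choices for illustration.

```python
import numpy as np

n = 256
t = np.arange(n)
f1, f2, f3 = 4, 10, 25                     # cycles over the window

# Linear combination of three sinusoids with weights 0.5, 2, 1
signal = (0.5 * np.sin(2 * np.pi * f1 * t / n)
          + 2.0 * np.sin(2 * np.pi * f2 * t / n)
          + 1.0 * np.sin(2 * np.pi * f3 * t / n))

# The Fourier transform recovers both the frequencies and the weights
spectrum = np.fft.rfft(signal)
amps = 2 * np.abs(spectrum) / n            # amplitude per frequency bin

peaks = np.flatnonzero(amps > 0.1)
print(peaks)                 # spikes at frequency bins 4, 10 and 25
print(np.round(amps[peaks], 2))   # recovered weights 0.5, 2.0, 1.0
```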

And in the case of k-space, we do this in two dimensions.

So let's now look at this in two dimensions and

say, well, what are the contributions of each of these k-space points?

Let's say that we have a blank k-space, so it's zero everywhere, and we have our

single measurement here, which is sort of to the Northwest of the origin.

So what happens when you put a value of one at this point, and

you take the Fourier transform?

Well, it turns out that if I take the Fourier transform and

transform this into image space, you get a sinusoid here, but in two dimensions.

So, a two-dimensional sinusoid, kind of a wave here, and you see

it's going in the direction that the point lies in from the origin.

And also the periodicity of this wave depends on how far away

from the center of k-space you are.

So this point is sort of in the low frequency part of k-space, so

it has a long periodicity.

So if you move in the same direction, but towards the high frequency portions,

then we would expect a shorter periodicity, and indeed that's what we see.

So the wave starts oscillating more frequently because we're in the higher

frequency portions, but it's going in the same direction because we're moving in

the same direction from the origin.

Let's say instead we moved

to the Northeast instead of the Northwest from the origin of k-space.

In this case we're still in the same high frequency parts of k-space,

so we would expect the same periodicity, but we're moving in a different direction.

So if we reconstructed this point we would get a wave with the same periodicity,

the same frequency, but

now moving in the Northeast direction instead of the Northwest direction.

So basically, what each point in k-space gives us

is one of these waves, and the value of the k-space

point tells us the relative contribution of that wave in reconstructing the image.
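This single-point reconstruction is easy to verify numerically. Below is a sketch (the 64x64 grid and the particular sample location are my own toy choices): inverse-transforming a k-space that is zero everywhere except one point yields exactly a 2-D plane wave, whose orientation and spatial frequency are set by where that point sits relative to the origin.

```python
import numpy as np

n = 64
kspace = np.zeros((n, n), dtype=complex)
kspace[3, n - 3] = 1.0        # one sample off the origin

# Inverse Fourier transform of a single k-space point
wave = np.real(np.fft.ifft2(kspace))

# The result is a pure 2-D sinusoid: cos(2*pi*(3x - 3y)/n) / n^2.
# The indices (3, -3 mod n) set both its direction and its frequency.
x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
expected = np.cos(2 * np.pi * (3 * x - 3 * y) / n) / n**2
print(np.allclose(wave, expected))
```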

Now this is really almost hard to believe because basically when we have

k-space here, what I'm saying here is that the k-space measurements that we see

to the left here are simply weights of these different waves, and if we

take the linear combination of these waves, we get the image to the right.

And so it's hard to believe that this image is just made up of different waves,

but we can show that this is true if we take a single k-space point and

just manipulate it by doubling it.

So let's say that we take this k-space point, which is sort of to the Northeast

of the center, and I double its value, and now I reconstruct the image again.

Basically what you see is that the wave going in that direction

becomes overvalued, and now we get this kind of grid like artifact over the brain.

So, now we're just overvaluing the wave going in a certain direction,

and this is giving rise to this artifact.

So this sort of illustrates that this image is

a finely kind of balanced combination of these waves, and

if we overvalue one of them, it kind of ruins the whole image.

If we go in the opposite direction, we get the same type of grid

like pattern, but moving now in the Northwest direction.
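The doubling experiment can be sketched as follows (the toy random image and the chosen point are mine, not the lecture's brain slice): doubling one k-space value superimposes one extra, over-weighted copy of that point's plane wave on the reconstruction, which is exactly the grid-like artifact described.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))            # toy stand-in for the brain image

kspace = np.fft.fft2(img)
c = kspace[5, 5]
kspace[5, 5] = 2 * c                  # double a single measurement

distorted = np.real(np.fft.ifft2(kspace))
ripple = distorted - img              # the artifact we introduced

# The artifact equals one extra copy of that point's plane wave,
# weighted by the original k-space value c.
x, y = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
wave = np.real(c * np.exp(2j * np.pi * (5 * x + 5 * y) / 32)) / 32**2
print(np.allclose(ripple, wave))
```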

So the k-space data contains information about the entire brain, but

now if we're interested in the relative contribution of the high

versus the low frequency parts of k-space, we can do the following example.

Let's split k-space up into nine equally sized boxes, and we take the center box,

and we reconstruct the image using that data, and then we take the outer

eight boxes, just removing the center, and we reconstruct the data using this.

Because the Fourier transform is a linear operation,

the sum of those two should add up to the original image.

So now by doing this little thought experiment, we can see what the relative

contribution of the center of k-space is, versus the outskirts of k-space.

So if we reconstruct the image using the center of k-space,

we get something that looks like this.

It looks very much like the original image, but a little bit blurrier and

you'll see that the detail is not as fine as it was in the original image,

but we've retained most of the information of the brain.

And that's just using one-ninth of the k-space measurements, so

about 11% of the data.

So if we look at what information is conveyed by the remaining

89% of the data, we can make that reconstruction.

Here you'll see that we're only getting detail: we're seeing the boundaries

between the ventricles and the brain, and between the skull and the brain.

So basically, the high frequency parts are the ones that are oscillating

very quickly, so those are giving us a lot of the fine detail,

while the low frequency parts are the things that are changing very slowly,

and those are giving us most of the contrast here.
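The split-k-space thought experiment can be sketched like this (a synthetic disc stands in for the brain slice; the grid size and the one-third center box are my choices). Because the Fourier transform is linear, the center-only and outer-only reconstructions sum back exactly to the original image.

```python
import numpy as np

n = 96
x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
# A bright disc as a stand-in for the brain image
img = ((x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 3) ** 2).astype(float)

kspace = np.fft.fftshift(np.fft.fft2(img))   # put the origin at the center

third = n // 3
center = np.zeros((n, n), dtype=bool)
center[third:2 * third, third:2 * third] = True   # central box of nine

# Reconstruct from the center box, and from the eight outer boxes
low = np.real(np.fft.ifft2(np.fft.ifftshift(kspace * center)))
high = np.real(np.fft.ifft2(np.fft.ifftshift(kspace * ~center)))

# Linearity: the two partial reconstructions add up to the original.
print(np.allclose(low + high, img))
# 'low' is a blurry disc (the contrast); 'high' carries mainly the edge.
```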

So in general, if we use this as an illustration,

we can see that the low spatial frequencies of k-space

represent parts of the object that change in a spatially slow manner.

This gives the contrast.

In contrast, high spatial frequencies represent

small structures whose size is on the order of the voxel size.

So these are usually tissue boundaries and things like that.

So if you want fine spatial resolution,

and you want to make out the difference between grey and

white matter, you need a lot of these high spatial frequencies.

If you're just interested in contrast, you primarily need the center of k-space.

So again, the farther out we sample in k-space,

the more detail we get, and this goes back to spatial resolution.

So if we want to acquire a 32 by 32 image,

we need to sample 1024 points in k-space.

If we do that, we're primarily using the slowly varying waves, and

we're not getting much detail.

We're getting very little detail about the brain,

just kind of a blurry version of it.

If instead we sample k-space in a 64 by 64 grid,

we have to make 4096 different k-space measurements, and

by doing this, we're now starting to incorporate some of the more high

frequency parts of k-space, and this is giving us more spatial detail.

So now we can start making out differences in the brain.

If we go even higher, to a 128 by 128 image,

now we need to sample roughly 16,000 points in k-space, but

by doing this we're now getting a lot of the high frequency parts of k-space,

and we're able to make out a lot of detail.

Like for example, the boundaries between grey and white matter, and

between CSF and whatnot.

And so, what we have here is a very high spatial resolution image of the brain,

which gives us a lot of information and a lot of detail.

Now, if we had our druthers, we would like these high resolution images.

But the point is in order to go from the 32 by 32 image to the 128 by 128 image,

we had to make 16 times as many measurements.

And so there's a cost in doing this, because we have to make a lot more

measurements of the brain, so it takes a lot longer for us to do this.
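The sampling arithmetic behind this trade-off is simple enough to spell out: an n by n image needs n*n k-space measurements, so going from 32x32 to 128x128 multiplies the acquisition cost by 16.

```python
# k-space samples needed for each grid size discussed above
for n in (32, 64, 128):
    print(f"{n}x{n} image -> {n * n} k-space samples")

# 128x128 relative to 32x32: 16 times as many measurements
print((128 * 128) // (32 * 32))   # 16
```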

So, in general, there's sort of a trade-off between spatial and

temporal resolution here: we want to have adequate

spatial resolution in order to make out what's going on in the brain, but we also

want to acquire the images in a fairly rapid manner because we're going to need

these when we do functional imaging, but we'll come back to this in coming modules.

Okay, so this is the end of this module.

Here we've talked about k-space, and

we talked about the information content of k-space, and so this sort of ends this set

of three modules where we talked about image acquisition and reconstruction.

And now we're going to move on and talk about some other topics.

Okay, I'll see you in the next module, bye.
