

    More fMRI experiments are ruined by subject motion than any other single cause. At least, that is my anecdotal conclusion from a dozen years' performing post-acquisition autopsies on "bad" data. The reasons for this vulnerability are manifold, starting with the type of subjects you're trying to scan. You may be interested in people for whom remaining still is difficult or impossible without sedation of some kind.

    However, I think there is another reason why many (most?) fMRIers end up with more subject motion than is avoidable: they haven't taken the time to think through the different ways that subjects can thwart your best efforts. In other words, what we are considering is largely experimental technique - or bedside manner, as medical types refer to this stuff.

    With the possible (and debatable) exception of bite bars, which aren't popular for myriad reasons, there is no panacea for motion. Why? As we shall see, it's not just movement of the head that's a concern. You need to consider a subject's comfort, arousal level, propensity to want to breathe, and many other things that might be peripheral to your task but are very much on the mind of your (often fMRI-naive) subjects.

    Now, before we get any farther I need to outline what this post will cover, and what it won't. The focus of this post is on single-shot, unaccelerated gradient echo EPI - the sort of plain vanilla sequence that the majority of sites use for fMRI. I won't be covering the effects of motion on parallel imaging such as GRAPPA, for example. I will also restrict discussion here to the effects of motion on axial slices. Hopefully you can extrapolate to different slice prescriptions. But, rest assured that this isn't the last word in motion, not by a long chalk. Motion has come up before on this blog, e.g. in relation to GRAPPA for EPI, and the ubiquity of the problem implies that the issue will arise in many subsequent posts, too. Take today's post as an introduction to the general problem.

    My final caveat on the utility of today's post. As this blog is focused on practical matters I will restrict the bulk of the discussion to things that you'll see and can control online, in real time. There are many tools that can be used to provide useful diagnostics post hoc, some of which I will mention. But this isn't a post aimed at showing you what went wrong. Rather, the intent of this post is to describe what is going wrong, such that you might be able to intercede and fix the situation. Some sites have useful real-time diagnostics that can tell you when (and perhaps how) a subject is moving, but they aren't widespread. Thus, for today's post we shall keep things simple and restrict the discussion to what can be seen in the EPIs themselves, as they are acquired.

    WARNING: If you haven't run an fMRI experiment in a while then you might want to stop reading this post here and go and review the earlier post, Understanding fMRI artifacts: "Good" axial data. That post highlights our target: the low motion case.


    Eye movements

    Let's start simply. Here is a video of a subject intentionally moving his eyes to a target. Saccading is the technical term, I hear. (See Note 1 for experimental details. Parameters were fixed throughout for this post, unless mentioned to the contrary in any section below.) There are twenty volumes played back at a rate of 5 frames/sec:


    Movement of the eyeballs and optic nerves is quite obvious. But is there any effect on brain signals proper? To assess that it's easier to switch to looking at the standard deviation image for the time series:


    We can now clearly see the effects of cardiac pulsation - blood vessels stand out, and CSF has higher variance than brain tissue, in agreement with previous good data. We can now also see that the muscles surrounding the eyes have high variance, albeit not quite as high as the eyes themselves. (See this post for further information on standard deviation and TSNR images.)
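    To make the relationship between temporal variance and tSNR concrete, here's a minimal sketch of the per-voxel computation behind those images. The signal values are hypothetical, chosen only to contrast a stable brain voxel with a pulsatile CSF voxel:

    ```python
    import statistics

    def tsnr(series):
        """Temporal SNR of one voxel's time series: mean over standard deviation.
        Voxels dominated by physiological noise (CSF, vessels, eye muscles) show
        high standard deviation and hence low tSNR."""
        return statistics.fmean(series) / statistics.pstdev(series)

    # Hypothetical 20-volume time series: stable brain voxel vs. pulsatile CSF voxel
    brain = [1000 + (2 if i % 2 else -2) for i in range(20)]    # +/- 2 units of "noise"
    csf   = [1000 + (30 if i % 2 else -30) for i in range(20)]  # +/- 30 units

    print(round(tsnr(brain)))   # high tSNR
    print(round(tsnr(csf)))     # low tSNR
    ```

    A standard deviation image is just the denominator of this ratio computed voxelwise over the run; a tSNR image is the full ratio.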

    I find it quite interesting that there is minimal effect of the eye movements on the variance of brain signals in those slices containing the eyes. Phase encoding is anterior-posterior, so we might expect regions of the slices containing eyes to have higher overall variance. Specifically, for eye-containing slices, we might expect twin parallel columns in the A-P direction to have higher variance where the N/2 ghosts (from eyes) will occur. What does it mean that brain regions parallel to eye signals have similar variance to the rest of the brain? Not very much, since this is a case study. But I know that there are some fMRIers who advocate trying to position slices such that little or no eye signal is encompassed within slices, to avoid the effects of eye movement on variance of parallel brain regions altogether. (There are obviously limits to how far one can go with this tactic.) Perhaps, then, it is movement of the entire head - with those twin globes of delightfully high SNR - that leads to variance issues for eye-containing slices, rather than eye movements per se. Let's take a look.


    Head nodding

    Placing foam padding down the sides of your subject's head will do a pretty good job of preventing side-to-side head motion. But movement in the chin-to-chest direction is near impossible to prevent without a bite bar or some other form of skull restraint. (Neck muscles are really strong!) So nine times out of ten... okay, ninety nine times out of a hundred, it's this movement axis that is of primary concern.

    Here is an example of what happens to EPIs when a subject intentionally moves (rotates) in the chin-to-chest direction. As before, there are twenty volumes played at a rate of 5 frames/sec:




    Hmm. What can we say about that? Lots of stuff changes, doesn't it? The anatomical content of each slice clearly changes with the movement, as you would expect for a phenomenon that is essentially through-plane. And if you look carefully you might be able to see the degradation of the shim on high magnetic susceptibility regions - the frontal and temporal lobes, where distortion is highest. Can the standard deviation image decipher anything more subtle?

    The standard deviation image for that time series confirms that the eyes, by virtue of being big signal generators, do indeed generate high variance when the whole head is moved:


    So even if the eyes are kept still relative to the subject's head, movement of the entire head leads to large instability of the eye signals. Not a massive surprise.

    We can also see that the cerebellum and frontal lobes generate high variance, presumably due to degradation of the shim but perhaps also due to the magnitude of displacement of these regions. (An aside: relative to these slices it's difficult to estimate the axis of rotation but it's probably quite close to the center of the brain.) Overall, though, I don't think this image provides any breakthrough insights. Rather, it simply confirms that if the head is rotated then bad stuff happens to the stats, and there are degrees of bad depending on where in the brain one considers. Not the most precise diagnosis.

    There is, however, one additional factor to consider here: slice ordering. In the data shown above, slices were acquired in descending order; that is, contiguously in the head-to-foot direction. But what if slices had been acquired interleaved - odds then evens - as is sometimes done to reduce crosstalk? (There's a section on interleaved versus contiguous slicing in my user training guide/FAQ.) Here's a video showing the effects of a similar head nodding motion to the one above but on interleaved slices (all other parameters held constant):


    In the case of interleaved slices, then, head movements cause a perturbation of the T1 steady state. This leads to a recovery period of something like two to three TRs following each head movement (for a TR of 2 sec and typical T1s at 3 T for brain tissue and CSF in the range 1-3 seconds). There are T1 effects in contiguous slices, too, but the magnitude of the perturbation tends to be smaller. (See Note 2.)
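    The two-to-three TR figure can be sanity-checked with the steady-state recursion for a spoiled gradient echo sequence. This is only a sketch: TR = 2 s and the 78 degree flip come from Note 1, the T1 of 1.4 s is a representative grey matter value at 3 T, and the post-movement starting magnetization is an arbitrary "partially saturated" guess:

    ```python
    import math

    # Longitudinal magnetization between excitations in a spoiled gradient echo:
    #   Mz(n+1) = M0*(1 - E1) + cos(alpha)*E1*Mz(n),   E1 = exp(-TR/T1)
    # Assumed values: TR = 2 s, flip = 78 deg (Note 1), T1 = 1.4 s (grey matter, 3 T)
    TR, T1, alpha = 2.0, 1.4, math.radians(78)
    E1 = math.exp(-TR / T1)
    cosa = math.cos(alpha)

    Mss = (1 - E1) / (1 - cosa * E1)        # steady-state Mz, with M0 = 1

    # A slice displaced into just-excited tissue starts well below steady state;
    # the exact starting point doesn't matter much, so pick a crude saturated value
    Mz = 0.3 * Mss
    for n in range(1, 5):
        Mz = (1 - E1) + cosa * E1 * Mz
        print(n, round(abs(Mz - Mss) / Mss, 4))
    ```

    Each TR the fractional deviation shrinks by a factor cos(alpha)·E1 of roughly 0.05, so the signal is back within about 1% of steady state after two TRs - consistent with the recovery period quoted above.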

    When the head moves in the slice direction with interleaved slices, some slices are displaced into brain regions that were excited earlier in the same TR period. These slices get darker because the spins have had very little time to recover. Conversely, in regions of the brain that get skipped (omitted) during the current TR, the signal will become brighter than normal in the subsequent TR, because those spins will have had more than one TR to recover. We then need to wait for the dynamic equilibrium to re-establish itself (after the movement has stopped) before the signal intensities return to normal. The result is banding in the slice direction during and after head movement, seen directly if the 3D volume is reconstructed and viewed. (There's a little more information and an example available from the CBU wiki.) Banding from T1 effects is also visible in the TSNR image of the 20-volume time series just shown above:



    Before considering the next type of motion I'll make a quick statement about interleaved versus contiguous slices. For identical head movement in the slice direction, interleaved slices will experience prolonged image artifacts compared to contiguous slices. Thus, all other things being equal, and assuming negligible slice-to-slice crosstalk, it is generally the case that contiguous slices are preferred for fMRI.
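    To make the ordering difference explicit, here's a toy sketch of the two acquisition orders. The interleaved convention assumed here is odd-numbered slices first, then even-numbered:

    ```python
    def slice_order(n_slices, interleaved=False, descending=True):
        """Temporal acquisition order of slice indices (0-based) within one TR.
        Interleaved means odd-numbered slices first, then even-numbered
        (1-based convention, i.e. 0-based indices 0,2,4,... then 1,3,5,...)."""
        base = list(range(n_slices))
        order = base[::2] + base[1::2] if interleaved else base
        return order[::-1] if descending else order

    # Contiguous descending, as used for the data in this post (6 slices for brevity)
    print(slice_order(6))
    # Interleaved ascending: spatially adjacent slices are excited ~TR/2 apart,
    # which is why in-slice-direction motion perturbs the T1 steady state so badly
    print(slice_order(6, interleaved=True, descending=False))
    ```

    With contiguous ordering, a displaced slice lands in tissue excited one slice-time earlier; with interleaving, it can land in tissue excited only half a TR ago, hence the larger perturbation.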


    Talking

    Unless your task requires verbal responses there shouldn't be any reason for your subjects to talk during a scan. Even if they have a penchant for talking to themselves outside of the scanner, trying to do it inside is likely to be a frustrating experience, given the scanner noise. Still, one never knows... I've actually had subjects sing to themselves to alleviate boredom. Really.

    Here is what talking at a normal conversational level, with no attempt by the subject to compensate for head movement, does to an EPI time series (of contiguous slices):


    The movement is similar to, although smaller than, that observed with intentional head nodding. This isn't at all surprising, given that talking involves movements of the jaw that are primarily in the same direction as nodding, and that head movements tend to accompany speech for emphasis. But even if the skull were stationary during speech, movements of the jaw itself may be sufficient to cause changes in the shim, modulating signal in the frontal lobes and inferior portions of the brain in particular. Chest movements to support voice production add a further level of complexity. The bottom line: you don't want it. (Using voice responses for your task? See Note 3.)

    Talking of talking, if you interact with your subject a lot between EPI blocks there is a chance your subject's head may end up in a new position between runs. Subjects with a penchant for nodding in the affirmative - try not to do it, it's really hard! - are especially likely to end up displaced. Thus, after extensive conversations, and/or after a period during which the scanner is likely to have cooled from its working steady state, and/or if you suspect that the foam supporting the subject's head has compressed from its starting shape, I would suggest re-shimming before starting the next block, to ensure the ghost level is low and the effects of distortion are minimized. It only takes a few tens of seconds to re-shim, and if a subject has moved several millimeters it could save your data. (Siemens users, see Note 3 in this post for the procedure to force a new shim.)


    Coughing, swallowing, yawning and sneezing

    These four actions are pretty much unavoidable in subjects lying supine, in a relatively cool, dark, often dry and soporific environment. What's more they often aren't exactly subtle, offering the potential for stark head movements.

    In this next video the subject coughed once then swallowed immediately thereafter, resulting in two obvious head displacements:


    The movements are again similar in nature to the intentional head nod, except that the magnitude of displacements is larger, if anything. No surprises here.

    Coughing and sneezing can result in especially hefty movements of the head. Swallowing and yawning tend to cause less dynamic head movements, but the caveat is that most people swallow once every minute or two, and yawning seems to be a by-product of being in an MRI scanner. Someone might go hours between coughs and days between sneezes. Swallowing and yawning, then, are the actions that you might want to pay the most attention to.

    There's really not much you can do except to ask your subjects to own up to coughing or sneezing during a run. You should consider re-shimming if your subject experiences a sneeze or a violent cough. I always suggest to users that if they know or suspect that a subject may have moved his head since the last shim was performed, re-shim. Acquire a new localizer scan and check your slice prescription if you suspect the movement is substantial. If you're studying heavy smokers or other subjects prone to extensive coughing, I have nothing much further to offer you except the very best of luck!

    To combat yawning and fatigue in general, ask your subject to take a couple of easy, deep breaths in between EPI blocks. Try to flush as much CO2 as you can from your subject's blood. As a side benefit you may help to keep your subjects more alert, yielding better overall task performance (and higher activation!) in addition to reducing motion. You might also consider turning on the scanner's bore fan periodically, to reduce CO2 build-up. I wouldn't run the fan continuously, however, because it could cause throat irritation leading to more coughing or swallowing, and dust could lead to sneezing. Dry eyes are also likely to produce fidgeting as a subject blinks to generate tears. So, use the bore fan in occasional, short bursts.

    As for swallowing, a useful tactic can be to ask your subject to swallow just prior to the start of each block of EPI and to be mindful to wait until the noise stops before swallowing again. However, don't excessively focus your subject's attention on it or you'll find your subject can do nothing but swallow!


    Body movements

    And finally, movement of body parts other than the head. There are twin concerns with body and limb movements. The first issue is the direct mechanical effect. It's exceedingly difficult to move an arm or a leg, or even a foot, without that motion being conducted via the spine into the head. Try it. Lie on the floor or a bed and move your ankle in any direction you like. The load on your heel causes your entire body - and your head - to move. It really is that simple.

    Here is what ankle motion looks like in an EPI time series of the head:



    If you have tried the ankle movement test then you won't be surprised to see movement that looks depressingly similar to the intentional head nods shown previously. The driving muscles may be different but the net result for the brain is the same.

    In this test I moved both of my ankles very slightly, as if I was adjusting for comfort. I made the movements slowly and deliberately, trying as hard as I could to keep my head stationary. Yet I could still feel my bum and back moving on the patient bed, and of course I could detect that my head was moving in spite of my best efforts to counteract it.

    The secondary effect of limb/body movements in the scanner is a change in magnetic field due to susceptibility. I've covered this topic briefly before, in relation to GRAPPA. I don't think this effect is nearly as problematic as the direct head movement, especially for single-shot (unaccelerated) EPI, but it's something to keep in mind.

    So, what to do? If your subjects are using button response boxes, put the boxes in a position where only simple finger movements are required. If you're still worried, run some pilot tests on a volunteer and ensure that the way you're setting up doesn't provoke arm or shoulder movements. (Best case, run the tests with yourself as the subject. As recommended by Neuroskeptic on Neuroconscience's blog, testing on yourself is the only true way to understand your experimental setup.) And another simple mistake to avoid: don't just tell your subject not to move his head during the scan! Rather, inform your subjects that all forms of motion (except any required by the experiment, such as responding with button pushes) should be avoided any time the scanner is making a noise. If the subject needs to adjust for comfort, ask her to do so in between EPI blocks. Here are two simple rules to give to your subjects:
    • Scanner noise = No moving whatsoever!
    • No scanner noise = Movement with permission from the experimenter
    Then, as already described for sneezing, etc., if you know or suspect that your subject has moved his head, initiate a re-shim.


    Other handy tactics and things to pay attention to during an experiment

    Whenever I'm watching EPIs as they are acquired, using the inline display window on my Siemens scanner, I do a few things. Firstly, I "scan" the entire set of slices looking for any major changes with the window contrasted to show the brain. Secondly, I pay close attention to the superior slices (assuming axial slices); those that contain just small, nearly circular "patches" of brain. Any movement in the chin-to-chest direction will cause these circular patches to get larger or smaller by a goodly amount. It's quite easy to see; often far easier than looking at ghosts or other signal regions. It's almost as if the brain signal in that end slice is breathing - moving in and out. And finally, I periodically re-contrast the window to show the noise and look for any telltale changes in ghosts. As a general rule, any time your subject moves you will see an increase in the N/2 ghosts. If the ghost increase is short-lived then the movement was short-lived.
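    For offline checks, the same ghost inspection can be reduced to a number. The sketch below is a toy illustration rather than a validated QA metric: it assumes the object occupies the central rows of the phase-encode direction, so that the N/2 ghost of those rows lands half a FOV away:

    ```python
    def ghost_to_signal_ratio(image, n_phase):
        """Mean intensity in the ghost region over mean in the object region.
        The N/2 ghost of row y lands at row (y + n_phase//2) % n_phase, so for
        an object confined to the central rows the ghost occupies the outer rows.
        `image` is a list of rows (rows = phase-encode direction), a toy
        stand-in for one EPI slice."""
        half = n_phase // 2
        centre = range(n_phase // 4, 3 * n_phase // 4)       # object rows (assumed)
        ghost = [(y + half) % n_phase for y in centre]        # their ghost rows
        mean = lambda rows: sum(sum(image[r]) for r in rows) / (len(rows) * len(image[0]))
        return mean(ghost) / mean(centre)

    # Toy 8x8 "slice": bright object in the middle rows, 5% ghost in the outer rows
    N = 8
    img = [[100.0] * N if N // 4 <= y < 3 * N // 4 else [5.0] * N for y in range(N)]
    print(round(ghost_to_signal_ratio(img, N), 3))
    ```

    A sudden jump in such a ratio between volumes flags the same short-lived movements you would spot by re-contrasting the inline display.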

    But is it possible to tell from looking at your EPIs precisely how your subject is moving? As the results above suggest: not really. It's very difficult to differentiate between forms of chin-to-chest rotation, whether it arises from swallowing, fidgety feet or straining to see a target image. Instead, you should use knowledge of your task to try to refine your diagnosis. Thus, if you see movement more frequently than once every ten to twenty seconds, it's unlikely to be the subject swallowing (unless the task involves sipping coffee in the magnet). Take a peek through the magnet room window to see if you can see the soles of your subject's feet moving. (Most people move their feet whenever they move their bum, stretch their back, etc.) You might also simply ask the subject to own up to a particular movement. "Hi! So, I don't suppose you were talking to yourself during that last block, were you?" Be kind, remember that being in an fMRI scanner is far more boring than watching paint dry, and remind your subject not to move at all during the scan. Don't just tell your subject not to move her head! As you have seen, hands, arms, bums, backs, feet... all tend to couple through the skeleton into the old noggin. Bad, bad, bad.

    ______________________



    Notes:

    1.  Siemens Trio/TIM scanner with 12-channel receive-only RF head coil, 33 axial slices, 3 mm slice thickness, 0.3 mm gap, descending slice order, single-shot gradient echo EPI with 64x64 matrix, FOV=224x224 mm, echo spacing = 0.51 ms, anterior-posterior phase encoding, TR=2000 ms, TE=28 ms, 78 degree RF flip angle. The subject was a male in "early middle age," neurologically normal (to the best of his knowledge), and highly experienced with the fMRI environment. His head was supported with a Siemens-supplied foam pad. Head restraint was achieved courtesy of as many curved foam pieces as could be fit between the intercom headphones and the sides of the RF coil; some additional curved foam pieces were placed between the crown of the subject's head and the rear interior surface of the RF coil.

    2.  A further complication for both interleaved and contiguous slices, i.e. for all multislice 2D scans, is that the timing of any head movement relative to the slice number in the TR period determines the actual appearance of the set of slices for that TR. It's obvious, but given the common treatment of each block of 2D slices as a single block of 3D data in post-processing, it's worth stating explicitly. Thus, if your subject just happens to sneeze at the very end of a TR period then you might get lucky and have the pre-sneeze set of slices at one position of the head and the post-sneeze set of slices in a new head position. Motion correction algorithms would have a relatively easy time reconciling the post-sneeze position to the pre-sneeze position. Not so, however, if the sneeze happens in the middle of the TR. Now, some of the slices for this TR are in one head position, the rest in a new position. What does the motion correction algorithm do now? And what about slice timing correction? Which slabs of 3D space were sampled when? There is clearly blurring in the slice direction and it isn't at all obvious which position is "correct." For this reason, some people simply discard volumes of data when an acute movement occurs. But this is another one of those truly massive topics that deserves at least one post of its own, so I shall stop here for now.

    3.  There are microphones, e.g. optical microphones, that can be used safely and effectively in the MRI environment. If you are using such a device then I would recommend training your subjects on a mock scanner or, at a minimum, have them practice speech production with minimal head and jaw movements, before you scan them. At some point I'll do a whole post on speech and fMRI.




    The 3 T gets a new home (05/23/12)

    After a very long wait that spanned two prefabricated buildings - we weren't supposed to call them trailers, some sort of negative connotation - the Henry H. Wheeler, Jr. Brain Imaging Center took its first step into a permanent home yesterday with the move of the BIC's existing 3 T Siemens Trio scanner into one of the magnet bays in the Li Ka Shing Center for Biomedical and Health Sciences (LKS). With space for two 3 T MRIs, a 7 T MRI, a MEG and a host of functional support facilities, including TMS and EEG prep rooms as well as mechanical and electrical workshops, there will be quite a lot of moving in to be done over the years ahead. For the time being, however, the task is to get the very busy Trio back up and scanning as quickly as possible. Here are a few pictorial and video highlights of the magnet move, with a couple of interesting and hopefully educational features indicated.

    The magnet had been ramped down a week before, allowing a lot of preparatory work disconnecting cables and getting access to the removable roof section above the scanner's old home. In this photo you can see the copper foil of the removable roof section, a component of the scanner's Faraday shield (to reject external RF):



    The removable roof section was lifted out by crane:


    Then the magnet, weighing some 32,000 lbs with the patient table sled attached and the cryostat full of liquid helium, was lifted out and staged in an adjacent parking lot:



    Once the crane had been relocated, the magnet was lifted again onto a waiting flatbed truck:


    Here's a view of an MRI magnet you don't see every day. Covers removed, all the functional parts of the RF electronics, the helium refrigeration system, patient bed controllers and all kinds of stuff are revealed to the world:



    After a short drive from the Wellman Court onto Oxford Street in front of Li Ka Shing Center, the magnet was again staged to allow the flatbed truck to depart. One less obstacle in the road!



    The next task was to remove the 68,000 lb access hatch cover located in a sidewalk adjacent to LKS. The hatch is a steel frame covered in concrete, with six lifting points. Eye bolts were put into place ready for the lift:


    Here's the underside of the hatch cover, seen from the corridor inside the Brain Imaging Center space:


    The hatch cover was craned away...


    ...and stored temporarily on the street.


    This left a sizable hole - one large enough to get a 7 T human MRI through - above the corridor of the BIC space:


    Then it was time to lift the magnet into its new digs. Here's a little bit of video of the lift and some photos of the magnet being placed in the BIC corridor:






    Skates were fitted to the steel rails on which the magnet rests:


    The magnet was then rolled along steel plates - don't want to ruin a brand new floor! - down the loooong corridor of the new BIC facility, from the access hatch at the far end to the farthest 3 T bay. A section of the magnet room wall - occupied by three gents in orange, below - had been left open to provide access to the permanent location:


    Four seismic anchor bolts await the magnet's arrival in its new home:



    After some jigging to make the turn and some lifting to clear the sill of the RF shield wall, the magnet ended its day-long journey in its new berth:


    That just left several more hours' work by the riggers to set and level the magnet, followed by some poor chap welding a new quench duct to the top of the magnet (he'd been told he'd be needed at 2 pm, yet was still hanging out reading fiction at nearly 8 pm), but I wouldn't know much about that because it was getting close to my bed time. They worked on late into the evening, though, because by this morning all manner of cables had appeared, an RF screen wall was being finished, and there was frantic activity to prepare the electrical and water connections so that the magnet's cold head (helium refrigerator) could be reconnected as soon as possible:



    So there you have it. One magnet successfully relocated. With luck the device survived its trip and will be in rude good health within a couple of weeks. I get to hang about and spectate for another 7-10 days before I have to do any actual work. Sadly, however, that means I have no excuse not to produce the new facility's revised safety training manual and get a bunch of other admin tasks completed. A change is as good as a rest, right? Mmmm.





    A tour through a real EPI pulse sequence

    In some posts I've got planned it will be important for you to know something about all of the different functional modules that are included in a real EPI pulse sequence. So far in this PFUFA series I've used schematics of the particular segment of the sequence that I was writing about, e.g. the echo train that covers 2D k-space for single-shot EPI. But there comes a time when you need to know about the sequence in its entirety, as it is implemented on a scanner. Why? Because there are various events that I've given short shrift - fat saturation and N/2 ghost correction, for instance - that carry significant temporal overheads in the sequence, and these additional delays obviously affect how quickly one can scan a brain.

    So, without further ado, here is a pulse sequence for fat-suppressed, single-shot gradient echo EPI, as used for fMRI:

    (Click to enlarge.)

    Okay, so it's not the entire pulse sequence. The readout gradient echo train in this diagram has been curtailed after just nine of 64 total gradient echoes that will be acquired, for EPI with a matrix of 64x64 pixels. The omitted 55 echoes are simply clones of the nine echoes that you can see. (Note that there are no additional gradient episodes at the end of this particular EPI sequence; all the crusher gradients occur at the start of the sequence and these are visible in the above figure. More on crusher gradients below.) I should also point out that this is the timing diagram for acquisition of a single 64x64 matrix EPI slice. The pulse sequence as shown would be repeated n times for n slices within each TR. (See Note 1.)
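    As a back-of-envelope illustration of those overheads, consider parameters like the ones used earlier in this blog - 64 echoes at 0.51 ms echo spacing, 33 slices, TR = 2000 ms - which I'll assume apply here:

    ```python
    # Per-slice time budget for the sort of EPI protocol used in this blog
    # (assumed: 64 echoes at 0.51 ms echo spacing, 33 slices, TR = 2000 ms)
    echo_spacing_ms = 0.51
    n_echoes = 64
    n_slices = 33
    tr_ms = 2000.0

    readout_ms = echo_spacing_ms * n_echoes      # EPI echo train duration
    slice_budget_ms = tr_ms / n_slices           # total time available per slice
    overhead_ms = slice_budget_ms - readout_ms   # fat sat, crushers, excitation, delays

    print(round(readout_ms, 2), round(slice_budget_ms, 2), round(overhead_ms, 2))
    ```

    Nearly half of each slice's time budget goes to events other than the readout train itself, which is why fat saturation and ghost-correction overheads matter for how many slices fit into a TR.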


    Interpreting what you see

    Let's first determine what information is being displayed on the figure above. There are five axes, all handily labeled on the far right-hand side of the figure. The top axis is the RF transmit channel; we've got two RF pulses in this sequence. The second axis down is the receiver, or analog-to-digital converter (ADC) channel. The scanner is receiving signals only when there's a rectangle specified on the second axis. Finally, the bottom three axes represent the pulsed field gradients, in the order X, Y, Z.

    Just for fun, let's quickly determine what the scanner is doing in the logical frame of reference, before we delve into the nitty-gritty. The slice selection gradient will occur in concert with an RF excitation pulse, and we have two RF pulses to choose from. Slice selection can't be the first RF pulse because that pulse occurs without any concomitant gradients. Thus, the slice excitation pulse must be the second one and we can deduce that slice selection is along the Z axis, which is the magnet bore axis. We're doing axial slices.

    The read gradient axis is the one that does most of the work during spatial encoding; it's recognizable as a gradient echo train that is coincident with the data sampling (ADC) periods. (The first two diagrams in PFUFA Part Twelve should provide a useful reminder. The read gradients were colored green in that post.) There are a lot of gradient echoes on the X channel, so it's safe to assume that X is the read dimension, making the Y axis the phase encoding dimension by deduction. So, we're doing axial slices, with the read gradient aligned left-right (magnet X axis) and phase encoding aligned anterior-posterior (magnet Y axis) relative to the subject's brain.

    Pulse sequences are generally schematic, but it is standard procedure to try to show the gradient amplitudes with correct intensities as far as possible. Thus, if we look at the X axis for a moment, you can see that the dark and light blue gradient episodes are equal magnitude and area but opposite sign; they are a balanced pair. Furthermore, the blue gradient episodes are larger (magnitude and area) than the green readout gradients. Think of the diagrams as being pseudo-quantitative. In this particular case the diagram displays accurate timing as well as accurate gradient intensities, but that isn't always the case with pulse sequences.

    To interpret the sequence I'm going to break it down into functional blocks (which is, interestingly enough, often how the pulse sequence is actually programmed).


    Fat saturation

    Ignore for a moment the three dark blue gradients that occur first in the pulse sequence diagram. Instead, consider the action of the Gaussian-shaped RF pulse labeled FatSat in the diagram. This RF pulse occurs in the absence of pulsed gradients, making it chemical shift-selective rather than slice selective.

    Fat is a particular problem for EPI because it resonates several hundred Hz away from water, leading to a strong N/2 ghost for subcutaneous (scalp) fat unless the fat resonances are either avoided or, as here, presaturated so that there is minimal fat signal during the subsequent slice excitation and signal readout. (An example of the intense ghosting that occurs in the absence of fat suppression (or some other scheme to avoid fat signals) was shown in the post, Common persistent EPI artifacts: Abnormally high N/2 ghosts (2/2), in the section entitled "No fat suppression.") (See Note 2.)

    How does the fat saturation work? In brief, the Gaussian-shaped RF pulse excites another Gaussian-shaped "notch" of frequencies symmetrically about the transmitter offset frequency, where the transmitter frequency is set at the fat resonances. (See Note 3.) The duration of the Gaussian RF pulse (in the time domain) is chosen so that the band of frequencies excited is broad enough to cover the fat resonances but not so broad as to impact the water resonance a few hundred Hz away. (Recall that there is a reciprocal relationship between the time and frequency domains, so a temporally long RF pulse will excite a narrow band of frequencies, and vice versa.)
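    This time-frequency reciprocity is easy to demonstrate numerically. The sketch below is my own illustration, not scanner code, and the pulse durations are arbitrary: it builds a Gaussian pulse of a given FWHM duration, Fourier transforms it, and measures the FWHM of the excited frequency band. Doubling the duration roughly halves the bandwidth.

```python
import numpy as np

def gaussian_pulse_bandwidth(duration_s, n=8192, window_s=0.2):
    """FWHM bandwidth (Hz) of a Gaussian RF pulse with the given FWHM duration (s)."""
    t = np.linspace(-window_s / 2, window_s / 2, n, endpoint=False)
    sigma = duration_s / (2 * np.sqrt(2 * np.log(2)))  # convert FWHM to std dev
    pulse = np.exp(-t ** 2 / (2 * sigma ** 2))
    spectrum = np.abs(np.fft.fft(pulse))               # excitation profile (magnitude)
    freqs = np.fft.fftfreq(n, d=window_s / n)
    above = freqs[spectrum >= spectrum.max() / 2]      # frequencies above half maximum
    return above.max() - above.min()

bw_short = gaussian_pulse_bandwidth(0.005)  # 5 ms pulse
bw_long = gaussian_pulse_bandwidth(0.010)   # 10 ms pulse: roughly half the bandwidth
```

A 5 ms Gaussian pulse excites a band a couple of hundred Hz wide, comfortably narrower than the fat-water separation at 3 T, which is why a pulse of this sort can saturate fat while leaving water untouched.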

    The width of the RF pulse is carefully set to affect fat resonances without any significant effect on water resonances (see Note 4), and the amplitude is set to achieve a 90 degree (i.e. maximum) excitation of fat. In this way the fat resonances in the scalp are maximally excited and would, in the absence of any other gradients or RF pulses, generate a strange image that would be just a ring of scalp signal. But of course that's exactly the reverse of the intent of the fat saturation scheme, so we need to consider how this RF pulse works in conjunction with the gradients.


    Crusher gradients for fat suppression

    Now let's take a step back and consider those dark blue crusher gradients that occur prior to the fat suppression RF pulse. We'll consider the light blue gradients following that RF pulse at the same time. Notice that the dark and light blue gradients are balanced - equal areas with opposite signs - for each gradient axis. Any signal that exists prior to the fat saturation pulse experiences the dark and light blue gradients as a gradient echo; no net phase is imparted to the magnetization. However, any magnetization created by the fat saturation pulse experiences just the light blue gradients following the pulse, and this causes a net phase shift that is designed to render zero signal at the end of the light blue gradients. (There will be no net signal once the phase variation across a length scale equal to a pixel's dimensions becomes 360 degrees or greater. This is analogous to a T2* relaxation process. See PFUFA Part Seven for a reminder of how gradients impart phase across a sample, and lead to loss of signal intensity.)
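    To put a toy number on that 360-degrees-per-pixel statement, the sketch below (illustrative only, not a simulation of any real crusher scheme) sums unit spin vectors with a linear phase spread across a notional pixel. Once the spread reaches a full turn, the vector sum collapses to essentially zero.

```python
import numpy as np

def net_signal(phase_spread_deg, n_spins=1000):
    """Magnitude of the summed signal from spins with a linear phase spread across a pixel."""
    phases = np.deg2rad(np.linspace(0, phase_spread_deg, n_spins, endpoint=False))
    return abs(np.mean(np.exp(1j * phases)))  # complex sum of unit spin vectors

# No crusher: full signal. A 360 degree spread across the pixel: nothing left.
print(net_signal(0))     # 1.0
print(net_signal(360))   # ~0
```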

    To recap, then, the fat saturation pulse is designed to excite maximally the lipid resonances in the subcutaneous (scalp) fat, and this excitation is followed by a spoiling, or crushing, of the lipid signals by a set of gradients. The net result is (ideally) zero signal from subcutaneous fat, such that there is no fat magnetization available for excitation by the slice selection process that occurs later in the sequence. Provided the time between the fat suppression and slice selection pulses is short relative to the T1 of fat then there will be minimal recovery of fat spins. This assumption is usually well satisfied; the time between the two RF pulses is typically less than 10 ms whereas the T1 for fat is several hundred milliseconds at 3 T.


    Crusher gradients to eliminate signal from prior slice excitations

    There is another set of crusher gradients in the pulse sequence, following the light blue fat signal crushers, and these are colored purple in the diagram. Now, although these gradients would also lead to dephasing of signal generated by the preceding fat suppression RF pulse, the primary intent is to dephase any signal that remains from the previous slice excitation. That's because signal surviving from a prior slice would experience the dark and light blue crusher gradients as a balanced set - no net dephasing. (Water signal surviving from a prior slice would also be unaffected by the fat suppression pulse of the current slice because the fat suppression pulse is restricted to a narrow band of fat resonance frequencies.) Thus, there is a need to specifically target prior slice water signal(s).

    Now, there are many ways that signals can survive from earlier slices. The simplest residual signal to understand is that left over from the very last slice just acquired, i.e. signal that survived because its T2* is long relative to the readout time. (There will be essentially no signal remaining once the time following slice excitation reaches three times T2* for each signal in the sample.) CSF in ventricles and some parts of brain tissue can easily pass this threshold, and thus provide spurious signal for the subsequent slice acquisition.
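    As a quick sanity check on that three-times-T2* rule of thumb, here is a back-of-envelope calculation. The T2* values are illustrative round numbers, not measurements:

```python
import math

def residual_fraction(t_ms, t2star_ms):
    """Fraction of transverse signal remaining t_ms after excitation (simple exponential decay)."""
    return math.exp(-t_ms / t2star_ms)

# After three time constants only ~5% remains...
print(round(residual_fraction(90, 30), 3))   # 0.05 for an illustrative T2* of 30 ms at t = 90 ms
# ...but a long-T2* signal such as CSF can still be substantial at the next slice excitation
print(round(residual_fraction(60, 200), 2))  # 0.74 for an illustrative CSF T2* of 200 ms at t = 60 ms
```

With an inter-slice interval of a few tens of milliseconds, a signal with a T2* of a couple of hundred milliseconds has barely decayed at all, which is precisely why the crushers are needed.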

    But there are other ways that signals can survive: from multiple prior RF excitations, as spin or stimulated echoes. The precise "coherence pathways," as they're called, aren't especially important for today's post (but are critical when setting up crusher gradients in a real sequence). We have to assume that the person who wrote and tested the sequence took into account these various contaminating signal pathways, and set up the crusher gradients appropriately. (See Note 5.)

    Why are the purple crusher gradients operating on only two of the three axes, and why with negative sign? The short answer is efficacy. Using a gradient on the read channel (X in our example) might prove to be less effective than a gradient on Y or Z, because there are already multiple gradient echoes being (intentionally) played out along X, as we shall see in a moment. Adding a crusher gradient along X introduces the chance of eliminating some spurious signals while enhancing others, unless the crusher gradient area is made large relative to the readout gradient episodes. That adds time and can increase N/2 ghosts arising from gradient eddy currents. (See Note 6.)

    The signs of the crusher gradients along Y and Z, as well as the relative intensities/areas of those gradients, are set to minimize spurious signals while simultaneously minimizing any eddy current effects. Other crusher gradient schemes are possible, even advisable, if it is determined that prior slice signal(s) are surviving the crushers as shown here, but the principles of operation are the same. (There are multiple "coherence pathways" that may be established when so many slice excitation RF pulses are applied so quickly, relative to T1 and T2 values for the signals in the head. A slice excitation pulse occurs every 40-60 ms, which is comparable to brain T2 values and far shorter than T1 values. This almost guarantees that spin and stimulated echoes will form in the steady state, unless crusher gradients are used to disrupt them.)


    Slice selection

    The gradients for slice selection and refocusing are colored orange in the figure. In concert with the slice selection gradient is a slice-selective RF pulse; a filtered sinc shape, which excites an approximate square in the frequency domain. (Slice selection was introduced in PFUFA Part Eight.) The magnetization excited by the RF pulse is created in the presence of a magnetic field gradient, and this causes an undesirable phase gradient in addition to the (desired) frequency-selective excitation. Thus, we need to rephase (eliminate) this phase gradient by applying a correction gradient episode, which is the positive gradient in orange following the slice selection itself. As described in PFUFA Part Eight, this is a simple example of a gradient echo. At the end of the rephasing gradient we have what we want: a slice of coherent magnetization with no phase gradient across it.

    The sign of the slice select gradient doesn't matter; the slice selection gradient just happens to be negative here. Provided the refocusing gradient following the excitation pulse has the opposite sign a gradient echo is created. (See Note 1 for more information on how the different slices within TR differ only in one aspect. There may be practical benefits for using a negative rather than a positive slice selection gradient, but it isn't usually something a routine user has control over. Compared to other parameters that can be controlled, the benefits or otherwise aren't huge and it's not worth worrying whether your EPI sequence has negative or positive slice selection gradients. If that situation ever changes then rest assured I'll write a blog post on it!)


    N/2 ghost correction echoes

    The next event in the sequence is the acquisition of three gradient echoes for the purposes of N/2 ghost correction. (See the section entitled "Ghosting" in PFUFA Part Twelve.) The gradient episodes are shown in bright yellow while the concomitant acquisition periods are shown in pale yellow. Note that these three echoes are acquired in the absence of any phase encoding gradients; the k-space trajectory is restricted to the k-read axis, as we saw in the one-dimensional k-space representation of a gradient echo in PFUFA Part Nine.

    So what's the point of these three echoes, and why are they restricted to one-dimensional projections of the read axis? If you go back to the explanation of how EPI ghosting arises in PFUFA Part Twelve you will see how the offsets alternate in a zigzag pattern across the phase encoding dimension. Thus, in principle, all we need to do is get a measurement of the amount of zig and the amount of zag and we can correct the odd and even lines of 2D k-space, leaving no zigzag offsets whatsoever. The three echoes give us an estimate of the zigzag by forming two comparisons: the second echo compared to the first echo estimates the zig, the third echo compared to the second echo estimates the zag. The zigzag estimates are then essentially subtracted out of the phase-encoded gradient echoes that will be acquired during the 64-echo readout train to come. (See Note 7.)
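    To make the principle concrete, here is a toy version of the estimation step in Python. This is not the vendor's algorithm: it simply shows that a shift of an echo in k-space appears as a linear phase ramp across its 1D projection, and that fitting the ramp recovers the shift. The same idea underlies the zig and zag estimates formed from the three correction echoes.

```python
import numpy as np

def estimate_kspace_shift(echo_ref, echo_shifted):
    """Estimate the relative k-space shift (in samples) between two 1D echoes.

    FT each echo to a projection; the phase of their product-conjugate
    is a linear ramp whose slope encodes the k-space shift.
    """
    n = len(echo_ref)
    proj_ref = np.fft.fftshift(np.fft.fft(echo_ref))
    proj_sh = np.fft.fftshift(np.fft.fft(echo_shifted))
    ratio = proj_sh * np.conj(proj_ref)
    mask = np.abs(ratio) > 0.1 * np.abs(ratio).max()  # fit only where there is signal
    x = np.arange(n) - n // 2
    phase = np.unwrap(np.angle(ratio[mask]))
    slope = np.polyfit(x[mask], phase, 1)[0]          # radians per image-space sample
    return -slope * n / (2 * np.pi)

n = 128
k = np.arange(n) - n // 2
echo_ref = np.fft.ifftshift(np.exp(-(k / 6.0) ** 2))  # idealized Gaussian echo
echo_shifted = np.roll(echo_ref, 3)                   # the "zig": a 3-sample k-space shift
print(round(estimate_kspace_shift(echo_ref, echo_shifted), 2))  # 3.0
```

In a real correction the fitted constant and linear phase terms are subtracted from the odd (or even) k-space lines before the 2D FT; the toy above only recovers the shift itself.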


    Phase encode dephasing gradient

    Now we come to the 2D k-space readout. (The 2D k-space trajectory for EPI was explained in PFUFA Part Twelve.) The three phase correction echoes have left the magnetization at the positive k-read position; it wasn't necessary to refocus the latter half of the final phase correction echo period because we need to start the 2D k-space readout on one side of the readout k-space dimension. But we do need to shift our 2D readout starting point in the phase encoding dimension, so that we are in one corner of 2D k-space before we zip back and forth across the 2D k-space plane. Accordingly, a large red triangle of phase encoding gradient - often referred to as the dephasing gradient - moves us in the positive k-phase direction, placing us in the (+k-read, +k-phase) corner of the 2D k-space plane, all ready for the 2D readout echo train.


    Readout gradient echoes

    We've come to the final act of the EPI pulse sequence: acquisition of the rapid back and forth journey across a 2D k-space plane. It's difficult to see in the figure, but each alternating positive and negative readout gradient, colored dark green (with concomitant data acquisition periods colored pale green), has a small negative red triangular blip of phase encoding between them. I've added red dotted lines to help you line up the events. Note that the red gradient triangles occur in between the data readout periods - the phase encoding triangles are ramped at the same time as the readout gradients are switched quickly from positive to negative or from negative to positive. (See Note 8.)

    If the final image matrix is 64x64 pixels then the area of thirty-two small red triangles is equivalent to the single large (positive) dephasing gradient in the phase encode direction, causing the center of k-space to occur after 32 readout echoes in the train. (Again, only the first nine echoes are shown in the figure.)


    Relative duration of functional modules in the EPI sequence

    Now that you've got an appreciation of the different functional events in a real EPI sequence you can start to assess which events take the majority of the time for each 2D image. (The figure is displayed to scale.) You should be able to recognize that the fat saturation pulse and its associated crusher gradients take some 3-4 times longer than the slice selection does. The three ghost correction echoes add only a small temporal overhead, about the same duration as the slice selection. But the bulk of the time to prepare and acquire a single EPI slice is spent reading out the spatial information during the multi-echo readout train. The nine echoes shown in the readout train already take as long as the fat saturation scheme, and we have 55 more echoes to go, making the total readout (of 64 echoes) at least six times longer than the next longest functional module of the sequence. If we're looking to save time, then, the first place to look is the multi-echo readout train!
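    To put rough numbers on the budget, here is an illustrative tally. Every duration below is an assumption chosen to be plausible for a sequence of this type (a typical echo spacing of 0.5 ms is assumed), not a measurement from any particular scanner:

```python
# Illustrative per-slice time budget for a 64x64 single-shot EPI acquisition.
# All durations are assumptions (ms), not measured values.
modules = {
    "fat saturation + crushers": 5.0,
    "slice selection + refocusing": 1.5,
    "ghost correction echoes": 1.5,
    "phase-encode dephaser": 0.3,
    "readout train (64 echoes x 0.5 ms echo spacing)": 64 * 0.5,
}
total = sum(modules.values())
for name, ms in modules.items():
    print(f"{name}: {ms:.1f} ms ({100 * ms / total:.0f}% of slice time)")
```

Even with these crude numbers the readout train consumes the large majority of the per-slice time, which is why acceleration methods target it first.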

    In the next few posts of this series (and some tangential posts to them) I want to start to consider "go faster" methods that you might want to consider for fMRI. As we consider each option we will need to refer back to this real EPI sequence, in order to comprehend the temporal savings on offer.

    ______________________



    Notes:

    1.  I'm not going to get into the details here because it doesn't affect sequence timing, but for those of you wondering, each slice of the n slices in TR differs in only one aspect: the "carrier," or central frequency, of the RF excitation pulse is shifted for each slice by an amount sufficient to shift the excitation under the slice select gradient - on the Z axis here - to a new spatial position. Thus, the pulse sequence as displayed would be identical for any of the n slices in TR, because the RF excitation carrier frequency isn't displayed. Now, there are EPI pulse sequence variants that are exceptions to my prior statement - where there might be a slightly different gradient scheme for odd or even slices, say - but they're not common. For example, an option might be to use crusher gradients that alternate polarity with each successive slice; positive crushers for odd slices, negative crushers for even slices. But these aren't issues that we need to be concerned with today. I may discuss these issues in a future post, dedicated to crusher gradients.

    2.  Chemical shift-selective RF saturation - or fatsat for short - is only one way to avoid the intense N/2 ghosts from fat. Another commonly used method, especially on GE scanners, is to use a spatial-spectral RF pulse for excitation. This pulse is designed to excite a 2D slab of water signal only, avoiding excitation of the fat signal in that slab. Yet another method, used in anatomical scanning but not so much for EPI because it is slow, is to use a T1 inversion null, i.e. an inversion recovery sequence where the post-inversion delay is set so that the fat signals are passing through the T1 null at the time of slice excitation. Yet another scheme is known as "the Dixon method," after its inventor, and this uses a timed phase difference between the water and the fat spins based on their chemical shift differences. Choices, choices! I may do a separate post on some or all of the common fat avoidance schemes and include a comprehensive review of the pros and cons. But the bottom line for today is that fat suppression is generally regarded as the most robust and efficacious of the methods when applied to single-shot EPI for fMRI. A spatial-spectral pulse can work pretty well, too, but it tends to be limited to fairly thick slices compared to a standard (sinc-shaped) RF excitation applied following a fatsat pulse.

    3.  The Fourier transform of a Gaussian is another Gaussian, as explained under the Properties section of this wiki page. The transmitter frequency of the time domain Gaussian-shaped RF pulse (where the Gaussian shape is achieved by amplitude modulation of RF intensity) is placed at or near the major fat resonance, at around 1.32 parts per million (ppm), where the ppm scale is referenced to the chemical shift of the proton resonance of tetramethylsilane (TMS) at 0.0 ppm. Relative to TMS, water protons resonate at approximately 4.7 ppm. Thus, the frequency difference between water protons and fat protons is approximately (4.7 - 1.32)ppm * 127 MHz = 429 Hz at 3 T.
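    The arithmetic in Note 3 generalizes to any field strength. Here is a small helper of my own, using the standard proton gyromagnetic ratio of 42.58 MHz/T (the note used a rounded 127 MHz for 3 T, hence its slightly smaller answer of 429 Hz):

```python
def fat_water_offset_hz(b0_tesla, gamma_mhz_per_t=42.58, water_ppm=4.7, fat_ppm=1.32):
    """Fat-water frequency separation in Hz at a given field strength."""
    larmor_mhz = gamma_mhz_per_t * b0_tesla
    return (water_ppm - fat_ppm) * larmor_mhz  # ppm difference x MHz = Hz

print(round(fat_water_offset_hz(3.0)))   # 432 Hz at 3 T
print(round(fat_water_offset_hz(1.5)))   # 216 Hz at 1.5 T
```

Note how the separation scales linearly with B0: fat suppression has more frequency headroom at 3 T than at 1.5 T, but the N/2 ghost from unsuppressed fat is correspondingly worse.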

    4.  Some of you may be aware of something called "magnetization transfer" contrast (MTC), which is a mechanism whereby an off-resonance RF pulse (off-resonance relative to the usual water resonance at 4.7 ppm) is used to presaturate bound water spins. This pulse is followed by a delay that allows the bound water spins to exchange with free water spins, thereby decreasing slightly the total amount of free water signal available to be imaged. MTC can be a useful anatomical contrast mechanism, but it has little application for fMRI. Except that the fat saturation pulse will actually generate a teeny bit of MTC in addition to fat suppression because it is an off-resonance pulse with respect to free water! Thus, if you were to compare the gray/white/CSF anatomical contrast with and without fat suppression enabled, you should expect to see differences (leaving aside the N/2 ghost differences) because CSF exhibits no MTC while white matter, possessing a lot of bound water spins (because of myelin) exhibits stronger MTC than gray matter. Is this MTC of consequence for fMRI? No, not really. I just wanted to be complete! And talking of being complete, there's another source of MTC, too: other slice excitation RF pulses than the slice presently being considered. When an RF pulse is on resonance for one slab of water spins, it is off resonance for the "unexcited" water spins. I've used quotation marks because, strictly speaking, only the mobile, free water protons are off resonance outside of the excited slice. The broader resonances of bound water protons can still be partially excited by these other slice excitation pulses. Thus, if you do a single slice excitation with a TR of, say, 2 seconds and compare the gray/white/CSF contrast to that same slice when multiple slices are being excited in the same TR, you can expect to see a subtle contrast difference. MTC at work again! But again, it's of no practical consequence for fMRI.

    5.  I happen to know that there are limitations in the crusher gradient scheme as shown in this sequence, but these limitations don't, as far as I know, show up in the conventional ways that this EPI sequence is applied for fMRI. The limitations can show up if one is using a 32-channel RF coil, and there may be circumstances when they become a concern in some phantom studies, where the overall signal level can be much higher than is typically seen in a brain. I will cover these limitations in detail in a dedicated post if it ever becomes necessary, but right now it's not an issue that's high on my priority list.

    6.  Eddy currents are residual magnetic fields induced in the metallic components of the scanner, such as the steel cryostat that holds the superconducting magnet, and even the copper components of the receive RF coil. These spurious time-dependent magnetic fields are also spatial gradients, and they are created by induction when the main imaging gradients are switched on or off. A lot of engineering goes into a modern scanner to eliminate eddy currents, but it's physically impossible to remove all of the effects. And, as a general rule, the larger the pulsed imaging gradient amplitude the larger the eddy current it leaves in its wake. I don't presently have plans to do a post on eddy currents, but I may well do one eventually.

    7.  There are other ways to correct the phase errors across the phase encoding dimension of EPI k-space, some of which use similar echo schemes and some that attempt more involved estimates of the odd/even offsets. It's a big subject that may become the focus of a separate blog post at some point. For now, however, it suffices to point out one of the experimental limitations of the three-echo scheme as presented. Note that the three correction echoes occur earlier in the sequence - that is, closer to the slice excitation - than the 2D readout period. Magnetic susceptibility gradients will therefore affect the correction echoes and the imaging readout train echoes differently; T2* effects will differ between the two periods. Furthermore, the assumption in this scheme is that any errors that are present during the 2D readout train are also present (with the same properties, such as magnitude) during the three correction echoes. This assumption may be invalidated by subject motion, eddy currents, the effects of RF interference and other noise sources, and these differing contaminants limit the practical benefits of the N/2 ghost correction. Some error sources simply cannot be estimated properly by the scheme, leading to residual ghosts that aren't fixed.

    8.  Ramp sampling is often used to accelerate the total 2D readout, but I won't go into details here. I have another post planned on ramp sampling because there are performance issues. All that matters here is that the small red triangles happen in between data readout periods, whether or not ramp sampling is being used.




    ______________________
    Siemens slice ordering (posted 07/06/12, 10:33)
    I've heard on the wind that there is still confusion or even a total lack of awareness of the change in slice ordering for interleaved slices when going from an odd number to an even number of slices, or vice versa. It makes a big difference for slice timing correction. So I thought I'd post below a section from my user training guide/FAQ as a ready reference. Note that as far as I know this change in slice ordering is only an issue for Siemens scanners running VB15 or VB17 software, I can't comment on VD11 or other versions, and I haven't actually tested any scanner platform except a TIM Trio. Furthermore, it's only an issue if you're using interleaved slices. If anyone has additional information, especially if it conflicts with the situation posted here, then the field would probably appreciate a comment!

    __________________


    In what order does the scanner acquire EPI slices?

    There are three options for slice ordering for EPI. To understand the ordering you first need to know the Siemens reference frame for the slice axis: the negative direction is [Right, Anterior, Foot] and the positive direction is [Left, Posterior, Head]. The modes are then:
    • Ascending - In this mode, slices are acquired from the negative direction to the positive direction.
    • Descending - In this mode, slices are acquired from the positive direction to the negative direction.
    • Interleaved - In this mode, the order of acquisition depends on the number of slices acquired:
      • If there is an odd number of slices, say 27, the slices will be collected as:
    1 3 5 7 9 11 13 15 17 19 21 23 25 27 2 4 6 8 10 12 14 16 18 20 22 24 26.
      • If there is an even number of slices (say 28) the slices will be collected as:
    2 4 6 8 10 12 14 16 18 20 22 24 26 28 1 3 5 7 9 11 13 15 17 19 21 23 25 27.

    Interleaved always goes in the negative to positive direction, e.g. foot-to-head for transverse slices.

    So, if you are doing 28 interleaved axial slices the order will be evens then odds in the foot-to-head direction. 27 interleaved axial slices would also be acquired in the foot-to-head direction but would be in the order odds then evens. If you switch to 28 descending axial slices the acquisition order will become 1,2,3,4,5…28 and the direction will swap to being head-to-foot.
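    For scripting slice timing corrections, the rules above can be captured in a few lines. This is my own sketch of the interleaved ordering as described for VB15/VB17, not vendor code, so verify it against your own scanner before relying on it:

```python
def siemens_interleaved_order(n_slices):
    """Acquisition order for the Siemens interleaved mode described above.

    Odd slice counts start on slice 1 (odds first); even counts start on
    slice 2 (evens first). Slice 1 is at the most negative position,
    e.g. the most inferior slice for axial acquisitions.
    """
    odds = list(range(1, n_slices + 1, 2))
    evens = list(range(2, n_slices + 1, 2))
    return odds + evens if n_slices % 2 else evens + odds

print(siemens_interleaved_order(5))  # [1, 3, 5, 2, 4]
print(siemens_interleaved_order(6))  # [2, 4, 6, 1, 3, 5]
```

Feeding the wrong variant to a slice timing correction shifts every slice's assumed acquisition time by half a TR, which is exactly the confusion this post is trying to head off.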


    ______________________


    My colleague at MathematiCal Neuroimaging and I have been discussing what we see as flaws or limitations in current functional MRI scanners and methods, and what the future might look like were there ways to change things. So, in part to force us to consider each limitation with more rigor, and in part to stimulate thought and even activity within the neuroimaging community towards a brighter future, we decided to start a new series of posts that we'll cross-reference on our blogs. This blog will focus on the hand-wavy, conceptual side of things while at MathematiCal Neuroimaging you'll find the formal details and the mathematics.

    We have loose plans at the moment to address the following topics: magnetic field strength considerations, gradient coil design considerations, RF coil design considerations, pulse sequences, contrast mechanisms, and motion and motion correction. We're going to hit a topic based on our developing interests and the issues that our local user community brings to us, so apologies if your fave doesn't actually appear in a post for months or years to come.


    "You wanna go where? I wouldn't start from here, mate."

    Blogs seem like the perfect vehicle for idle speculation about a fantasy future. The issues and limitations are very real, however, so that's where we will initiate the discussions. Then, wherever possible, we will gladly speculate on potential solutions and offer our opinions on the solutions that seem apparent today. But we're not going to try to predict the future; we will invariably be wrong. That would also be beside the point. What we want to do is motivate researchers, engineers and scanner vendors to consider the manifold ways an fMRI scanner and fMRI methods might evolve.

    Note that in the last paragraph I referred specifically to an "fMRI scanner." A moment's consideration, however, reveals that most of the technology used for fMRI didn't arise out of dedicated efforts to produce a functional brain imager per se. Instead, we got lucky. Scanners are designed and built as clinical devices (worldwide sales in the hundreds to thousands) and not research tools for neuroscience (worldwide sales in the tens per year for pure research applications). A typical MRI scanner has compromises due to expense, size of subjects, stray magnetic field, applicability of methods to (paid) clinical markets, etc. Other forces are at work besides the quality and utility of fMRI. And these forces can be a mixed blessing.

    Thus, part of the motivation for writing this post series is to provoke consideration of alternative current technologies; hardware or methods that exist right now but for whatever reason aren't available on the scanner you use for fMRI. Perhaps there are simple changes that can benefit fMRI applications even if these changes compromise a clinical application. For some facilities, like mine, that would be an acceptable trade.


    What's in a name?

    On this blog I'll use the moniker i-fMRI to label these op-ed posts. You can interpret the i however you like. Mathematicians might want to consider an imaginary scanner. Engineers might want to consider an impractical scanner. (This variant happens to be my preference.) Economists and business types might think of an inflationary fMRI scanner, because it's likely that the developments we seek will only drive the cost up, not down. And you neuroscientists? Well, we hope you'll consider your ideal fMRI scanner.

    (Apple, if you're reading this - too late. We already sold the i-fMRI trademark to some company in China. Sorry.)


    ______________________


    (Thanks to Micah Allen for the original Tweet and to Craig Bennett for the Retweet.)


    If you do fMRI you should read this paper by Joshua Carp asap:

    "The secret lives of experiments: Methods reporting in the fMRI literature."

    It's a fascinating and sometimes troubling view of fMRI as a scientific method. Doubtless there will be many reviews of this paper and hopefully a few suggestions of ways to improve our lot. I'm hoping other bloggers take a stab at it, especially on the post-processing and stats/modeling issues.

    At the end the author suggests adopting some sort of new reporting structure. I concur. We have many variables for sure, but not an infinite number. With a little thought we could devise a simple, logical reporting structure that could be decoded by a reader just as a header can be parsed from a structured file. (DICOM and numerous other file formats manage it; you'd think we could do it too!)

    To get things started I propose a shorthand notation for the acquisition side of the methods; this is the only part I'm intimately involved with. All we need to do is make an exhaustive list of the parameters and sequence options that can be used for fMRI, then sort them into a logical order and decide on how to encode each one. Thus, if I am doing single-shot EPI on a 3 T Siemens TIM/Trio with a 12-channel receive-only head coil, 150 volumes, two dummy scans, a TR of 2000 ms, TE of 30 ms, 30 descending 3 mm slices with 10% gap, echo spacing 0.50 ms, 22 degrees axial-coronal prescription, FOV 22.4x22.4 cm, 64x64 matrix, etc. then I might have a reporting string that looks something like this:

    3T/SIEM/TRIO/VB17/12CH/TR2000/TE30/150V/2DS/30SLD/THK3/GAP0.3/ESP050/22AXCOR/FOV224x224/MAT64x64

    Interleaved or ascending slices? Well, SLI or SLA, of course! 

    Next we add in options for parallel imaging, then options for inline motion correction such as PACE, and extend the string until we have exhausted all the options that Siemens has to offer for EPI. All the information is available from the scanner, much of it is included in the data header.

    But that's just the first pass. Now we consider a GE running spiral, then we consider a GE running SENSE-EPI, then a Philips running SENSE-EPI, etc. Sure, it's a teeny bit involved but I'm pretty sure it wouldn't take a whole lot of work to collect all the information used in 99% of the fMRI studies out there. Some of the stuff that could be included is rarely if ever reported, so we could actually be doing a whole lot better than even the most thorough methods sections today. Did you notice how I included the software version in my Siemens string example above? VB17? I could even include the specific type of shimming routine used, even the number and type of shim coils!

    If an option is unused then it is simply included with a blank entry: /-/ And if we include a few well-positioned blanks in the sequence for future development then we can insert some options and append those we can't envisage today. With sufficient thought we could encapsulate everything that is required to replicate a study in a few lines of text in a process that should see us through the next several years. (We just review and revise the reporting structure periodically, and we naturally include a version number at the very start so that we immediately know what we're looking at!)
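    As a proof of concept, here is how trivially such a string could be generated. The field names and their order below are invented for illustration only; a real standard would need exactly the community thrashing-out described above:

```python
# Hypothetical encoder for the acquisition-reporting string proposed above.
# The field list, ordering, and separators are illustrative inventions.
FIELDS = ["field", "vendor", "model", "software", "coil", "tr", "te",
          "volumes", "dummies", "slices", "thickness", "gap", "esp",
          "prescription", "fov", "matrix"]

def encode_report(params):
    """Join parameter tokens with '/', writing '-' for any unused option."""
    return "/".join(str(params.get(f, "-")) for f in FIELDS)

example = {
    "field": "3T", "vendor": "SIEM", "model": "TRIO", "software": "VB17",
    "coil": "12CH", "tr": "TR2000", "te": "TE30", "volumes": "150V",
    "dummies": "2DS", "slices": "30SLD", "thickness": "THK3",
    "gap": "GAP0.3", "esp": "ESP050", "prescription": "22AXCOR",
    "fov": "FOV224x224", "matrix": "MAT64x64",
}
print(encode_report(example))  # reproduces the example string above
```

Decoding is the same operation in reverse: split on '/' and zip against the versioned field list, which is why a version token at the front of the string matters.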

    There, that's my contribution to the common good for today. I just made up a silly syntax by way of example. The precise separators, use of decimal points, etc. would have to be thrashed out. But if this effort has legs then count me in as a willing participant in capturing the acquisition side of this business. We clearly need to do better for a litany of reasons. One of them is purely selfish: I find it hard or impossible to evaluate many fMRI papers for want of just a few key parameters. I think we can fix that. We really don't have an excuse not to.



    ______________________


    An ideal fMRI scanner might have the ability to update some scan parameters on-the-fly, in order to reduce or eliminate the effects of subject motion. Today this approach is commonly referred to as "prospective motion correction," because the idea is to adapt the acquisition so that (some of) the effects of motion are never recorded in the data. This is in contrast to the routinely employed retrospective schemes, such as an affine registration algorithm applied during post-processing, i.e. between the acquisition and the stats/modeling. (Those with a stats/modeling-centric view of the fMRI pipeline tend to call such steps "pre-processing.")

    On the face of it, ameliorating motion effects by not permitting them to be recorded in the time series data is a wonderful idea. Indeed, as the subtitle to this blog attests, I am a huge fan of fixes applied during the acquisition rather than waiting until afterwards to try to post-process away unwanted effects. But this preference assumes that any method actually works, and works robustly, in everyday use. For sure there will be limitations and compromises, yet the central question is whether the benefits outweigh the costs. In the specific case of prospective motion correction, then, does a scheme (a) eliminate the need to use retrospective motion correction, and (b) does it reduce the effects of motion without bizarre failure modes that can't be predicted or circumvented easily?

    A good place to begin evaluating prospective motion correction schemes - indeed, all motion correction schemes - is to first assess their vulnerabilities. It's no good if the act of fixing one part of the acquisition introduces an instability elsewhere. Failure modes should be benign. Below, I list the major hurdles for motion correction schemes to overcome, then I consider how elaborate any solutions might need to be. The goal is to decide whether - or when - prospective motion correction can be considered better than the alternative (default) approach of trying to limit all subject motion, and deal with the consequences in post-processing.


    What do we mean by motion correction anyway?

    As conducted today, motion correction applied during post-processing generally refers to an affine or sometimes a non-linear registration algorithm that seeks to maintain a constant anatomical content in a stack of slices throughout a time series acquisition. Prospective motion correction generally refers to the same goal: conserving the anatomical content over time. But, as is well known, there are concomitant changes in the imaging signal, and perhaps the noise, when a head moves inside the magnet. Other signal changes that are driven by motion may remain in the time series data after "correction." Indeed, depending on the cost function being used, the performance of the motion correction algorithm to maintain constant anatomy over time may be compromised by these concomitant modulations.

    Now, we obviously want to keep the anatomical content of a particular voxel constant through time or we have a big problem for analysis! But as a goal we should use a more restrictive definition for an ideal motion correction method: after correction we seek the elimination of all motion-driven signal (and noise) modulations. The only signal changes remaining should be neurally-driven BOLD changes (if we're using BOLD contrast, which I assume in this post) and "physiologic noise" that isn't strongly coupled to head (skull) motion. (Accounting for physiologic noise is usually treated separately. That's the assumption I'll use in this post, although at a very fine spatial scale it's clear that physiologic noise is another form of motion sensitivity.)


    Motion sensitivities in fMRI experiments

    A useful first task is to consider all the substantial signal changes in a time series acquisition that can be driven by subject motion. What signal changes are concomitant with changes of anatomical content as the brain moves relative to the imaging volume? How complicated is this motion sensitivity? What aspects of the signal changes will require hardware upgrades to the scanner, and/or pulse sequence modifications in order to negate them? And are these capabilities already designed into a modern scanner or will they require substantial re-design? These are the questions to keep in mind as we review the major motion sensitivities.


    T1 steady state perturbation

    EPI for fMRI is usually acquired with significant T1 weighting because the TR is insufficient to permit complete T1 relaxation for the excitation flip angle used. (We acquire EPI in a time series so a steady state is achieved after three or four TRs, a condition usually established by some dummy EPI scans at the start of the time series.) Any displacement in the through-slice direction will cause perturbations to this T1 steady state. Perturbation of the signals from their T1 steady state causes bright/dark artifacts in the current volume of data, and possibly some 'hangover' artifacts in one or two later volumes as the steady state is reestablished (assuming there is no further motion).
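
    The approach to the T1 steady state, and the bright transient that follows an influx of fully relaxed spins, can be sketched with the standard spoiled recovery recursion. The T1, TR and flip angle below are illustrative values I've picked for the sketch, not properties of any particular sequence:

```python
import math

def signal_series(flip_deg, tr, t1, n_trs, mz0=1.0):
    """Signal generated by each excitation in a train of RF pulses,
    using the spoiled recursion: saturation by the pulse, then T1 recovery."""
    alpha = math.radians(flip_deg)
    e1 = math.exp(-tr / t1)
    mz, signals = mz0, []
    for _ in range(n_trs):
        signals.append(mz * math.sin(alpha))          # signal from this pulse
        mz = mz * math.cos(alpha) * e1 + (1.0 - e1)   # recovery toward M0
    return signals

# Illustrative numbers: T1 ~ 1.3 s (3 T gray matter), TR = 2 s, 60 deg flip.
s = signal_series(flip_deg=60, tr=2.0, t1=1.3, n_trs=8)
print([round(x, 3) for x in s])
# The first few excitations are brighter; the series settles within ~3-4 TRs,
# which is what the dummy scans discard. A through-slice displacement that
# brings fully relaxed spins into the slice re-creates that bright transient.
```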

    There are a few simple tactics that can lessen, but won't eliminate completely, the effects of through-slice motion on the T1 steady state, including the use of contiguous rather than interleaved slices, and a small RF flip angle. With respect to slice ordering, brief, acute movements along a contiguous slice axis tend to restrict perturbation of the T1 steady state to just one or two slices whereas the same movement along an interleaved slice axis will tend to produce banding across the entire slice dimension.
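
    One way to see why the ordering matters is to compare the intra-TR excitation times of spatially adjacent slices under the two schemes. The numbers below (12 slices, TR = 2 s, ascending contiguous versus odd-then-even interleave) are made up for illustration; the point is the timing offset a one-slice through-plane shift imposes on the tissue that lands in the neighboring slice:

```python
# Intra-TR excitation times for each spatial slice position, two orderings.
n_slices, tr = 12, 2.0
ts = tr / n_slices  # time per slice acquisition

contiguous  = list(range(n_slices))                                # 0,1,2,...
interleaved = list(range(0, n_slices, 2)) + list(range(1, n_slices, 2))

def time_of(order):
    """Excitation time (within one TR) of each spatial slice position."""
    t = [0.0] * len(order)
    for shot, sl in enumerate(order):
        t[sl] = shot * ts
    return t

for name, order in [("contiguous", contiguous), ("interleaved", interleaved)]:
    t = time_of(order)
    # After a sudden one-slice shift, tissue lands in the spatially adjacent
    # slice; its effective TR changes by the timing offset between neighbors.
    offsets = [abs(t[i + 1] - t[i]) for i in range(n_slices - 1)]
    print(name, [round(o, 2) for o in offsets])
```

    For contiguous ordering every neighbor pair is one slice-time apart, so the steady-state disturbance is small and local; for interleaved ordering every neighbor pair is roughly half a TR apart, so every slice in the stack sees a large change in its effective TR - hence the banding.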

    Using a small RF flip angle makes the need to maintain a T1 steady state less stringent, but again the issue isn't entirely eliminated. A flip angle of, say, 15 degrees and a TR of 2000 ms will provide near complete relaxation from TR to TR. However, we aren't out of the woods because a movement in the slice axis that is also in the direction the slices are being acquired might cause quick repetitive excitation of some or all of the slice(s) just acquired. Rotations create further complications, introducing the possibility of regional steady state perturbations in one or a few slices. So low flip angles can help but are not a complete fix.
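
    The closed-form steady state makes the flip angle trade-off concrete. Assuming, purely for illustration, T1 ~ 1.3 s and TR = 2 s:

```python
import math

def mz_before_pulse(flip_deg, tr, t1):
    """Closed-form steady-state longitudinal magnetization just before
    each excitation, for a spoiled gradient-echo (EPI) time series."""
    e1 = math.exp(-tr / t1)
    return (1 - e1) / (1 - math.cos(math.radians(flip_deg)) * e1)

# Assumed illustrative values: T1 ~ 1.3 s (3 T gray matter), TR = 2 s.
for flip in (15, 30, 60, 90):
    print(flip, "deg:", round(mz_before_pulse(flip, 2.0, 1.3), 3))
```

    At 15 degrees the magnetization sits at around 99% of its fully relaxed value before every pulse, so a mistimed re-excitation barely perturbs it; at 90 degrees it sits near 79%, leaving much more room for a movement to generate a bright/dark transient.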

    A method presented in a recent paper sought to eliminate TR-by-TR steady state effects through the use of a 2D navigator echo ("A dual echo approach to motion correction for functional connectivity studies." A. Ing and C. Schwarzbauer, NeuroImage 2012.) Essentially, two complete sets of slices are acquired each TR, with the first set of slices used as non-BOLD-weighted navigators for the second set. There are important methodological limitations - parallel imaging was needed to attain a practicable per slice acquisition time, introducing a new form of motion sensitivity - and there is a considerable temporal overhead inherent in the scheme - essentially, we have to acquire the same k-space twice per slice - but it is an intriguing idea that might stimulate further pulse sequence-based approaches to motion mitigation; perhaps as a component of some ideal prospective motion correction scheme.

    Thus, in order to properly compensate for T1 effects we should be considering only slice-by-slice correction schemes rather than TR-by-TR correction schemes for multislice 2D imaging. (True 3D imaging has its own set of problems that I won't address in this post.) If the motion detection method is perfect such that the slice prescription can be maintained precisely on the anatomical targets then, in principle, the T1 steady state won't get perturbed no matter how much movement occurs. The scanner would simply 'lock on' to the brain anatomy and follow it faithfully. So far so good.


    Shim effects

    It is possible to update the shim gradient coils in real time on a suitably modified scanner. However, the problem is in knowing what the correct shim currents need to be for each potential head position inside the magnet! You will know that it takes some 20-30 seconds to acquire a single 3D field map shim on a modern scanner; this gives (hopefully) the best currents to use in the shim coils for the head position during the field map shim acquisition. The moment the subject moves his head, the non-linear magnetic fields inside the brain will change and the shim currents will be inappropriate to some extent. Where and by how much the shim field is now incorrect depends on the nature of the movement, the size and shape of the head, the number of shim coils, etc. But the bottom line is that this is an incredibly complex spatial dependency.

    In principle, it might be possible to acquire a series of shim fields for different head positions in some sort of pre-scan, then use an interpolated set of shim values in concert with slice position updates as components of a simultaneous prospective motion correction method. But the way I see this problem today, the non-linearity and complexity of the magnetic susceptibility gradients in many interesting brain areas, particularly the frontal and temporal lobes and the inferior brain surface, limits even the potential of such methods. Correcting for small regions, such as for an MR spectroscopy volume, seems eminently feasible, but I don't foresee a global brain solution arising any time soon. Feel free to prove me wrong, I'd love to have such a method on my scanner!

    There's another issue that needs to be considered, too: non-head motion. We already know that chest movement from breathing modulates B0 across the head. What about movement of the lower body or extremities? A subject could move his arms and legs to get comfortable, because he's fidgety or because the task requires it. How will these movements affect the magnetic field homogeneity across the head? (Extremity movement was shown to be a problem for GRAPPA.) Which body movements can be ignored safely, if any? In evaluating any prospective motion correction method we will need to consider whether the effects of non-head movements will remain in the data and perhaps require a further correction step during post-processing.


    Receive field heterogeneity

    As was shown in a post on EPI artifacts, modern phased array RF receive coils impose a spatial heterogeneity across the brain. (In a forthcoming paper (see Note 1) we investigate how this lab frame contrast mechanism interacts with movement to produce major challenges for motion correction.) The receive field is fixed to the magnet, so any change in the subject's head position will cause changes in signal intensity that will either confuse the prospective motion scheme, if the correction is based on the content of the images, or be missed by the scheme entirely. Perhaps it is possible to make a set of reference maps in a pre-scan, as suggested for the shim fields, but the performance of such an approach is likely to be severely limited; the more so the higher the channel count of the array, since the receive field heterogeneity increases with it.

    There could be a simple solution to the receive field issue: eliminate it. One could nix the phased array coil in favor of a plain vanilla birdcage receiver; one that is sufficiently large that its receive field can be considered constant relative to the brain displacements during each fMRI run. But this hardware isn't common on modern MRI scanners, and the decreased SNR that comes from a birdcage coil would likely degrade the performance of other scan types where thermal noise (rather than physiologic noise) may be limiting, e.g. diffusion-weighted imaging or high-resolution anatomical scanning. Then again, we're talking about subjects who move a lot, so the SNR and thermal noise limits could be utterly irrelevant! Remember, we're in a subject-motion-limited regime. If we weren't we wouldn't need to be considering such convoluted acquisition strategies!


    Transmit field heterogeneity

    This isn't much of a problem right now because most of us use large 'body' B1 transmit coils that can be considered homogeneous relative to the dimensions of a human brain. But the advent of multi-channel transmission, especially for high magnetic fields (7 T), will provide another challenge analogous to the receive field heterogeneity issues just considered. Prospective motion correction schemes that work perfectly well on today's scanners might end up compromised on future scanners if the B1 transmission contains significant heterogeneity relative to the image contrast. But I'm afraid I don't know enough about parallel transmission to offer anything more concrete than a vague concern, based entirely on what I do know about receive field heterogeneity. It's an issue that I shall watch with interest as parallel transmission moves out of the engineering realm and towards neuroscience.


    Other considerations for prospective motion correction schemes

    Now that we have assessed the primary motion-dependent instabilities in the time series data we can start to look at a few general principles of prospective correction schemes. I'm not going to try to specify particular hardware or software designs, rather I want to consider whether certain approaches might contain intrinsic problems that mandate one approach over another.


    Aims of a prospective motion correction scheme

    As mentioned at the outset, what we generally mean by 'motion correction' is the maintenance of anatomical content over time. We thus require as a basis the measurement of brain and/or head location inside the magnet. The usual approach is to measure displacements of the subject's head - usually it's just the head, not the entire body - either externally with some sort of non-MR measure (such as a camera or other optical recording mechanism), or internally via some metric derived from MR data itself (such as the anatomical content of EPI slices). The slice positions are then updated based on the displacement information just recorded, and hopefully the MR slices remain at a constant position relative to the subject's brain anatomy. Simultaneously there is an implicit goal to maintain all other sources of signal (and noise) modulation constant across time.


    Prospective correction of head movement effects: when should we do it?

    Most prospective schemes proposed to date, e.g. PACE, have operated on a TR-by-TR basis, which is equivalent to a volume-by-volume update. A more sophisticated but technically more challenging approach is to attempt to update each slice position in "real time," i.e. to apply a correction on a slice-by-slice basis. However, as we saw above when considering the main motion dependencies, only slice-by-slice correction is expected to be capable of avoiding some major limitations.

    Essentially, what we're considering is the relative time scale of the motion and the motion correction. If the motion is slow with respect to the TR - a drift over tens of seconds to minutes, say - then a TR-by-TR correction is probably sufficient. But if the motion is relatively fast - a swallow, a sneeze or adjusting for comfort, say - then the motion will likely be faster than the TR. For typical repetition times of 2-3 seconds the motion correction isn't going to be able to keep up; some of the slices will be acquired assuming old, incorrect position information.

    Of course, motion that is rapid compared to the acquisition time of a single 2D slice, which might be around 50-70 ms, is going to cause problems even for a slice-by-slice correction scheme. But, given that such rapid motion will also likely cause problems during a slice acquisition as well as between slices we can see that there will be a fundamental limit to how well any prospective motion correction scheme can perform. For the remainder of this post I shall assume that (correctable) motions of interest may be rapid relative to TR but are slow relative to a single slice acquisition. We can debate whether this assumption is useful in practice, given that motion during a slice is essentially uncorrectable. (See Note 2.)
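
    A back-of-envelope timing budget illustrates the regimes. The event durations below are hypothetical examples of mine, not measurements:

```python
# Rough timing budget: which correction cadences can "see" a given movement?
tr, n_slices = 2.0, 30
slice_time = tr / n_slices   # ~67 ms per 2D slice

# Hypothetical event durations in seconds.
events = {"slow drift": 30.0, "comfort shift": 1.0, "swallow": 0.5, "jerk": 0.05}
for name, dur in events.items():
    tr_wise    = dur > tr          # a TR-by-TR update can track it
    slice_wise = dur > slice_time  # a slice-by-slice update can track it
    print(f"{name:14s} {dur:6.2f} s   TR-wise: {tr_wise}   slice-wise: {slice_wise}")
```

    On these assumptions only the slow drift is within reach of a volume-wise update, while a jerk shorter than one slice acquisition defeats either scheme - the fundamental limit just described.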

    There is one further benefit to a slice-by-slice correction which was pointed out by my colleague, DS in a comment on Neuroskeptic's blog:

    "Another potential benefit of independent prospective correction of motion would be the decoupling of the temporal interpolation (often used in fMRI) and motion correction. With retrospective correction this decoupling is not possible. Nevertheless folks have been proceeding as though it were. It would be really great to decouple these problems."
    Temporal interpolation is also known as slice timing correction. Decoupling the slice timing from the motion correction steps should, in principle, enhance performance of both steps, but it remains to be seen how much benefit might be attained in practice. Still, for our i-fMRI scanner I think we should be aiming for slice-by-slice correction.


    Measurement of head movement: how should we do it?

    I am ambivalent on the particular measurement scheme used to provide spatial information for a prospective motion correction scheme. However, there is an important observation that can be made. As already noted, there are essentially two forms of spatial information we could use. Either we could record position information using the MRI data itself - an internal measure - or we could use something else entirely; any external scheme that isn't using the time series MRI data itself as the template. In the latter camp I include video and optical measures, for example.

    If the motion correction technique uses the MRI data as the basis for the correction then, for optimal results, the measurement will need to be updated slice-by-slice, as already specified. We then need to add one more restriction: the spatial information obtainable from the most recently acquired slice must be sufficient to accurately correct the next slice. If there is poor or low signal, e.g. an axial slice that captures just the very top of the head, or perhaps it's pure noise in a slice above the head, then there is very little (or no) information with which to establish whether the subject has moved recently or not. Significant motion could go undetected for several slices. And if that motion causes subsequent slices to be contaminated by motion, e.g. perturbation of the T1 steady state, because of undetected movement in the slice select axis, then it is possible to obtain corrupted target information that leads to an inaccurate specification for the update. It could take several subsequent slices to get back on track. (See Note 3.)

    Perhaps the previous paragraph implies that external motion detection is the only way to go. I don't know. But I do know that the external motion detection schemes publicized to date only measure the head position, they make no attempt to estimate the shim field or the receive field, say. Perhaps this implies a combination approach; MRI plus external measures. There are several fields that need to be measured/estimated if we are to approach a "fix" for subject movement....


    Whither prospective motion correction?

    Certain types of motion are easier to fix than others. Slow drifts, whether from scanner heating, muscle relaxation or (head support) foam compression, should be amenable to prospective motion correction. But under the right circumstances they are amenable to retrospective motion correction, too. Would we have produced sufficient gain from our prospective correction scheme if it works only under a restrictive set of conditions, especially if we can't know ahead of time whether those conditions can be maintained?

    I can see strong motivation for using prospective motion correction methods for clinical MRI, and perhaps for fMRI subjects who, for whatever reason, can't or won't remain still for a scan. Candidate subjects might include neonates, children and poorly compliant adolescents, adults with movement disorders, and perhaps "normal" fMRI subjects performing an experiment where head restraint is incompatible with the task. In all these situations the alternatives might be worse than dealing with the limitations of a prospective correction method, when faced with the option of not acquiring fMRI at all. However, we do need to recognize that what is on offer probably won't be a panacea for motion; there are too many complicated ways that motion can modulate the fMRI time series to assume that "motion is fixed" by using any sort of motion correction, prospective or otherwise.

    People are belatedly recognizing systematic motion confounds in group studies, especially in resting state fMRI data, and realizing that motion correction algorithms don't remove all the effects of motion from the data. It would be nice if we could avoid repeating this mistake with prospective motion correction methods. We need validation, and we need to know what happens when the methods fail. Thus, when someone offers you the option of some fancy new prospective motion correction scheme you should ask "What residual motion dependency is there in my data, and what does it mean for my experiment?"

    FMRI is endless compromises; we should expect prospective motion correction to be but another one on the list. At the present time I remain unconvinced that prospective motion correction - however it is implemented - is a panacea for motion confounds in fMRI. I think we have some good starting points for more engineering, but that's about it. What do you think?

    __________________



    Notes:

    1.  The paper, written by my colleague, DS from the MathematiCal Neuroimaging blog investigates the magnitude of spurious signal changes arising from small movements, and shows that these changes compete with, and possibly exceed, the magnitude of BOLD changes. We suspect that what we term the "receive field contrast-motion correction" (RFC-MoCo) effect is responsible for a significant fraction of the erroneous correlations in resting-state fMRI data. As soon as the paper is available I'll link to it from this blog.

    2.  Movement during k-space acquisition is always going to be a major problem that will lead to blurring, dropout, ghosting and other unwanted image artifacts. We tend to assume that single-shot EPI is sufficiently rapid to avoid motion contamination during the multi-echo readout train, but it's clear that rapid movement will compromise this assumption. At the present time there's not much that can be done about corrupted k-space, unless one has excellent navigator echoes and/or independent information on the interaction of the movement and the modulated k-space trajectory. And even then the results may not be pretty! Hey, we can't expect to fix every form of movement! There has to be a limit. I mean, we pose for photographs for a reason, no? Every technology has a limit.

    3.  The PACE method, which does a TR-by-TR update on the Siemens Trio scanner, nicely demonstrates this limitation. Motion that happens during the previous TR is embedded into the template for the subsequent volume of data to attain. Thus, movement during the previous TR contaminates that volume but also has a "hangover" effect for the next TR, too. The method tries to make the next TR match the current, artifact-contaminated one! A single acute movement may take several TRs to "work its way out" of the compensation scheme, thus prolonging the deleterious effects in the data. A slice-by-slice correction using image information as the basis for correction would have much reduced sensitivity to this "hangover" effect, but it wouldn't be completely immune. It all depends on how well the subject's motion - that is, the new position - can be deciphered from the most recently acquired image plane.




    Have you ever wondered whether it's appropriate to put a research subject into a dark, confined tube that makes an awful din, whereupon the subject may learn that his brain has some abnormality, and still expect the subject's brain to operate in a state representative of his normal cognition (and not that of a stressed out basket-case)? And what about the bioeffects of the high magnetic field itself, or of the rapidly switched gradients and their induced electric currents in body tissue? To date there has been scant evidence that the action of studying human cognition via an MRI scanner actually modifies that brain function in a manner that might be considered a significant issue for interpretation of fMRI results.

    Putting aside the cognitive effects of a loud background noise and claustrophobia, the question remains whether the static and time-varying magnetic fields are modifying brain function in a substantial fashion. There are some well-known side effects of high magnetic fields: vertigo (see Note 1) and a metallic taste are the two phenomena tied directly to the presence of, or movement through, a high magnetic field. (See Note 2.) But these effects tend to be mild and/or transitory, as a subject acclimatizes to the magnetic field, and can usually be rendered negligible by taking care not to make rapid head movements in or around the magnet.

    A colleague forwarded to me yesterday a paper from a Dutch group (van Nierop et al., "Effects of magnetic stray fields from a 7 tesla MRI scanner on neurocognition: a double-blind randomized crossover study." Occup. Environ. Med. 2012 Epub) that investigates the effects of head movements in the intense stray field region of a 7 T magnet. So, first of all, some good news: if you're doing fMRI at 1.5 or 3 T and you're not in the habit of asking your subjects to thrash their heads around wildly at the mouth of the magnet or once inside the magnet bore, then so far as is known today you're in the clear. The effects reported in this paper pertain specifically to head movement in the really intense gradients that comprise the stray magnetic field around the outside of a passively shielded 7 T magnet. (The iron shield is outside the magnet, leaving considerable gradients in the vicinity of the magnet when compared to the actively shielded 1.5 and 3 T magnets most of us have nowadays.)

    And with that preamble let's look at the summary of the paper:


    OBJECTIVE:  This study characterises neurocognitive domains that are affected by movement-induced time-varying magnetic fields (TVMF) within a static magnetic stray field (SMF) of a 7 Tesla (T) MRI scanner.

    METHODS:  Using a double-blind randomised crossover design, 31 healthy volunteers were tested in a sham (0 T), low (0.5 T) and high (1.0 T) SMF exposure condition. Standardised head movements were made before every neurocognitive task to induce TVMF.

    RESULTS:  Of the six tested neurocognitive domains, we demonstrated that attention and concentration were negatively affected when exposed to TVMF within an SMF (varying from 5.0% to 21.1% per Tesla exposure, p<0.05), particular in situations were high working memory performance was required. In addition, visuospatial orientation was affected after exposure (46.7% per Tesla exposure, p=0.05).

    CONCLUSION:  Neurocognitive functioning is modulated when exposed to movement-induced TVMF within an SMF of a 7 T MRI scanner. Domains that were affected include attention/concentration and visuospatial orientation. Further studies are needed to better understand the mechanisms and possible practical safety and health implications of these acute neurocognitive effects.


    Okay, so let's make sure we're clear that although the test magnetic field strengths mentioned are 0.5 and 1.0 T, this refers to two heterogeneous regions of a stray magnetic field on the outside of a 7 T magnet:



    These two locations are not equivalent to the homogeneous fields that you would find at the center of a 0.5 or a 1.0 T MRI magnet. What is important here is the strong gradient of magnetic field, with each location in space being designated by its point amplitude of either 0.5 or 1.0 T, as shown in the contours of the above figure. Fore and aft of these points the magnetic field varies very quickly with distance, whereas at the center of a 0.5 or a 1.0 T magnet there would be a homogeneous field region larger than the dimensions of a human head. Thus, it is important to recognize that in this study it's the head's movement through these intense gradients that is generating the bioeffects. The study says nothing whatsoever about head movements inside the homogeneous field region at the center of 0.5, 1.0, 3 or even 7 T magnets.

    And so to the behavioral portion of the study. Subjects moved their head in prescribed ways in between a suite of behavioral tasks:
    "The head movements consisted of 10 movements in vertical and 10 in horizontal direction (covering an angle of 180 degrees in 0.8 seconds), the start of each movement indicated by an auditory cue. The accompanying TVMF at head height in sitting position in the 0.5 (low) and 1.0 T (high) conditions were on average approximately 1200 and 2400 mT/s, respectively..."

    Although the pulsed magnetic field gradients we use for spatial encoding may be driven as fast as 20 T/s, the time integral of our familiar gradients is small; a ramp up time is usually around 200 microseconds, after which the gradient is constant (and of low amplitude, maybe 35 mT/m, relative to the polarizing magnet strength of 1.5, 3 or 7 T) until it is switched off in a similarly short time. So 1.2 - 2.4 T/s for almost a second (800 ms) is quite a lot of time varying magnetic field; considerably more than arises from a scanner's pulsed gradients. And each movement was repeated ten times in a row (I think at the 800 ms rate, although I wasn't absolutely clear on that point from the methods section).
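
    For a sense of scale, here is the arithmetic behind that comparison; the gradient figures are typical values I've assumed, not the specs of any particular scanner:

```python
# Back-of-envelope: total field change experienced by tissue during one
# imaging gradient ramp vs. one head movement through the 7 T stray field.
ramp_rate = 20.0     # T/s  - fast gradient slew rate (assumed typical)
ramp_time = 200e-6   # s    - ramp duration
move_rate = 2.4      # T/s  - the paper's high-exposure average dB/dt
move_time = 0.8      # s    - duration of one 180-degree head movement

db_ramp = ramp_rate * ramp_time   # ~0.004 T (4 mT)
db_move = move_rate * move_time   # ~1.9 T
print(f"gradient ramp: {db_ramp * 1e3:.1f} mT, head movement: {db_move:.2f} T")
```

    The head movement integrates to a field excursion several hundred times larger than a gradient ramp, which is why exposure of this kind sits in a different regime from anything the scanner's pulsed gradients produce.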

    Once each subject had shaken then nodded his head the requisite ten times apiece, it was time to assess the magnetic hangover effects. The behavioral tasks performed after a new block of head movements were as follows:
    "Neurocognitive domains were selected based on brain functions that are most relevant for surgeons and other medical professionals operating near MRI, for example, visual perception, motor performance as well as more general functions concerning attention, concentration and (working) memory.

    "...the test battery was composed of tasks that are relatively short (<4 min each), insensitive to ceiling effects and to influences of practice and level of intelligence."

    And, of course, they took good care to ensure that participants weren't able to figure out whether they were in either experimental condition or in the sham condition (a mock scanner) while doing each of the three sessions.

    You'll have to read the paper for the details of the tasks, it is all Swahili to me. I shall assume that the tasks were all sound, and report their findings:


    "...we observed a significant exposure-response relationship, indicating a decrease in attention related to a reduced working memory and a decrease in visuospatial perception. Also in verbal memory functioning (story recall), a subtle decrease was seen, but this association did not reach statistical significance (p=0.07)."


    Naturally, this study has some limitations and the authors point them out:
    "The current study design does not allow us to disentangle any effect to be associated only with SMF or TVMF or with the combination of both."

    In other words they're not sure whether it's the strong gradient or the strong magnetic field that is causing the electrical currents in the brain that lead to behavioral changes. This would leave open the possibility that similar head movements conducted inside the homogeneous magnetic field at the center of your 1.5 or 3 T scanner could have similar bioeffects. Not that subjects are encouraged to make such movements, but according to this study the possibility cannot be excluded. Interesting.

    A further limitation concerns the duration of the bioeffects relative to the task durations:
    "...the duration of any effect of motion-induced TVMF is unknown. Since it is not feasible to induce strong TVMF (by head movements) during the completion of a task, subjects performed head movements immediately before each single task. This implies that we would only pick up an effect of TVMF lasting longer than the duration of a single task (from 30 to 180 seconds). Our results show that effects due to TVMF would have to last for at least 90 s, that is the longest task for which we found a statistically significant effect (reaction time task). This is longer than most other tasks except for the Kappers, memory and letter/number sequencing tasks which took up to 180 s and did not show significant effects of exposure."

    One would naively expect there to be some sort of temporal relationship between the time (and magnitude) of head movements and the length of the hangover from them. Still, the hangover seemed to affect only certain cognitive domains, at least as could be measured by these relatively long task blocks.

    What about the other, better-known effects of magnetic field exposure?
    "Based on the questionnaire after each session, in the sham, low and high exposure condition, 4, 10 and 19 subjects, respectively, reported sensory symptoms. For example, in the highest exposure condition, a metallic taste (12 subjects) was most commonly reported followed by dizziness (six subjects), headache (five subjects) and nausea (one subject)."


    So, what does it all mean for fMRI? Are our subjects essentially "drunk on MRI" at the time of their scan? This study was focused on the safety of personnel working around high magnetic fields and not on the performance of fMRI subjects, so we need to read between the lines a little bit. For starters, unless you are exceedingly reckless in the way you manage your subjects as you install them in the scanner, you're not likely to encounter the sorts of head movements used in this study. Furthermore, I don't think you can extrapolate from these results and say anything meaningful about potential bioeffects in the homogeneous center of MRI magnets, even 7 T ones. Still, it would be interesting to know if rapid head movements once inside the scanner have effects on cognition. And it might be useful to know whether the movements considered in this paper would show (reduced?) bioeffects in the stray field of 3 T and lower magnets, if only to know precisely what bioeffects it is we are avoiding by having our subjects move slowly (and then not at all once they have been inserted into the magnet bore).

    __________________



    Notes

    1.  A 2011 paper from Johns Hopkins suggested the following mechanism for MR-induced vertigo or dizziness:
    Our calculations and geometric model suggest that magnetic vestibular stimulation (MVS) derives from a Lorentz force resulting from interaction between the magnetic field and naturally occurring ionic currents in the labyrinthine endolymph fluid. This force pushes on the semicircular canal cupula, leading to nystagmus.

    From Roberts et al., "MRI Magnetic Field Stimulates Rotational Sensors of the Brain." Current Biology 21(19), 1635-40 (2011). PDF is here.

    2.  Peripheral nerve stimulation and the (rare) generation of magnetophosphenes is tied to the rapidly switched magnetic field gradients. PNS can be quite easily achieved on a modern 3 T scanner and can be distracting even if there isn't a direct modulation of the brain by the magnetic fields. It's important to ensure subjects don't form big loops by crossing their feet or clasping their hands together during a scan. But magnetophosphenes - flashes in the retina - are hard to generate in a typical clinical MRI. I would expect high-powered insert gradient sets to be capable of triggering them, but I've never experienced them myself in any standard body gradient set. I have, however, experienced them in an experimental MRI scanner that uses a pre-polarizing (pulsed) electromagnet.






    An organizational post I'd been meaning to get to for a while. There are some posts to come in this series, in parentheses below. I'll update this page with links as these posts get published.


    Understanding fMRI artifacts

    An introduction to the post series, defining what we mean by "good" data, and general discussion on viewing and interpreting EPI artifacts in a time series.



    Good data


    Understanding fMRI artifacts: "Good" axial data

    Includes cine loops through time series EPI and statistical images to evaluate the data.


    Understanding fMRI artifacts: "Good" coronal and sagittal data

    Includes cine loops through time series EPI and statistical images to evaluate the data. (The notes include a description of the slice-dependent gradient switching limits that can prohibit certain slice orientations.)



    Common persistent EPI artifacts


    Common persistent EPI artifacts: Aliasing, or wraparound

    Aliasing effects in the frequency and phase encoding dimensions.


    Common persistent EPI artifacts: Gibbs artifact, or ringing

    The origin of the ringing problem and demonstrations in phantom and brain data.


    Common persistent EPI artifacts: Abnormally high N/2 ghosts (1/2)

    Subject-dependent conditions:
    • Asymmetric orientation of the subject's head leading to a poor shim
    • Poor shim as a result of subject motion during/immediately after shimming
    • Presence of FOD or an implant causing a poor shim


    Common persistent EPI artifacts: Abnormally high N/2 ghosts (2/2)

    Scanner-dependent conditions:
    • Rotated read/phase encode axes
    • No fat suppression
    • Mechanical resonances
    • Excessive ramp sampling


    Common persistent EPI artifacts: Distortion and dropout

    A brief overview of these two plagues of EPI.


    Common persistent EPI artifacts: RF interference

    RF screening, adding devices to the scanner environment, modifying the magnet room, and standard operating procedures for fMRI labs.


    Common persistent EPI artifacts: Receive coil heterogeneity

    Receive fields for phased-array RF coils, and removing receive field heterogeneity with prescan normalization.



    Common intermittent EPI artifacts


    Common intermittent EPI artifacts: Subject movement
    • Eye movements
    • Head nodding
    • Talking
    • Coughing, swallowing, yawning and sneezing
    • Body movements


    (Common intermittent EPI artifacts: Signal drift due to gradient heating)

    Use of the scanner in a non-steady state.


    Rare intermittent EPI artifacts


    Rare intermittent EPI artifacts: Spiking, sparking and arcing

    Most common causes:
    • Movement of locking nuts or some other metal component of the gradient electrical cables
    • Movement of locking nuts on shim trays or other components inside the bore tube
    • Conductive debris in the RF coil sockets on the patient bed
    • Other sources of metal-on-metal friction inside the magnet room
    • Items of clothing on subjects

    Detecting spikes outside of QA tests.


    (Rare intermittent EPI artifacts: Fluctuating ghosts with a 32-ch Rx coil)

    Siemens Trio-specific problem.


    (Rare intermittent EPI artifacts: Spurious signals due to imperfect crusher gradients)

    Pulse sequence-specific problem.



    Rare persistent EPI artifacts


    (Rare persistent EPI artifacts: Poor shim, even on a phantom)

    Movement/loosening of passive shim trays. (May also manifest as an inability to achieve good fat suppression for EPI of head.)


    (Rare persistent EPI artifacts: Increased signal dropout with partial Fourier)

    Loss of echoes that refocus away from theoretical k-space center.







    In another move to accelerate the development of methods for neuroimaging applications, some colleagues and I recently decided to abandon a second attempt to publish a paper in traditional journals and opted for the immediacy of arXiv instead. (Damn, it feels good to be free of reviewers claiming "What problem? I don't see why a solution is even needed?" Whatever.) We've got another paper coming out on arXiv in a few days, too, although in this case we are exploring the possibility of a simultaneous submission to IEEE Trans Med Physics since it allows such tactics, and my colleagues in "real" physics do this all the time. Whether or not the IEEE submission happens the material will be out there in the world, naked, for all to view and poke at. Isn't this how science is supposed to work? I love it!

    Anyway, for today, here's the skinny on the arXiv submission from August (which I inadvertently forgot to hawk on this blog even after tweeting it):

    http://arxiv.org/abs/1208.0972

    (Get a PDF fo' free via the link.)


    Simultaneous Reduction of Two Common Autocalibration Errors in GRAPPA EPI Time Series Data
     
    D. Sheltraw, B. Inglis, V. Deshpande, M. Trumpis *
    The GRAPPA (GeneRalized Autocalibrating Partially Parallel Acquisitions) method of parallel MRI makes use of an autocalibration scan (ACS) to determine a set of synthesis coefficients to be used in the image reconstruction. For EPI time series the ACS data is usually acquired once prior to the time series. In this case the interleaved R-shot EPI trajectory, where R is the GRAPPA reduction factor, offers advantages which we justify from a theoretical and experimental perspective. Unfortunately, interleaved R-shot ACS can be corrupted due to perturbations to the signal (such as direct and indirect motion effects) occurring between the shots, and these perturbations may lead to artifacts in GRAPPA-reconstructed images. Consequently we also present a method of acquiring interleaved ACS data in a manner which can reduce the effects of inter-shot signal perturbations. This method makes use of the phase correction data, conveniently a part of many standard EPI sequences, to assess the signal perturbations between the segments of R-shot EPI ACS scans. The phase correction scans serve as navigator echoes, or more accurately a perturbation-sensitive signal, to which a root-mean-square deviation perturbation metric is applied for the determination of the best available complete ACS data set among multiple complete sets of ACS data acquired prior to the EPI time series. This best set (assumed to be that with the smallest valued perturbation metric) is used in the GRAPPA autocalibration algorithm, thereby permitting considerable improvement in both image quality and temporal signal-to-noise ratio of the subsequent EPI time series at the expense of a small increase in overall acquisition time.
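    The preprint spells out the actual algorithm; purely as an illustration of the selection idea, here's a minimal sketch in Python. The function name, data layout and the use of a simple mean-navigator reference are my own simplifications, not the paper's implementation:

    ```python
    import numpy as np

    def select_best_acs(nav_sets):
        """Pick the candidate ACS set whose R phase-correction navigators
        deviate least from one another (a root-mean-square deviation metric).

        nav_sets : list of complex arrays, one per candidate ACS set, each of
                   shape (R, n_points) holding the navigator signal from each
                   of the R interleaved shots.
        Returns the index of the set with the smallest RMS deviation, taken
        here as the set least corrupted by inter-shot perturbations.
        """
        metrics = []
        for navs in nav_sets:
            mean_nav = navs.mean(axis=0)   # reference: average across R shots
            dev = navs - mean_nav          # per-shot deviation from reference
            metrics.append(np.sqrt(np.mean(np.abs(dev) ** 2)))
        return int(np.argmin(metrics))
    ```

    The chosen set would then be fed to the GRAPPA autocalibration as usual; the point is only that the navigators come "for free" with most EPI sequences, so ranking candidate ACS sets costs almost nothing.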


    * For some strange arXiv technical reason the author list is reordered from that which appears (correctly) on the PDF. C'est la vie.

    10/03/12--22:29: Quench!!!

    I was persuaded by Tobias Gilk to post a video of the quench of Berkeley's old 4 T magnet, a fairly momentous event that a lot of people have enjoyed watching in private (whether they were absent or witnessed it live). The quench happened back in 2009. We didn't publicize the video at the time because we didn't want a bunch of know-nothings accusing us of wasting resources. (See the FAQ in the video comments if you want to know what happened to the magnet - we turned it into a mock scanner - and why we didn't try to recover the helium.) But there comes a time when the value to others becomes greater than the annoyance of poorly informed trolls venting their spleens on YouTube. So here it is, finally:




    In case you missed seeing some of our antics in the couple of days leading up to the quench, here's that video, too:



    And finally, while uploading the most recent video I tripped over another quench video from what looks and sounds like some Scandinavians: (I'm not even going to guess between Finland, Denmark, Norway, Sweden, Iceland,...)




    Looks like these guys had as much fun as we did! What's really clear in their tests is the oscillation of magnetic objects between the regions of peak gradient - a couple of feet out from each face of the magnet, the magnetic field and cryostat being symmetrical. The speed of movement is sufficiently slow at 1.5 T to see things clearly, versus the crazy violent movement of objects in the 4 T field. They have better music, too.




    I recently came across some extremely informative online resources for learning the basics of (nuclear) magnetic resonance. The first (via Agilent's Spinsights.net blog) is an online simulator that is nicely introduced in a series of four YouTube tutorials (see below). The simulator allows you to demonstrate such concepts as RF excitation, the rotating frame of reference, relaxation and even a 1D gradient for spatial encoding. If you are brand new to MR then you might need some assistance in understanding things for yourself, and I would think this tool (and the supporting tutorials) would be best used by an instructor in a class, but I don't want to dissuade you from taking a stab on your own. Watch the videos first (see below), then check out the simulator. (You can also find technical info and links to the tutorials at www.drcmr.dk/bloch.)

    The other resource I found just about blew me away, not so much for the NMR lectures themselves, as good as they are, but because they are part of an extensive biophysics course covering everything from electromagnetic radiation to flow cytometry and sedimentation methods! The lectures are by Yair Meiry, a fellow who is apparently now working as a skydiving instructor in Canada (assuming my Internet sleuthing has improved since yesterday's attempt to divine the Scandinavian country of origin of another YouTube video). Channeling his inner Garrett Lisi, perhaps? I know I'm impressed.


    Bloch Equations Simulator Tutorials

    Part 1: Introduction to the Bloch Simulator made for basic MRI and NMR education



    Part 2, NMR/MRI-education: Simple spin dynamics explored using the Bloch Simulator



    Part 3, NMR/MRI-education: Spin-echoes explored using the Bloch Simulator



    Part 4, NMR/MRI-education: K-space imaging in 1D explored using the Bloch Simulator



    Link to the simulator itself, which runs inside your web browser: www.drcmr.dk/BlochSimulator/

    PS Another Bloch equations simulator for Mac OS X: Spin Bench. Thanks to Miki Lustig for the tip.

    _____________________

    Understanding NMR, by Yair Meiry


    Part 1



    Part 2



    Part 3 (solving problems for a formal class)



    A link to the parent biophysics course: https://sites.google.com/site/yairmeiry/home




    Tal Yarkoni has a paper out in Frontiers in Neuroscience, "Designing next-generation platforms for evaluating scientific output: what scientists can learn from the social web." As someone who has recently taken the plunge into 'pre-publication' submissions, I shall be interested to hear others' opinions on the manifold issues surrounding online publication, peer review, post-publication review, etc.

    To be honest I'm a little surprised someone down in the South Bay (that's Silicon Valley to you non-Bay Area locals) hasn't already created a startup company offering us software to do this stuff. Surely there's money to be made. Until then, I for one have moved in toto to faster online models, whether it's this blog for my local user support (which just happens to take precisely the same amount of work whether fifty or fifty million people read it) or arXiv.org for papers. I'm adopting the Nike model: Just do it. But I realize it's a lot more complicated and nuanced than one rebellious Limey who already has a secure job. If we all went off piste there'd be chaos. So, how do we get from Tal's circumspect arguments to a workable platform?



    It took a little longer to get to than I'd planned, but contained in this post is a first pass at a checklist for acquisition parameters that I think should be included in the methods section of fMRI papers. This draft is an attempt to expand and update the list that was given in the 2008 paper from Poldrack et al. (I have reproduced below the section on acquisition that appeared in that 2008 paper.) Here, I tried to focus on the bulk of fMRI experiments that use 1.5 to 3 T scanners with standard hardware today. I further assumed that you're reporting 2D multislice EPI or spiral scanning. Advanced and custom options, such as multiband EPI, 3D scans and 7 T, will have to be added down the road.

    In an attempt to make it didactic I have included explanatory notes. I went verbose instead of shorthand on the assumption that many fMRI papers don't include a lot of experimental detail perhaps because the authors don't possess that level of knowledge. We might as well learn something new whilst satisfying our collective desire for better manuscripts, eh? So, I haven't even tried to determine a shorthand notation yet. As others have already commented, having a checklist is probably more useful in the near term and the idea of a shorthand is a secondary consideration that has most value only if/when a journal is attempting to curtail the length of methods sections. But I'll take a stab at a shorthand notation once the checklist has been refined in light of feedback.

    I've sorted the parameters into Essential, Useful and Supplemental categories in terms of value to a reader of a typical fMRI paper. Within each category the parameters are loosely sorted by functional similarity. In the Essential category are parameters whose omission would challenge a reader's ability to comprehend the experiment. Thus, there are several acquisition options - sparse EPI for auditory stimulus delivery is one example - that appear under Essential even though they are rarely used. The assumption is that everyone would report all the Essential parameters, i.e. that a reviewer should be expected to fault a paper that doesn't contain all the Essential parameters (and a journal should be held accountable for not permitting inclusion of all Essential parameters in the published methods section rather than consigning them to supplemental material).

    Useful parameters are for anal types like me who want to be reasonably assured that the authors knew what they were doing when they ran their experiments. And a good way to provide that assurance is to get into a little bit of detail on parameters that have a fundamental, if infrequently considered, effect on the data. Perhaps the best example is the readout echo spacing. We all know that EPI is fundamentally distorted in one direction, so why wouldn't we include the parameter that allows a reader to qualitatively assess the effects of distortion on the images? If the experiment claims the X node was activated yet you know the X node is just millimeters away from the W and Y nodes then you'd probably be critically interested in the spatial accuracy.

    Finally there are Supplemental parameters. Unless heavy customization is being used then a reader should be able to infer many of the Supplemental parameters from the scanner model and software version. However, if you are publishing data that you expect others to be able to use for their own analyses, e.g. data that will be made available via a public database, then I would argue that many of the Supplemental parameters become essential. Furthermore, some of the Supplemental parameters may be required in order for another lab to replicate your experiment. Perhaps the Supplemental (and Useful) parameters should be included routinely in the supplemental methods section of the paper?

    _________________


    Here is the list of parameters recommended for reporting according to Poldrack et al. in 2008:




    The new draft list is a slightly more expansive version of this list, although some of the items included above don't feature in my draft list because in my opinion they are issues that relate more closely to the experimental design than the acquisition. Specifically, the number of experimental sessions and the volumes acquired per session (as well as the number of separate time series acquisitions per session, which wasn't included previously) is usually a function of the task, not the scanner performance. Likewise, the scanner's performance is the same whether the slice prescription is whole brain for an adolescent or part-brain for an adult. I think these sorts of issues should be described separately in the larger description of the experiment, leaving us with the task of what the scanner is doing during each time series acquisition. However, for the most part the Essential column of the new draft list matches the previous suggestions given above.

    Here's the new draft list, with explanatory notes (and a glossary of abbreviations) below:




    Questions:

    Are there better ways to separate parameters into families? Does it even matter yet? (Having an ordered system is critical for a shorthand but if it's just a checklist the order of reporting should matter far less.) Any glaring omissions? Are the parameters categorized appropriately? Have I permitted a full report of parameters independent of the scanner manufacturer?

    _________________



    Explanatory notes:

    Essential - Scanner

    Magnetic field strength: In lieu of magnetic field strength the scanner operating frequency (in MHz) might be considered acceptable. I'm assuming we're all doing 1H fMRI. If not, chances are the methods section's going to be very detailed anyway.
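    For the record, the two quantities are interchangeable because the Larmor frequency is just the gyromagnetic ratio times the field (a trivial sketch; the constant below is the commonly quoted approximate value for protons):

    ```python
    GAMMA_1H_MHZ_PER_T = 42.576  # proton gyromagnetic ratio / 2*pi, approx.

    def larmor_mhz(b0_tesla):
        """1H Larmor frequency in MHz for a given static field strength."""
        return GAMMA_1H_MHZ_PER_T * b0_tesla
    ```

    So a paper reporting "123 MHz" and one reporting "3 T" (with the usual slight vendor shading of the nominal field) are telling you the same thing.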


    Essential - Hardware Options

    Rx coil type: For standard coils provided by the scanner vendor a simple description consisting of the number of independent elements or channels should suffice, e.g. a 12-channel phased array coil, a 16-leg birdcage coil. Custom or third-party coils might warrant more detailed information, including the manufacturer. Most head-sized coils are Rx-only these days, but specifying Rx-only doesn't hurt if there could be any ambiguity, e.g. a birdcage coil could quite easily be Tx/Rx.


    Essential - Spatial Encoding

    Pulse sequence type: A generic name/descriptor is preferred, e.g. single-shot EPI, or spiral in/out.

    Number of shots (if > 1): Multi-shot EPI isn't a common pulse sequence; single-shot EPI is by far the most common variant, even if acquired using parallel imaging acceleration. I include this option to reinforce the importance of reporting the spatial encoding accurately.

    PE acceleration factor (if > 1): This is usually called the acceleration factor, R, in the literature. Vendors use their own notation, e.g. iPAT factor, SENSE factor, etc.

    PE acceleration type (if > 1): It seems that most vendors use the generic, or published, names for parallel imaging methods such as SENSE, mSENSE and GRAPPA. I would think that trade names would also be acceptable provided that the actual (published) method can be deciphered from the scanner vendor and scanner type fields. But generic names/acronyms are to be preferred.

    PE partial Fourier scheme (if used): Convention suggests listing the acquired portion/fraction of k-space rather than the omitted fraction. Any fraction that makes sense could be used, e.g. 6/8 or 48/64 are clearly equivalent.

    Note: I plan on adding a list of parameters suitable for slice direction acceleration - the so-called multiband sequences - in a future version.


    Essential - Spatial Parameters

    In-plane matrix: This should be the acquired matrix. If partial Fourier is used then I would suggest reporting the corresponding full k-space matrix and giving the partial Fourier scheme as listed above. But I wouldn't object to you reporting that you acquired a 64x48 partial Fourier matrix and later reported the reconstructed matrix size as 64x64. So long as everything is consistent it's all interpretable by a reader. (But see also the In-plane reconstructed matrix parameter in the Supplemental section.)

    In-plane inline filtering (if any): This may be the biggest omission of the previous checklist. Non-experts may be unaware that filtering might be applied to their "raw" images before they come off the scanner. It's imperative to check and report whether any spatial smoothing was applied on the scanner as well as during any "pre-processing" steps subsequent to porting the time series data offline.

    Slice thickness and Inter-slice gap: For now I would use the numbers reported by the scanner, even though there may be some small variation across scanner vendors and across pulse sequences. For example, some vendors may use full width at half slice height while others may use a definition of width at or near the base, there may be pulse sequence options for RF excitation pulse shape and duration, etc. I see these details as secondary to the essential reporting improvements we're aiming for.

    Slice acquisition order: Interleaved or contiguous would be sufficient, although explicit descending (e.g. head-to-foot) or ascending (e.g. foot-to-head) options for contiguous slices would be acceptable, too. Presumably, the subsequent use of slice timing correction will be reported under the post-processing steps (where most fMRIers call these "pre-processing" because they are applied before the statistical analysis).
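    Since slice timing correction depends entirely on getting this order right, here's a sketch of one common interleaved convention (odds first, then evens). This is an illustration only - vendors differ (Siemens, for instance, starts with the even slices when the slice count is even), so verify the actual order on your own scanner:

    ```python
    def slice_order(n_slices, interleaved=True):
        """Temporal acquisition order of slice indices (1-based).

        Interleaved here means odd-numbered slices first, then even - one
        common convention, but NOT universal across vendors or slice counts,
        so treat this as a sketch, not your scanner's ground truth.
        """
        if not interleaved:
            return list(range(1, n_slices + 1))  # contiguous ascending
        odds = list(range(1, n_slices + 1, 2))
        evens = list(range(2, n_slices + 1, 2))
        return odds + evens
    ```

    Reporting "interleaved" without the convention leaves a replicator guessing between two possible timing models, which is exactly the sort of ambiguity the checklist is meant to remove.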


    Essential - Timing Parameters

    TR: For single-shot EPI with no sparse sampling delay the TR also happens to be the acquisition time per volume of data. But if sparse sampling or multiple shot acquisition is being used then the TR should be clearly reported relative to these options. The conventional definition of TR is the time between successive RF excitations of the same slice. Thus, by this definition the reported TR would include any sparse sampling delay, but it would specify the time of each separate shot in a multi-shot acquisition and the per volume acquisition time would become TR x nshots.

    No. of volumes in time series: Dummy scans (not saved to disk) should be reported separately. Likewise, the use or rejection of the first n volumes for task-related reasons, e.g. to allow a subject to acclimatize to the scanner sounds, should also be reported separately in the post-processing segment of the experiment. In this field we are only interested in the total quantity of data available to the experimenter.

    No. of averages/volume (if > 1): I don't think I've ever seen anyone do anything but one average per TR for single-shot EPI/spiral (unless they've screwed something up) and I can't think of a reason why someone would want to do it for fMRI. But, if it happens for some reason then it's really, really important to specify it.


    Essential - RF & Contrast

    Fat suppression scheme: It's sufficient to state that fat saturation or fat suppression was used, for example. Further details aren't required unless the scheme was non-standard, e.g. a custom spatial-spectral excitation scheme. 


    Essential - Customization

    Sparse sampling delay (if used): Sometimes called "Delay in TR" on the scanner interface. Used most often for auditory stimulus or auditory response fMRI.

    Prospective motion correction scheme (if used): PACE is one commercial option. These schemes fundamentally change the nature of the time series data that is available for subsequent processing and should be distinguished from retrospective (post-processing) corrections, e.g. affine registration such as MCFLIRT in FSL. It is also critical to know the difference between motion correction options on your scanner. On a Siemens Trio running VB15 or VB17, for instance, selecting the MoCo option enables PACE plus a post hoc motion correction algorithm if you are using the ep2d_pace sequence, whereas only the post hoc motion correction algorithm - no PACE - is applied if you are using the ep2d_bold sequence. There's more detailed information on these options in my user training guide/FAQ.

    Cardiac gating (if used): This isn't a common procedure for fMRI, and recording of cardiac information, e.g. using a pulse oximeter, is radically different from controlling the scanner acquisition via the subject's physiology. The recording of physiological information doesn't usually alter the MRI data acquisition, whereas gating does. Reporting of physio information is tangential to the reporting structure here, but if you are recording (and using) physio data then presumably you will report it accordingly somewhere in the experiment description.


    Useful - Hardware Options

    Tx coil type (if non-standard): It should be possible to infer the Tx coil from the scanner model. If not, e.g. because of a custom upgrade or use of a combined Tx/Rx coil, then the new Tx coil should be reported independently. I would also advocate including the drive system used if the coil is used in anything but the typical quadrature mode.

    Matrix coil mode (if used): There are typically default modes set on the scanner when one is using un-accelerated or accelerated (e.g. GRAPPA, SENSE) imaging.  If a non-standard coil element combination is used, e.g. acquisition of individual coil elements followed by an offline reconstruction using custom software, then that should be stated.

    Coil combination method: Almost all fMRI studies using phased-array coils use root-sum-of-squares (rSOS) combination, but other methods exist. The image reconstruction is changed by the coil combination method (as for the matrix coil mode above), so anything non-standard should be reported.
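    For illustration (function name mine), the default rSOS combination is essentially one line of NumPy, which is exactly why it tends to go unreported - and why anything else deserves a mention:

    ```python
    import numpy as np

    def rsos_combine(coil_images):
        """Root-sum-of-squares combination of phased-array coil images.

        coil_images : complex array of shape (n_coils, ny, nx), one image
                      per receive element.
        Returns the real-valued magnitude image of shape (ny, nx).
        """
        return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
    ```

    Note that rSOS discards phase and alters the noise statistics relative to a single-channel magnitude image, another reason a non-standard combination is worth flagging.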


    Useful - Spatial Encoding

    PE direction: If you've shown any examples of EPI in your paper then the PE direction can usually be determined from the image. If N/2 ghosts or distortions aren't obvious, however, then it's rather important that the phase encode direction is stated, in concert with the readout echo spacing, so that a reader can infer your spatial accuracy.

    Phase oversampling (if used): There's no reason to use phase oversampling for EPI - you're wasting acquisition time - but if you are using it for some reason then it should be reported consistent with the acquired matrix, acquired FOV, echo spacing and associated parameters.

    Read gradient bandwidth: Not an intrinsically useful parameter on its own, but it has value if reported in conjunction with the echo spacing. (An alternative to this single parameter would be the read gradient strength (in mT/m) and the digitizer bandwidth (in kHz).)

    Readout echo spacing: Rarely reported but really useful! This number, in conjunction with the FOV and acquired matrix size, allows a reader to estimate the likely distortion in the phase encode direction.
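    As a back-of-the-envelope illustration of why the echo spacing matters (a sketch I'm adding here, not a full distortion model): for unaccelerated single-shot EPI the per-pixel bandwidth along PE is 1/(N_pe x echo spacing), so an off-resonance of delta-f Hz displaces signal by delta-f x N_pe x echo spacing pixels, i.e. delta-f x echo spacing x FOV millimetres. With in-plane acceleration the effective echo spacing, and hence the shift, drops by roughly the factor R:

    ```python
    def pe_shift_mm(delta_f_hz, echo_spacing_s, fov_mm):
        """Approximate phase-encode displacement for single-shot EPI.

        delta_f_hz     : local off-resonance (Hz), e.g. from a poor shim
        echo_spacing_s : effective readout echo spacing (seconds)
        fov_mm         : field of view along the PE direction (mm)

        Shift (pixels) = delta_f * N_pe * echo_spacing, and pixel size is
        fov/N_pe, so N_pe cancels and the shift in mm is simply
        delta_f * echo_spacing * fov.
        """
        return delta_f_hz * echo_spacing_s * fov_mm
    ```

    For example, 100 Hz of off-resonance with a 0.5 ms echo spacing and a 220 mm FOV displaces signal by about 11 mm, which makes the point about "activation in the X node" rather concrete.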


    Useful - Spatial Parameters

    In-plane resolution: This field is redundant if the reconstructed matrix (In-plane matrix parameter) and FOV are reported, but I for one wouldn't object to seeing the nominal in-plane pixel size given anyway. It may make the paper a faster read. Probably not worth arguing about. (Cue extensive debate...!)


    Useful - Customization

    Image reconstruction type: Unless specified to the contrary, most readers will assume that magnitude images were taken from the 2D Fourier transform that yielded each individual EPI. If you use complex data - magnitude and phase - then that option should be specified along with the particular processing pipeline used to accommodate the atypical data type.


    Supplemental - Hardware Options

    Gradient set type: It should be possible to infer the gradient coil from the scanner model. If not, e.g. because of a custom upgrade or use of a gradient insert set, then the specifications of the actual gradient coil should be reported independently.


    Supplemental - Spatial Encoding

    Pulse sequence name: Could be invaluable for someone wanting to replicate a study. There may be multiple similar pulse sequences available, all capable of attaining the specifications given, but it is entirely feasible that only one of the sequences has a particular quirk in it!

    k-space scheme: Readers will assume linear (monotonic) k-space steps in the PE direction unless indicated to the contrary. Centric ordering or other atypical schemes should be indicated, especially in concert with multiple shots if the Number of shots parameter is greater than one.

    Read gradient strength: Could be useful in conjunction with information about the ramp sampling percentage and echo spacing time, otherwise probably of limited value to most readers.

    (Ramp sampling percentage:) Ramp sampling can increase the N/2 ghost level considerably if there is appreciable gradient and digitization (data readout) mismatch. But determining the percentage of readout data points that are acquired on the flat vs. the ramp portions of each readout gradient episode can be involved. And for routine studies, as opposed to method development studies, there's probably not a whole lot of value here. Maybe remove it?

    (Ghost correction method:) N/2 ghost correction usually happens invisibly to the user, but there are some options becoming available, especially useful for large array coils (e.g. 32-channel coils) where there may be local instabilities with some ghost correction methods. If known, and if non-standard, then it would be nice to report. But perhaps more overkill for fMRI methods?

    PE partial Fourier reconstruction method: If the scanner offers more than one reconstruction option then the chosen option should be reported.


    Supplemental - Spatial Parameters

    In-plane reconstructed matrix: This is for reporting of zero filling (beyond the default zero filling that may have been done for a partial Fourier acquisition) to a larger matrix than acquired, prior to 2D FT. There may be modeling issues associated with the number of voxels in the image, not least of which is the size of the data set to be manipulated! It could save someone a lot of angst if she knows what you did to the data prior to uploading it to the public database.
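To make the acquired-versus-reconstructed distinction concrete, here is a minimal sketch (entirely synthetic k-space data, not any vendor's reconstruction code) of zero filling a 64x64 acquired matrix to a 128x128 reconstructed matrix prior to the 2D FT:

```python
import numpy as np

acq, recon = 64, 128
# Synthetic complex k-space standing in for an acquired EPI data set:
rng = np.random.default_rng(0)
kspace = rng.standard_normal((acq, acq)) + 1j * rng.standard_normal((acq, acq))

pad = (recon - acq) // 2
kspace_zf = np.pad(kspace, pad_width=pad)   # zeros surround the acquired data

image = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace_zf))))
print(image.shape)   # (128, 128): reconstructed matrix exceeds acquired matrix
```

The extra voxels are interpolated, not acquired, which is why someone reusing the data needs both matrix sizes to interpret voxel counts correctly.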


    Supplemental - Timing Parameters

    No. of dummy scans: This is the number of dummy scans used to establish the T1 steady state. Many fMRI experiments also discard some of the acquired volumes subsequent to the (unsaved) dummy scans for neuroscientific reasons, e.g. to allow definition of a BOLD baseline or adjustment to the scanner noise. The number of dummy scans and the number of discarded volumes should be reported separately.
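As a rough illustration of why only a handful of dummy scans is usually needed, the approach of the signal to the T1 steady state can be sketched numerically. The parameters below are assumptions for illustration (TR = 2 s, gray matter T1 of roughly 1.3 s at 3 T, 78 degree flip angle), not values from any particular protocol:

```python
import numpy as np

# Assumed example parameters (not from any specific protocol):
TR, T1, alpha = 2.0, 1.3, np.deg2rad(78)
E1 = np.exp(-TR / T1)          # longitudinal recovery factor per TR

M = 1.0                        # start fully relaxed (M0 = 1)
signals = []
for n in range(10):
    signals.append(M * np.sin(alpha))              # signal from this excitation
    M = (1 - E1) + M * np.cos(alpha) * E1          # recovery before the next pulse

M_ss = (1 - E1) / (1 - np.cos(alpha) * E1)         # closed-form steady state
# With these numbers the signal is within ~1% of steady state after only
# two or three excitations, hence a few dummy scans suffice.
```

The first (fully relaxed) volume is brighter than the rest, which is exactly why unsaved dummy scans and discarded steady-state volumes serve different purposes and should be reported separately.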


    Supplemental - RF & Contrast

    Excitation RF pulse shape and Excitation RF pulse duration: Not critical for standard pulse sequences on commercial scanners, but if atypical values are set in order to achieve very thin slices, for example, then reporting these parameters may be beneficial. These two parameters will become essential, however, when considering multiband EPI in the future.


    Supplemental - Customization

    Shim routine: If manual shimming or an advanced phase map shimming routine is used, especially to improve the magnetic field over restricted brain volumes, then this information should be reported.

    Receiver gain: Most scanners use some form of autogain to ensure that the dynamic range at the receiver is acceptable. If manual control over receiver gain is an option and is used then it should be reported because a mis-set gain could lead to artifacts that aren't typically seen in EPI, and a reader could subsequently attribute certain image features to other artifact sources.


    _________________



    Abbreviations:

    FOV - field-of-view
    N/2 - Half-FOV (for Nyquist ghosts)
    PE - phase encode
    Rx - Radiofrequency (RF) receiver
    Tx - Radiofrequency (RF) transmitter
    TE - echo time
    TR - repetition time





    Motion has been identified as a pernicious artifact in resting-state connectivity studies in particular. What part might the scanner hardware play in exacerbating the effects of subject motion?



    My colleague over at MathematiCal Neuroimaging has been busy doing simulations of the interaction between the image contrast imposed by the receiver coil (the so-called "head coil") and motion of a sample (the head) inside that coil. The effects are striking. Typical amounts of motion create signal amplitude changes that easily rival the BOLD signal changes, and spurious spatial correlations can be introduced in a time series of simulated data.
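A toy one-dimensional example (my own illustration with assumed numbers, far cruder than the simulations being described) shows how a small displacement through a steep receive field gradient modulates signal amplitude:

```python
# Assumed surface-coil-like receive profile B1r(z) ~ 1/(z + d), where d is a
# notional 5 cm effective coil distance; a brain structure 2 cm from the coil
# face shifts by 2 mm. All numbers are illustrative only.
d = 0.05     # effective coil distance (m), assumed
z = 0.02     # position of a brain structure (m), assumed
dz = 0.002   # 2 mm head displacement

def B1r(pos):
    return 1.0 / (pos + d)   # toy receive sensitivity profile

dS_over_S = (B1r(z + dz) - B1r(z)) / B1r(z)
print(f"{100 * dS_over_S:.1f}% signal change")   # ~ -2.8%, rivaling BOLD
```

Even after perfect rigid-body realignment this amplitude modulation remains, because the coil (and hence its sensitivity profile) stays fixed while the head moves.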

    The issue of receive field contrast was recognized in a recent review article by Larry Wald:
    "Highly parallel array coils and accelerated imaging cause some problems as well as the benefits discussed above. The most problematic issue is the increased sensitivity to motion. Part of the problem arises from the use of reference data or coil sensitivity maps taken at the beginning of the scan. Movement then leads to changing levels of residual aliasing in the time-series. A second issue derives from the spatially varying signal levels present in an array coil image. Even after perfect rigid-body alignment (motion correction), the signal time-course in a given brain structure will be modulated by the motion of that structure through the steep sensitivity gradient. Motion correction (prospective or retrospective) brings brain structures into alignment across the time-series but does not alter their intensity changes incurred from moving through the coil profiles of the fixed-position coils. This effect can be partially removed by regression of the residuals of the motion parameters; a step that has been shown to be very successful in removing nuisance variance in ultra-high field array coil data (Hutton et al., 2011). An improved strategy might be to model and remove the expected nuisance intensity changes using the motion parameters and the coil sensitivity map."

    In our recent work we take a first step towards understanding the rank importance of the receive field contrast as it may introduce spurious correlations in fMRI data. It's early days, there are more simulations ongoing, and at this point we don't have much of anything to offer by way of solutions. But, as a first step we are able to show that receive field contrast is ignored at our peril. With luck, improved definition of the problem will lead to clever ways to separate instrumental effects from truly biological ones.

    Anyway, if you're doing connectivity analysis or otherwise have an interest in resting-state fMRI in general, take a read of MathematiCal Neuroimaging's latest blog post and then peruse the paper submitted to arXiv, abstract below.

    ____________________


    A Simulation of the Effects of Receive Field Contrast on Motion-Corrected EPI Time Series

    D. Sheltraw, B. Inglis
    The receive field of MRI imparts an image contrast which is spatially fixed relative to the receive coil. If motion correction is used to correct subject motion occurring during an EPI time series then the receiver contrast will effectively move relative to the subject and produce temporal modulations in the image amplitude. This effect, which we will call the RFC-MoCo effect, may have consequences in the analysis and interpretation of fMRI results. There are many potential causes of motion-related noise and systematic error in EPI time series and isolating the RFC-MoCo effect would be difficult. Therefore, we have undertaken a simulation of this effect to better understand its severity. The simulations examine this effect for a receive-only single-channel 16-leg birdcage coil and a receive-only 12-channel phased array. In particular we study: (1) The effect size; (2) Its consequences to the temporal correlations between signals arising at different spatial locations (spatial-temporal correlations) as is often calculated in resting state fMRI analyses; and (3) Its impact on the temporal signal-to-noise ratio of an EPI time series. We find that signal changes arising from the RFC-MoCo effect are likely to compete with BOLD (blood-oxygen-level-dependent) signal changes in the presence of significant motion, even under the assumption of perfect motion correction. Consequently, we find that the RFC-MoCo effect may lead to spurious temporal correlations across the image space, and that temporal SNR may be degraded with increasing motion.



    In a new paper entitled "Effects of image contrast on functional MRI image registration," Gonzalez-Castillo et al. evaluate the performance of motion correction (a.k.a. registration) following a pre-processing step that aims to remove the contrast imparted across images due to receive (and/or transmit) field heterogeneity. A bias field map is estimated from a target EPI, and this reference image is then used to normalize the other images in the time series. There are other aims in the paper, too: specifically, to evaluate the performance of image registration (EPI to EPI, or EPI to MP-RAGE anatomical) when the T1 contrast of time series EPIs is altered via the excitation RF flip angle. But in this post I am going to focus on the normalization part because it involves the RF receive field heterogeneity, and this instrumentally-induced contrast is of particular concern for exacerbating motion sensitivity in fMRI (as explained here).

    Although others have compared prescan normalization between different array coils (see the references in this paper), this is the first paper I've seen that compares motion correction performance for EPI time series acquired with an array coil (a 16-channel array) to a single channel birdcage coil. Now, this isn't quite the straightforward comparison I might like - with the receive fields being the only difference - because in this instance the birdcage is also used to transmit the excitation RF pulses, making the transmission (Tx) field for the birdcage experiment more spatially heterogeneous than will be produced from the body RF coil that's used when acquiring with a receive-only array coil. Following? In other words, for the 16-channel array the receive (Rx) field heterogeneity is likely to dominate whereas for the birdcage coil the heterogeneities of both the transmit and receive fields are salient. Still, it's worth a look since the coil comparison highlights the issue of the scanner hardware's influence on EPI contrast, and on subsequent motion correction.


    The experimental stuff

    Images were acquired on a 3 T GE HDx scanner. For the time series EPI measurements the study used a gradient echo EPI sequence with TR/TE of 2000/30 ms, 3.75x3.75x4.00 mm voxels, 33 axial slices and 64 volumes per time series. Scans were acquired using excitation flip angles from 10 to 90 degrees (10 degree steps) in order to assess the effects of T1 contrast on image registration accuracy. Subjects (6 male, 2 female; 24-36 years, mean 29 years) were scanned over two sessions, once using a Tx/Rx birdcage coil and once with a 16-channel Rx-only coil (and the scanner's built-in whole body RF coil for transmission). The entire protocol was repeated for each session (i.e. for each coil) on each subject.

    Although it isn't specified in the paper, my assumption is that the authors used the GE default excitation paradigm for EPI: that is, a water- and slice-selective composite RF pulse (known colloquially as "water excite") rather than a separate fat saturation pre-pulse (a.k.a. "fatsat") with a separate slice-selective excitation pulse. (Siemens scanners use fatsat by default.) The use of water excitation rather than fat suppression - if that's indeed what was used - does have some implications for those using fatsat, but I'll get to that point later on.

    Prior to further processing, all images in each EPI time series were masked using 3dAutomask in AFNI, to restrict analysis to intra-cranial voxels. For each EPI time series, an intensity bias correction map was derived from the 6th volume in the series, to ensure steady state T1 contrast, using the segmentation function available in SPM8. (No name was given for this function; I assume those familiar with SPM8 will recognize it.) This map was then used to normalize all the other images of the series, a process the authors refer to as "intensity uniformization." Time series images were then registered - or motion corrected if you prefer - to the 6th EPI volume of that series using AFNI's 3dvolreg algorithm, a rigid-body (six degrees of freedom) transformation that minimizes least-squares differences.
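Schematically, the "intensity uniformization" step amounts to dividing every volume in the masked 4-D series by the bias field estimate. The sketch below is my paraphrase of that step only; the smooth field from the reference volume here is a crude stand-in for the real SPM8 segmentation-derived estimate:

```python
import numpy as np

# Synthetic masked 4-D series: x, y, slice, time (tiny for illustration).
rng = np.random.default_rng(0)
series = rng.uniform(0.5, 1.0, size=(8, 8, 4, 64))

# Stand-in bias field: in the paper this comes from SPM8 segmentation of the
# 6th volume; here we just take that volume and scale it to unit mean so that
# overall image scaling is preserved.
bias = series[..., 5].copy()
bias /= bias.mean()

# Divide every time point by the (fixed) bias field estimate.
normalized = series / bias[..., np.newaxis]
```

A real bias field is smooth and excludes anatomy; using a raw volume as the divisor, as above, would also flatten genuine brain contrast, which is precisely the risk discussed later in this post.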

    A parameter called Mean Voxel Distance (MVD) was used to quantify registration performance to a single value, where MVD (in mm) is defined as the average Euclidean distance between the position of all intra-cranial voxels arising out of a reference registration and a trial registration. The reference is obtained from registering the 6th EPI to the first EPI in each time series, the first EPI being acquired fully relaxed - what the paper calls the Infinite TR Volume. (Siemens users, see Note 1.) MVD then quantifies the difference between the trial registration - 6th EPI to nth EPI - and the reference - 6th EPI to 1st EPI. Hence:

    "The interpretation of the MVD is as follows: the larger the MVD, the larger the inconsistency between the reference alignment (the one considered the gold standard) and the alignment under consideration."

    As the authors point out, any errors in the reference alignment propagate into the measurement of interest. It's not an ideal quantification in light of this reference dependence, but for the purposes of this review I shall assume that it produced no systematic errors. (The authors include some simulations to characterize the behavior of MVD.) Instead, I want to get to the results so that we can assess any implications arising from this paper.
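As I read the definition, MVD can be sketched as follows, with the reference and trial registrations represented as 4x4 transform matrices applied to intra-cranial voxel coordinates. This is my own illustrative implementation, not the authors' code:

```python
import numpy as np

def mean_voxel_distance(coords_mm, ref_affine, trial_affine):
    """Average Euclidean distance (mm) between voxel positions under the
    reference and trial transforms."""
    homog = np.c_[coords_mm, np.ones(len(coords_mm))]   # N x 4 homogeneous
    ref_pos = homog @ ref_affine.T
    trial_pos = homog @ trial_affine.T
    return np.linalg.norm(ref_pos[:, :3] - trial_pos[:, :3], axis=1).mean()

# Example: trial registration differs from the reference by a pure 1 mm
# translation along x, so MVD should come out at ~1.0 mm.
coords = np.random.rand(1000, 3) * 100.0   # intra-cranial voxel positions, mm
ref = np.eye(4)
trial = np.eye(4)
trial[0, 3] = 1.0
print(mean_voxel_distance(coords, ref, trial))   # ~1.0 mm
```

Note that rotations produce position errors that grow with distance from the rotation axis, so MVD usefully penalizes rotational misregistration more at the brain's periphery than at its center.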


    Bias field normalization improves motion correction

    The particular comparisons of interest to me in this post were made between motion correction of time series EPIs with and without the bias field normalization, for both the Tx/Rx birdcage coil and the Rx-only 16-channel array coil.

    The use of bias field normalization improved the subsequent registration (motion correction) results, as measured by reducing the mean voxel displacement, with the improvement being larger (bigger reduction in MVD) for the 16-channel array coil than for the birdcage coil:


    Figure 6 from Gonzalez-Castillo et al.


    For both RF coils the bias field correction improved the registration accuracy linearly with flip angle, indicating that this normalization process is somewhat robust to the underlying image contrast. In these experiments a lower flip angle corresponded to greater image (T1-based) contrast, showing that as the amount of brain contrast increases, relative to the (fixed) bias field contrasts, the ability of the registration algorithm to correct for head motion was improved. Thus, whether brain contrast is increased (with lower flip angle) or the extent of bias field contrast is reduced (via the bias field correction), it all leads to improved motion correction.

    It's interesting to note that the array coil produces slightly better registration (that is, lower MVD) than the birdcage coil in the absence of bias field correction. This would suggest that the bias field arising from the combined Tx and Rx heterogeneity of the birdcage coil is greater than that of the array coil, which is dominated by its Rx heterogeneity; the Tx heterogeneity of the body RF coil would probably be quite small compared to that of the Rx field of the array.

    Once the bias field correction is applied, however, the array coil seems to out-perform the birdcage coil by a significant amount. Put another way, the bias field correction appears to be better able to mitigate the receive field bias from the array than it is able to mitigate the transmit and receive field biases of the birdcage, even though the birdcage receive field heterogeneity is much lower than that of the array. But because the bias field is derived from an EPI template that includes Tx as well as Rx effects for each case, there's no easy way to estimate how much of the improvement for the birdcage coil data can be attributed to the Rx field heterogeneity alone. Even so, the improvement is still far greater for the 16-ch data which naturally suggests that the Rx bias field of the 16-channel array has greater heterogeneity than the combination of Tx and Rx field heterogeneities for the birdcage coil. (See Note 2.) One can only wonder what a body coil transmit, receive-only birdcage combination would have yielded.

    Still, it is a tad surprising that the 16-channel array performs comparably (as measured by MVD) to the Tx/Rx birdcage when bias field correction isn't used. Could the array coil's strongly heterogeneous receive field be "anchoring" the registration algorithm? Is that why there is so much improvement when the bias field correction is applied to the array coil data?


    Limitations and considerations of the current study

    The bias field approach used here includes Rx field heterogeneity as well as Tx field heterogeneity. It also has an inherent bias towards signal having longish T2*, because regions of signal dropout on gradient echo EPI at TE = 30 ms will not provide information for the correction map - a limitation that may make a difference if subject movement is appreciable. (See Note 3.)

    The use of a fully relaxed scan as a reference target, and EPI acquired with water excitation rather than fatsat, means that the quantitative results presented here are likely to differ from a study that used a Siemens scanner with fatsat. Still, my suspicion is that the benefit of using the bias field normalization could remain, but until such a test is actually performed it's all speculation.

    Although I didn't report the variable flip angle results except in passing, there was an effect of changing flip angle on the registration efficacy. If fatsat were used instead of water excitation (where I am continuing to assume that water excitation was indeed used in this study) I wouldn't expect the flip angle dependency to be as strong because there is already more contrast within the image with fatsat. This is because fatsat generates some magnetization transfer (MT) contrast, especially in white matter and less so in gray matter, which tends to enhance the contrast between CSF, GM and WM by making the latter even darker. (CSF is brightest, GM intermediate, then WM darkest. See Note 4 in this post for more information on MT contrast.) Some MT contrast is also generated with the water excite scheme but it's less than when using fatsat, a fact that is readily appreciated if one compares typical EPI data from a GE scanner to those of a Siemens scanner: EPI from a GE scanner generally appear a lot flatter. Still, it will be interesting to see if the flip angle dependency persists once fatsat is being used instead of water excitation.

    As a receiver coil, the single channel birdcage doesn't have the complicating factor of requiring some sort of element combination, because all the signal contributions are summed in analog, leaving one voltage to be detected (in quadrature). The sixteen individual signals obtained from the 16-channel array, on the other hand, must be combined to produce each final image; in this case the method is the standard root-sum-of-squares approach. Other element combination methods are available, often at the click of a button on the scanner, so be careful if you're using something other than root-SOS because image contrast (as modulated by the receive field heterogeneity) could appear slightly different. I wouldn't expect there to be a major departure from the root-SOS, but it often pays to be circumspect when dealing with the complexities of fMRI!


    Should we all be using bias field normalization before motion correction?

    Is a bias field correction a useful pre-processing step in fMRI? Based on these results alone I can't say for sure. But I do think that some sort of intermediate correction step could be useful when using an array receiver coil. The perennial "more work is needed" is true here, and that's why I'm reviewing the paper. I think it's a piece of a complex puzzle that we all need to be looking at, if not actively working on.

    For example, is a bias field derived from the actual EPI data sufficient, or best, for mitigating receive field contrast interaction with the motion correction algorithm? A major practical benefit of the bias field approach as used in the paper is that it can be obtained from any existing data; no new acquisition step is required. But there are other ways to produce a correction map, e.g. using the "prescan normalization" option that is available on most scanners. The prescan normalization routines I'm familiar with (on a Siemens TIM/Trio) use a standard gradient echo imaging acquisition with lowish resolution (circa 8 mm), which immediately raises further questions: Does the resolution of the prescan need to be better to map the bias field gradients, or should it just match the EPI resolution? And, given the mismatch between the distorted EPI and the undistorted gradient echo image (which uses conventional "spin warp" phase encoding), shouldn't there be a benefit to using an EPI-based prescan instead? (See Note 3 again.)

    A major limitation of the bias field approach as suggested in the paper is likely to be the lack of support in regions containing no signal - those regions of signal dropout, for example. Our target template has limitations. Thus, a further benefit of a separate prescan of some sort could be to improve signal coverage, possibly leading to more robust registration. Another concern of the bias field derived from individual EPIs, rather than scans that aim to map the receive (or transmit) field itself, is that the algorithm used to generate the bias field estimate may well interpret some real brain features as parts of the bias field rather than anatomy. It's a fit to an EPI, not a derivative map of a field per se. Thus, in this bias field normalization approach it's possible that some real image contrast will be removed in the normalization step. That could reduce the efficacy of the subsequent realignment by some amount, or it could be so subtle as to be inconsequential for real data.

    My final concern is the use of the sixth volume of EPI for the normalization. Here I am going to invoke the general concern that applies to reference scans of any sort: they have limits! Selecting the sixth volume is a perfectly principled thing to do. Is it best? It depends on how the head moves in the time series! For example, if it transpires that the subject moves once near the start of the time series and then stays stationary at the new position for the bulk of it, the prediction would be that a template selected from the end of the run - the very last volume, perhaps - would be a better target than the sixth volume. Conversely, a subject may be more compliant in the early part of a run and become more fidgety later on, making an early target volume a better bet, for fear of getting a motion-contaminated target late on. Of course, there's no way to know ahead of time which target is "best."

    Whether a gradient echo or EPI-based normalization map is best (e.g. a separate prescan normalization), whether a single map acquired before (or after) a time series is sufficient and appropriate for correction of an entire time series when the subject is moving, and what other unintended consequences might arise out of this latest correction step, well, that's what we need to figure out. Thus, I'll close with a warning not to take anything that you read here, in the Gonzalez-Castillo paper or elsewhere, as gospel and to test out prescan normalization and/or bias field correction for yourself. There is a tendency when faced with a problem (such as motion and its correction) in fMRI that something must be done. It then follows that since this is something it must be done. Not necessarily. Luckily for you, however, you're in the driving seat because whether you use a bias field map derived from the data itself or you opt to acquire a separate prescan normalization, unless you do something silly you aren't committed (Siemens users see Note 4, added post publication) to a particular pipeline at the point of acquisition. You can take the time to evaluate your data as twin streams, with and without the correction du jour. And that, my friends, is an option you don't get very often in this game.

    _____________________




    Notes:

    1.  Siemens users should note that EPI acquired with product pulse sequences commences only after a few dummy scans, depending on the TR, and thus the first volume in any EPI time series is not fully relaxed but in an approximate steady state. If you wanted a fully relaxed EPI then you'd need to acquire a separate EPI acquisition with the TR set very long - 15 or more seconds, to allow full T1 relaxation of CSF. A single volume acquisition would suffice.

    2.  The sensitivity of a tuned MRI coil when it acts as a transmitter is the same as its sensitivity when used as a receiver, a property that is encapsulated in something called the reciprocity principle. However, there is one complicating factor for our purposes. During transmission, the actual field heterogeneity depends upon the power being deposited into the coil, i.e. the heterogeneity of the transmit field depends on the RF flip angle. Still, the overall heterogeneity of a Tx/Rx coil will be closely related for transmission and reception, it's just the amount of current (driving or induced) that is changing. This is in contrast to the situation when separate coils (with active decoupling between them) are used to transmit and receive RF. In the case of a large body transmit coil, which is typically of birdcage design, and a receive-only head-sized array, the Tx field will appear relatively homogeneous (a few percent) across the head whereas the Rx field could change considerably (tens to hundreds of percent).

    3.  Using a short TE, non-EPI scan such as a conventional (spin warp) GRE to generate the bias field estimate might offer improved performance near regions of dropout on the EPIs. However, that approach suffers from its own limitation: the distortion characteristics of GRE and EPI differ. A short TE or spin echo EPI with matched distortion might be the best compromise here, reducing as far as possible any regions of signal dropout whilst retaining the desired distortion properties. Even then, an EPI-based prescan brings its own complexities. If one uses an EPI with the same TE as used for BOLD imaging then one never has any information on the bias field residing in the signal voids (or outside of the head, for that matter!), and so head movement that takes the time series data into these voids would necessarily be corrected poorly. This isn't a trivial problem!

    4.  Under certain circumstances it is possible to acquire twin data streams in the database: one "raw" and one to which prescan normalization has been applied. A restriction concerns the concurrent use of online (i.e. on the scanner) motion correction - the Siemens "MoCo" option. In that case the first stream would be prescan normalized but not motion-corrected, the second would be prescan normalized and motion-corrected. There is a little more detail in my user training guide/FAQ under the section entitled "What is the practical difference between the 12-channel and 32-channel head coils? Which one is best for fMRI?" The Prescan Normalize option may be enabled for any receive-only coil.



    Disclaimer: I'm afraid I haven't done a very good job reviewing the entirety of this paper because the stats/processing part was pretty much opaque to me. I've done my best to glean what I can out of it, and then I've focused as much as I can on the acquisition, since that is one part where I can penetrate the text and offer some useful commentary. Perhaps someone with better knowledge of stats/ICA/processing will review those sections elsewhere.


    The last paper I reviewed used a bias field map to attempt to correct for some of the effects of subject motion in time series EPI. A different approach is taken by Prantik Kundu et al. in another recently published study. In their paper, Differentiating BOLD from non-BOLD signals in fMRI time series using multi-echo EPI, Kundu et al. set out to differentiate between signal changes that have a plausible neurally-driven BOLD origin from those that are likely to have been modulated by something other than neuronal activity. In the latter category we have cardiac and respiratory fluctuations and, of course, subject motion.

    The method involves sorting BOLD-like from spurious changes using an independent component analysis (ICA) and then "de-noising" the time series before applying connectivity analysis. For resting state fMRI in particular, the lack of any sort of ground truth, and the absence of the independent knowledge one has with task-based fMRI, make disambiguating neurally driven signal changes from artifacts a major problem. Kundu et al. use a relatively simple philosophical approach to the separation:

    "We hypothesized that if TE-dependence could be used to differentiate BOLD and non-BOLD signals, non-BOLD signal could be removed to denoise data without conventional noise modeling. To test this hypothesis, whole brain multi-echo data were acquired at 3 TEs and decomposed with Independent Components Analysis (ICA) after spatially concatenating data across space and TE. Components were analyzed for the degree to which their signal changes fit models for R2* and S0 change, and summary scores were developed to characterize each component as BOLD-like or not BOLD-like."

    And, noting again the caveat that there is an absence of ground truth, the approach seems to work:
    "These scores clearly differentiated BOLD-like “functional network” components from non BOLD-like components related to motion, pulsatility, and other nuisance effects. Using non BOLD-like component time courses as noise regressors dramatically improved seed-based correlation mapping by reducing the effects of high and low frequency non-BOLD fluctuations."


    What does it mean to be BOLD-like?

    BOLD contrast is achieved via changes of T2* in and around the venous blood. In a typical fMRI experiment the TE of a T2*-sensitive image acquisition such as EPI is set approximately equal to the T2* of the gray matter, thereby producing a contrast mechanism that is maximally sensitive to changes in T2*. (See Note 1.) Instead, Kundu et al. acquire three different T2*-weighted images per slice, i.e. three successive images are acquired at different TEs, allowing a fit for T2* for each voxel in the brain that contains signal. (In the paper the authors usually use the relaxation rate, R2* = 1/T2*, rather than the time constant, T2*, but the two are clearly interchangeable.)
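The "set TE approximately equal to T2*" rule follows from the mono-exponential signal model S(TE) = S0 exp(-TE R2*): the sensitivity to a change in R2* is |dS/dR2*| = TE S0 exp(-TE R2*), which peaks at TE = 1/R2* = T2*. A quick numerical check, assuming a nominal gray matter T2* of 40 ms at 3 T:

```python
import numpy as np

T2star = 0.040                 # assumed gray matter T2* (s)
R2star = 1.0 / T2star

TE = np.linspace(0.001, 0.150, 2000)       # candidate echo times (s)
sensitivity = TE * np.exp(-TE * R2star)    # |dS/dR2*| with S0 = 1

print(TE[np.argmax(sensitivity)])          # ~0.040 s, i.e. TE ~ T2*
```

The broad peak of this curve also hints at why three TEs bracketing T2* (15, 39 and 63 ms in the paper) retain useful BOLD sensitivity at every echo.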

    Multi-echo acquisition has been used before to characterize BOLD changes, or to combat dropout effects that arise from the distribution of T2* values across a brain. (See the paper's Introduction for a review.) What's new in this work is the use of ICA to separate R2*-dependent components for each voxel's time series, where the goodness-of-fit to an R2* model can be used to characterize whether a particular time series component is more likely to be neurally-driven, i.e. BOLD-like, than a non-BOLD change, such as could arise from head motion.

    So, the method relies upon the ability to model appropriately signal changes that are BOLD-like from everything else. What does it mean to be BOLD-like? Kundu et al. explain it in detail in their Theory section, but in brief it means that a signal has a mono-exponential TE dependence that is consistent with small changes in magnetic susceptibility due to small changes in oxygenation in the (venous) blood. For BOLD-like modulation, then, the change of signal level from a baseline state to an activated state, cast as ΔS/S, is linearly dependent on acquisition TE in the limit of small changes in T2*, as shown at bottom-right in the figure below. Non-BOLD-like modulations don't fit this model. Instead, the ΔS/S is TE-invariant, as shown at bottom-left:



    Mono-exponentiality is assumed in the BOLD versus non-BOLD characterization. Is that fair? I think it probably is, for voxels that aren't significantly larger than about 4 mm on a side. But the assumption is likely to be better the smaller one can make the voxels. There are only three TEs with which to characterize the TE dependence anyway, so for the time being that would seem to be a limitation that we are required to live with.

    The other assumption is that only small changes in magnetic susceptibility are expected in BOLD-like fluctuations. The concentration of deoxyhemoglobin in venous blood varies by a few percent as the upstream neural activity varies; we're not expecting huge shifts of T2*. Large changes in magnetic susceptibility across a voxel can accompany movement, however. But in the case of head movement the dephasing across a voxel leads to changes in signal level that will be reflected in the term ΔS0/S0 as well. Thus, movement may also cause a change in the TE dependence, but we can still separate movement from BOLD-driven signals because for BOLD we require changes in the TE dependence only, with no change in ΔS0/S0.
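In simplified form (my reading of the two models, with made-up numbers rather than the paper's actual fitting machinery), the classification compares how well a component's ΔS/S across the three TEs fits a line through the origin (BOLD-like, ΔS/S = -ΔR2* x TE) versus a TE-invariant offset (non-BOLD, ΔS/S = ΔS0/S0):

```python
import numpy as np

TEs = np.array([0.015, 0.039, 0.063])   # the paper's three echo times (s)

def fit_models(dS_over_S):
    """Return (residual of BOLD model, residual of non-BOLD model)."""
    # BOLD model: least-squares slope through the origin, dS/S = -dR2* * TE
    dR2 = -np.sum(TEs * dS_over_S) / np.sum(TEs**2)
    resid_bold = dS_over_S + dR2 * TEs
    # Non-BOLD model: TE-invariant offset, dS/S = dS0/S0
    offset = dS_over_S.mean()
    resid_s0 = dS_over_S - offset
    return np.sum(resid_bold**2), np.sum(resid_s0**2)

# A synthetic perfectly BOLD-like fluctuation: dS/S proportional to TE.
bold_like = -0.5 * TEs
print(fit_models(bold_like))   # first residual ~0: classified as BOLD-like
```

The paper's κ and ρ summary scores play an analogous role at the component level, weighting these goodness-of-fit measures across all voxels rather than comparing raw residuals in a single voxel as done here.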


    The practical stuff

    The immediate problem is obtaining data suitable to fit R2*. This isn't trivial because it takes tens of milliseconds to acquire a single image, a problem that manifests in routine EPI as distortion in the phase encoding dimension. On their 3 T GE scanner, Kundu et al. were able to acquire three different T2* weightings (TE = 15, 39 and 63 ms) using relatively large voxels (3.75x3.75 mm in-plane, 4.2 mm slice thickness) by employing SENSE parallel imaging with an acceleration factor of two. Even so the TR was 2500 ms for 31 slices (0.3 mm gap) to cover the whole brain.

    Pulse and respiration data were recorded separately and allowed different types of de-noising to be compared. I won't get into the comparisons for brevity; also because I'm only interested in the multi-echo data characterized by ICA. For the ICA pipeline, then, conventional slice timing correction was applied and motion correction (a.k.a. realignment) was applied using the central TE image for each time point.  I don't know why the central TE image was selected, but perhaps it was because it's the median value and thus might be expected to minimally bias the time series to BOLD-like or non-BOLD-like changes. I think I would have been tempted to use the first TE images because they exhibit less dropout, thus there should be more brain signal for the realignment algorithm to work with.

    At this point some sort of magic happens. As previously mentioned, I have zero knowledge of ICA and only very slightly greater knowledge about stats in general. Apologies for glossing over this part and making an assumption that the fitting and statistical procedures were valid; I'm not qualified to do anything else. Anyway, ICA was applied to the time series data by treating the three TE images at each time point as a fourth spatial variable for spatial ICA. Each ICA component was then analyzed for its TE dependence: BOLD-like or non-BOLD-like, as established by the TE-dependent criteria mentioned previously. The magic thus produces two new variables, κ and ρ, to characterize each independent component of a voxel's time series:
    "High κ indicated strong ΔR2*-like character (BOLD-like), and high ρ indicated strong ΔS0-like character (non BOLD-like)."
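A toy version of that BOLD versus non-BOLD test might compare least-squares fits of a component's per-TE fractional signal change to the two competing models. This is only a sketch of the idea, not the paper's actual κ and ρ computation (which weights the fits voxel-wise):

```python
import numpy as np

def classify_component(tes, y):
    """Crude κ/ρ-style test: does a component's dS/S across TEs look like
    ΔR2* (a slope through the origin) or like ΔS0 (a constant offset)?"""
    tes = np.asarray(tes, float)
    y = np.asarray(y, float)
    slope = np.sum(tes * y) / np.sum(tes * tes)   # best ΔR2*-like fit
    res_r2 = np.sum((y - slope * tes) ** 2)       # residual of BOLD model
    const = y.mean()                              # best ΔS0-like fit
    res_s0 = np.sum((y - const) ** 2)             # residual of non-BOLD model
    return "BOLD-like" if res_r2 < res_s0 else "non-BOLD"
```

A component whose dS/S scales with TE is tagged BOLD-like; one with a flat dS/S across TEs is tagged non-BOLD.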


    But will it blend?

    By this point I feel like my head has been in a Blendtec. (This is the last time I try to review a heavy stats/modeling paper!) However, the results are compelling (for someone who has no knowledge of ICA or the actual processing used in the paper). For example, here is a comparison of a BOLD-like (top panel) versus a non-BOLD component (bottom panel):


    The TE dependence of images in the lower left quadrant suggests that the non-BOLD modulation isn't driven by neurons, instead the peripheral signal changes resemble classic head movement artifacts. Thus, for the non-BOLD (artifact) component there is a corresponding ring on the periphery of the brain having high percent ΔS0 (bottom-right corner image); head movement primarily modulates signal level without a TE dependence. In contrast, BOLD-like modulations show very little change in percent ΔS0, instead showing strong, localized changes in ΔR2* (top-right corner image). For the BOLD-like component shown here, κ was 184 and ρ was 15 whereas for the artifact component κ was 22 and ρ was 90.

    The generation of κ and ρ permitted automated sorting of the BOLD-like wheat from the non-BOLD-like chaff:
    "The ICA components were rank-ordered based on their κ and ρ scores. These two rank orderings (κ-spectrum and ρ-spectrum) were used to differentiate BOLD components from non-BOLD components. Both κ and ρ spectra were found to be L-curves with well-defined elbows distinguishing high score and low score regimes. This inherent separation was used to identify BOLD components in an automated procedure. First, the elbows of κ and ρ spectra were identified. The spectra were scanned from right to left to identify an abruptly high score following a series of similarly valued low scores. The κ and ρ scores marking abrupt changes were used as thresholds. Those components with κ greater than the κ threshold and ρ less than the ρ threshold were considered BOLD components. All other components were considered non-BOLD components. These were used as noise regressors in time course de-noising."
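A crude sketch of that right-to-left elbow scan might look like the following. The "abruptly high score" criterion here (a jump above the median of the lower-ranked tail) is my own stand-in, not the authors' algorithm:

```python
import numpy as np

def elbow_threshold(scores, jump=2.0):
    """Scan a rank-ordered score spectrum from the low end toward the high end
    and return the first score sitting abruptly above the low-score tail."""
    s = np.sort(np.asarray(scores, float))[::-1]   # descending rank order
    for i in range(len(s) - 2, -1, -1):
        if s[i] > jump * np.median(s[i + 1:]):     # abrupt jump vs the tail
            return s[i]
    return s[0]
```

Components with κ above the κ threshold (and ρ below the ρ threshold) would then be kept as BOLD; the rest become noise regressors.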

    The elbows in the L-shaped rank plots of independent components seemed to be clearer for κ (BOLD-like) than for ρ (non-BOLD) values, but the features were certainly complementary:



    Maps corresponding to high κ matched resting state networks seen in other studies. I'm not sure if that is, by itself, a good thing but let's move on. Maps of the highest ρ tended to feature brain edges or CSF-filled spaces, a strong indication that they would be motion-related artifacts. Maps of components near the elbows of the κ and ρ spectra were more difficult to interpret, but tended to be more suggestive of artifact than "proper" BOLD networks, according to the authors' interpretation.

    Which brings us to the final proof: connectivity analysis. Time series correlation using seeds in hippocampus and brain stem showed that the de-noising using multi-echo ICA yielded spatial patterns that were more consistent across subjects than when standard de-noising techniques (such as RETROICOR) were used instead:
    "The group T-maps based on low κ de-noising showed much higher T-statistics for connected regions than the group T-maps based on standard de-noising. This indicated that (Z-transformed) correlation coefficients based on ME-ICA were more consistent across subjects than Z-transformed correlation coefficients based on standard de-noising."

    That seems like a good finding. One naively assumes that brains are connected with greater similarity than dissimilarity when examined with our relatively coarse fMRI tools.

    But why the hippocampal and brain stem seeds?
    "Studying functional connectivity of subcortical regions is challenging due to low functional contrast-to-noise due to CSF and blood flow pulsatility and distance from receiver elements. Where standard de-noising showed no clear correlation patterns for the hippocampal and brain stem seeds, ME-ICA de-noising revealed robust correlation patterns. The brain stem seed was localized to the anterior pons that contains corticospinal (pyramidal) tracts connecting to premotor, parietal, and motor regions (Kiernan, 2009). This pattern of anatomical connectivity agrees well with the pattern of functional connectivity exposed after ME-ICA de-noising. The hippocampus seed was localized to the head of the right hippocampus that has anatomical connectivity to sensory regions via temporal and entorhinal cortices (Kiernan, 2009). The pattern of functional connectivity exposed after ME-ICA denoising agreed with this pattern of anatomical connectivity."

    This also seems to be good news.


    Limitations of the study and ME-ICA

    Assuming that the statistical evaluation works as presented, what are the limitations of using ME-ICA for de-noising fMRI data? My first concern is the use of three TEs, two of which (39 ms and 63 ms) may not offer very much signal in important brain regions having short T2* (below perhaps 20 ms); namely, portions of frontal and temporal lobes. There is the danger of a 2-point or even a 1-point fit to the TE dependence in these regions. How does the method fare when the SNR is very low? I would want to assess regional variations in the de-noising before pushing for widespread adoption.

    And talking of validation, de-noising doesn't turn a bad experiment into a good one. I assume - because the paper gives no indication to the contrary - that all eight subjects were compliant. Perhaps they were all experienced fMRI subjects, too. Thus, I would put the data acquired in the experiments into the "good" bin; the subjects probably didn't move much compared to your typical, off the street fMRI volunteers, or kids, or elderly subjects with a medical condition. How does ME-ICA fare when movement is higher than for these eight subjects? Does the model always capture the non-BOLD components and leave the same BOLD-like networks? I would want to see some failure tests before moving to global use of ME-ICA. It would be very useful to have the same subjects scanned under different intentional movement regimes, for example.

    But I also have a minor concern about how the method fares with very small amounts of subject motion, too. Very small head movements could cause small T2* changes with minuscule concomitant signal intensity changes, through small shifts in magnetic susceptibility gradients across tissue boundaries, for instance. I wonder, then, whether the method might characterize very small movements as being BOLD-like. That would be perverse. Again, it would be important to test the method under the movement extremes to see how it could work (and fail) in practice.

    My final concern is whether the method would be adopted based on the acquisition parameters presented. The acquisition requires multiple TEs for each slice, a temporally expensive thing to do. In fact, getting down to the TEs of 15, 39 and 63 ms as used in the study required the use of parallel imaging (SENSE with acceleration factor of two), an option that isn't without its own penalties (of increased motion sensitivity and decreased SNR). And even so, it was only possible to acquire 31 slices in TR = 2500 ms. I can see a lot of people turning their noses up at that performance. It might be feasible to decrease the TEs used, thereby increasing the number of slices/TR, by using even higher acceleration factors, but again the SNR goes farther down and the motion sensitivity gets larger. More on the prospects for pulse sequence developments in the final section, below.

    What I do like about the proposed method, however, is the principle. This paper tries to develop a conceptual framework for discerning noise components, using a simple model of how a signal should behave with TE in order to be considered BOLD-like. It's a step up from the T2*-weighted acquisitions everyone else uses for connectivity. I note that nobody does plain diffusion-weighted imaging any more, we've moved to diffusion models such as tensors to fit and interpret the anisotropic motion-encoded signals that we acquire. Yet we haven't made that step en masse for fMRI as yet. We're still doing the same T2*-weighted imaging that we were doing in the early nineties. A move towards principled evaluations - something more quantitative, as presented in this paper - would seem to be a worthwhile advance, and I welcome it.


    What developments might benefit multi-echo acquisitions?

    As I mentioned, I suspect the biggest hurdle facing ME-ICA isn't the complexity of the analysis or the lack of rigorous failure analysis when the method is applied to wiggly subjects, but rather the temporal overhead associated with the multiple echo acquisition. I did a quick back-of-the-envelope calculation and I reckon it is possible to get the same TE=15,39,63 ms performance without parallel imaging (no SENSE or GRAPPA) using 6/8ths partial Fourier acquisitions instead; assuming 0.5 ms echo spacing for a 64x64 matrix over a 220x220 mm field-of-view. That eliminates the increased motion sensitivity of parallel imaging but it doesn't get the data into the bag any faster.
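For what it's worth, my envelope arithmetic runs as follows, assuming back-to-back echo readouts; the 15 ms first TE is set by the excitation, fat suppression and the k-space lines acquired before the center of the echo:

```python
matrix = 64
partial_fourier = 6.0 / 8.0
echo_spacing = 0.5                        # ms per phase-encoded line
lines = int(matrix * partial_fourier)     # 48 lines acquired per echo image
train = lines * echo_spacing              # 24 ms of readout per image
te_first = 15.0                           # ms to the first k-space center
tes = [te_first + i * train for i in range(3)]
```

With 24 ms of readout per image, evenly spaced echoes land at 15, 39 and 63 ms, matching the TE set used in the study.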

    The authors suggest solutions to the speed issue in their Discussion section: use multi-band (MB) imaging in the slice direction to speed things up. I've recently started tinkering with MB-EPI and the acceleration is really quite amazing. With the University of Minnesota's variant (developed as part of the Human Connectome Project) it is possible to get whole brain, 2 mm isotropic voxels with a BOLD-optimal TE of 38 ms in a TR of 1300 ms using a 6-fold acceleration in the slice dimension. Damn, that's fast. Now, before you all rush off to use MB-EPI for all of your experiments I would caution you that there are likely similar motion sensitivities in MB-EPI as there are for in-plane GRAPPA - they use similar principles applied orthogonally. So, there is a good chance that motion may hamper MB-EPI performance a lot more than you'd like. The validations haven't yet been presented. But if we assume that those clever pulse sequence folk can maintain a reasonable degree of robustness to motion then what might a multi-echo, MB-EPI acquisition look like?

    Avoiding in-plane parallel imaging and sticking to the partial Fourier scheme I suggested above, it would be reasonable to obtain TE=15,39,63 ms images at each slice position using a multi-band factor of between two and six, and expect to get a corresponding improvement in the number of slices/TR. We really only need a factor of two to get full brain coverage when the slices are approximately 3 mm thick. But this assumes comparable in-plane resolution - between 3 and 4 mm - to that used in the current study. Higher resolution necessarily increases the echo train length of the EPI readout and would thus extend the TEs well beyond the 15-70 ms range we need to fit T2* at 3 T.

    What are the options for substantially increasing the in-plane resolution beyond 3 mm? The 2 mm voxels for the MB-EPI sequence I've tested had a minimum TE of 38 ms. The echo train length per image, using 6/8ths partial Fourier, is already some 38 ms long, too. I don't think many parts of the brain would fit T2* very well with TE=38,76,114 ms images, even with the reduced dephasing (via modest extension of T2*) that one gets from smaller voxels. To reduce the TEs to values that could be expected to fit T2* for most brain regions at 3 T requires either faster gradients or in-plane parallel imaging. With respect to gradient speed, we are already at the limit set by cardiac stimulation when using a whole body gradient set, so that's off the table at the present time. Perhaps in-plane parallel imaging can be used profitably? Doubtless someone will be able to show that using in-plane GRAPPA and high resolution can be made to work with multi-band imaging under certain circumstances - good subjects not prone to moving very much, perhaps - but at the risk of being boring I don't think these are the sorts of sequences that should be applied in routine practice. Motion would have to be very low indeed or the image quality would be awful.

    Another option might be to move away from T2* BOLD and instead try to do the same sort of ICA decomposition for T2 BOLD, using multiple spin echoes in conjunction with multi-band encoding. Such an approach has its own limitations, of course: power deposition goes up a lot, maybe prohibitively, while overall BOLD sensitivity (at 3 T) is diminished by about 50%. Or, perhaps asymmetric spin echo images could be obtained, using the spin echo to extend the lifetime of the signal but offsetting the center of each image readout to encode some T2* rather than T2. We have options to explore, perhaps trading sensitivity for specificity. And that goal - specificity - is the main lesson of the paper, I think. It's where we should be aiming. Doing the same basic T2*-weighted resting state acquisitions over and over isn't getting us very far. I'm glad that Kundu et al. are suggesting ways to push through our present inertia.

    In sum, then, I don't see everyone switching to multi-echo EPI acquisitions by this time next year. Best case, I suspect some hardy types might try ME-ICA as a way to validate what others are seeing; to use it as a yardstick. We still don't have ground truth, but I would put more faith into a network derived from ME-ICA than I would from the coincidental findings of a hundred "standard" resting state fMRI studies. 

    _________________




    Notes:

    1.  It is well known that different regions of the brain have different T2*, thus requiring different TE for optimal BOLD contrast. Voxel resolution, neuroanatomical variations, and magnetic susceptibility arising from the skull and sinuses all interact to produce a complicated T2* dependence across a brain. Therefore, a compromise value of TE is usually set to achieve sufficient BOLD contrast in "good" regions of the brain, such as parietal and occipital lobes, where T2* is 30-50 ms at 3 T, as well as in "bad" regions of the brain, such as frontal and temporal lobes, where the T2* is generally below 25 ms. Echo times in the range 25-35 ms are thus typical for 3-4 mm voxels. The TE for optimal BOLD contrast can sometimes be increased when voxels smaller than about 3 mm are used. An example of adjusting TE with voxel resolution is given in this paper, on amygdala fMRI.



    References and Links:

    Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI.
    P Kundu, SJ Inati, JW Evans, W-M Luh and PA Bandettini. NeuroImage 60 (3), 1759-70 (2012).
    http://www.sciencedirect.com/science/article/pii/S1053811911014303
    http://dx.doi.org/10.1016/j.neuroimage.2011.12.028
    PMID: 22209809





    Diffusion imaging is often included as a component of functional neuroimaging protocols these days. While fMRI examines functional changes on the timescale of seconds to minutes, diffusion imaging is able to detect changes over weeks to years. Furthermore, there may be complementary information from the white matter connectivity obtainable from diffusion imaging – that is, from tractography - and the functional connectivity of gray matter regions that can be derived from resting state or task-based fMRI experiments.

    I was recently made aware of some artifacts on diffusion-weighted EPI scans acquired on a colleague’s scanner. When I was able to replicate the issue on my own scanner, and even make the problem worse, it was time to do a serious investigation. The origin of the problem was finally confirmed after exhaustive checks involving the assistance of several engineers and scientists from Siemens. The conclusion isn't exactly a major surprise: fat suppression for diffusion-weighted imaging of brain is often insufficient. And it seems that although the need for good fat suppression is well known amongst physics types, it’s not common knowledge in the neuroscience community. What’s more, the definition of “sufficient” may vary from experiment to experiment and it may well be that numerous centers are unaware that they may have a problem.

    Let’s start out by assessing a bad example of the problem. The diffusion-weighted images you’re about to see were acquired from a typical volunteer on a Siemens TIM/Trio using a 32-channel receive-only head coil, with b=3000 s/mm2 (see Note 1), 2 mm isotropic voxels, and GRAPPA with twofold (R=2) acceleration. These are three successive axial slices:



    The blue arrows mark hypointense artifacts whereas the orange arrow picks out a hyperintense artifact. Even my knowledge of neuroanatomy is sufficient to recognize that these crescents are not brain structures. They are actually fat signals, shifted up in the image plane from the scalp tissue at the back of the head. (If you look carefully you may be able to trace the entire outline of the scalp, including fat from around the eye sockets, all displaced anterior by a fixed amount.) I’ll discuss the mechanism later on, but at this point I’ll note that the two principal concerns are the b value (of 3000 s/mm2) and the use of a 32-channel array coil. GRAPPA isn’t a prime suspect for once!

    Now, part of the problem is that the intensity of the artifacts – but not their location - changes as the direction of the diffusion-weighting gradients changes. In the following video you see the diffusion-weighted images as the diffusion gradient orientation is changed through thirty-two directions (see Note 2):



    The signal from white matter fibers changes as the diffusion gradient direction changes. That’s what you want to happen. But the displaced fat artifacts also change intensity with diffusion gradient direction, meaning that the artifact is erroneously encoded as regions of anisotropic diffusion. Thus, when one computes the final diffusion model, the brain regions contaminated by fat artifacts end up looking like white matter tracts. In the next figure the data shown above was fit to a simple tensor model, from which a color-coded anisotropy map can be obtained:



    The white arrow picks out the false “tract” corresponding to the artifact signal crescent we saw on the raw diffusion-weighted images. I suppose it’s remotely possible that this is the iTract, a new fasciculus that has evolved to connect the subject’s ear to his smart phone, but my money is on the fat artifact explanation.

    Clearly, in the above image there is no easy way to distinguish the artifact from real white matter tracts by eye, except by using your prior anatomical knowledge. And it's likely to confuse tractographic methods, too, because it has very similar geometric properties to those that tractographic methods attempt to trace. So let's take a look at the origin of the problem and then we can get into what you want: solutions. 


    Fat head!

    That's right, you heard me. You probably already know about the subcutaneous fat (or lipid, if you prefer) that surrounds the outside of your skull. While this relatively small amount of fat helps keep your noggin warm and provides a modicum of cushioning for when you bang your head on something, it's not ideal when it comes to brain imaging. The problem is that fat protons - and there are lots of them because fat is essentially long chains of carbons with 1-3 hydrogen atoms attached to each one - resonate at a different frequency than water protons; a difference amounting to more than 400 Hz at 3 tesla. This so-called chemical shift difference causes a systematic spatial displacement of the fat signal from the water signal. And because the phase encoding dimension of a typical EPI has spatial information amounting to around 30 Hz per pixel, you can see immediately that a systematic 400 Hz offset amounts to a shift of a dozen or more pixels in the phase encoding dimension. (See Note 3.)
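The arithmetic of that shift is simple enough to check, taking the roughly 3.5 ppm fat-water chemical shift and a 30 Hz/pixel phase-encode bandwidth as typical round numbers:

```python
GAMMA_MHZ_PER_T = 42.58      # proton gyromagnetic ratio / 2*pi, MHz/T
FAT_WATER_PPM = 3.5          # approximate fat-water chemical shift

b0 = 3.0                     # field strength in tesla
fat_offset_hz = FAT_WATER_PPM * GAMMA_MHZ_PER_T * b0   # ~447 Hz at 3 T

pe_bw_per_pixel = 30.0       # Hz/pixel, typical EPI phase-encode bandwidth
shift_pixels = fat_offset_hz / pe_bw_per_pixel          # ~15 pixels
```

At 3 T the fat resonance sits nearly 450 Hz away from water, which at 30 Hz per pixel displaces the scalp signal by around fifteen pixels along the phase encoding axis.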

    Scalp fat is thus a concern for all EPI-based imaging. Fat suppression is therefore included as a standard step in a typical EPI sequence for fMRI, for example. In fMRI, insufficient fat suppression in EPI leads to larger than ideal ghosting, which can cause regions of unnecessarily high signal variance for regions of the brain overlapped by the fat ghosts, as well as the fat shift you saw above. But, as will be discussed below, the requirements for fat suppression for diffusion imaging can be even more stringent.


    What can make the problem worse?
     
    Why might EPI-based diffusion imaging need enhanced fat suppression compared to EPI for fMRI? The simple explanation is as follows: fat doesn't diffuse very quickly compared to water in tissues. Thus, for any given diffusion weighting gradient value, the amount of signal attenuation according to S/S0 = e^(-bD), where S0 is the signal intensity with the diffusion gradient turned off, is much lower for fat than for water because D, the diffusion coefficient, is lower for fat than for water. When the b value starts to get high the amount of residual water signal from brain tissue drops considerably, making the residual fat signal comparable to the brain signals. If we now couple a relatively intense fat signal with the spatial displacement arising from the chemical shift… Presto! You have the problem we saw in the opening figures.
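Plugging illustrative diffusion coefficients into S/S0 = e^(-bD) shows why a high b value makes the residual fat so prominent. The D values below are ballpark figures for the sake of the comparison, not measurements:

```python
import math

b = 3000.0         # s/mm^2, as in the scans shown above
D_water = 0.8e-3   # mm^2/s, typical brain tissue water (illustrative)
D_fat = 1.0e-5     # mm^2/s, fat diffuses far more slowly (illustrative)

water_left = math.exp(-b * D_water)   # ~9% of the water signal survives
fat_left = math.exp(-b * D_fat)       # ~97% of the fat signal survives
```

At b = 3000 s/mm2 roughly a tenth of the brain water signal survives while the fat signal is barely attenuated, so even a small residual fat fraction competes with the remaining brain signal.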

    Your immediate question is, of course: “But if fat suppression is enabled as a standard step, why could I possibly have a problem?” The prosaic answer is that the efficacy of the fat suppression technique may be imperfect and may need to be adjusted depending on the specific parameters you use for acquiring your diffusion data. We need to evaluate a few aspects of the brain and fat signals in a little more detail.

    As I mentioned above, the essential issue here is one of relative signal magnitude, coupled with the displacement of scalp fat signals into regions that should be occupied by brain only. If, under diffusion weighting, the residual fat signals become significant relative to the residual brain tissue water signals then we have a concern. And if, because of the chemical shift, the scalp fat signals end up parked on regions that should be occupied only by brain then we have a problem. So let’s next look at a few of the situations where the residual fat signal may become problematic:

    High b values - The higher the b value, the greater the residual fat signal is likely to be relative to the brain water signal.

    A sensitive receiver array coil - Another good way to enhance the scalp fat signal is to use a very sensitive array coil, such as a 32-channel coil. The coil also boosts signal from the deeper brain signal to be sure, but the scalp fat is inconveniently located even closer to the coil elements and therefore gets an unwelcome supercharged boost. Note, however, that the fat artifact problem may arise with any RF coil.

    A long TE - Using an unnecessarily long TE will tend to preserve fat signal over water signal because fat has a long T2. Try to use the shortest possible TE for the b value that you want.

    In-plane parallel imaging - So, yeah, I lied ever so slightly earlier on. Although they aren’t the prime suspects because GRAPPA, SENSE and their ilk don’t create the fat artifact problem, they can exacerbate it because they tend to decrease the image SNR, especially the SNR of brain regions far from the periphery where the receive coil elements are located. (This is the so-called geometry, or g, factor.) There is also an overall root-R reduction of image SNR for R-fold acceleration.

    Unfortunate head positioning - Some head sizes and shapes, and some head positions relative to the imaging gradients, may make the residual fat artifact more or less of a problem for you. The position of any fat artifacts will tend to vary slightly from subject to subject. Sometimes residual fat crescents may remain outside the brain, sometimes not. (See Note 4.)
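The parallel imaging SNR penalty mentioned in the list above is conventionally written SNR_R = SNR_full / (g * sqrt(R)), with geometry factor g >= 1:

```python
import math

def accelerated_snr(snr_full, R, g):
    """Image SNR after R-fold in-plane acceleration with geometry factor g >= 1."""
    return snr_full / (g * math.sqrt(R))
```

So twofold acceleration costs at least a factor of root two in SNR, and more wherever the coil geometry makes g climb above one, which is typically worst away from the coil elements, deep in the brain.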


    Make it go away, please
     
    How do you know if your fat suppression scheme is sufficient, or that the imaging parameters render the experiment vulnerable to fat artifacts parking themselves on the brain? Checking your human brain data for fat artifacts may not suffice. As I've mentioned, it's often quite difficult to detect by eye artifacts that are overlaying complicated brain anatomy. The crescents from scalp fat look awfully like genuine white matter tracts a lot of the time. There is also the issue of biological variability to contend with. Just because the ten test subjects you evaluate don’t exhibit a clear problem doesn’t imply that your experimental subjects will always be problem-free. This is a situation where a phantom experiment can really help.

    To test the default situation on my scanner (Siemens TIM/Trio) I filled a plastic Ziploc bag with olive oil (extra virgin, cold press, Italian, if you must know) and placed the bag under a water-filled sphere to mimic a scalp at the back of a head. I used a 32-channel head coil to accentuate signal located closer to the coil elements; this should be a stringent test. To preserve some water signal I restricted the b value to 1000 (purely for testing purposes) and then ran the diffusion-weighted scans that some of my users have got in their research protocols. The results weren't pretty. In each case there was residual signal, like the example below which steps through 64 different diffusion gradient directions:
     

     
    Note how the olive oil signal has been shifted into the region of image that should be occupied by water in the sphere. (The olive oil is actually located a couple of centimeters beneath the sphere.) And, of course, if the b value were increased beyond 1000 the residual signal ratio of oil to water would be further enhanced.

    On the product diffusion imaging sequence (ep2d_diff) on my scanner there are three options to eliminate fat signals: fat suppression (a.k.a. fatsat, based on a chemical shift-selective pre-pulse that targets the fat protons), fat-selective inversion recovery (SPAIR - which uses an inversion pulse targeted at just the fat resonances), and a composite spatial-spectral pulse (usually called "water excite" because it is designed to avoid the excitation of fat rather than eliminate the signal per se). These three fat elimination schemes were compared on the same phantom setup.

    Below are example diffusion-weighted image data sets obtained from the oil-under-sphere phantom acquired with (from left to right) fatsat, SPAIR and water excite:


    Left to right: Fatsat, SPAIR and water excite fat suppression options on a product diffusion imaging sequence.


    None of the fat elimination options on the product diffusion sequence was able to eliminate the fat signal. There was a clear oil artifact – a bright crescent in this particular instance - shifted into each image. 

    At which point I was out of options with the standard (product) software, Syngo MR B17, on my Trio because there is only the one diffusion-weighted EPI sequence. Fortunately, however, I have a research diffusion imaging pulse sequence that has an option called "Extra Fat Suppr." (See Note 5.) So I tried it. And finally the artifact could be eliminated! On the left is the standard fat saturation while on the right is the Extra Fat Suppr. option:


     
    Note the complete elimination of fat artifacts overlapping the water phantom only in the right-hand images. With sufficient fat elimination, all that remains is a thin crescent of oil signal located correctly underneath the water-filled sphere. The fat suppression demonstrated on the right is what we require for diffusion imaging of human brain. It implies that the b value could be increased, imaging parameters could be altered, etc. and there would be no residual fat signal to be concerned about.


    Checking that you don't have a fat suppression problem

    The need for excellent fat suppression for diffusion imaging is well known. I've included a couple of references in Note 6, papers that show examples of scalp fat artifact as strikingly as those you've just seen. But these relatively new methods may not be available on your scanner; they aren't on mine.

    So, where does this leave you? Whichever scanner and diffusion imaging pulse sequence you use, I would suggest checking your fat suppression performance before applying it on a person. (Siemens users, see Note 7.) It's easy enough to put some vegetable oil in a container and include it with a water-filled phantom. You don't have to put the oil in a bag like I did; I was trying to replicate a real scalp effect. But bags can leak fairly easily and you don't want a mess to clean up! It should suffice to include the oil in a small leak-proof plastic container. Then, once you have your sample ready to test it's as easy as acquiring your diffusion imaging protocol and assessing the raw, diffusion-weighted images. (See Note 8.) If there's any doubt, disable the fat suppression scheme and do a comparison. The difference with and without fat suppression should be striking.

    _________________



    Notes:


    1.  The so-called b value (or b factor) has units that are the reciprocal of the diffusion coefficient's, hence s/mm2. D, the mean diffusivity (sometimes called the apparent diffusion coefficient, ADC) is in units of mm2/s. The b value takes into account all of the dephasing caused by applied magnetic field gradients (it doesn't include dephasing caused by magnetic susceptibility gradients because these are unknown) in a simple term that relates signal loss according to:

    S/S0 = exp(-bD)

    where S0 is the signal in the absence of applied diffusion gradients. The imaging gradients themselves should be included in the calculation of b, although for conceptual simplicity we can think of b=0 for an image with no diffusion weighting gradients enabled.
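    As a quick numerical check on this relationship (a sketch; the diffusivity values below are typical literature figures, not measurements from this post), note how little fat attenuates at a standard b value compared to brain tissue - which is precisely why residual fat signal is so conspicuous in diffusion-weighted images:

    ```python
    import math

    def attenuation(b, D):
        """Signal fraction S/S0 = exp(-b*D), for b in s/mm^2 and D in mm^2/s."""
        return math.exp(-b * D)

    b = 1000.0        # s/mm^2, a typical diffusion weighting
    D_brain = 0.8e-3  # mm^2/s, approximate mean diffusivity of brain tissue
    D_fat = 0.05e-3   # mm^2/s, fat diffuses far more slowly (assumed value)

    print(attenuation(b, D_brain))  # ~0.45: brain signal roughly halved
    print(attenuation(b, D_fat))    # ~0.95: fat barely attenuated at all
    ```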


    2.  It doesn't actually matter what these directions were, or why there were 32 of them, in order to understand the artifact and why it's a problem. But in case you know about such things: it was to fit a HARDI model, and this was half of the total data acquired. Diffusion scans differ in how the gradients are applied because different models of diffusion may be used to reconstruct the data. A simple tensor model may be used, for instance. Several models use a fixed magnitude of the b value (which is, strictly speaking, a 3x3 matrix for 3D gradient encoding and not a single number) and simply rotate its direction. The trigonometry is done behind the scenes. This is akin to sampling different points on a sphere of constant radius b.


    3.  There is also a smaller shift of about a quarter of a pixel in the readout dimension, which we can safely ignore. The effect is far smaller in the readout dimension than in the phase encoding dimension of EPI because the readout gradient bandwidth is typically between 1500 and 3000 Hz per pixel. It's also useful to recognize that the chemical shift difference between water and fat scales in proportion to the magnetic field strength, making the effect at 3 T twice as bad as at 1.5 T.
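    The numbers behind this note can be sketched in a few lines (assuming the usual ~3.5 ppm water-fat shift and the 1H gyromagnetic ratio; the 2000 Hz/pixel readout bandwidth is just an example value):

    ```python
    GAMMA_MHZ_PER_T = 42.577  # 1H gyromagnetic ratio / 2*pi, in MHz/T
    FAT_WATER_PPM = 3.5       # approximate water-fat chemical shift

    def fat_shift_hz(b0_tesla):
        """Water-fat frequency difference in Hz at field strength b0_tesla."""
        return FAT_WATER_PPM * GAMMA_MHZ_PER_T * b0_tesla  # ppm x MHz = Hz

    def shift_in_pixels(b0_tesla, bw_per_pixel_hz):
        """Spatial displacement of fat, in pixels, for a given per-pixel bandwidth."""
        return fat_shift_hz(b0_tesla) / bw_per_pixel_hz

    print(fat_shift_hz(3.0))             # ~447 Hz at 3 T, twice the ~224 Hz at 1.5 T
    print(shift_in_pixels(3.0, 2000.0))  # ~0.22 pixel in a 2000 Hz/pixel readout
    ```

    The same ~447 Hz offset divided by the far lower effective bandwidth per pixel of the EPI phase encoding axis is what produces the displaced fat crescents seen in the images above.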


    4.  It would be terrific if there were a reliable way to arrange the phase encoding parameters such that the fat artifacts always fell in non-critical parts of the image. This is easier said than done, however. One tactic might be to adjust the head position relative to the phase encode gradient center such that the crescents fall mostly outside the brain. Another tactic is to increase the image field-of-view (FOV), but that is an inefficient use of spatial encoding. You could try swapping the readout and phase encoding axes, thereby putting the fat artifacts in the orthogonal dimension; left-right in the case of the axial slices shown so far. But that causes the phase encode distortion to shift to the L-R direction as well, and people seem to have an inherent dislike of kidney bean-shaped brains (even if the actual distortion level is similar on a quantitative basis). Finally, you could try reversing the phase encoding axis; here it would be P-A from A-P. Altering the phase encoding axis necessarily alters the distortion: compressions in A-P become stretches in P-A, and vice versa. Furthermore, you may have to choose between signal from the back of the head being displaced up into the brain for one phase encoding direction, versus fat signal from the forehead being displaced down into the brain if the phase encode direction is reversed! Pick your poison. Similarly, if the phase encoding dimension is left-right (for axial slices) then the scalp fat from one side or the other will be displaced into the brain; you only get to choose which side gets contaminated when selecting the phase encode direction. None of these tricks is ideal, and none would be easy to implement across an array of disparate heads.


    5.  I don't know what the extra fat suppression option is doing, except that it increases the minimum TE by about a millisecond, so it may be adding an extra fat suppression pulse. I honestly can't tell you right now, but I'll add a footnote to this post if I ever find out.


    6.  Some references for more information on improved fat elimination schemes, including examples of the problem:

    Robust fat suppression at 3T in high-resolution diffusion-weighted single-shot echo-planar imaging of human brain.
    JE Sarlls, C Pierpaoli, SL Talagala, WM Luh. Magn. Reson. Med. 66(6):1658-65 (2011).
    PMID: 21604298
    DOI: 10.1002/mrm.22940

    Efficient fat suppression by slice-selection gradient reversal in twice-refocused diffusion encoding.
    Z Nagy, N Weiskopf. Magn. Reson. Med. 60(5):1256-60 (2008).
    DOI: 10.1002/mrm.21746


    7.  If you own a Siemens scanner running Syngo MR B15 or B17 (which includes Trio and Verio scanners), have the product diffusion imaging sequence (ep2d_diff) as your only option, and find that your fat suppression is insufficient when you test it, then you may want to talk to your local applications support people. To my way of thinking, insufficient fat suppression is a pulse sequence bug that should be patched. However, if you are fortunate enough to have a research agreement with Siemens then you should be able to get a work-in-progress (WIP) sequence to use instead of ep2d_diff. I tested the WIP sequence numbered 511E, but I note that 511C also has the Extra Fat Suppr. option in it. Finally, I found a document online that claims Syngo MR D11, the software that comes on the Magnetom Skyra 3 T scanner, has "improved fat saturation schemes" as part of the product diffusion sequence. I haven't seen or tested it myself, but I would hope that the performance of the WIP sequence I have tested is matched in the latest product.

    Update (4th March, 2013): There is an "extra fatsat" option on the product diffusion imaging sequence (ep2d_diff) available on the Skyra running Syngo MR D11. I haven't tested the standard or the extra options, but I would bet considerable money that the "extra" option is required in order to fully eliminate scalp fat signal with b > 1000 s/mm^2.


    8.  You should test the diffusion protocol under conditions as similar as possible to those used for brain imaging. However, if you use b values significantly above 1000 you won't see very much residual water signal in an isotropic water phantom. That may not be a big deal, because it would still be possible to see chemical shift artifacts from an oil sample once the water signal has been eliminated. Indeed, there's no strict need to have a water phantom in the coil with the oil at all! But I prefer to have a water signal to shim on, and to have a water signal background against which to contrast residual fat signals. It's all a matter of preference, and my preference is to maintain all other parameters as used in the brain experiment but to reduce the b value to 1000 for testing purposes. If I were paranoid I might then repeat the tests with the higher b value of an actual experiment, but once I'm satisfied the fat suppression is working at b=1000 I am confident it will work well at other values.




    This post updates the draft checklist that was presented back in October. Thanks to all who provided feedback. The updated checklist, denoted version 1.1, incorporates a lot of the suggestions made previously. The main difference is the reduction from three to two categories. The logic is that we should be encouraging reporting of "All of Essential plus any of Supplemental" parameters in the methods section of any fMRI publication.


    Explanatory notes, consolidated from the post on the draft list, and abbreviations appear below.

    Note that the present list aims primarily at fMRI experiments conducted on 1.5 or 3 T scanners. I further assumed that you're reporting 2D multislice EPI or spiral scanning. Advanced and custom options, such as multiband EPI, 3D scans and 7 T, will be added in future versions. Please see the post on the draft list for a complete explanation of how the list evolved to this point.

    Keep the comments coming. This is an ongoing process.

    _________________



    Explanatory notes:

    Essential - Scanner

    Magnetic field strength: In lieu of magnetic field strength, the scanner operating frequency (in MHz) might be considered acceptable. I'm assuming we're all doing 1H fMRI. If not, chances are the methods section is going to be very detailed anyway.


    Essential - Hardware Options

    Rx coil type: For standard coils provided by the scanner vendor a simple description consisting of the number of independent elements or channels should suffice, e.g. a 12-channel phased array coil, a 16-leg birdcage coil. Custom or third-party coils might warrant more detailed information, including the manufacturer. Most head-sized coils are Rx-only these days, but specifying Rx-only doesn't hurt if there could be any ambiguity, e.g. a birdcage coil could quite easily be Tx/Rx.


    Essential - Spatial Encoding

    Pulse sequence type: A generic name/descriptor is preferred, e.g. single-shot EPI, or spiral in/out.

    Number of shots (if > 1): Multi-shot EPI isn't a common pulse sequence; single-shot EPI is by far the most common variant, even if acquired using parallel imaging acceleration. I include this option to reinforce the importance of reporting the spatial encoding accurately.

    PE acceleration factor (if > 1): This is usually called the acceleration factor, R in the literature. Vendors use their own notation, e.g. iPAT factor, SENSE factor, etc.

    PE acceleration type (if > 1): It seems that most vendors use the generic, or published, names for parallel imaging methods such as SENSE, mSENSE and GRAPPA. I would think that trade names would also be acceptable provided that the actual (published) method can be deciphered from the scanner vendor and scanner type fields. But generic names/acronyms are preferred.

    PE partial Fourier scheme (if used): Convention suggests listing the acquired portion/fraction of k-space rather than the omitted fraction. Any fraction that makes sense could be used, e.g. 6/8 or 48/64 are clearly equivalent.

    Note: I plan on adding a list of parameters suitable for slice direction acceleration - the so-called multiband sequences - in a future version.


    Essential - Spatial Parameters

    In-plane matrix: This should be the acquired matrix. If partial Fourier is used then I would suggest reporting the corresponding full k-space matrix and giving the partial Fourier scheme as listed above. But I wouldn't object to you reporting that you acquired a 64x48 partial Fourier matrix and later reported the reconstructed matrix size as 64x64. So long as everything is consistent it's all interpretable by a reader. (But see also the In-plane reconstructed matrix parameter in the Supplemental section.)

    In-plane inline filtering (if any): Non-experts may be unaware that filtering might be applied to their "raw" images before they come off the scanner. It's imperative to check and report whether any spatial smoothing was applied on the scanner as well as during any "pre-processing" steps subsequent to porting the time series data offline.

    Slice thickness and Inter-slice gap: For now I would use the numbers reported by the scanner, even though there may be some small variation across scanner vendors and across pulse sequences. For example, some vendors may use full width at half slice height while others may use a definition of width at or near the base, there may be pulse sequence options for RF excitation pulse shape and duration, etc. I see these details as secondary to the essential reporting improvements we're aiming for.

    Slice acquisition order: Interleaved or contiguous would be sufficient, although explicit descending (e.g. head-to-foot) or ascending (e.g. foot-to-head) options for contiguous slices would be acceptable, too. Presumably, the subsequent use of slice timing correction will be reported under the post-processing steps (where most fMRIers call these "pre-processing" because they are applied before the statistical analysis).


    Essential - Timing Parameters

    TR: For single-shot EPI with no sparse sampling delay the TR also happens to be the acquisition time per volume of data. But if sparse sampling or multiple shot acquisition is being used then the TR should be clearly reported relative to these options. The conventional definition of TR is the time between successive RF excitations of the same slice. Thus, by this definition the reported TR would include any sparse sampling delay, but it would specify the time of each separate shot in a multi-shot acquisition and the per volume acquisition time would become TR x nshots.
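    The arithmetic in that definition is trivial but worth making explicit (a sketch; the TR values are arbitrary examples):

    ```python
    def volume_acq_time(tr_sec, n_shots=1):
        """Time to acquire one imaging volume: TR x nshots, where TR is the
        time between successive excitations of the same slice and already
        includes any sparse sampling delay."""
        return tr_sec * n_shots

    # Single-shot EPI with a 1 s sparse sampling delay folded into a 3 s TR:
    # the volume acquisition time is still just the TR.
    print(volume_acq_time(3.0))     # 3.0 s
    # A two-shot acquisition at the same TR takes twice as long per volume.
    print(volume_acq_time(3.0, 2))  # 6.0 s
    ```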

    No. of volumes in time series: Dummy scans (not saved to disk) should be reported separately. Likewise, the use or rejection of the first n volumes for task-related reasons, e.g. to allow a subject to acclimatize to the scanner sounds, should also be reported separately in the post-processing segment of the experiment. In this field we are only interested in the total quantity of data available to the experimenter.

    No. of averages/volume (if > 1): I don't think I've ever seen anyone do anything but one average per TR for single-shot EPI/spiral (unless they've screwed something up) and I can't think of a reason why someone would want to do it for fMRI. But, if it happens for some reason then it's really, really important to specify it.


    Essential - RF & Contrast

    Fat suppression scheme: It's sufficient to state that fat saturation or fat suppression was used, for example. Further details aren't required unless the scheme was non-standard, e.g. a custom spatial-spectral excitation scheme. 


    Essential - Customization

    Sparse sampling delay (if used): Sometimes called "Delay in TR" on the scanner interface. Used most often for auditory stimulus or auditory response fMRI.

    Prospective motion correction scheme (if used): PACE is one commercial option. These schemes fundamentally change the nature of the time series data that is available for subsequent processing and should be distinguished from retrospective (post-processing) corrections, e.g. affine registration such as MCFLIRT in FSL. It is also critical to know the difference between motion correction options on your scanner. On a Siemens Trio running VB15 or VB17, for instance, selecting the MoCo option enables PACE plus a post hoc motion correction algorithm if you are using the ep2d_pace sequence, whereas only the post hoc motion correction algorithm - no PACE - is applied if you are using the ep2d_bold sequence. There's more detailed information on these options in my user training guide/FAQ.

    Cardiac gating (if used): This isn't a common procedure for fMRI, and recording of cardiac information, e.g. using a pulse oximeter, is radically different from controlling the scanner acquisition via the subject's physiology. The recording of physiological information doesn't usually alter the MRI data acquisition, whereas gating does. Reporting of physio information is tangential to the reporting structure here, but if you are recording (and using) physio data then presumably you will report it accordingly somewhere in the experiment description.


    Supplemental - Hardware Options

    Gradient set type: It should be possible to infer the gradient coil from the scanner model. If not, e.g. because of a custom upgrade or use of a gradient insert set, then the specifications of the actual gradient coil should be reported independently.

    Tx coil type (if non-standard): It should be possible to infer the Tx coil from the scanner model. If not, e.g. because of a custom upgrade or use of a combined Tx/Rx coil, then the new Tx coil should be reported independently. I would also advocate including the drive system used if the coil is used in anything but the typical quadrature mode.

    Matrix coil mode (if used): There are typically default modes set on the scanner when one is using un-accelerated or accelerated (e.g. GRAPPA, SENSE) imaging.  If a non-standard coil element combination is used, e.g. acquisition of individual coil elements followed by an offline reconstruction using custom software, then that should be stated.

    Coil combination method: Almost all fMRI studies using phased-array coils use root-sum-of-squares (rSOS) combination, but other methods exist. The image reconstruction is changed by the coil combination method (as for the matrix coil mode above), so anything non-standard should be reported.
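    For readers who haven't met it, rSOS combination is essentially a one-liner; here is a minimal NumPy sketch operating on toy random data (not scanner output):

    ```python
    import numpy as np

    def rsos_combine(coil_images):
        """Root-sum-of-squares combination of per-coil complex images.

        coil_images: complex array of shape (n_coils, ny, nx).
        Returns a real-valued magnitude image of shape (ny, nx).
        """
        return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

    # Toy data: 4 coils of 64 x 64 random complex "images".
    rng = np.random.default_rng(0)
    coils = rng.standard_normal((4, 64, 64)) + 1j * rng.standard_normal((4, 64, 64))
    combined = rsos_combine(coils)
    print(combined.shape)  # (64, 64)
    ```

    Note that rSOS discards the per-coil phase, which is one reason alternative combination methods exist for applications that need complex data.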


    Supplemental - Spatial Encoding

    PE direction: If you've shown any examples of EPI in your paper then the PE direction can usually be determined from the image. If N/2 ghosts or distortions aren't obvious, however, then it's rather important that the phase encode direction is stated, in concert with the readout echo spacing, so that a reader can infer your spatial accuracy.

    Phase oversampling (if used): There's no reason to use phase oversampling for EPI - you're wasting acquisition time - but if you are using it for some reason then it should be reported consistent with the acquired matrix, acquired FOV, echo spacing and associated parameters.

    Read gradient bandwidth: Not an intrinsically useful parameter on its own, but it does have value if reported in conjunction with the echo spacing. (An alternative to this single parameter would be the read gradient strength (in mT/m) and the digitizer bandwidth (in kHz).)

    Readout echo spacing: Rarely reported but really useful! This number, in conjunction with the FOV and acquired matrix size, allows a reader to estimate the likely distortion in the phase encode direction.
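    To sketch why echo spacing plus matrix size is so informative: together they give the effective phase-encode bandwidth per pixel, and hence how far off-resonant signal is displaced (the example numbers below are assumptions, not from any particular protocol, and parallel imaging and partial Fourier are ignored):

    ```python
    def pe_bandwidth_per_pixel(echo_spacing_s, n_pe):
        """Effective phase-encode bandwidth per pixel (Hz) for single-shot EPI,
        ignoring partial Fourier and parallel imaging acceleration."""
        return 1.0 / (echo_spacing_s * n_pe)

    def pe_shift_pixels(off_resonance_hz, echo_spacing_s, n_pe):
        """Displacement (in pixels) of signal off-resonant by off_resonance_hz."""
        return off_resonance_hz / pe_bandwidth_per_pixel(echo_spacing_s, n_pe)

    # A 0.5 ms echo spacing with 64 phase-encode lines gives ~31 Hz/pixel.
    print(pe_bandwidth_per_pixel(0.5e-3, 64))  # ~31.25 Hz/pixel
    # A 100 Hz susceptibility offset then shifts signal by ~3.2 pixels.
    print(pe_shift_pixels(100.0, 0.5e-3, 64))  # ~3.2 pixels
    ```

    Compare this to the thousands of Hz per pixel in the readout direction and it's clear why distortion (and the displaced fat signal discussed earlier in this post) lives almost entirely on the phase encoding axis.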

    Pulse sequence name: Could be invaluable for someone wanting to replicate a study. There may be multiple similar pulse sequences available, all capable of attaining the specifications given, but it is entirely feasible that only one of the sequences has a particular quirk in it!

    k-space scheme: Readers will assume linear (monotonic) k-space steps in the PE direction unless indicated to the contrary. Centric ordering or other atypical schemes should be indicated, especially in concert with multiple shots if the Number of shots parameter is greater than one.

    Read gradient strength: Could be useful in conjunction with information about the ramp sampling percentage and echo spacing time, otherwise probably of limited value to most readers.

    (Ramp sampling percentage:) Ramp sampling can increase the N/2 ghost level considerably if there is appreciable gradient and digitization (data readout) mismatch. But determining the percentage of readout data points that are acquired on the flat vs. the ramp portions of each readout gradient episode can be involved. And for routine studies, as opposed to method development studies, there's probably not a whole lot of value here. Maybe remove it?

    (Ghost correction method:) N/2 ghost correction usually happens invisibly to the user, but there are some options becoming available, especially useful for large array coils (e.g. 32-channel coils) where there may be local instabilities with some ghost correction methods. If known, and if non-standard, then it would be nice to report. But perhaps more overkill for fMRI methods?

    PE partial Fourier reconstruction method: If the scanner offers more than one reconstruction option then the chosen option should be reported.


    Supplemental - Spatial Parameters

    In-plane resolution: This field is redundant if the reconstructed matrix (In-plane matrix parameter) and FOV are reported, but I for one wouldn't object to seeing the nominal in-plane pixel size given anyway. It may make the paper a faster read. Probably not worth arguing about. (Cue extensive debate...!)

    In-plane reconstructed matrix: This is for reporting of zero filling (beyond the default zero filling that may have been done for a partial Fourier acquisition) to a larger matrix than acquired, prior to 2D FT. There may be modeling issues associated with the number of voxels in the image, not least of which is the size of the data set to be manipulated! It could save someone a lot of angst if she knows what you did to the data prior to uploading it to the public database.


    Supplemental - Timing Parameters

    No. of dummy scans: This is the number of dummy scans used to establish the T1 steady state. Many fMRI experiments also use acquired volumes subsequent to the (unsaved) dummy scans for neuroscientific reasons, e.g. to allow definition of a BOLD baseline or adjustment to the scanner noise. These two parameters should be reported separately.


    Supplemental - RF & Contrast

    Excitation RF pulse shape and Excitation RF pulse duration: Not critical for standard pulse sequences on commercial scanners, but if atypical values are set in order to achieve very thin slices, for example, then reporting these parameters may be beneficial. These two parameters will become essential, however, when considering multiband EPI in the future.


    Supplemental - Customization

    Image reconstruction type: Unless specified to the contrary, most readers will assume that magnitude images were taken from the 2D Fourier transform that yielded each individual EPI. If you use complex data - magnitude and phase - then that option should be specified along with the particular processing pipeline used to accommodate the atypical data type.

    Shim routine: If manual shimming or an advanced phase map shimming routine is used, especially to improve the magnetic field over restricted brain volumes, then this information should be reported.

    Receiver gain: Most scanners use some form of autogain to ensure that the dynamic range at the receiver is acceptable. If manual control over receiver gain is an option and is used then it should be reported because a mis-set gain could lead to artifacts that aren't typically seen in EPI, and a reader could subsequently attribute certain image features to other artifact sources.


    _________________



    Abbreviations:

    FOV - field-of-view
    N/2 - Half-FOV (for Nyquist ghosts)
    PE - phase encode
    Rx - Radiofrequency (RF) receiver
    Tx - Radiofrequency (RF) transmitter
    TE - echo time
    TR - repetition time





    Apologies for the lengthy absence. Many irons in the fire, etc. So until I can provide a more considered post I give you these three random tidbits:

    1. Syngo MR version D13 for Verio and Skyra

    There is an EPI sequence in VD13 that has a real time update of the on-resonance frequency, i.e. one that is computed and applied TR by TR, to combat drift caused by gradient heating. There are apparently versions for fMRI and diffusion-weighted imaging. I don't have any detailed information but if you are working on a Verio or a Skyra it might be time to talk to your physicist and/or local Siemens rep.

    2. Phase encode direction for axial and axial-oblique EPI

    Siemens uses A-P phase encoding by default whereas GE uses P-A by default. Essentially, for axial (and axial oblique) EPI the A-P direction compresses the frontal lobe but stretches occipital lobe whereas P-A stretches frontal lobe and compresses occipital. Pick your poison. (See Note 1.) Test each one out by setting the Phase enc. dir. parameter on the Routine tab. To set P-A from A-P (default) first click the three dots (...) to the right of the parameter field and open the dialog box, then enter 180 <return> instead of 0. You will probably find that the parameter change doesn't "stick" for appended scans, so saving a modified protocol in the Exam Explorer is a way to ensure the default (A-P) doesn't get reinstated without you noticing. More details to come in the next version of my user training/FAQ document.

    3. Another way to force a re-shim

    In my last user training/FAQ document (and here) I gave a simple way to force the scanner to re-shim at any point, e.g. when you know or strongly suspect the subject may have moved, or between lengthy blocks as a way to maintain high quality data in spite of slow subject motion and scanner drifts. But there is another way to do it and from some basic tests it looks to be superior. Here's a shaky video of the procedure conducted on a Trio running Syngo MR B17 (see Note 2):



    (The essential procedure is the same for later software versions, but the layout of the 3D Shim window is slightly different.)

    Here are the steps:
    1. Ensure the scanner isn't already running and also that a scan is open (i.e. the "working man" icon is showing), ready to go.
    2. Select the Adjustments option in the Options menu.
    3. Click the 3D Shim option towards the bottom of the Adjustments window.
    4. Start the field map shim by clicking the Measure button, towards the top right. It takes ~30 sec to acquire; you'll hear the scanner buzzing.
    5. Click the Calculate button (middle right) to compute a new set of shims from the new field map. (If you accidentally click twice or more the same calculation is performed.)
    6. Apply the new shims by clicking the Apply button towards the bottom right of the window. (Ignore any other Apply button(s) farther up if you have them, they relate to other adjustments. If you have more than one, go for the one that's nearest the bottom of the screen.)
    7. Close the Adjustments window.
    8. Start your next scan. (In the video I start a GRE sequence - actually a field mapping sequence - but any sequence can be acquired next.)

    That's it! What's more, running this procedure after a Standard shim is equivalent to running an Advanced shim, except that the total time to do the Standard shim (~30 sec) plus this "manual" 3D Shim (another ~30 sec) is shorter than the approximately 90 secs it takes to do the Advanced shim. (See Note 3.) Thus, you can expect better results with this re-shim procedure compared to the simple "Invalidate All" approach that I gave in the last user training/FAQ. I'll write up a detailed explanation, along with results (field maps) showing the improved performance compared to the Standard shim, in the next version of the user training/FAQ.

    Carry on!

    _______________



    Notes:

    1.  Some background information on distortion in the phase encoding axis is given in this post. An introduction to the acquisition and use of field maps to tackle distortion is included in my user training/FAQ document.

    2.  No, that's not the monitor delivered with the scanner. Thanks for asking!! Some larger monitors will work perfectly well with the scanner, and swapping for a larger one may be an option if you don't do clinical work and thus don't need to maintain that FDA approval thing. My scanner is pure research. If there's interest I'll post the specifications in a comment to this post. Also please note that the central GUI area cannot be enlarged, but you can use the black border around it to good effect with other windows. The various display programs can be run full screen, however. It's a really nice thing to have for teaching or to impress visitors.

    3.  You can select a Standard or an Advanced shim on the product EPI sequences via the System tab, under Adjustments. It's the Shim mode parameter:



