Channel: practiCal fMRI: the nuts & bolts

Impressively rapid follow-ups to a published fMRI study

Alternative post title: Why blogs can be seriously useful in research.


Last week there was quite a lot of attention to an article published in PNAS by Aharoni et al. In their study they claimed that fMRI could be useful in predicting the likelihood of rearrest in a group of convicts up for parole:
"Identification of factors that predict recurrent antisocial behavior is integral to the social sciences, criminal justice procedures, and the effective treatment of high-risk individuals. Here we show that error-related brain activity elicited during performance of an inhibitory task prospectively predicted subsequent rearrest among adult offenders within 4 y of release (N = 96). The odds that an offender with relatively low anterior cingulate activity would be rearrested were approximately double that of an offender with high activity in this region, holding constant other observed risk factors. These results suggest a potential neurocognitive biomarker for persistent antisocial behavior."

The senior author, Kent Kiehl, was interviewed on National Public Radio on Friday morning. I heard it on my way into work. An NPR interview would suggest the media attention was widespread, although I haven't looked at this aspect specifically.

What I did notice, however, was that The Neurocritic came out with two quick posts (here and here) wherein he brought up a couple of interesting limitations of the study and even ran his own re-analysis of the data, the PNAS authors having been kind enough to make their data available publicly.

This afternoon, Russ Poldrack has followed up with his own analysis and interpretation of the study's data. I'll be honest, all the stats leaves me flat-footed. But I am very seriously impressed by the way the blogosphere, combined with shared data, has been able to poke and prod the original study's conclusions.

Why am I so enthused? Because the mainstream media (still) has the power to dominate the narrative in the public sphere, and it is especially important that specific criticisms can be leveled within the same news cycle, while the public might still be paying attention to the story. So, while I think it's highly unlikely that NPR will interview the senior author of the next study that finds there is no predictive use of fMRI for recidivism - we seem to have a serious positive results bias in science - maybe there's a slim chance that NPR will interview Russ about his follow-up analysis, just to balance the record. And if not, at least those in the field have the benefit of the post-publication peer review that blogs can offer.


Resting state fMRI confounds

(Thanks to Dave J. Hayes for tweeting the publication of these papers.)

Two new papers provide comprehensive reviews of some of the confounds to the acquisition, processing and interpretation of resting state fMRI data. In the paper, "Resting-state fMRI confounds and cleanup," Murphy, Birn and Bandettini consider in some detail many of the noise sources in rs-fMRI, especially those having a physiologic origin.

In "Overview of potential procedural and participant-related confounds for neuroimaging of the resting state," Duncan and Northoff review the effects that other circumstantial factors, such as the scanner's acoustic noise, subject instructions, subjects' emotional state, and caffeine might have on rs-fMRI studies. Without due consideration, some or all of these factors may inadvertently become experimental variables; the implications for inter-individual differences are considerable. (I've reviewed some of the issues concerning what we can permit subjects to do before and during rs-fMRI in this post.)

While we're on the subject of confounds in rs-fMRI - especially those with a motion component - another confound that motion introduces is a sensitivity to the receive field heterogeneity of the head coil. This problem gets worse the more channels the coil has, because the coil elements get smaller as the number of channels goes up. For an introduction to the issue see this arXiv paper; there will also be simulations of the effect for a 32-channel coil at the ISMRM conference in a couple of weeks' time. (See e-poster, abstract #3352.) Spurious correlations and anti-correlations can arise, necessitating some sort of clever sorting or de-noising scheme to distinguish them from "true" brain correlations. I mention it here because there is a common misconception in the field that applying a retrospective motion correction step fixes all motion-related artifacts. It doesn't. Nor does including all of the motion parameters as regressors in a model. Motion has some insidious ways in which it can modulate the MRI signal level, and it is high time that we, as a field, reconsider very carefully what we are doing for motion correction, and why.

Finally, I'll note in passing that slice timing correction may not be a good idea for rs-fMRI. It's been known since the correction was first proposed that it should interact with a motion correction step. (The two corrections should be applied simultaneously, as one 4D space-time correction rather than a separate 3D space then time correction, or vice versa.) I don't have data to share just yet, but if anyone is wondering whether they should include STC in their rs-fMRI analysis, as they would do for event-related fMRI, then my advice is to skip it until someone can prove to you that it has no unintended consequences. (Demonstration of unintended consequences to follow eventually....)


References:

Resting state fMRI confounds and cleanup. K Murphy, RM Birn and PA Bandettini, NeuroImage, Epub ahead of print.
DOI: 10.1016/j.neuroimage.2013.04.001

Overview of potential procedural and participant-related confounds for neuroimaging of the resting state. NW Duncan and G Northoff, J. Psychiatry Neurosci. 2013, 38(2), 84-96.
PMID: 22964258
DOI: 10.1503/jpn.120059

Multiband (aka simultaneous multislice) EPI validation in progress!


I am pleased to see a couple of presentations at next week's ISMRM conference in Salt Lake City dealing with some of the important validation steps that should be performed before multiband (MB) EPI (or simultaneous multislice (SMS) EPI if you prefer) is adopted for routine use by the neuroimaging community:

Characterization of Artifactual Correlation in Highly-Accelerated Simultaneous Multi-Slice (SMS) fMRI Acquisitions

Abstract #0410, ISMRM Annual Meeting, 2013.

Kawin Setsompop, Jonathan R. Polimeni, Himanshu Bhat, and Lawrence L. Wald

Simultaneous Multi-Slice (SMS) acquisition with blipped-CAIPI scheme has enabled dramatic reduction in imaging time for fMRI acquisitions, enabling high-resolution whole-brain acquisitions with short repetition times. The characterization of SMS acquisition performance is crucial to wide adoption of the technique. In this work, we examine an important source of artifact: spurious thermal noise correlation between aliased imaging voxels. This artifactual correlation can create undesirable bias in fMRI resting-state functional connectivity analysis. Here we provide a simple method for characterizing this artifactual correlation, which should aid in guiding the selection of appropriate slice- and inplane-acceleration factors for SMS acquisitions during protocol design.

An Assessment of Motion Artefacts in Multi Band EPI for High Spatial and Temporal Resolution Resting State fMRI

Abstract #3275, ISMRM Annual Meeting, 2013.

Michael E. Kelly, Eugene P. Duff, Janine D. Bijsterbosch, Natalie L. Voets, Nicola Filippini, Steen Moeller, Junqian Xu, Essa S. Yacoub, Edward J. Auerbach, Kamil Ugurbil, Stephen M. Smith, and Karla L. Miller

Multiband (MB) EPI is a recent MRI technique that offers increased temporal and/or spatial resolution as well as increased temporal SNR due to increased temporal degrees-of-freedom (DoF). However, MB-EPI may exhibit increased motion sensitivity due to the combination of short TR with parallel imaging. In this study, the performance of MB-EPI with different acceleration factors was compared to that of standard EPI, with respect to subject motion. Although MB-EPI with 4 and 8 times acceleration exhibited some motion sensitivity, retrospective clean-up of the data using independent component analysis was successful at removing artefacts. By increasing temporal DoF, accelerated MB-EPI supports higher spatial resolution, with no loss in statistical significance compared to standard EPI. MB-EPI is therefore an important new technique capable of providing high resolution, temporally rich FMRI datasets for more interpretable mapping of the brain's functional networks.


The natural question to ask next occurs at the interface of these two topics: what about head motion-driven artifactual correlations between simultaneously excited slices? I am also curious to see how retrospective motion correction, e.g. affine registration algorithms, performs with MB-EPI that contains appreciable motion contamination. Is the "pre-processing" pipeline that we use for single-shot EPI appropriate for MB-EPI?

In-plane parallel imaging such as GRAPPA and SENSE were adopted for EPI-based fMRI experiments prematurely in my view, i.e. before full validations had been conducted. (Mea culpa. I was one of those beguiled by GRAPPA when I first saw it.) The failure modes - like motion sensitivity - hadn't been fully explored before a lot of us began employing the methods for their purported benefits. It would be nice if the failure modes of MB-EPI get a thorough workout before the neuroimaging community adopts it en masse.

That said, I am still very excited that MB-EPI may offer the most significant performance boost for fMRI acquisition in more than a decade (since the introduction of scanners capable of EPI readout on all three gradient axes). But I continue to seek validation before recommending widespread adoption of MB-EPI (or any other method) and I look forward to seeing more reports such as these in the literature and online, prior to people using them in experiments to solve the brain.

Physics for understanding fMRI artifacts: Part Fourteen

Partial Fourier EPI

(The full contents for the PFUFA series of posts is here.)

In PFUFA Part Twelve you saw how 2D k-space for EPI is achieved in a single shot, i.e. using a repetitive gradient echo series following a single excitation RF pulse. The back and forth gradient echo trajectory permits the acquisition of a 2D plane of k-space in tens of milliseconds. That's fast to be sure, but when one wants to achieve a lot of three-dimensional brain coverage then every millisecond counts.

In the EPI method as presented in PFUFA Part Twelve it was (apparently) necessary to cover - that is, to sample - the entire k-space plane in order to then perform a 2D Fourier transform (FT) and recover the desired image. Indeed, this "complete" sampling requirement was developed earlier, in PFUFA Part Nine, when we looked at 2D k-space and its relationship to image space.

One aspect of the FT that I glossed over in previous posts has to do with symmetry. Perhaps the eagle-eyed among you spotted the symmetry in the 2D k-space of the first couple of pictures in PFUFA Part Nine. If you didn't, don't worry about it because I'm about to show it to you in detail. It turns out that there's actually no need to acquire the entire 2D k-space plane; it suffices to acquire some of it - at least half - and then use post-processing methods to fill in the missing part. At that point one can apply the 2D FT and recover the desired image.

Now, as you would expect, there's no free lunch on offer. There are practical consequences from not acquiring the full k-space plane. In this post we will look briefly at the physical principles of partial Fourier EPI, then in the next post we'll take a look at some example data that will provide a basis for evaluating partial versus full k-space coverage for fMRI.


What's in a name?

Before getting into the crux of the method I want to spend a moment considering the moniker, "partial Fourier." You may have come across alternative terms such as "partial NEX," "half NEX," "half Fourier" or "half scan." Furthermore, those of you familiar with clinical anatomical scans may have come across sequences such as HASTE, which stands for "Half Fourier Acquisition Single shot Turbo spin Echo." If it doesn't have a contrived acronym, it doesn't belong in MRI!

All of these various descriptors refer to the general principle of omitting from the acquisition some fraction of the k-space data, then synthesizing the omitted part by virtue of the property of "Hermitian symmetry," something we'll see in more detail in a moment. Whether exactly half of the k-space plane is acquired, or whether 5/8ths, 6/8ths, 7/8ths or some other fraction, I think it is minimally confusing to use the term "partial Fourier" for the acquisition. (In some cases the moniker is downright misleading when applied to EPI, as for "half NEX." See Note 1.) Thus, from now on I shall use partial Fourier (pF) to encompass all variants where only a portion of the final k-space plane is acquired, and where the k-space step size is unchanged. (See Note 2 for the difference between partial Fourier and parallel imaging when it comes to omitting k-space lines.) But you should note that in order to apply what you see here on your scanner you may need to translate into your scanner's vernacular first.


Conjugate symmetry in the k-space of real objects

PFUFA Part Nine covered the equivalency of information contained in image space and k-space (or reciprocal space), two domains known as conjugate domains and represented by the conjugate variables of cm and 1/cm. Here's a figure reproduced from PFUFA Part Nine:


Take a closer look at the k-space plane on the right. Notice how symmetric it is, left to right and top to bottom? The symmetry is diagonal: what's in the bottom left quadrant is repeated in the top right quadrant. That's because this k-space plane was obtained from the 2D FT of a real object; in this instance the (digital) picture of a Spitfire on the left. The same symmetry property exists for brains as for pictures of planes.

Provided the object we are trying to image is real - and with the possible exception of a few of the extraordinary people I've seen around downtown Berkeley, human brains are real objects - then the k-space representation of that object will exhibit what is called Hermitian symmetry. Signals, S, in one half of k-space are the complex conjugate of the other half of k-space:

S(kx,ky) = S*(-kx,-ky)

where * denotes the complex conjugate. Here is what that relationship looks like:


where the magnitude of the signal at coordinate (kx, ky) is given by either (a + ib) or its complex conjugate, (a - ib). (Recall that the magnitude of the real part of the signal is 'a' while the magnitude of the imaginary part of the signal is 'b.') If you're interested in learning more about the complex conjugate symmetry of k-space and related phenomena then this wiki page provides a nice summary, while this excellent tutorial from SCOPE Online reviews many issues pertaining to k-space, including issues of partial k-space. Or, you can just accept the general idea and move on because it's all you really need to know to comprehend partial Fourier EPI.
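To make the symmetry concrete, here's a minimal numpy sketch (an 8x8 random array standing in for a real-valued brain image; the array size and seed are arbitrary):

```python
import numpy as np

# Hermitian symmetry demo: the 2D FFT of any real-valued array obeys
# S(kx, ky) = conj(S(-kx, -ky)). The 8x8 random array is a stand-in
# for a real object (a brain, a Spitfire...).
rng = np.random.default_rng(42)
img = rng.random((8, 8))              # purely real "image"
k = np.fft.fft2(img)                  # complex 2D k-space

# Reflect k-space through its origin. In FFT indexing, frequency -k
# lives at index (N - k) mod N, which flip-then-roll implements on
# both axes simultaneously.
k_reflected = np.conj(np.roll(np.flip(k, axis=(0, 1)), 1, axis=(0, 1)))

print(np.allclose(k, k_reflected))    # True: each point equals the
                                      # conjugate of its diagonal partner
```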


Going faster by leaving stuff out

The above symmetry implies that, in principle, we don't actually need to acquire the entire 2D k-space plane in order to reconstruct an image from a 2D FT. I'll get to the practical issues associated with that statement below, but for now let's assume that practice and theory are in agreement. Furthermore, let's put aside the different spatial encoding mechanisms - phase encoding and frequency encoding - that are typically used for 2D MRI sequences such as EPI. What are the implications of conjugate symmetry of k-space for MRI?

Recognizing the symmetry in k-space we could choose to omit the top or bottom, or the left or right half of k-space, and use the mathematics of complex symmetry to "fill in" the omitted half. Or we could compromise somewhere between full and half k-space acquisition, and omit just a portion of the top or bottom, or the left or right half of the acquired k-space, then reconstruct the missing part. Here's an illustration of the process applied to a 2D k-space for which the bottom quarter has been omitted, requiring some sort of mathematical "transfer" of parts of the diagonal quadrants into the gaps:

An illustration of the utility of conjugate symmetry. Some of the bottom half of k-space has been omitted (solid black region), requiring that the  k-space portions from the diagonal quadrants (regions above the blue dashed line) be copied (dashed white arrows) into the missing spaces prior to 2D FT.

I trust it's clear that it doesn't have to be the bottom half of k-space that's omitted. There will be experimental implications for which portion of k-space we omit, but the mathematics is the same whether we omit an upper, lower, left or right portion. And, while I have omitted just one quarter of k-space - half of the bottom half - we could, in principle, omit an entire half of k-space and reconstruct the missing half from the half we acquire.
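The copy-with-conjugation idea can be sketched in a few lines of numpy. This is a toy illustration, not a scanner reconstruction: a hypothetical 16x16 real object, with the last quarter of the phase-encode lines omitted and then rebuilt from the conjugates of the acquired, diagonally opposite points:

```python
import numpy as np

# Toy partial Fourier fill-in: "acquire" 12 of 16 phase-encode lines,
# then rebuild each missing line from the conjugate of the acquired
# line diagonally opposite it in k-space.
rng = np.random.default_rng(0)
img = rng.random((16, 16))            # a real-valued object
k_full = np.fft.fft2(img)

n, n_acq = 16, 12                     # 12/16 = 6/8 partial Fourier
k_pf = k_full.copy()
k_pf[n_acq:, :] = 0                   # the omitted bottom portion

for ky in range(n_acq, n):
    src = k_pf[(n - ky) % n, :]       # acquired, diagonally opposite row
    k_pf[ky, :] = np.conj(np.roll(src[::-1], 1))   # reflect kx, conjugate

img_recon = np.abs(np.fft.ifft2(k_pf))
print(np.allclose(img_recon, img))    # True: a noiseless real object
                                      # is recovered exactly
```

For a noiseless, purely real object the reconstruction is exact; the rest of the post is about why real acquisitions fall short of this ideal.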


How do we reconstruct the missing part of k-space?

Before we look at the reconstruction options we first need to consider the experimental limitations that can hamper any method we select. As already mentioned, the imaged object must be real; that is, its mathematical description must not require any imaginary component. This requirement restricts us to obtaining the magnitude of the resulting image; we won't be able to reconstruct properly any phase variations across the image, assuming that we might be interested in such information. (Phase information isn't often used in fMRI; nearly all experiments use magnitude EPIs as the starting point.)

But there's another limitation concerning phase that might affect us, even in the world of magnitude EPI. The raw 2D k-space data is intrinsically complex - composed of real and imaginary parts - even when the object being imaged is purely real. We therefore have to take care that any phase variations that do arise across the image (even though we're ignoring them by looking only at the image magnitude) are relatively small. If there are large phase variations across the image then there may be insufficient information in the truncated k-space to properly reconstruct the missing part. We would violate a basic premise of the Nyquist sampling frequency for some of the information in what would otherwise be the fully sampled k-space, and that would produce N/2-like ghost artifacts.

The final restriction seems ridiculous at first glance, but when we come to look at pF EPI in future posts you'll see that it can actually be the most pernicious of all: the signal we're imaging has to be somewhere in the sampled k-space. "What?" you say? "It's possible to miss the signal?" Yup. Magnetic susceptibility gradients, in particular, can cause the actual (partial) k-space trajectory to differ significantly from the theoretical one that's imposed by the imaging gradients alone. (See Note 3.)

To illustrate the last restriction, take a look at this example of 2D k-space for a T2-weighted anatomical scan:

Example k-space and resulting image, taken from the SCOPE online tutorial on k-space.

There is only modest symmetry in the k-space on the left, even for this anatomical scan. In particular, notice how there are signals in the bottom half of the k-space - there's a band about one third of the way up, for instance - that don't have counterparts in the upper half. Thus, if a partial Fourier scheme were used such that the bottom third (or more) of k-space was omitted from the acquisition, we could expect signal voids and perhaps other artifacts somewhere in the resulting image, compared to the full k-space image. A pF version wouldn't look precisely like the full k-space image that appears on the right-hand side of the figure above.

The degree of symmetry of the signals in (full) k-space has immediate implications for the method used to reconstruct the missing k-space from a partial Fourier acquisition. We are reliant on conjugate symmetry and no matter what post-processing magic we might consider, we can't create information that doesn't exist in the acquired partial k-space! This limitation on the post-processing can, a little perversely, be considered an advantage for pF EPI: there may not be a significant benefit to fancy reconstruction schemes because the primary damage has likely already been done in the acquisition!


Nothing added, nothing taken away

Thus we get the simplest approach to computing the missing k-space data: don't bother. On Siemens scanners, at least, the default is to simply "zero fill" the omitted k-space points. A complete k-space matrix is constructed from the acquired k-space by appending as many lines of zero valued data points as needed to make the matrix rectangular. The resulting matrix is then fed into the 2D FT as usual, producing the final image.

Completing the k-space matrix with zeros adds no signal and no noise to the image and is generally a benign way to prepare the matrix for 2D FT, provided you weren't too aggressive with the omitted portion of k-space in the first place. (I'll deal with real examples of pF EPI in the next post.) But zero filling does have one immediate cost: it smooths the resulting images a bit, because we have effectively multiplied the fully acquired k-space matrix by a step function that produces zeros in that part of the 2D matrix we didn't actually acquire. Or, if you want another way to conceptualize the smoothing, refer back to PFUFA Part Eleven where we looked at "what lives where in k-space" and note how edges in the image reside in the high spatial frequencies in k-space. By acquiring only a partial k-space in one dimension, the signal-to-noise of the edge information has been degraded relative to the low spatial frequencies.
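Here is a small numpy sketch of the zero-filling idea and its smoothing cost, using a hypothetical 32x32 square phantom rather than real scanner data:

```python
import numpy as np

# Zero-filled partial Fourier sketch: omitted k-space lines are simply
# left as zeros before the 2D FT. A sharp-edged square stands in for
# the imaged object; all numbers are invented for the example.
n = 32
img = np.zeros((n, n))
img[8:24, 8:24] = 1.0                      # square "phantom" with hard edges

k = np.fft.fftshift(np.fft.fft2(img))      # centered k-space
k_zf = k.copy()
k_zf[: n // 4, :] = 0                      # omit the bottom quarter (6/8 pF)

recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_zf)))

# The zero-filled image is a smoothed copy: edges blur because the high
# spatial frequencies on one side of k-space were discarded, exactly the
# step-function multiplication described in the text.
edge_orig = np.abs(np.diff(img[:, 16])).max()
edge_zf = np.abs(np.diff(recon[:, 16])).max()
print(edge_zf < edge_orig)                 # True: the edge is less sharp
```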

What other alternatives are there for reconstruction of the omitted k-space portion? There are various ways to estimate the omitted portion from the acquired portion, including ways to estimate the phase variation. These are nicely summarized in this PDF (which isn't EPI-specific). However, there isn't much literature comparing the different reconstruction methods for EPI, or more specifically pF EPI for fMRI applications. What's more, they all seem to be a large amount of work for a small potential benefit, and they might have costs that would affect time series EPI in a bad way. Since the acquisition already imposes constraints on the final images, and because most people will use whatever reconstruction is the default on their scanner anyway, I will leave the reconstruction issue for the time being and return to the acquisition options. In the next couple of posts I'll be using zero filling for reconstruction - it is the Siemens default, as already mentioned.


Which portion of k-space should we leave out for EPI?

To complete this post let's look at two options for partial Fourier EPI: omitting either the early or the late echoes from the gradient echo train. (We could conceivably omit readout data points instead - left- or right-hand portions of k-space in the figures to come - but this doesn't save us as much time and we generally focus on the phase encoding dimension for fMRI.) So, let's return to the (full) k-space trajectory for EPI that we saw in PFUFA Part Twelve:




I've already indicated that we could either omit some of the early echoes or some of the late echoes. (Leaving out both early and late echoes is tantamount to reducing the maximum k-space extent, which has the effect of reducing the image resolution.)

If we decide to omit some of the early echoes then our k-space trajectory might look like this:


This trajectory means that we can, if we choose, hit the center of k-space - the point which defines the echo time, TE - sooner than we would have had we acquired the full matrix. Thus, one immediate consequence of omitting early echoes is to permit a shorter minimum TE for what is essentially the same resolution image. (Recall, however, the smoothing that I already mentioned. We'll look at smoothing effects in a later post.)

Whether a shorter TE is beneficial for fMRI applications will depend on many factors. We might expect less signal dropout, but we might also expect lower BOLD functional contrast if the TE we use departs significantly from the T2* of the gray matter. (If these statements are baffling then you might want to read the pertinent sections in my user training guide/FAQ.) But there is one thing that omitting the early echoes doesn't do: it doesn't allow more slices within the TR. Put another way, it doesn't save us any time per slice unless we also shorten TE. I'll deal with the issue of speed - slices/TR - in a subsequent post.

What if we acquire all the early echoes and omit some of the later ones instead? That k-space trajectory looks like this:



The minimum TE is unchanged compared to acquiring the full k-space matrix. Now, however, we reach the end of data acquisition for this slice sooner (relative to TE) than we would for a complete k-space plane. Thus, omitting the later echoes permits us to increase the number of slices per TR without changing TE (or any other timing parameter).
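As a back-of-envelope illustration of the two options, assume a hypothetical 64-line phase-encode axis, 0.5 ms echo spacing and 6/8 partial Fourier (all numbers invented for the example):

```python
# Hypothetical EPI timing: 64 phase-encode lines, 0.5 ms echo spacing,
# 6/8 partial Fourier, i.e. 16 lines omitted. Illustration only.
n_pe = 64
esp_ms = 0.5                              # echo spacing per k-space line
n_omitted = int(n_pe * (1 - 6 / 8))       # 16 omitted lines

# Omit EARLY echoes: the k-space center arrives sooner, so the minimum
# TE drops; nothing changes after TE, so the slice doesn't finish sooner.
min_te_saving_ms = n_omitted * esp_ms

# Omit LATE echoes: TE is unchanged, but each slice's readout ends
# sooner, freeing time for more slices per TR.
per_slice_saving_ms = n_omitted * esp_ms

print(min_te_saving_ms, per_slice_saving_ms)   # 8.0 8.0
```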


That'll do for this post. In the next post I will show some examples of partial Fourier EPI from phantoms and brains. We will investigate the practical consequences of using pF with zero-filled reconstruction of the omitted k-space portion, in particular the effects on signal dropout and image smoothing. And in the post after that we will look at imaging speed issues, with a view to generating some guidelines for selecting, and using, partial Fourier EPI for fMRI.

____________________




Notes:

1.  The term "half NEX" seems to be an historical reference to the application of partial Fourier methodology to a standard spin warp-style phase encoded scan, where just one line of phase-encoded information (one line of k-space) is acquired following each RF pulse. (See PFUFA Part Ten for a description of gradient echo imaging using conventional, spin warp-style phase encoding.) In that case there is one line of k-space detected per excitation RF pulse; there will be N pulses for N phase-encoded lines of k-space in the final image. Hence, omitting some of the N EXcitations makes sense for spin warp phase encoding. But when applied to EPI, which is most commonly acquired as a single shot (single RF) acquisition, the historical term just gets confusing. We only use a single excitation! N=1. I understand there are practical reasons for not changing terminology on a scanner - not least keeping the installed base of users happy - so I'm not pointing fingers, just trying to clarify what might otherwise lead to some bewilderment amongst those not fluent in multiple scanner languages. This stuff's complicated enough as it is!


2.  With partial Fourier encoding the k-space step size is unchanged from that used for the full k-space coverage. In other words, the distance between successive frequency-encoded lines of k-space is exactly as was demonstrated in earlier posts on EPI. All we are going to do is (intentionally) fail to acquire either the first few or the last few frequency-encoded lines, thereby failing to fill the entire 2D k-space plane with real data. But there are other ways to save time by skipping over some fraction of k-space. Parallel imaging (PI) methods, such as SENSE and GRAPPA, make use of spatial information encoded in the receive field of the RF coil, and then save acquisition time by skipping some k-space lines, e.g. all the odd-numbered frequency-encoded k-space lines might be skipped for an acceleration factor (R) of two. Note, however, that parallel imaging changes the k-space step size in the phase encoding direction - it is doubled for R=2, for instance - whereas partial Fourier omits a continuous patch of k-space while leaving the k-space step size unchanged for the acquired portion, as shown here:



The total number of lines of k-space omitted is generally larger for PI than for pF. Thus, the time savings can be larger for PI than for pF, but at the expense, usually, of increased motion sensitivity. The issue of whether (and when) to use PI, pF or other schemes for saving time will be considered in future posts. But there is a brief comparison of PI (GRAPPA) versus pF in my user training guide/FAQ to be going on with.
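The difference between the two sampling schemes is easy to express as acquisition masks. A sketch, with a hypothetical 64-line phase-encode axis (1 = acquired, 0 = skipped):

```python
import numpy as np

# Contrast the two undersampling patterns described above.
n_pe = 64

# Partial Fourier (6/8): omit one contiguous block of lines; the
# k-space step size within the acquired portion is unchanged.
pf_mask = np.ones(n_pe, dtype=int)
pf_mask[: n_pe // 4] = 0                  # first 16 lines never acquired

# Parallel imaging, R = 2 (e.g. GRAPPA or SENSE): skip every other
# line; the k-space step size is doubled.
pi_mask = np.zeros(n_pe, dtype=int)
pi_mask[::2] = 1                          # 32 of 64 lines acquired

print(pf_mask.sum())   # 48 lines acquired with 6/8 pF
print(pi_mask.sum())   # 32 lines acquired with R = 2
```

As the counts show, PI omits more lines than pF for typical factors, hence the larger time savings (and, usually, the greater motion sensitivity).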


3.  The restriction that the signals actually get sampled in the (partial) k-space plane is not fundamentally different from the case when we are sampling the entire plane: magnetic susceptibility gradients may interact with the imaging gradients and cause the signals to be moved entirely out of the (theoretical) k-space plane then, too. This is one of the sources of the infamous EPI dropout! So, what we're really doing is imposing an even stricter requirement that the magnetic susceptibility gradients be minimized through processes like shimming, or we can expect to pay a penalty in additional regions of signal loss.

12-channel versus 32-channel head coils for fMRI


At last month's Human Brain Mapping conference in Seattle, a poster by Harvard scientists Stephanie McMains and Ross Mair (poster 3412) showed yet more evidence that the benefits of a 32-channel coil for fMRI at 3 T aren't immediately obvious. Previous work by Kaza, Klose and Lotze in 2011 (doi: 10.1002/jmri.22614) had suggested that the benefits were regional, with cortical areas benefiting from the additional signal-to-noise ratio (SNR) whereas the standard 12-channel coil was superior for fMRI of deeper structures such as thalamus and cerebellum. The latest work by McMains and Mair confirms an earlier report from Li, Wang and Wang (ISMRM 17th Annual Meeting, 2009. Abstract #1614) that showed spatial resolution also affects the benefit, if any. In a nutshell, if a typical voxel resolution of 3 mm is used then the 32-channel coil provides no benefit over a 12-channel coil. The 32-channel coil was best only when resolution was pushed to 2 mm, thereby pushing the SNR down towards the thermal noise limit, or when using high acceleration, e.g. GRAPPA with acceleration, R > 2.

What's going on? In the first instance we need to think about the regimes that limit fMRI at different spatial resolutions. In the absence of subject motion and physiologic noise, the SNR of an EPI voxel will tend towards a thermal noise-limiting regime as it gets smaller. Let's assume a fairly typical SNR of 60 for a voxel that has dimensions 3.5x3.5x3.5 mm^3, as detected by a 12-channel head coil at 3 T. If we shrink the voxel to 3x3x3 mm^3 the SNR will decrease by ~27/43, to about 38, while if we shrink to 2x2x2 mm^3 the SNR will decrease to about 11. (Here I am assuming that all factors affecting N are invariant to resolution while S scales with voxel volume, which is sufficient for this discussion.) If we decrease the voxels to 1.5x1.5x1.5 mm^3 the SNR decreases to below five. The SNR is barely above one if we push all the way to 1x1x1 mm^3 resolution, which is why you don't often see fMRI resolution better than 2 mm at 3 T. Thus, if high spatial resolution is the goal then one needs to boost the SNR well beyond what we started out with to achieve a reasonable image. Hence the move to larger phased-array receive coils.
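The arithmetic in that paragraph can be checked in a couple of lines, under the stated assumption that S scales with voxel volume while N is fixed:

```python
# SNR vs voxel size, assuming signal scales with voxel volume and all
# noise terms are resolution-independent (the simplification above).
snr_ref = 60.0                     # assumed SNR at 3.5 mm isotropic
vol_ref = 3.5 ** 3                 # reference voxel volume, mm^3

snrs = {side: snr_ref * side ** 3 / vol_ref for side in (3.0, 2.0, 1.5, 1.0)}
for side, snr in snrs.items():
    print(f"{side} mm isotropic: SNR ~ {snr:.1f}")
# 3.0 -> ~37.8, 2.0 -> ~11.2, 1.5 -> ~4.7, 1.0 -> ~1.4
```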

So that's the situation when the thermal noise is limiting. This is generally the case for anatomical MRI, but does it apply to fMRI? If something else is limiting - either physiologic noise or subject motion - then increasing the raw SNR may not help as expected. In fMRI we are generally less concerned with true (white) thermal noise than we are with erroneous modulation of our signal. It's not noise so much as it is signal changes of no interest. For this reason, Gonzalez-Castillo et al. (doi: 10.1016/j.neuroimage.2010.11.020) recently proposed using a very low flip angle in order to minimize physiologic noise while leaving functional signal changes unchanged.


From ISMRM e-poster 3352, available as a PDF via this link.


What if we can't even attain the physiologic noise-limiting regime? It's quite possible to be in a subject motion-limiting regime, as anyone who has run an fMRI experiment can attest. In that case, the use of a high dimensional array coil (of 32 channels, say) could actually impose a higher motion sensitivity on the time series than it would have had were it detected by a smaller array coil (of 12 channels, say), due to the greater receive field heterogeneity of the 32-channel coil. This was something a colleague and I considered last year, in an arXiv paper (http://arxiv.org/abs/1210.3633) and accompanying blog post. In an e-poster at this year's ISMRM Annual Meeting (abstract #3352; a PDF of the slides is available via this Dropbox link) we simulated the effects of motion on temporal SNR (tSNR), as well as the potential for spurious correlations in resting-state fMRI, when using a 32-channel coil. In doing these simulations we assumed perfect motion correction yet there were still drastic effects, as the above figure illustrates.

I'm not in a position to say with any certainty whether the equivocal benefits of a 32-channel coil for routine fMRI (that is, using 3-ish mm voxels) are due to enhanced motion sensitivity, higher physiologic noise or some other factor. My colleagues and I, and others, are investigating ways that we might reduce the effects of receive field contrast on motion correction. The use of a prescan normalization is one idea that might help, at least a bit. The process has many assumptions and potential flaws, but it may offer the prospect of getting back some of what might be lost courtesy of the enhanced motion sensitivity. We simply don't know yet. The bigger problem, however, seems to be that a heterogeneous receive field contrast will impart motion sensitivity on a time series even if motion correction were perfect. Strong receive field heterogeneity, of the sort exhibited by a 32-channel head coil, is a killer if the subject moves.

Unless you are attempting to use highly accelerated parallel imaging (in particular the multiband sequences) and/or pushing your voxel size towards 2 mm, you're almost certainly better off sticking with the 12-channel coil as far as fMRI performance is concerned. Other scans, in particular anatomical scans and perhaps some diffusion-weighted scans, may benefit from larger array coils (because these scans may be in the thermal noise-limiting regime), but each application will need to be verified independently.

Shared MB-EPI data


This is cool: publicly available test-retest pilot data sets using MB-EPI and conventional EPI on the same subjects, courtesy of the Nathan Kline Institute:



What's available:


The acquisition protocols are available as PDFs via the links given in the release website (and copied here). I like that they restricted the acceleration (MB) factor to four. I also like that the 3 mm isotropic MB-EPI data acquired at TR=645 ms used full Fourier acquisition (no partial Fourier) and an echo spacing of 0.51 ms. The former may help with signal in deep brain regions as well as frontal and temporal lobes, while the latter avoids mechanical resonances in the range 0.6-0.8 ms on a Trio, and also keeps the phase encode distortion reasonable.

There are already studies coming out that use these data sets, such as this one by Liao et al (which is how I learned of their existence). I don't yet know which reconstruction version was used for these data sets, but those of you who are tinkering should be aware that the latest version from CMRR, version R009a, has significantly lower artifacts and less smoothing than prior versions:

MB-EPI using CMRR sequence version R008 on a Siemens Trio with 32ch coil. MB=6, 72 slices, TE=38 ms, 2 mm isotropic voxels.

MB-EPI using CMRR sequence version R009a on a Siemens Trio with 32ch coil. MB=6, 72 slices, TE=38 ms, 2 mm isotropic voxels.


The bubbles visible in the bottom image of a gel phantom are real. The other intensity variations are artifacts. In both images one can easily make out the receive field heterogeneity of the 32-channel head coil.

----
Note added post publication

From Dan Lurie (@dantekgeek): We’re also collecting/sharing data from 1000 subjects using the same sequences, plus deep phenotyping

The experimental consequences of using partial Fourier for EPI


PFUFA Part Fourteen introduced the idea of acquiring partial k-space and explained how the method, hereafter referred to as partial Fourier (pF), is typically used for EPI acquisitions. At this point it is useful to look at some example data and to begin to assess the options for using pF-EPI for experiments.


Image smoothing

The first consequence of using pF is image smoothing. It arises because we've acquired all of the low spatial frequency information twice - on both halves of k-space - but only half of some of the high spatial frequency information. We've then zero-filled that part of k-space that was omitted. This has the immediate effect of degrading the signal-to-noise ratio (SNR) for the high spatial frequencies that reside in the omitted portion of k-space. (PFUFA Part Eleven dealt with where different spatial frequencies are to be found in k-space.) Thus, the final image has less detail and is smoother than it would have been had we acquired the full k-space matrix, and because of the smoothing the final image SNR tends to be higher for pF-EPI than for the full k-space variant.
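A toy one-dimensional sketch of this effect can be run with NumPy. The object, matrix size and omitted fraction are illustrative choices, and plain zero-filling stands in for the scanner's reconstruction (which would also need phase correction for real data):

```python
import numpy as np

# Toy 1-D illustration of partial Fourier smoothing: zero-filling the
# omitted quarter of k-space (6/8ths pF) broadens the point-spread
# function, so signal leaks beyond the object's true edges.

n = 64
obj = np.zeros(n)
obj[28:36] = 1.0                            # a sharp 8-pixel "object"

k = np.fft.fftshift(np.fft.fft(obj))        # k-space, centre at n//2
k_pf = k.copy()
k_pf[:n // 4] = 0                           # omit 2/8ths of one side

img_full = np.abs(np.fft.ifft(np.fft.ifftshift(k)))
img_pf = np.abs(np.fft.ifft(np.fft.ifftshift(k_pf)))

# Signal outside the object's true support indicates edge blurring.
leak_full = img_full[:28].max()             # essentially zero
leak_pf = img_pf[:28].max()                 # clearly non-zero
print(leak_full, leak_pf)
```

The full-Fourier reconstruction recovers the sharp edges exactly (to floating point), while the zero-filled version smears signal into neighbouring pixels, which is the smoothing described above.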

It was surprising to me that pF-EPI has higher SNR - due to smoothing - than full Fourier EPI in spite of the reduced data sampling in the acquisition. Conventional wisdom, which is technically correct, states that acquiring less data will degrade SNR. To understand this conundrum, we can think of pF as being like a square filter applied asymmetrically to the phase encoding dimension of an EPI obtained from a complete k-space acquisition. Indeed, as we start to evaluate the costs and benefits of pF for EPI we should probably be thinking about a minimum of a three-way comparison. Firstly, we obviously want to compare our pF-EPI to the full k-space alternative having the same nominal resolution. But we should also consider whether there is any advantage over a lower resolution EPI with full k-space coverage. Why? Because this lower resolution version is, in effect, what you get when partial Fourier is applied symmetrically, i.e. when the high spatial frequencies are omitted from both halves of the phase encoding dimension!

Let's do our first assessment of pF on a phantom. There are four images of interest: the full k-space image, two versions of pF - omitting the early or the late echoes from the echo train - and, for the sake of quantifying the amount of smoothing, a lower resolution full k-space image which is tantamount to omitting both the early and late echoes. (See Note 1.) From this point on I'm going to refer to omission of the early and late echo variants as pF(early)-EPI and pF(late)-EPI, respectively.

Images acquired from a structural phantom with a 12-channel head coil on a Siemens Trio. All parameters except the phase encode k-space sampling were fixed. Top left: 64x64 full Fourier EPI. Top right: 64x48 full Fourier EPI. Bottom left: 6/8ths pF(early)-EPI, reconstructed to 64x64. Bottom right: 6/8ths pF(late)-EPI, reconstructed to 64x64. (Click image to enlarge.)


I'm afraid it's not immediately clear, but if you look carefully you should be able to see that there are small features - a group of little dots in the middle of the central circle, for example - that are better resolved in the 64x64 full Fourier image than in either of the partial Fourier variants. The 64x48 matrix image is smoothest of all, as we would expect. It's also interesting to note that (Gibbs) ringing is prominent in the 64x64 matrix but much less so in the other three images. This prominence is another consequence of improved spatial resolution: ringing is always present to some extent because our pixels are, strictly speaking, sinc-shaped rather than square. Hard edges, as in this phantom, tend to exhibit the strongest ringing, and the higher the resolution the better the ringing is defined. (There's a review of ringing in this post.)


In fMRI we are interested in temporal stability, of course, so let's take a look at how partial Fourier affects temporal SNR (TSNR) when all other parameters (including TE) are held constant:

TSNR images derived from 100 volumes of EPI acquired from a structural phantom with a 12-channel head coil on a Siemens Trio. All parameters except the phase encode k-space sampling were fixed. Top left: 64x64 full Fourier EPI. Top right: 64x48 full Fourier EPI. Bottom left: 6/8ths pF(early)-EPI, reconstructed to 64x64. Bottom right: 6/8ths pF(late)-EPI, reconstructed to 64x64. (Click image to enlarge.)


The TSNR for the regions of interest in the above figure are as follows:

Top left:       Full 64x64           TSNR = 343
Top right:      Full 64x48           TSNR = 437
Bottom left:    6/8ths pF(early)     TSNR = 435
Bottom right:   6/8ths pF(late)      TSNR = 425

(Note: this is a throwaway comparison, for the purposes of illustration only! Please don't take the numbers you see here as absolutes, I am simply showing the effects of smoothing via SNR and TSNR because it may be difficult to see the smoothing on the limited details in these phantom images.)

The low resolution image (64x48) has higher TSNR than the higher resolution image (64x64 full k-space). We should expect a boost in SNR (and hence TSNR in a stationary object) from 343 to (64/48 x 343) = 457 because of the difference in voxel size. The observed TSNR of 437 isn't too far off.

Where things get slightly more interesting is for the pF-EPI variants. Conventional wisdom states that using partial Fourier will degrade the SNR in an image because less data is being recorded for an image that has the same nominal spatial resolution. For pF-EPI, however, the effect of smoothing (that is, the broadened point-spread function for the pixels) outweighs the signal-reducing effect of acquiring less data. Indeed, the observed TSNR is very close to that for the 64x48 full Fourier acquisition, indicating that the smoothing function is pronounced.

What about differences between the early and late echo variants of pF-EPI? Omitting the early echoes seems to boost TSNR very slightly more than omitting the late echoes, which is counter-intuitive because the early echoes will almost certainly have higher signal than the late echoes. Whether the difference in TSNR is significant I won't get into because the difference is quite small and in other ROIs (not shown) the TSNR is almost identical. Besides, as you'll see below, there are other differences that might outweigh any smoothing differences. So, what's important at this juncture is that we recognize that the use of partial Fourier - omitting either the early or the late echoes - generates considerable image smoothing for EPI reconstructed with zero filling of the missing k-space.


Before we leave the smoothing issue, let's take a quick look at the effects on brain data since that's probably your interest. (My apologies, I didn't acquire the 64x48 full Fourier option from the brain. I'll do so for the next post, when I consider different pF-EPI schemes for fMRI.) Here's how 64x64 matrix full Fourier EPI compares to early and late 6/8ths pF-EPI variants:

Left: Full Fourier EPI acquired and processed as a 64x64 matrix. Center: 6/8ths pF(early)-EPI reconstructed to a 64x64 matrix with zero filling. Right: 6/8ths pF(late)-EPI reconstructed to a 64x64 matrix with zero filling. (Click picture to enlarge.)

If you have a good eye you may be able to see that the full Fourier acquisition, on the left, has finer detail than either of the pF-EPI options. I haven't quantified the SNR because it is highly region-dependent. (I cover the TSNR for these three acquisitions below.) On the basis of the phantom data above, however, we should expect the SNR to be increased for the pF variants entirely due to the smoothing effect. Whether this smoothing is acceptable or not for fMRI will be covered in the next post. Before we can make that determination we need to consider something else.


Signal dropout

Not all brain regions will see increased SNR because of smoothing. Some regions will see a degradation of SNR as a result of enhanced dropout. The origin of this effect was explained in PFUFA Part Fourteen. It is a consequence of the signals "falling off the edge" of the (curtailed) k-space plane because of magnetic susceptibility gradients.

Here are some brain images using 64x64 full Fourier EPI compared to 6/8ths pF(early)-EPI and 6/8ths pF(late)-EPI:

Left: 64x64 full Fourier EPI. Center: 6/8ths pF(early)-EPI. Right: 6/8ths pF(late)-EPI. (Click image to enlarge.)

Omitting the early echoes tends to enhance signal dropout in the temporal and frontal lobes (red and yellow arrows) while omitting the late echoes preserves temporal and frontal lobes but causes enhanced dropout in deep brain regions (blue arrows). This is interesting because it suggests that we have a degree of flexibility over where we pay the penalty for using partial Fourier EPI. I'll return to this issue later on, and in a subsequent post, because when we start to consider all the costs and benefits of pF-EPI we need to consider other parameters that might be changed in concert, such as the phase encoding direction, TE and the slice thickness.

Let's finish up this first look at pF-EPI in brain by assessing the TSNR. These images were obtained from 100 volumes with TE=22 ms and TR=2000 ms. All parameters were constant except the degree of partial Fourier sampling in the phase encoding dimension:

TSNR for different EPI sampling schemes. Left: 64x64 full Fourier EPI. Center: 6/8ths pF(early)-EPI. Right: 6/8ths pF(late)-EPI. (Click image to enlarge.)

In the figure I've included an ROI so that we can do a throwaway quantitative comparison. (As before, don't over-interpret what you see.)

Left:      Full 64x64          TSNR = 100
Center:    6/8ths pF(early)    TSNR = 115
Right:     6/8ths pF(late)     TSNR = 123

All we can state with confidence is that the full Fourier images show a TSNR that is lower than either pF-EPI variant because of the smoothing, and that there are regions that have far lower - approaching zero - TSNR for the pF-EPI, due to the enhanced dropout that we saw above. Nothing new here, except one thing to note in passing: there doesn't appear to be a substantial difference in motion sensitivity when using pF-EPI. The smoothing-induced boost in TSNR is preserved in the brain images as it was in the stationary phantom images. This is as we expect because all we're doing is shortening a single-shot acquisition. (See Note 2.)


Options with partial Fourier

In the comparisons presented here I intentionally fixed all parameters except the partial Fourier scheme. That way you were able to get a sense of the direct costs and benefits of each scheme. But there are at least three other parameters that should be considered when setting up a partial Fourier scheme: (i) omitting the early echoes will permit a shorter minimum TE, (ii) omitting the late echoes will permit faster acquisition (i.e. more slices per TR) even when TE is unchanged, and (iii) the phase encoding direction makes "early" and "late" a relative property of the echo train. I'm going to leave consideration of these issues for the next post, when I will look at setting up pF-EPI for an fMRI experiment. None of these options is trivial. The TE affects BOLD sensitivity, the TR affects statistical power and brain coverage, while the phase encoding direction establishes whether distortions will be a stretch or a compression in a particular brain region. All of these issues interact depending on what parameters we change having selected a particular pF-EPI option, and the optimal combination of parameters will depend on the brain region(s) of interest in your experiment.

___________________



Notes:

1.  Please note that Siemens' product EPI sequences don't have a way to select the late echoes for their partial Fourier options. The early echoes are always omitted with these sequences. However, in the coming months/years I hope to offer to research users a modified sequence that has early/late echo omission as an option, along with a host of other small tweaks that can be useful for fMRI. As soon as this sequence is available for distribution I'll be sure to blog about it.

2.  Pedants will note that the actual motion sensitivity is a function of the underlying image contrast so that, in strict terms, the pF-EPI and full Fourier EPI scans do have different motion sensitivities. But this difference also exists for different brain shapes, different head orientations, different RF flip angles (because flip angle and TR establish the T1-based image contrast), etc. Might the motion sensitivity actually be reduced with pF-EPI? It seems unlikely. Although the per slice time is decreased with pF-EPI, we also have to recognize that the effects of magnetic susceptibility are changed, too. So, what we gain with speed on the one hand we might give up with susceptibility contrast effects - signal dropout in other words - on the other. I really couldn't say whether these effects will be offsetting or not, and as far as I know nobody has ever assessed it. My bet would be that proving a systematic difference would be difficult because I suspect the motion sensitivity differences would be tiny.

i-fMRI: BRAIN scanners of the past, present and future


Have you ever wondered why your fMRI scanner is the way it is? Why, for example, is the magnet typically operated at 1.5 or 3 T, and why is there a body-sized transmission coil for the RF? The prosaic answer to these questions is the same: it's what's for sale. We are fortunate that MRI is a cardinal method for radiology, and this clinical utility means that large medical device companies have invested hundreds of millions of dollars (and other currencies) into its development. The hardware and pulse sequences required to do fMRI research aren't fundamentally different from those required to do radiological MRI so we get to use a medical device as a scientific instrument with relative ease.

But what would our fMRI scanners look like today had they been developed as dedicated scientific instruments, with little or no application to something as lucrative as radiology? Surely the scanner-as-research-device would differ in some major ways from that which is equally at home in the hospital or the laboratory. Or would it? While it's clear that the fMRI revolution of the past twenty years has ridden piggyback on the growing clinical importance of diffusion and other advanced anatomical imaging techniques, what's less obvious is the impact of these external factors on how we conduct functional neuroimaging today. State-of-the-art fMRI might have looked quite different had we been forced to develop scanners explicitly for neuroscience.


"I wouldn't start from here, mate."

This week's interim report from the BRAIN Initiative's working group is an opportunity for all of us involved in fMRI to think seriously about our tools. We've come a long way with BOLD contrast to be sure, even though we don't fully understand its origins or its complexities. Should I be delighted or frustrated at my capacity to operate a push-button clinical machine at 3 T in order to get this stuff to work? It's undoubtedly convenient, but at what cost to science?

I can't help but wonder what my fMRI scanner might look like if it were designed specifically for the task. Would the polarizing magnet be horizontal or would a subject sit on a chair in a vertical bore? How large would the polarizing magnet be, and what would be its field strength? The gradient set specifications? And finally, if I'm not totally sold on BOLD contrast as my reporting mechanism for neural activity, what sort of signal do I really want? In all cases I am especially interested in why I should prefer one particular answer over the other alternatives.

Note that I'm not suggesting we all dream of voltage-sensitive contrast agents. That's the point of the BRAIN Initiative according to my reading of it. All I'm suggesting is that we spend a few moments considering what we are currently doing, and whether there might be a better way. Unless there has been a remarkable set of coincidences over the last two decades, the chances are good that an fMRI scanner designed specifically for science would have differed in some major ways from the refined medical device that presently occupies my basement lab. There would be more duct tape for a start.



CALAMARI: Doing MRI at 130 microtesla with a SQUID


I've been dabbling in some ultralow field (ULF) MRI over the past several years, trying first to get functional brain imaging to work (more on that another day, perhaps) and more recently looking at the contrast properties of normal and diseased brains. We detect MR signals at less than three times the earth's magnetic field (of approximately 50 microtesla) using an ultra-sensitive superconducting quantum interference device (SQUID). The system is usually referred to as "The Cube" on account of the large aluminum box surrounding the entire apparatus; it provides magnetic shielding for the SQUID. But my own nickname for the system is CALAMARI - the CAL Apparatus for MAgnetic Resonance Imaging. Deep-fried rings or grilled strips, it's all good. Anyway, should you wish to know more about this home-built system and what it might be able to do, there's a new paper (John Clarke's inaugural article after being elected to the NAS) now out in PNAS. At some point I'll put up more blog posts on both anatomical and functional ULFMRI, and go over some of the work that's being done at high fields (1.5+ T) that may be relevant to ULFMRI.





Using partial Fourier EPI for fMRI


Back in August I did a post on the experimental consequences of using partial Fourier for EPI. (An earlier post, PFUFA Part Fourteen introduces partial Fourier EPI.) The main point of that post was to demonstrate how, with all other parameters fixed, there are two principal effects on an EPI obtained with partial Fourier (pF) compared to using full phase encoding: global image smoothing, and regionally enhanced signal dropout. (See Note 1.)

In this post I want to look a little more closely at how pF-EPI works in practice, on a brain, with fMRI as the intended application, and to consider what other parameter options we have once we select pF over full k-space. I'll do two sets of comparisons. In the first comparison all parameters except the phase encoding k-space fraction will be fixed so that we can again consider the first stage consequences of using pF. In the second comparison each pF-EPI scheme will be optimized in a "maximum performance" test. The former is an apples to apples comparison, with essentially one variable changing at a time, whereas the latter is how you would ordinarily want to consider the pF options available to you.


Why might we want to consider partial Fourier EPI for fMRI anyway?

If we assume a typical in-plane matrix of 64 x 64 pixels, an echo spacing (the time for each phase-encoded gradient echo in the train, as explained in PFUFA Part Twelve) of 0.5 ms and a TE of 30 ms for BOLD contrast then it takes approximately 61 ms to acquire each EPI slice. (See Note 2 for the details.) The immediate consequence should be obvious: at 61 ms per slice we will be limited to 32 slices in a TR of 2000 ms. If the slice thickness is 3 mm then the total brain coverage in the slice dimension will be ~106 mm, assuming a 10% nominal inter-slice gap (i.e. 32 x 3.3 mm slices). With axial slices we aren't going to be able to cover the entire adult brain. We will have to omit either the top of the parietal lobes or the bottom of the temporal lobes, midbrain, OFC and cerebellum. Judicious tilting might be able to capture all of the regions of primary interest to you, but we either need to reduce the time taken per slice or increase the TR to cover the entire brain.
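As a rough check of the arithmetic, here is the calculation in Python. The ~15.5 ms per-slice overhead (excitation, fat saturation, spoilers) is an assumed figure chosen to reproduce the ~61 ms per-slice total quoted above:

```python
# Back-of-envelope slice-timing and coverage calculation.
# The 15.5 ms per-slice overhead is an ASSUMPTION, not a measured value.

echo_spacing = 0.5                  # ms per phase-encoded line
n_lines = 64                        # full Fourier phase-encode lines
te = 30.0                           # ms, for BOLD contrast
overhead = 15.5                     # ms, assumed per-slice overhead

# TE falls at the centre of the echo train, so each slice takes TE plus
# the second half of the readout plus the per-slice overhead.
per_slice = te + (n_lines / 2) * echo_spacing + overhead   # ~61 ms

tr = 2000.0
n_slices = int(tr // per_slice)                            # 32 slices
coverage = n_slices * 3.0 * 1.1     # 3 mm slices with a 10% gap, in mm
print(per_slice, n_slices, coverage)
```

This lands at ~61 ms per slice, 32 slices in a 2000 ms TR, and ~106 mm of coverage, matching the figures in the text.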

Partial Fourier is one way to reduce the time spent acquiring each EPI slice. There are two basic ways to approach it: eliminate either the early echoes or the late echoes in the echo train, as described at the end of PFUFA: Part Fourteen. Eliminating the early echoes doesn't, by itself, save any time at all. Only if the TE is reduced in concert is there any time saving. But omitting the late echoes will mean that we complete the data acquisition for the current slice earlier than we would for full Fourier sampling, hence there is some intrinsic speed benefit. I'll come back to the time savings and their consequences later on. Let's first look at what happens when we enable partial Fourier without changing anything else.


Image quality assessment for pF-EPI

Our gold standard will be full k-space EPI with a 64 x 64 matrix. For this post I am only going to use the 6/8ths partial Fourier option, meaning that one quarter (2/8ths) of the phase encoding k-space will be omitted from the acquisition. Thus, we will have acquired 48 of 64 phase encode lines and will simply zero fill the missing lines prior to 2D FT of a (synthetic) 64 x 64 matrix. Again, see PFUFA: Part Fourteen for an introduction to partial Fourier EPI if this vernacular leaves you cold.

As we saw previously, one effect of acquiring a partial k-space is image smoothing, which immediately raises the question: why bother using pF at all, and why not just reduce the matrix size (symmetrically) instead? So, one comparison we want to make, specifically to evaluate image smoothing, is the acquisition of a full Fourier 64 x 48 matrix lower resolution EPI. In this case we acquire k-space symmetrically in the phase encoding dimension; we're leaving off 1/8th of the early and 1/8th of the late echoes compared to the full 64 x 64 matrix acquisition.

As we've seen previously, there are two options for 6/8ths pF-EPI. We can omit the early or the late phase encoded echoes, as illustrated in this figure (see Note 3):



I shall try always to refer consistently to the former as pF(early) and the latter as pF(late), but in some of the images you may notice that in practice I tend to refer to the former as simply pF while the latter is pFrev, for "reversed" pF. So if you see "rev" or "reversed" in any data just think "late" instead.

I also want to emphasize here that early and late (or reversed) are designations made relative to the phase encoding direction that's being used. For axial slices the Siemens default is to use anterior-posterior (A-P) phase encoding. (I've noted previously that GE uses P-A by default.) If the imaging gradients were perfect and there were no magnetic susceptibility gradients across the head then omitting the late echoes for A-P phase encoding would be tantamount to omitting the early echoes for P-A phase encoding. But we don't have a perfect system and we shall therefore want to do a separate set of comparisons for P-A phase encoding, distinct from those for A-P. The imperfections? Mostly, it's those pesky magnetic susceptibility gradients that cause distortion and dropout. The phase encoding dimension dictates the direction of distortion and you will almost certainly have a preference. Also, the local regions that exhibit enhanced signal dropout will differ with phase encoding direction.

Disclaimer: Do not, under any circumstances, treat these results as a validation of either of the pF variants!!! All I offer is a starting point for you to ponder your alternatives. Unless and until someone provides a validation of pF you should remain skeptical. At a minimum, you would want to conduct a thorough pilot experiment before selecting a pF variant for a full-blown fMRI experiment.


Disclaimer over, here is our first set of comparisons, in this case using A-P phase encoding:

EPI with phase encoding set A-P and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

All parameters except the phase encoding fraction are constant: Siemens TIM/Trio, 12-channel head coil, TR = 2000 ms, TE = 22 ms, FOV = 224 mm x 224 mm, slice thickness = 3 mm, inter-slice gap = 0.3 mm, echo spacing = 0.5 ms, bandwidth = 2232 Hz/pixel, flip angle = 70 deg. Each EPI was reconstructed as a 64x64 matrix however much actual k-space was acquired, and any omitted portions were zero-filled prior to 2D FT.

Let's zoom in a bit to get a better look at those slices that typically exhibit regions of dropout:

EPI with phase encoding set A-P and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

Additional dropout is evident for both of the pF options as well as for the low resolution full k-space EPI, compared to the 64 x 64 reference images. Temporal lobes and midbrain are affected most, consistent with the brain data shown in the last post on pF-EPI. (See Note 4 for more information on the effects of resolution on dropout.)

What about image smoothing? It's hard to see on brains, but there are a couple of slices on which you can, if you have a good eye, just about discern the different edge detail:

EPI with phase encoding set A-P and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


We aren't exclusively interested in the appearance of EPI or the brightness of a particular region when doing fMRI, however. We are using time series acquisitions so we need to consider motion sensitivity and the signal stability over time. So let's shift to assessing temporal images.

We can make a reasonable assessment of any differential motion sensitivity by looking at standard deviation images. Here are the results for the A-P phase encoding data from above, for fifty-volume time series acquisitions (100 secs of data) in each case:

Standard deviation images for fifty EPI time series acquisitions, with phase encoding set A-P and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

As we would expect for single shot EPI acquisitions, the motion sensitivity is fairly consistent. Using partial Fourier doesn't change the fact that the acquisition is still a single echo train acquired after a single slice-selective RF pulse. Thus, no one scheme exhibits vastly different variance than the others. There will probably be localized differences, however. In the case of partial Fourier, signals might sit at the very edge of the k-space "cliff", and quite small subject movements might push them to one side or the other of that drop. But determining relative performance requires regions-of-interest - something that will vary depending on your application - or some way to collapse the signal stability for the whole brain down to a single value, a process that might easily obscure subtle effects that are actually important. So, let's just accept that the motion sensitivity is operationally similar, and move on.

Of even more relevance to fMRI is the temporal SNR (tSNR), a handy proxy for signal level as well as stability. Here are voxelwise tSNR maps of the same fifty-volume time series as used in the standard deviation images above:

Temporal SNR images for fifty EPI time series acquisitions, with phase encoding set A-P and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

Now we can see the full effects of image smoothing. The tSNR is higher for 64x48 full Fourier and both partial Fourier options compared to the 64x64 full Fourier baseline image. But we have netted "extra" SNR purely by smoothing the image, an effect we could get in post processing with a smoothing function applied to the 64x64 full Fourier image! Is there a difference between the 64x48 full Fourier and either of the pF options? In regions with good signal, not really. But where the sampling scheme enhances signal dropout compared to 64x64 full Fourier then we see the same holes in the tSNR images as we saw in the raw EPI above. What's gone is gone.
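The standard deviation and tSNR maps shown in these figures are straightforward to compute from any 4D time series. Here is a minimal numpy sketch; the synthetic array is just a stand-in for real data, which you would normally load from a NIfTI file with a tool such as nibabel:

```python
import numpy as np

# Voxelwise standard deviation and temporal SNR (tSNR = mean/std over time)
# maps, as used for the comparisons above. The 4D array here is synthetic
# (64 x 64 matrix, 37 slices, 50 volumes) and stands in for a real EPI time
# series; baseline signal 1000 with ~1% fluctuations gives tSNR near 100.
rng = np.random.default_rng(0)
ts = 1000.0 + rng.normal(0.0, 10.0, size=(64, 64, 37, 50))

mean_img = ts.mean(axis=-1)
sd_img = ts.std(axis=-1, ddof=1)   # standard deviation image
tsnr_img = mean_img / sd_img       # temporal SNR image (sd is never zero here)

print(round(float(np.median(tsnr_img))))  # ~100 for these synthetic parameters
```

In practice you would also mask out the background before summarizing, since tSNR is meaningless outside the head.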

That about wraps up the quick and dirty visual assessment of the pF-EPI options using A-P phase encoding. There is another entire four-way comparison on offer, however: the same four sampling schemes but applied with P-A phase encoding! I've put all the data for P-A phase encoding into Appendix 1. Here, let's stick with A-P phase encoding but turn our attention to some comparisons when the timing parameters aren't held constant, which is far more realistic.


How fast can we make pF-EPI go?

Let's assume we have valid reasons for not wanting to increase TR beyond 2000 ms, and let's further assume that the gradients are being driven as fast as they can go. (In all of the data shown in this post the echo spacing is fixed at 0.5 ms.) We need a way to save some time if we are to acquire more slices in the specified TR. Partial Fourier is one option for saving time per slice.

By setting the parameters for "maximum performance" - meaning the use of minimum TE and as many slices as we can fit in TR=2000 ms - it turns out that we get 43 slices for the two pF options as well as for the 64x48 low-res option, compared to 37 slices for the 64x64 full k-space standard. But in achieving the 43 slices, only 6/8pF(late) uses the same TE=22 ms as the 64x64 standard. For pF(early) the TE is reduced to the minimum value of 14 ms while for 64x48 low-res the TE is reduced to 18 ms. Using longer than the minimum TE in either case results in fewer than 43 slices in TR=2000 ms.

For space considerations, and to allow you to make better comparisons on your own screens, I've put large matrix figures for each of the four options (all using A-P phase encoding) in Appendix 2. I'll note in passing that these images show the same effects of smoothing and regional dropout as the constant parameter comparisons above, and move on.

With the "maximum performance" parameter settings the motion sensitivity remains quite similar to the prior comparisons. This is as we should expect for single shot EPI; minor differences in TE won't have a large effect. Thus, the standard deviation images have comparable artifact levels for edges (due to motion), physiologic fluctuations and for N/2 ghosts:

Standard deviation images for "maximum performance" acquisitions with phase encoding set A-P.  Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

What about tSNR over a fifty volume time series?

Temporal SNR images for "maximum performance" acquisitions with phase encoding set A-P.  Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

As with the fixed parameter comparison, the effects of smoothing - enhanced tSNR - on the 64x48 images is especially noticeable compared to the 64x64 case. The tSNR is higher still for the 6/8pF(early) case, a combination of image smoothing plus the use of a shorter TE of 14 ms. The tSNR for 6/8pF(late) is comparable to that for 64x64 full Fourier, however. But this is only part of the story, which is why I'm avoiding quantitative comparisons for any one region. Let's consider dropout. We can see that dropout of midbrain signal is higher for 6/8pF(late) than it is for any of the other three options. Yet signal in temporal lobes is well preserved for 6/8pF(late), comparable to that in the full Fourier 64x64 images. The temporal lobe signal in both the 64x48 full and 6/8pF(early) images shows more dropout. Thus, if you were interested in auditory fMRI you might want to consider 64x64 full or 6/8pF(late), but if you're doing, say, hypothalamus then either 64x48 full or 6/8pF(early) look to be better candidates.

A comparison for "maximum performance" parameters but with P-A phase encoding is given in the appendices. The individual image mosaics are in Appendix 3 while the standard deviation and tSNR images for fifty-volume time series are in Appendix 4.


Summary

In the previous post on partial Fourier EPI you saw how partial Fourier affects a single image. In this post, the analysis was expanded to consider the effects on a time series and also different parameter combinations for time series acquisitions. What are the broader lessons to take away so far?
  • Partial Fourier leads to image smoothing. It's important to note that any apparent gain in SNR (for otherwise fixed parameters) is due to the smoothing.
  • Partial Fourier usually contributes to enhanced signal dropout, especially in the "problem" brain regions of midbrain, frontal lobes and temporal lobes where magnetic susceptibility gradients are worst. You may be able to select which regions exhibit worse dropout by judicious combination of phase encode direction and early or late echo omission.
  • Omitting the early echoes from the gradient echo train can benefit EPI by permitting a shorter TE. If we use pF(early) and we don't shorten the TE then all we're really doing is giving up SNR, especially for the regions mentioned in the previous point. (Remember that there is still considerable BOLD (T2*) weighting during the EPI echo train. BOLD contrast isn't entirely dependent on the TE!)
  • Omitting the late echoes from the gradient echo train doesn't change the minimum TE but it does permit faster acquisition of slices, i.e. more slices in TR.
  • Partial Fourier doesn't make EPI more motion-sensitive. Strictly speaking, the image contrast does impact motion sensitivity and motion correction a little bit, but these factors are affected by many other parameters, too, such as the excitation flip angle.

To finish up, here is a little general guidance when considering partial Fourier EPI:
  • Consider what you want to do with TE whenever you are assessing partial Fourier as an option.
  • If you omit early echoes then you'll almost certainly want to reduce TE as well.
  • If reduced dropout is your focus then you may want a reduced TE for its own sake, and perhaps thinner slices (see two points down).
  • If you omit late echoes then the assumption is that you're aiming for more slices in TR.
  • Even if you are happy with your slice coverage sans pF, using pF may permit a greater number of thinner slices for the same total coverage in the slice dimension. But you would have to determine whether there is net benefit from thinner slices versus the enhanced regional dropout mentioned already.
  • Regarding regional dropout, you may have a degree of choice as to which signals are sacrificed in setting early or late echo omission by setting the phase encode gradient polarity, e.g. P-A instead of A-P. But there is a concomitant effect on distortion direction, too.

The next post in this series will consider partial Fourier EPI compared to alternative "go faster" options, in particular the use of GRAPPA. And then we'll shift focus to simultaneous multislice (SMS), aka multiband (MB) EPI.

___________________




Notes:

1.  In textbooks you will usually encounter a description of partial Fourier phase encoding that involves the decrease of image SNR because of the reduced signal averaging compared to a fully sampled k-space plane. Strictly speaking, that description is accurate. In practice, however, a loss of SNR with pF doesn't manifest in EPI the way the textbooks describe it. Instead, we tend to find an apparent increase of image SNR across most of the EPI, arising from the smoothing imposed by the 'zero filling' filter effect. Thus, a higher apparent SNR resulting from pF isn't a "real" SNR gain but comes from smoothing. You could get the same - or better - SNR from taking a fully sampled EPI and applying a smoothing function in post-processing. We do see decreased SNR but it tends to be regional. Signals from some brain areas 'fall off' the sampled k-space plane due to magnetic susceptibility gradients. Keep these points in mind when comparing the different SNR levels observed in the comparisons that follow.
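To see the zero-filling filter effect for yourself, here is a small numpy demonstration; the Gaussian "phantom" and the 6/8 sampling fraction are illustrative choices, not real data:

```python
import numpy as np

# Demonstration of Note 1: zero filling omitted phase-encode lines acts as
# a smoothing filter rather than simply lowering SNR everywhere. Synthetic
# 64x64 "brain" (a Gaussian blob); a 6/8 partial Fourier scheme is mimicked
# by zeroing the last 16 of 64 phase-encode lines before reconstruction.
x, y = np.meshgrid(np.arange(64.0), np.arange(64.0), indexing="ij")
img = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 2.0 ** 2))

k = np.fft.fftshift(np.fft.fft2(img))   # centered k-space
k_pf = k.copy()
k_pf[48:, :] = 0                        # omit the "late" quarter of PE lines
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_pf)))  # zero-filled recon

# High spatial frequencies on one side of k-space are lost, but the
# reconstruction remains a smoothed, highly correlated copy of the original.
corr = np.corrcoef(img.ravel(), recon.ravel())[0, 1]
print(round(float(corr), 3))
```

The correlation stays very close to 1.0 here because the omitted lines carry little of this smooth object's energy; for an object with sharper edges the smoothing, and any regional "fall off" of signal, would be more apparent.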

2.  A real EPI pulse sequence was considered in PFUFA: Part Thirteen. In addition to sampling of the k-space plane with the repeated gradient echoes, there are also temporal overheads for each slice: fat suppression, slice selection, and a short crusher gradient at the end of each slice that eliminates any residual signal prior to the next slice (hopefully). For simplicity, let's assume that it takes a total of 15 ms to do a fat suppression pulse, the first half of a slice selection (the second half being accounted for within TE), and a short crusher gradient after each slice is acquired. This is the temporal overhead per slice. Next we need to determine the time taken to sample the 2D k-space plane.
         For a 64 x 64 matrix EPI with 0.5 ms echo spacing it takes 32 x 0.5 ms = 16 ms to reach the center of k-space, then a further 16 ms to reach the end of the in-plane information. The TE defines the center of k-space, however, so the mid-point of the 64 echoes has to be "parked" at TE. Thus, the first 32 echoes, taking 16 ms, can be acquired within the 30 ms allowed for TE. The latter 32 echoes take a further 16 ms after TE to acquire. Thus, the total time per slice is  TE + 16 ms + 15 ms (overhead) = 61 ms to acquire a single 2D plane. There may be small variations but this is a pretty good estimate.
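The arithmetic in this note can be wrapped in a few lines of Python. The 15 ms overhead and 0.5 ms echo spacing are the assumptions stated above; as noted, the simple model can differ from the scanner's own count by a slice or so:

```python
# Estimate the maximum slices per TR for single-shot EPI using the simple
# timing model of Note 2: per-slice time = TE + readout after TE + overhead.
# The 0.5 ms echo spacing and 15 ms overhead are this post's assumptions.
ESP = 0.5        # echo spacing (ms)
OVERHEAD = 15.0  # fat sat + half slice select + crusher (ms)

def slices_per_tr(te_ms, echoes_after_te, tr_ms=2000.0):
    per_slice = te_ms + echoes_after_te * ESP + OVERHEAD
    return int(tr_ms // per_slice)

print(slices_per_tr(22, 32))  # 64x64 full Fourier: 37, matching the post
print(slices_per_tr(22, 16))  # 6/8pF(late): 44 (the scanner reports 43)
print(slices_per_tr(14, 32))  # 6/8pF(early), min TE: 44 (scanner: 43)
print(slices_per_tr(18, 24))  # 64x48 full Fourier: 44 (scanner: 43)
```

The one-slice discrepancy for the 43-slice protocols presumably reflects a little extra per-slice overhead on the real scanner than the 15 ms assumed here.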

3.  Siemens users, I'm afraid that you can only neglect the early echoes in the product EPI sequences such as ep2d_bold and ep2d_pace. I'm working on getting an early/late option into a subsequent product sequence, and/or making available a research version of ep2d_bold. Big, bureaucratic subject for another day. Right now the question is whether there's any benefit to having the early/late option at all!

4.  There is a general principle at work here: higher resolution for EPI - whether in-plane or thinner slices or both - will tend to reduce the extent of magnetic susceptibility gradients across a voxel and thus tend to reduce the dephasing causing signal loss. It's the same principle that was demonstrated for the slice thickness in the "Signal dropout" section of PFUFA: Part Twelve, but we can extend it to 3D. Now, there's no free lunch. In exchange for reducing the dephasing across a (smaller) voxel we lose the SNR on a volumetric basis; voxels with 2 mm sides produce base SNR that is less than one third that of voxels with 3 mm sides. And because we have smaller voxels we now have a potential brain coverage issue, especially in the slice dimension. Still, aiming for smaller voxels is one of the tactics for reducing dropout in EPI.
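For the record, the volumetric SNR penalty quoted in this note is just the ratio of voxel volumes:

```python
# Base SNR scales with voxel volume: 2 mm isotropic voxels have
# (2/3)^3 = 8/27 ~ 0.30 the volume of 3 mm voxels, i.e. less than
# one third of the base SNR, as stated in Note 4.
ratio = (2.0 / 3.0) ** 3
print(round(ratio, 3))  # 0.296
```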


Appendix 1:

All parameters except the phase encoding fraction are constant: Siemens TIM/Trio, 12-channel head coil, TR = 2000 ms, TE = 22 ms, FOV = 224 mm x 224 mm, slice thickness = 3 mm, inter-slice gap = 0.3 mm, echo spacing = 0.5 ms, bandwidth = 2232 Hz/pixel, flip angle = 70 deg, phase encoding direction = P-A. Each EPI was reconstructed as a 64x64 matrix however much actual k-space was acquired:

EPI with phase encoding set P-A and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

Again, we can zoom in to assess likely problem regions:

EPI with phase encoding set P-A and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

And check on smoothing:

EPI with phase encoding set P-A and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

Standard deviation images for fifty-volume time series acquisitions with P-A phase encoding:

Standard deviation images for fifty EPI time series acquisitions, with phase encoding set P-A and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

Temporal SNR images for fifty-volume time series acquisitions with P-A phase encoding:

Temporal SNR images for fifty EPI time series acquisitions, with phase encoding set P-A and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


Appendix 2:

High resolution versions of the four "maximum performance" acquisitions with A-P phase encoding:

64x64 full Fourier, A-P phase encoding, 37 slices in TR=2000 ms, TE = 22 ms.
64x48 full Fourier, A-P phase encoding, 43 slices in TR=2000 ms, TE = 18 ms.
6/8pF(early), A-P phase encoding, 43 slices in TR=2000 ms, TE = 14 ms.
6/8pF(late), A-P phase encoding, 43 slices in TR=2000 ms, TE = 22 ms.


Appendix 3:

High resolution versions of the four "maximum performance" acquisitions with P-A phase encoding:

64x64 full Fourier, P-A phase encoding, 37 slices in TR=2000 ms, TE = 22 ms.
64x48 full Fourier, P-A phase encoding, 43 slices in TR=2000 ms, TE = 18 ms.
6/8pF(early), P-A phase encoding, 43 slices in TR=2000 ms, TE = 14 ms.
6/8pF(late), P-A phase encoding, 43 slices in TR=2000 ms, TE = 22 ms.


Appendix 4:

Standard deviation images for fifty-volume time series acquisitions with P-A phase encoding and "maximum performance" parameters:

Standard deviation images for "maximum performance" acquisitions with phase encoding set P-A.  Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

 Temporal SNR images for fifty-volume time series acquisitions with P-A phase encoding and "maximum performance" parameters:

Temporal SNR images for "maximum performance" acquisitions with phase encoding set P-A.  Top left: 64x64 full Fourier. Top right: 64x48 full Fourier. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


Partial Fourier versus GRAPPA for increasing EPI slice coverage


This is the final post in a short series concerning partial Fourier EPI for fMRI. The previous post showed how partial Fourier phase encoding can accelerate the slice acquisition rate for EPI. It is possible, in principle, to omit as much as half the phase encode data, but for practical reasons the omission is generally limited to around 25% before image artifacts - mainly enhanced regional dropout - make the speed gain too costly for fMRI use. Omitting 25% of the phase encode sampling allows a slice rate acceleration of up to about 20%, depending on whether the early or the late echoes are omitted and whether other timing parameters, most notably the TE, are changed in concert.

But what other options do you have for gaining approximately 20% more slices in a fixed TR? A common tactic for reducing the amount of phase-encoded data is to use an in-plane parallel imaging method such as SENSE or GRAPPA. Now, I've written previously about the motion sensitivity of parallel imaging methods for EPI, in particular the motion sensitivity of GRAPPA-EPI, which is the preferred parallel imaging method on a Siemens scanner. (See posts here, here and here.) In short, the requirement to obtain a basis set of spatial information - that is, a map of the receive coil sensitivities for SENSE and a set of so-called auto-calibration scan (ACS) data for GRAPPA - means that any motion that occurs between the basis set and the current volume of (accelerated) EPI data is likely to cause some degree of mismatch that will result in artifacts. Precisely how and where the artifacts will appear, their intensity, etc. will depend on the type of motion that occurs, whether the subject's head returns to the initial location, and so on. Still, it behooves us to check whether parallel imaging might be a better option for accelerating slice coverage than partial Fourier.


Deciding what to compare

Disclaimer: As always with these throwaway comparisons, use what you see here as a starting point for thinking about your options and perhaps determining your own set of pilot experiments. It is not the final word on either partial Fourier or GRAPPA! It is just one worked example.

Okay, so what should we look at? In selecting 6/8ths partial Fourier it appears that we can get about 15-20% more slices for a fixed TR. It turns out that this gain is comparable to using GRAPPA with R=2 acceleration with the same TE. To keep things manageable - a five-way comparison is a sod to illustrate - I am going to drop the low-resolution 64x48 full Fourier EPI that featured in the last post in favor of the R=2 GRAPPA-EPI that we're now interested in. For the sake of this comparison I'm assuming that we have decided to go with either pF-EPI or GRAPPA, but you should note that the 64x48 full Fourier EPI remains an option for you in practice. (Download all the data here to perform your own comparisons!)

I will retain the original 64x64 full Fourier EPI as our "gold standard" for image quality as well as the two pF-EPI variants, yielding a new four-way comparison: 64x64 full Fourier EPI, 6/8pF(early), 6/8pF(late), and GRAPPA with R=2. Partial Fourier nomenclature is as used previously. All parameters except the specific phase encode sampling schemes were held constant. Data was collected on a Siemens TIM/Trio with 12-channel head coil, TR = 2000 ms, TE = 22 ms, FOV = 224 mm x 224 mm, slice thickness = 3 mm, inter-slice gap = 0.3 mm, echo spacing = 0.5 ms, bandwidth = 2232 Hz/pixel, flip angle = 70 deg. Each EPI was reconstructed as a 64x64 matrix however much actual k-space was acquired. Partial Fourier schemes used zero filling prior to 2D FT. GRAPPA reconstruction was performed on the scanner with the default vendor reconstruction program. (Siemens users, see Note 1.)


Image quality assessment

In this comparison the phase encoding direction is anterior-posterior (A-P), the Siemens default. (See Appendix 1, below, for a similar four-way comparison using P-A phase encoding.) There are 37 slices in TR=2000 ms, which is the maximum number of slices permitted by the full Fourier 64x64 matrix EPI. Here are the images after zooming to crop the uppermost two slices from each data set:


EPI with phase encoding set A-P and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


And here are the same images but zoomed so that we can get a better look at likely problem areas:


EPI with phase encoding set A-P and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


Comparing the pF schemes to the full Fourier EPI first, we see the now familiar regions of enhanced dropout - primarily temporal lobes (and eyes!) for 6/8pF(early), midbrain for 6/8pF(late) - and also the smoother images arising from zero filling the partial Fourier EPIs.

The most immediate difference between the GRAPPA-EPI and the other three data sets is the reduced distortion in the A-P direction. Partial Fourier doesn't alter the amount of distortion whereas GRAPPA reduces distortion by the acceleration factor, R=2 in this case. The distortion is worst where the magnetic susceptibility gradients are worst, so the reduced distortion is most evident in the temporal lobes. Distortion of the frontal lobe signal is also halved but the benefit is less obvious because it appears that there might be additional dropout with the GRAPPA acquisition. Why the dropout should get worse isn't immediately obvious, but we can speculate that it's a reconstruction error arising from a mismatch between the ACS and this undersampled volume. Not a good sign.
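The halving of distortion can be seen from a toy calculation. In this simple model (the 50 Hz off-resonance is an arbitrary example value), the pixel shift scales with the effective echo spacing, which GRAPPA reduces by R while partial Fourier leaves untouched:

```python
# Toy model of off-resonance distortion in the phase-encode (PE) direction:
# pixel shift ~ df * N * esp_eff, where esp_eff is the time per k-space step
# of the full matrix. GRAPPA R=2 doubles the k-step per echo, halving
# esp_eff and hence the shift; partial Fourier keeps the k-step, and the
# distortion, unchanged. df = 50 Hz is an arbitrary example off-resonance.
def pixel_shift(df_hz, n_full=64, esp_s=0.5e-3, r=1):
    esp_eff = esp_s / r   # effective echo spacing (s per full-matrix k-step)
    return df_hz * n_full * esp_eff

print(round(pixel_shift(50.0), 2))       # full Fourier or pF: 1.6 pixels
print(round(pixel_shift(50.0, r=2), 2))  # GRAPPA R=2: 0.8 pixels
```

The shift is largest where the susceptibility gradients, and hence the local off-resonance, are largest, which is why the temporal and frontal lobes show the clearest difference.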

It's time to look at the performance of GRAPPA in a time series. Here are the standard deviation images for 50-volume time series:


Standard deviation images for fifty EPI time series acquisitions, with phase encoding set A-P and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


Uh-oh. Clearly, the temporal stability of the GRAPPA data is worse than that of the other three schemes. The experienced subject was careful not to move during the ACS - for instance, he swallowed immediately before the start of the scan - and did his best not to move during the time series, too. Yet the frontal lobes in particular exhibit large standard deviations, and there is a pronounced ring around the circumference of the head for all slices. What does this do to the temporal SNR? Let's look:


Temporal SNR images for fifty EPI time series acquisitions, with phase encoding set A-P and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


As we might expect when the standard deviation is high, the tSNR for the GRAPPA scheme is reduced below that for the two partial Fourier schemes as well as the reference 64x64 full Fourier EPI. The price for the speed gain seems to be about half of the tSNR, according to the region-of-interest selected in this throwaway comparison.

It is important to note that the performance of GRAPPA is quite variable. In Appendix 1, below, you will find the same four-way comparison but with the phase encoding direction reversed, to P-A. In those images you'll see that the GRAPPA stability is still the worst of the four, but it isn't quite as bad as in the A-P data above. This is the problem when using GRAPPA: just one head movement - a swallow, say - can have very severe consequences for the overall time series. More on the costs and benefits below.


Going at maximum speed

For the purposes of this post, the motivation for adopting partial Fourier or GRAPPA is to attain more slices in the TR. So let's look at the time series statistics when the TE is reduced as far as possible in order to permit the maximum number of slices in TR = 2000 ms. (Reducing TE to the minimum attainable isn't always what you would want to do for BOLD contrast, but I'm doing it here to get the maximum number of slices.) Except for the TE and the number of slices, all other parameters were left set at the values given previously.

The good news is that GRAPPA with R=2 acceleration and a minimum TE of 14 ms allows a whopping 52 slices in TR = 2000 ms! Mission accomplished, right? Perhaps. If you don't mind giving up that temporal stability. Here is the four-way comparison of standard deviations for fifty-volume time series acquisitions:



Standard deviation images for "maximum performance" acquisitions with phase encoding set A-P. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


It looks like the motion sensitivity is similar to before. The shorter TE for GRAPPA will enhance the image SNR and this should translate into improved temporal SNR in the absence of motion. We end up seeing a net loss, however, because of the motion sensitivity:


Temporal SNR images for "maximum performance" acquisitions with phase encoding set A-P. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


Note that the region-of-interest in the top-right matrix doesn't precisely match the other three because the high number of slices caused the image display to shift in OsiriX. I did my best to get a similar region. Still, it is clear that there is a rather large global penalty in the GRAPPA data compared to the partial Fourier options.

Appendix 2 contains the same four-way comparison but with the phase encoding reversed, to P-A. GRAPPA again performs the worst of the bunch.


Lessons learned

It appears that using GRAPPA with R=2 is quite costly in terms of reduced temporal SNR. In the "maximum performance" test I reduced the TE to the minimum of 14 ms, a situation that probably isn't something that you would do for fMRI. You might reduce the TE to around 20 ms for BOLD.

At a TE of around 20 ms the major apparent benefit of GRAPPA - more slices per TR than for partial Fourier - becomes marginal, yet it comes at the cost of greatly enhanced motion sensitivity. To me, it doesn't seem worth the cost for such a relatively small gain in imaging speed. If the objective is to tease out an additional 20% more slices in TR then it appears that partial Fourier EPI is the better (safer) alternative.

So, what about going even faster? Why stop at R=2 for GRAPPA? It is certainly possible to use R=3 or 4 with large phased-array coils, but at the cost of further enhanced motion sensitivity. What's more, in-plane acceleration gets us percentage speed increases, not integer factors. What if you wanted to get twice as many slices in a fixed TR, or three times as many? In that case you should probably focus on the slice dimension itself and accelerate it directly, using simultaneous multi-slice (aka multi-band) EPI. That will be the subject of the next post.

____________________



Notes:

1.  On VB17A software, and previously on VB15, the product EPI sequence uses a single ACS for R=2 accelerated GRAPPA EPI. This means that there is a mismatch between the k-space step size for the ACS and the step size - twice as big - for the undersampled EPI of the time series. Such a mismatch leads to reconstruction errors whenever there are appreciable magnetic susceptibility gradients acting to distort the phase encoding. On my scanner we therefore use a tweaked version of ep2d_bold for which the correct R-shot ACS is acquired for R=2. Note, however, that Siemens does correctly acquire 3-shot and 4-shot ACS for R=3 and 4. It's just R=2 that has the potential mismatch. See the introduction section of this arXiv paper for more details.



Appendix 1:

All parameters except the phase encoding fraction are constant: Siemens TIM/Trio, 12-channel head coil, TR = 2000 ms, TE = 22 ms, FOV = 224 mm x 224 mm, slice thickness = 3 mm, inter-slice gap = 0.3 mm, echo spacing = 0.5 ms, bandwidth = 2232 Hz/pixel, flip angle = 70 deg, phase encoding direction = P-A. Each EPI was reconstructed as a 64x64 matrix however much actual k-space was acquired:

EPI with phase encoding set P-A and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


Zoomed to show likely problem regions:

EPI with phase encoding set P-A and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


Standard deviation images for fifty-volume time series acquisitions with P-A phase encoding:

Standard deviation images for fifty EPI time series acquisitions, with phase encoding set P-A and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


Temporal SNR images for fifty-volume time series acquisitions with P-A phase encoding:

Temporal SNR images for fifty EPI time series acquisitions, with phase encoding set P-A and all parameters held constant except for the phase encode sampling scheme. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)



Appendix 2:

Standard deviation images for fifty-volume time series acquisitions with P-A phase encoding and "maximum performance" parameters:

Standard deviation images for "maximum performance" acquisitions with phase encoding set P-A. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)


Temporal SNR images for fifty-volume time series acquisitions with P-A phase encoding and "maximum performance" parameters:

Temporal SNR images for "maximum performance" acquisitions with phase encoding set P-A. Top left: 64x64 full Fourier. Top right: GRAPPA-EPI with R=2 acceleration. Bottom left: 6/8pF(early). Bottom right: 6/8pF(late). (Click image to enlarge.)

Using someone else's data


There was quite a lot of activity yesterday in response to PLOS ONE's announcement regarding its data policy. Most of the discussion I saw concerned rights of use and credit, completeness of data (e.g. the need for stimulus scripts for task-based fMRI) and ethics (e.g. the need to get subjects' consent to permit further distribution of their fMRI data beyond the original purpose). I am leaving all of these very important issues to others. Instead, I want to pose a couple of questions to the fMRI community specifically, because they concern data quality and data quality is what I spend almost all of my time dealing with, directly or indirectly. Here goes.


1.  Under what circumstances would you agree to use someone else's data to test a hypothesis of your own?

Possible concerns: scanner field strength and manufacturer, scan parameters, operator experience, reputation of acquiring lab.

2. What form of quality control would you insist on before relying on someone else's data?

Possible QA measures: independent verification of a simple task such as a button press response encoded in the same data, realignment "motion parameters" below/within some prior limit, temporal SNR above some prior value.


If anyone has other questions related to data quality that I haven't covered with these two, please let me know and I'll update the post. Until then I'll leave you with a couple of loaded comments. I wouldn't trust anyone's data if I didn't know the scanner operator personally and I knew first-hand that they had excellent standard operating procedures, a.k.a. excellent experimental technique. Furthermore, I wouldn't trust realignment algorithm reports (so-called motion parameters) as a reliable proxy for data quality in the same way that chemicals have purity values, for instance. Collapsing motion to a single summary value - "My motion is less than 0.5 mm over the entire run!" - is especially nonsensical in my opinion, considering that the typical voxel resolution exceeds 2 mm on a side. Okay, discuss.
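For what it's worth, a per-volume summary is at least one step up from a single number for a whole run. Here is a sketch of framewise displacement in the spirit of Power et al.; the realignment parameters are synthetic stand-ins, and the 50 mm head-radius conversion for rotations is the conventional choice:

```python
import numpy as np

# Framewise displacement (FD): one value per volume-to-volume transition,
# summing absolute differences of the six realignment parameters, with
# rotations (radians) converted to mm of arc on a 50 mm sphere. The
# `params` array is synthetic, standing in for an n_volumes x 6
# realignment output (3 translations in mm, 3 rotations in radians).
rng = np.random.default_rng(1)
params = np.cumsum(rng.normal(0.0, 0.02, size=(50, 6)), axis=0)  # drifting motion

diffs = np.abs(np.diff(params, axis=0))
diffs[:, 3:] *= 50.0            # rotations -> arc length in mm
fd = diffs.sum(axis=1)          # FD per volume transition (mm)

print(fd.shape)
```

A trace like this at least lets you see when the motion happened and how big each excursion was, rather than hiding everything behind one optimistic number.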

WARNING! Stimulation threshold exceeded!


When running fMRI experiments it's not uncommon for the scanner to prohibit what you'd like to do because of a gradient stimulation limit. You may even hit the limit "out of the blue," e.g. when attempting an oblique slice prescription for a scan protocol that has run just fine for you in the past. I'd covered the anisotropy of the gradient stimulation limit as a footnote in an old post on coronal and sagittal fMRI, but it's an issue that causes untold stress and confusion when it happens, so I decided to make a dedicated post.

Some of the following is taken from Siemens manuals but the principles apply to all scanners. There may be vendor-specific differences in the way the safety checking is computed, however. Check your scanner manuals for details on the particular implementation of stimulus monitoring on your scanner.

According to Siemens, then:



The scanner monitors the physiological effects of the gradients and prohibits initiating scans that exceed some predefined thresholds. On a Siemens scanner the limits are established according to two models, used simultaneously:



The scanner computes the expected stimulation that will arise from the gradient waveforms in the sequence you are attempting to run. If one or both models suggests that a limit will be exceeded, you get an error message. I'll note here that the scanner also monitors in real time the actual gradients being played out in case some sort of fault occurs with the gradient control.


Flirting with failure

If you try to initiate a scan that is predicted to exceed the SAFE model limits then you will get a warning:



Clicking OK starts the scan by instructing the safety monitor to switch from "normal mode" to a "first level" mode. The scan is still below the legal threshold for dB/dt. From your perspective there's really no difference from a scan that initiates without the warning, but it's prudent to be aware that a subject who hasn't complained of sensations thus far might be about to report feeling something. (See Note 1.) For this reason, you should have ensured during setup that the subject doesn't have his feet crossed or his hands together, because either of these body positions creates big "pickup" loops for the switching magnetic fields.

An attempt to exceed the legal limits for dB/dt, established in the "first level" mode, causes a hard failure, indicated by this unequivocal message:



You can cancel the scan, or you can let the scanner compute parameters that will get the stimulation limit back below the dB/dt threshold (Calculate option), or you can reopen the protocol and try to figure out which parameter(s) to change in order to get the scan to run. This is where things get interesting.

As a general rule - and especially when running EPI for fMRI - any change to your acquisition parameters is ill-advised without full consideration of the consequences! Standard procedure in a scientific experiment is to use fixed acquisition parameters. So, before proceeding we need to consider in more detail why the failure occurred. Is it a consistent failure, or is it the first time this particular scan has failed to run in spite of having been used successfully on umpteen prior subjects? If the failure is consistent regardless of subject (or phantom) then you clearly need to change something to render the scan usable under any circumstances. Such a situation is common when you're using a new sequence for the first time and you don't yet know how fast you can go. You will probably need to talk to your support physicist to get more insight.

With EPI in particular, you may be prohibited from scanning for some slice prescriptions while others are perfectly acceptable; a classic "intermittent" failure. Why should this be?


Stimulus limits are anisotropic

The amount of current induced in the subject's (electrically conductive) body is proportional to the cross-sectional area in the plane perpendicular to the switched gradient direction. Given that the largest, fastest-switched gradient for EPI is the read (or frequency encode) gradient then it's this gradient that is of prime concern for the stimulus limit. And, once the slice selection direction is established - by virtue of your slice prescription - it leaves just two options for the read gradient direction, the other axis becoming the phase-encoded axis by default.

In understanding the safety limits for switched gradients it is useful to consider the body's three planes as if they act like pick-up coils, that is, loops of wire that can sense changing magnetic fields by having an electric current induced in them. Consider this cartoon showing the effective current loops formed in a subject's body when a gradient is switched along one of three cardinal axes (in each case the switched gradient axis is perpendicular to the plane of your screen, and to the black loops in the figures):

The relative areas of effective current loops (in black) produced by gradient switching. An effective current loop is induced in the plane perpendicular to the switched gradient axis. The three principal switched gradient axes are anterior-posterior (A-P), left-right (L-R) and head-foot (H-F), corresponding to effective current loops in the subject's coronal, sagittal and axial planes, respectively.

So let's consider our options for a coronal EPI slice prescription. For coronal slices the slice selection axis is along the subject's A-P direction (which is the Y axis of the magnet for you physicist types). We can use either H-F or L-R for the read gradient direction in the image plane. Now, according to the above cartoon, the body's cross-sectional area in the plane perpendicular to the H-F axis, i.e. the subject's axial plane, is smaller than the cross-sectional area in the plane perpendicular to the L-R axis, i.e. the subject's sagittal plane. (I'm comparing H-F switching to L-R switching in the cartoon, and H-F switching invokes a smaller effective current loop.) Thus, for a fixed read gradient amplitude and switching speed, the induced currents in the subject will be lower if we choose H-F for the read gradient direction instead of L-R. (This assumes, as is virtually always the case for conventional fMRI, that the read gradient is larger than the phase encoding gradient.) We could use L-R for the read axis (along with H-F for phase encoding), but we are going to run into a stimulus limit at a lower read gradient strength (or speed) than if we use H-F for the read gradient.

That's coronal slices dealt with. A quick glance at the cartoon reveals the preferred read gradient axes for the other two cardinal slice prescriptions. For axial slices, the preferred read gradient direction is L-R, making the phase encode axis A-P. For sagittal slices the preferred read gradient direction is H-F, making the phase encode axis A-P as well.

And now for the intermittent behavior. As you tilt an axial slice prescription towards coronal, at some point the scanner transitions from considering your slices as axial obliques to coronal obliques instead, and it switches the default read gradient axis accordingly. Axial slices use L-R for the read gradient by preference, whereas coronal slices use H-F. It is this transition that can trigger the stimulus limit for no "apparent" reason. You may be able to change the slice tilt back towards axial and get back under the limit so that the scan will start.
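The transition can be pictured as the scanner classifying an oblique prescription by the dominant component of the slice-normal vector and then applying the per-orientation read axis preferences described above. This is a hypothetical sketch of that logic - the actual vendor implementation isn't public:

```python
import numpy as np

# Preferred read axis per cardinal slice orientation, as in the text:
# axial -> L-R read, coronal -> H-F read, sagittal -> H-F read.
PREFERRED_READ = {"axial": "L-R", "coronal": "H-F", "sagittal": "H-F"}

def classify_slices(normal):
    """Classify a slice prescription by the dominant component of its
    normal vector (x = L-R, y = A-P, z = H-F in magnet coordinates)."""
    axis = int(np.argmax(np.abs(normal)))
    orientation = ("sagittal", "coronal", "axial")[axis]
    return orientation, PREFERRED_READ[orientation]

# Tilting axial slices toward coronal: past 45 degrees the normal's
# dominant component flips from z to y, and the read axis switches.
print(classify_slices([0.0, 0.3, 0.95]))  # ('axial', 'L-R')
print(classify_slices([0.0, 0.8, 0.6]))   # ('coronal', 'H-F')
```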


Body position again

You can now see why making big loops with your arms or legs is ill-advised inside an MRI. Even so, some people have different dimensions than others and some people are simply more sensitive to peripheral nerve stimulation (PNS) than others. On a Siemens scanner, at least, the actual subject geometry isn't considered when computing the stimulus limits. Nor is the actual subject's position. It is therefore entirely possible for a subject to experience PNS even when the scanner doesn't issue any sort of warning. And even if the subject isn't bothered by it, the fact that he's experiencing PNS is likely to be a distraction for whatever fMRI experiment you're running.

In this simple overview I have conveniently neglected the PNS potential of the slice select and phase encode gradients, as well as other gradients that may be active during a scan. For instance, you could be running a thin slice experiment which uses very large, fast-switched slice select gradients. These might be felt by a subject even when the read gradient (for EPI) isn't causing an issue. Finally, give a thought to non-fMRI scans that might trigger a problem for the subject, perhaps requiring early termination of a session before you've obtained all the data you need.


Key points to take away

  • Don't allow subjects to link their arms or feet and make large loops.
  • If a subject reports feeling PNS in his arms, shoulders or legs, see if you can reduce the sensation by moving the pertinent body part farther away from the magnet bore. Padding to keep a subject's arms off the bore may help here, for example.
  • Be aware of your slice prescription and ensure in pilot testing that tilting "too far" won't trigger a stimulus limit issue due to changed default gradient ordering.
  • On a Siemens scanner the stimulus monitor doesn't track the actual subject size or body position, it relies on predefined models.

________________________



Notes:

1.  For fMRI studies, I don't recommend actually informing the subject that they might feel peripheral nerve stimulation during the upcoming scan. There is nothing quite like priming an fMRI subject for behavior you probably don't want! Instead, always ask the subject to squeeze the ball if they feel anything they don't like. You should already have briefed your subject on the common sensations - loud noise and vibrations - so you want to know if something changes and/or becomes uncomfortable. The better you have briefed your subject beforehand - a mock scanner with sounds is really useful here - the more likely you are to get only true positive reports of PNS.

i-fMRI: A virtual whiteboard discussion on multi-echo, simultaneous multi-slice EPI

Disclaimer: This isn't an April Fool!

I'd like to use the collective wisdom of the Internet to discuss the pros and cons of a general approach to simultaneous multislice (SMS) EPI that I've been thinking about recently, before anyone wastes time doing any actual programming or data acquisition.


Multi-echo EPI for de-noising fMRI data


These methods rest on one critical aspect: they use in-plane parallel imaging (GRAPPA or SENSE, usually depending on the scanner vendor) to render the per slice acquisition time reasonable. For example, with R=2 acceleration it's possible to get three echo planar images per slice at TEs of around 15, 40 and 60 ms. The multiple echoes can then be used to distinguish BOLD from non-BOLD signal variations, etc.
The immediate problem with this scheme is that the per slice acquisition time is still a lot longer than for normal EPI, meaning less brain coverage. The suggestion has been to use MB/SMS to regain speed in the slice dimension. This results in the combination of MB/SMS in the slice dimension and GRAPPA/SENSE in-plane, thereby complicating the reconstruction, possibly (probably) amplifying artifacts, enhancing motion sensitivity, etc. If we could eliminate the in-plane parallel imaging and do all the acceleration through MB/SMS then that would possibly reduce some of the artifact amplification, might simplify (slightly) the necessary reference data, etc.
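Rough arithmetic shows why the in-plane acceleration matters: each echo's readout lasts (phase-encode lines / R) x echo spacing, and successive TEs are separated by roughly one readout duration. With illustrative numbers (64 lines, 0.7 ms echo spacing - not taken from any specific protocol), R=2 lands near the 15/40/60 ms TEs mentioned above, while R=1 pushes the later echoes out much further:

```python
def multi_echo_tes(n_pe_lines=64, echo_spacing_ms=0.7, R=2, n_echoes=3,
                   first_te_ms=15.0):
    """Approximate TEs for multi-echo EPI: the k-space center of each
    subsequent echo falls one readout duration after the previous one.
    All parameter values are illustrative, not a vendor protocol."""
    readout_ms = (n_pe_lines / R) * echo_spacing_ms
    return [round(first_te_ms + n * readout_ms, 1) for n in range(n_echoes)]

print(multi_echo_tes())      # R=2: [15.0, 37.4, 59.8]
print(multi_echo_tes(R=1))   # R=1: [15.0, 59.8, 104.6]
```

At R=1 the third echo would sit beyond 100 ms, where most gray matter signal has decayed away, which is why the in-plane acceleration is needed in the first place.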


A different approach? 

What I've been mulling is using the PRESTO approach of echo shifting in combination with MB/SMS. Right now we assume that we are crushing all residual signal at the end of each slice acquisition whatever EPI scheme we're using. But if instead we set up the gradients at the end of each readout period so that they are fully balanced/refocused prior to the next excitation then we might have some signal from the prior slice(s) existing at the same time as new signal from the subsequent slice(s). In an experiment detected with a non-array RF coil this would just cause artifacts, but with an array RF coil and the MB/SMS scheme it seems like a solvable problem. Essentially, at the time of acquisition there would be the primary signals from the current SMS excitation, all arising from one set of slice positions, plus the secondary (old) signals from the prior SMS excitation arising from a different set of slice positions. For simplicity, if we assume just these two types of signal are present during each readout period then a modified MB/SMS reconstruction should be able to separate the signals. And we would get two different effective TEs to use in characterizing the BOLD-like characteristics.


Caveats

I can think of two immediate problems with this idea. The first problem is that the extended TE - that for the residual signal from the prior excitation - will be more than twice the prescribed TE, making the recycled signal quite weak. We might expect to use a primary TE of around 30 ms, say, but the secondary TE would then be around 80 ms once we've accounted for fat saturation and the other housekeeping activities. (We could probably set up the gradient balancing at the end of a slice readout quite efficiently such that little or no additional time is needed to preserve the signal for the subsequent readout.) Is such dual echo data even useful for distinguishing BOLD from non-BOLD? (A dual TE experiment on a limited number of slices, no SMS, would be a way to answer that question.)
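The secondary TE estimate is simple bookkeeping: signal preserved from the previous excitation has already evolved for one full per-slice period (fat saturation and other housekeeping plus the readout) before being sampled at the current primary TE. With made-up but plausible timings:

```python
def secondary_te(primary_te_ms=30.0, readout_ms=30.0, housekeeping_ms=20.0):
    """Effective TE of signal recycled from the previous excitation:
    it evolves for one full per-slice period (housekeeping such as fat
    saturation, plus the readout) before being sampled at the primary
    TE of the current readout. All timings are illustrative."""
    slice_period_ms = housekeeping_ms + readout_ms
    return primary_te_ms + slice_period_ms

print(secondary_te())  # 80.0 ms, the ballpark quoted above
```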

The second problem is that some signals are likely to persist for three or more acquisition periods; CSF and vitreous fluid come to mind because they have very long T2 and quite long T2*. Will the persistence of these uninteresting signals overwhelm the potential benefits of second echo signals from gray matter? We might also get accidental spin and stimulated echoes arising from any imperfections in the slice selection scheme.


Over to you

So there you have today's thought experiment. If nothing else perhaps this is a good way for you to learn more about EPI - accelerated or not - as it is acquired today. It might make an interesting teaching tool for understanding how gradients balance and how k-space works. Can you draw out the modified gradient scheme that would be required to maintain signal from excitation to excitation in a single-shot unaccelerated EPI sequence?



A note on nomenclature: Simultaneous multi-slice (SMS) and multi-band (MB) are used equivalently here. However, I am trying to shift towards using SMS rather than MB because I think it is a more intuitive description of the sequence, and it's hard enough to comprehend the alphabet soup in MRI as it is!




Sharing data: a better way to go?


On Tuesday I became involved in a discussion about data sharing with JB Poline and Matthew Brett. Two days later the issue came up again, this time on Twitter. In both discussions I heard a lot of frustration with the status quo, but I also heard aspirations for a data nirvana where everything is shared willingly and any data set is never more than a couple of clicks away. What was absent from the conversations, it seemed to me, were reasonable, practical ways to improve our lot.*  It got me thinking about the present ways we do business, and in particular where the incentives and the impediments can be found.

Now, it is undoubtedly the case that some scientists are more amenable to sharing than others. (Turns out scientists are humans first! Scary, but true.) Some scientists can be downright obdurate when faced with a request to make their data public. In response, a few folks in the pro-sharing camp have suggested that we lean on those who drag their feet, especially where individuals have previously agreed to share data as a condition of publishing in a particular journal; name and shame. It could work, but I'm not keen on this approach for a couple of reasons. Firstly, it makes the task personal which means it could mutate into outright war that extends far beyond the issue at hand and could have wide-ranging consequences for the combatants. Secondly, the number of targets is large, meaning that the process would be time-consuming.


Where might pressure be applied most productively?

Appealing to a scientist's best intentions is all well and good, but in my view it's easier to make a relatively small change to the rules of the game. My suggestion is to shift the burden from the individual scientist and onto the journal publishing the results. The scientific publication industry is changing all the time, so the fact that there is a move towards more transparency and sharing of data is just another in a long litany of changes the journals are experiencing.

The journals are, however, uniquely placed to change policies regarding data sharing in particular. If a journal makes as a condition of publication that you first upload the data on which your manuscript is based, guess what? That is precisely what you will do. Why do I know this? Because you already comply with their instructions in manifold other ways. You use the font they want, you make the figures the size they want, you use the reference format they want, and you even relegate the methods into supplemental online material even though you know you shouldn't because you wouldn't read your own paper sans experimental details. What's more, at the point of submitting a manuscript you are laser-focused on your goal and best prepared to execute the task of data sharing as just another step on the path.


A call to action?

If we are seriously bothered by data sharing and want to change the way it's done then the first step, it seems to me, is to create a list of those journals publishing neuroimaging studies and categorize them based on their data sharing policies. Next, I am suggesting that those people who have strong opinions on sharing of data should walk the walk, and publish only in those journals whose processes match their stated opinions.**  This is the market at work. If journals stop receiving good manuscripts because the good scientists have gone elsewhere, they will change their practices.

I think we can use three basic categories for journals' policies on data sharing:
  • In the top category are those journals who mandate data sharing as a condition of publishing your study. No data upload, no publication. This is our star team, the journals we should all be using (if we care about data sharing as a precondition for doing science).
  • In the middle category are all the prevaricators. This is the space the vast majority of journals inhabit. They tell you that you must share your data if you are asked to, and this is a Very Serious Policy. So serious, in fact, that they will do, errr, absolutely nothing if you fail to comply. These journals have neatly deflected the task of sharing back onto you, the individual scientist. Why? Perhaps because they are afraid they will see fewer submissions if they get aggressive with data sharing? Or perhaps they are afraid they will have to put up resources to facilitate the sharing, and that would eat into their precious profit margins. But if the sharing of data is a cost of doing business in scientific publishing then it is their cost to bear.
  • The bottom group of journals hardly needs introduction. In this group is any journal saying Not Our Job. They don't even insist that you offer your data when you publish your manuscript. It's all up to you, dear scientist. 


Your field needs You!

Here's where you come in. I would like to crowd-source a review of the journals publishing neuroimaging studies. All I need is for someone to think of a journal, head to the instructions for authors, find the data sharing policy blurb and send me a link to it. That's it! I will then categorize the journals as above, and I'll put out a blog post as a quick guide for scientists looking for sharing-compliant journals to publish in. Pretty easy, huh?

__________________





* I should state for the record that I don't have strong opinions on whether all data should be shared, whether all published data should be shared, when data should be shared, if and how credit should be given, whether there should be restrictions on who can use shared data, etc. I am neither an advocate for nor an opponent of data sharing. My job is to facilitate data generation by others, and to solve problems arising. Data sharing has been stated to be a problem for some in my community, so please take this blog post as my contribution to solving the stated problem.

** I'll note here my feelings about open access, which are considerably less ambiguous than my opinions on data sharing. I now refuse to review for journals who don't offer open access. If you review for a journal that erects pay walls and you object to pay walls then I'm very sorry to inform you, you are part of the problem. If you're an editor for a journal with pay walls then you have a very large amount of explaining to do, in my opinion.



Addendum - 28th April, 2014.

Further Reading:

Human neuroimaging as a "Big Data" science.
PMID: 24113873

Toward open sharing of task-based fMRI data - the OpenfMRI project.
PMID: 23847528

Why share data? Lessons learned from the fMRIDC.
PMID: 23160115

Making data sharing work: the FCP/INDI experience.
PMID: 23123682

Data sharing in neuroimaging research.
PMID: 22493576


QA for fMRI, Part 1: An outline of the goals


For such a short abbreviation QA sure is a huge, lumbering beast of a topic. Even the definition is complicated! It turns out that many people, myself included, invoke one term when they may mean another. Specifically, quality assurance (QA) is different from quality control (QC). This website has a side-by-side comparison if you want to try to understand the distinction. I read the definitions and I'm still lost. Anyway, I think it means that you, as an fMRIer, are primarily interested in QA whereas I, as a facility manager, am primarily interested in QC. Whatever. Let's just lump it all into the "QA" bucket and get down to practical matters. And as a practical matter you want to know that all is well when you scan, whereas I want to know what is breaking/broken and then I can get it fixed before your next scan.


The disparate aims of QA procedures

The first critical step is to know what you're doing and why you're doing it. This implies being aware of what you don't want to do. QA is always a compromise. You simply cannot measure everything at every point during the day, every day. Your bespoke solution(s) will depend on such issues as: the types of studies being conducted on your scanner, the sophistication of your scanner operators, how long your scanner has been installed, and your scanner's maintenance history. If you think of your scanner like a car then you can make some simple analogies. Aggressive or cautious drivers? Long or short journeys? Fast or slow traffic? Good or bad roads? New car with routine preventative maintenance by the vendor or used car taken to a mechanic only when it starts smoking or making a new noise?

There are three general categories of QA that are performed routinely at my center: User QA, Facility QA and Study QA. Each has a distinct goal in mind so this is how I'm going to organize the rest of the posts in this series. Here are the three broad goals:

User QA:

This is a way for a user to quickly determine that the scanner is in a specific, known state. It's a bit like a pilot's pre-flight inspection. The pilot assumes the mechanics have checked the guts of the operation, but we'd still like to check for obvious problems ourselves before we strap in and go.

User QA can be done after the scanner is turned on, immediately before running an experimental scan, just prior to shutting the scanner down, or at any time there is a question about the scanner's performance (or state) and the user wants to be proactive and not call for help right away.

We need to be able to run User QA frequently during the day so the data must be collected and analyzed quickly - five minutes or less - so it's going to have limited scope. We want to detect major issues but we're not expecting to detect subtle problems. Indeed, at my facility the main goal of User QA is to determine whether a previous user has left the scanner in a state unusable by you. And by running User QA at the end of a session a user can verify that he is leaving the scanner in a known state so that if a problem is detected subsequently he can point to the User QA as part of his defense.

Facility QA:

This is the stuff that you, as a user, expect me, as the facility physicist, to be doing in the background to ensure all is well with the scanner. It's probably what springs to mind when someone mentions "doing QA." Now, what your physicist chooses to measure (or to omit) is very much dependent on what is expected to fail, how quickly it is expected to fail, and how critical that failure might be. For example, a slow degradation of the RF amplifier is an issue that does need to be addressed, but it's unlikely to be critical for the very next fMRI session. Nine times out of ten the subject's performance will dominate subtle scanner issues. But, if there is gradient spiking then it's important to catch the issue, and address it, as soon as possible. FMRI data might escape serious damage, or it might not. Ahead of time there's probably no way to know.

Another aspect of Facility QA is being able to diagnose faults so that they can be rectified quickly. The data needed to diagnose a fault might be different from that needed to detect it. Deriving measures from images alone may be insufficient. We may want to characterize our scanner's environment, e.g. the ambient temperature and humidity, and also know that the power delivered to the scanner isn't at the mercy of other equipment in the vicinity.

In the Facility QA, then, we are going to be recording a ton of data and we're going to tailor the measurements to our scanner's circumstances. We also need to allow some time to assess the data and perhaps repeat one or two measures if anything is questionable. Robust Facility QA will likely require an hour of scanner time. It may be feasible to run the tests daily, or you may have to accept less frequent measurements in favor of doing science experiments. You may even need to develop two or more versions of your Facility QA: one that you can do daily and a comprehensive version that happens whenever you have a big block of time. We'll discuss the "when and why" issues in the Facility QA post.

Study QA:

This is the sort of QA that you may have performed for your own purposes. Brain imaging data destined for analysis - your experimental data, in other words - may be subjected to testing for the presence or absence of certain features, e.g. using summary statistical images or time series diagnostics to detect artifacts or to ensure signal above some threshold. The quality of your fMRI data is checked before deciding whether to process it further, or discard it.
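As one example of the time series diagnostics mentioned here, a DVARS-style measure - the RMS across voxels of the volume-to-volume signal change - flags sudden artifacts. This is a generic convention rather than anything prescribed above, shown with synthetic data:

```python
import numpy as np

def dvars(timeseries):
    """RMS across voxels of the volume-to-volume signal difference.
    Spikes in this trace flag sudden events (subject motion, gradient
    spiking); acceptance thresholds vary from lab to lab."""
    data = timeseries.reshape(-1, timeseries.shape[-1])  # voxels x time
    diffs = np.diff(data, axis=1)
    return np.sqrt((diffs ** 2).mean(axis=0))

# Synthetic run with a one-volume artifact injected at volume 25:
rng = np.random.default_rng(1)
run = 100.0 + rng.standard_normal((4, 4, 4, 50))
run[..., 25] += 5.0
d = dvars(run)
print(int(d.argmax()))  # 24 or 25: the transitions into and out of the artifact
```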

Study QA may also be performed using phantoms. If you run a multi-center or longitudinal clinical study then it is common to have dedicated phantoms and QA routines in order to track scanner performance. ADNI and FBIRN are two well-known research programs utilizing dedicated, custom phantoms. And much like the way you might run checks on your own fMRI data, these phantom measurements are a component of a research plan, e.g. to establish whether certain data should be included, or to provide a way to merge data from scanners with systematic differences in performance.


A way forward, not the way forward

Hopefully you can already anticipate the experiments and measurements that might fit into each of my three categories. There are many ways to do useful QA, of course, and each facility and research group seems to do something slightly different. In the posts to come I shall try to include references and links to other sites wherever I can. But feel free to submit your own now! I will gladly expand any particular area based on interest. Otherwise, what you'll read in the next few posts is what I've found to be most useful to me and the users at my facility, plus a bit of a literature review.



QA for fMRI, Part 2: User QA


Motivation

The majority of "scanner issues" are created by routine operation, most likely through error or omission. In a busy center with harried scientists who are invariably running late there is a tendency to rush procedures and cut corners. This is where a simple QA routine - something that can be run quickly by anyone - can pay huge dividends, perhaps allowing rapid diagnosis of a problem and permitting a scan to proceed after just a few minutes' extra effort.

A few examples to get you thinking about the sorts of common problems that might be caught by a simple test of the scanner's configuration - what I call User QA. Did the scanner boot properly, or have you introduced an error by doing something before the boot process completed? You've plugged in a head coil but have you done it properly? And what about the magnetic particles that get tracked into the bore, might they have become lodged in a critical location, such as at the back of the head coil or inside one of the coil sockets? Most, if not all, of these issues should be caught with a quick test that any trained operator should be able to interpret.

User QA is, therefore, one component of a checklist that can be employed to eliminate (or permit rapid diagnosis of) some of the mistakes caused by rushing, inexperience or carelessness. At my center the User QA should be run when the scanner is first started up, prior to shut down, and whenever there is a reason to suspect the scanner might not perform as intended. It may also be used proactively by a user who wishes to demonstrate to the next user (or the facility manager!) that the scanner was left in a usable state.


Decide on the test configuration

Although different users may want to test the scanner in different configurations - a major variable would be the head RF coil, if your scanner has more than one - there is benefit in maintaining a standard test configuration for all User QA. It makes differentiating real scanner problems more tractable. Besides, there's no reason why you can't add further bespoke tests for an individual user. The intent of the common procedure, then, is to test the scanner in a way that is as close to routine operation and default configuration as possible. We should select the RF coil and a phantom accordingly.

If you only have one head RF coil then you have no choice to make in that regard. But if you have more than one coil then logic suggests you should select the most commonly used coil for User QA. Next, decide on a phantom. A dedicated phantom used only for User QA is ideal, but it shouldn't matter provided you have something stable available. You could use your Facility QA phantom or an FBIRN phantom, for example, but think carefully about the other uses of the phantom before committing. (See Note 1.)

Set up a standard operating procedure (SOP) for the phantom and RF coil so that every user attempts to perform identical operations every time the User QA is conducted. The current User QA protocol for my scanner is here.


Decide what to test

In my User QA I really want the user to determine two key points quickly: in its present configuration, will the scanner (1) acquire images and (2) acquire reasonable EPI for fMRI? The first question can be answered by a simple localizer scan. On Siemens scanners this is most often a three-plane gradient echo scan requiring less than fifteen seconds. For the second question - EPI for fMRI - I use an attenuated version of one of the EPI protocols used in the comprehensive Facility QA routines that will be the subject of future posts. The User QA version is only 3.5 minutes, so the total time required to set up the phantom, insert it into the bore, acquire the localizer and EPI and evaluate the data is five minutes. (I timed it.) If that is too long for you, a shorter EPI run would probably suffice.


Evaluate the performance

If the user is unable to get the User QA routine to complete successfully then we are already in trouble-shooting mode. So, successful completion is our first goal; it establishes that the scanner is going through the correct operations and seems to be working normally. After that, a modicum of experience (or dedicated training, if you prefer) should permit any qualified operator to conduct a reasonable assessment of the data. I am in the habit of contrasting the EPIs so that I can see the background noise, then initiating a cine loop during which I watch for anything to change. I might then repeat the cine loop with the contrast set for the phantom signal itself, in case there is a subtle problem with the RF transmission, say. I don't do anything more fancy than this. Furthermore, I don't actually require my users to evaluate their User QA data quality, but it's clearly prudent (and simple enough) to learn how to do it.
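If you'd like a numerical complement to the visual check, the same two looks - phantom signal and background noise - are easy to script. The sketch below is purely illustrative (pure Python on toy lists; real data would come from DICOM or NIfTI via your favorite reader, and the intensity threshold separating phantom from background is an arbitrary choice, not part of my procedure):

```python
from statistics import mean, stdev

def volume_stats(volumes, signal_thresh):
    """For each EPI volume (a flat list of voxel intensities), split voxels
    into 'phantom' (at or above threshold) and 'background' (below), and
    return (mean signal, mean background) pairs - a crude numerical
    stand-in for eyeballing the cine loop at two contrast settings."""
    stats = []
    for vol in volumes:
        signal = [v for v in vol if v >= signal_thresh]
        background = [v for v in vol if v < signal_thresh]
        stats.append((mean(signal), mean(background)))
    return stats

def flag_outliers(series, n_sigma=3.0):
    """Indices of values lying more than n_sigma sample standard
    deviations from the series mean."""
    m, s = mean(series), stdev(series)
    return [i for i, v in enumerate(series) if abs(v - m) > n_sigma * s]
```

A volume whose background mean jumps out of the pack is exactly the sort of thing - a spike, a burst of RF noise - that watching the cine loop is meant to catch.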


What to do if all is well

From the user's perspective, if everything runs smoothly and the data appear as expected there's nothing further to do for User QA. Carry on with the experiment.

At my facility I request that all User QA data be transferred to our offline data storage host, where the data will reside for 30 days, just in case I want to review it for any reason. I will probably review the most recent User QA data as a first step if I'm called to look at a problem with the scanner.

I don't archive the data but if you had the time and the resources you could do so. I tend to review the last 30 days of User QA results if a real scanner problem is detected, e.g. gradient spiking, in case the User QA history can give me a better indication of when the issue first began. So far it hasn't helped me, but I live in hope!


What to do if something isn't right

Since one of the main aims of User QA is to rule out pilot error, the first action on the user's part should be to determine whether the procedure was followed appropriately. If so, and if time is of the essence, it may be time to call for assistance. Alternatively, a quick bit of sleuthing can pay dividends. How much and what sort of sleuthing? That will depend heavily on the user's experience level.

Common problems:
  • Software glitch arising out of an interrupted boot procedure.
  • Failure to insert the sample properly.
  • Failure to connect the RF coil properly.
  • Conductive debris in the coil sockets, or in/on the phantom. 
  • Bent or broken pin(s) on the RF coil plugs.

Uncommon problems:
  • The last user left the on-resonance frequency outside the 600 Hz range used by the automated adjustment, e.g. because that person was testing a development pulse sequence and forgot to return the scanner frequency to its starting point.
  • Other custom configuration changes - RF amplifier in standby, one or more gradient amplifiers in standby, something unplugged inside a cabinet  - as implemented by physicists and engineers who supposedly know what they're doing, but forget to undo what they did prior to departing.
  • Real scanner issues, such as gradient spiking.

If the results of User QA suggest that the scanner might have a real problem, it doesn't hurt to re-run the User QA from scratch to verify every step in the chain before moving on to more involved testing. Intermittent problems, as may be caused by conductive debris in a socket, say, can be difficult to diagnose. I will usually want to assess the reproducibility of a problem before I do anything else. And who knows, if you do manage to find and remove a small iron filing from a plug and return the scanner to normal operation, chances are you'll get to scan as you'd intended!


Next post

Coming soon, for technicians, facility managers and highly motivated routine users!

QA for fMRI, Part 3: Facility QA - what to measure, when, and why


__________________________


Notes:

1.  I have a dedicated Facility QA phantom that is used only by staff. The stability of that phantom is critical to my being able to detect subtle scanner problems. I don't want to risk it getting damaged by frequent use! Similarly, although the FBIRN gel phantom is a lovely piece of kit, it's not cheap. If it gets broken being used three times a day then the replacement cost is significant. I thus chose to use the standard doped water phantoms provided by Siemens. They're cheap and easy to replace if/when they leak. And if I can't maintain precisely the same phantom over time for User QA it doesn't matter all that much.


(Download the User QA procedure used at UC Berkeley, for a Siemens TIM/Trio scanner, here.)





Free online fMRI education!


UCLA has their excellent summer Neuroimaging Training Program (NITP) going on as I type. Most talks are streamed live, or you can watch the videos at your leisure. Slides may also be available. Check out the schedule here.

I am grateful to Lauren Atlas for tweeting about the NIH's summer fMRI course. It's put together by Peter Bandettini's FMRI Core Facility (FMRIF). It started in early June and runs to early September, 3-4 lectures a week. The schedule is here. Videos and slides are available a few days after each talk.

Know of others? Feel free to share by commenting!

QA for fMRI, Part 3: Facility QA - what to measure, when, and why


As I mentioned in the introductory post to this series, Facility QA is likely what most people think of whenever QA is mentioned in an fMRI context. In short, it's the tests that you expect your facility technical staff to be doing to ensure that the scanner is working properly. Other tests may verify performance - I'll cover some examples in future posts on Study QA - but the idea with Facility QA is to catch and then diagnose any problems.

We can't just focus on stress tests, however. We will often need more than MRI-derived measures if we want to diagnose problems efficiently. We may need information that might seem tangential to the actual QA testing, but these ancillary measures provide context for interpreting the test data. A simple example? The weather outside your facility. Why should you care? We'll get to that.


An outline of the process

Let's outline the steps in a comprehensive Facility QA routine and then we can get into the details:

  • Select an RF coil to use for the measurements. 
  • Select an appropriate phantom.
  • Decide what to measure from the phantom.
  • Determine what other data to record at the time of the QA testing.
  • Establish a baseline.
  • Make periodic QA measurements.
  • Look for deviations from the baseline, and decide what sort of deviations warrant investigation.
  • Establish procedures for whenever deviations from "normal" occur.
  • Review the QA procedure's performance whenever events (failures, environment changes, upgrades) occur, and at least annually.

In this post I'll deal with the first six items on the list - setting up and measuring - and I'll cover analysis of the test results in subsequent posts.


Choose an RF coil

RF coils break often. They are handled multiple times a day, they get dropped, parts can wear with scanner vibration, etc. So it is especially important to think carefully before you commit to a receiver coil to use for your Facility QA. What characteristics are ideal? Well, stability is key, but this is at odds with frequent use if you have but a single head coil at your facility. If you have multiple coils and are able to reserve one for Facility QA then that is ideal. The coils in routine use can then be checked separately, via dedicated tests, once you're sure the rest of the scanner is operating as it should.

When selecting a coil you also want to think about its sensitivity to typical scanner instabilities. If you have an old, crappy coil that nobody uses for fMRI any longer, don't resort to making that the Facility QA coil just because it's used infrequently! You want a coil that is at least as sensitive to scanner problems as those coils in routine use.

I use the standard 12-channel RF coil that came with my system. I happen to have two of these beasts, however, so if there is ever any question as to the coil's performance I am in a position to make an immediate swap and do a coil-to-coil comparison. I also have a 32-channel head coil. I test this coil separately and don't use it to acquire scanner QA measurements, but that's just personal choice. I've found that the 32-channel coil breaks more often than the 12-channel coils, simply because it has five plugs versus just two plugs for the 12-channel coil.


Select a phantom

Here there is really no excuse not to have a phantom dedicated to Facility QA. This phantom should be used only for QA and only by technical staff. You might want to purchase a phantom for this purpose, or simply designate something you have on-hand and then lock it away.

What characteristics should the phantom have? In my experience it doesn't matter all that much provided it approximates the signal coming from a human head. It doesn't need to be a sphere, but it could be. I decided to use one of the vendor-supplied doped water bottles when I devised my Facility QA scheme, and it was for a very simple reason: it was what I had! Perhaps I could have ordered, say, a second FBIRN phantom but I simply wasn't that forward-thinking.

I did, however, take the precaution of having a dedicated holder built for the cylindrical bottle I use. This holder keeps the phantom in exactly the same orientation with respect to the magnet geometry, thereby assuring near-identical shimming for every Facility QA session. (Shim values are some of the ancillary information we'll record below.) Reproducible setup is arguably more important than the particular characteristics - shape, size, contents - of the phantom.

There may be some other considerations before you commit to your Facility QA phantom. Do you need to compare your scanner's performance with other scanners? Cross-validation may require a specific phantom. Also, do you need to measure the performance of anatomical scans, or can you (like me) focus almost exclusively on fMRI-type stability testing? You may even need two or more phantoms to run all the Facility QA tests you need.


Decide what MRI data to acquire

Here's my Facility QA protocol in a nutshell:

  • Localizer scan (15 sec)
  • 200 volumes of "maximum performance" EPI at TR=2000 ms (6 min 44 sec total scan)
  • 200 volumes of "maximum performance" EPI at TR=2000 ms (6 min 44 sec total scan)
  • 200 volumes of "typical fMRI" EPI at TR=2000 ms (6 min 44 sec total scan)
  • Various service mode QA checks (approx. 15 min) 

Including setting up the phantom and recording various ancillary data, the whole process takes about 45 minutes to perform. This allows a further 15 minutes to analyze the EPI data on the scanner, for a total one hour commitment.

I use two types of EPI acquisitions (see Note 1) in my Facility QA protocol: one which is (close to) "maximum performance" and one that is representative of a typical user's parameters for fMRI. There have been instances when a problem has shown up in the user scan and not in the maximum performance scans, most likely because the user scan is applied in a slightly different axial-oblique orientation that requires driving the imaging gradients differently.

The idea with the maximum performance scans is to kick the scanner where it hurts and listen for the squeal. The first time series is inspected for problems but isn't analyzed further. It's essentially a warm-up scan. I fully analyze the second scan, however, making several measurements that reflect the temporal stability of signal, ghosts and noise. More on those measurements in later posts.
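In the meantime, here's the flavor of two such measurements in a minimal pure-Python sketch: voxelwise temporal SNR, and a crude N/2 ghost-to-signal ratio. The ROI definitions are yours to choose and the toy inputs below are made up; my actual analysis scripts do rather more than this.

```python
from statistics import mean, pstdev

def temporal_snr(timecourse):
    """Voxelwise temporal SNR: the temporal mean of a voxel's signal
    time course divided by its temporal standard deviation."""
    s = pstdev(timecourse)
    return mean(timecourse) / s if s > 0 else float("inf")

def ghost_to_signal_ratio(phantom_roi, ghost_roi):
    """Crude N/2 ghost metric for one image: mean intensity in an ROI
    placed on the ghost divided by mean intensity in the phantom ROI."""
    return mean(ghost_roi) / mean(phantom_roi)
```

Track numbers like these session over session and a drifting ghost level or a sagging tSNR will announce itself long before it's obvious to the eye.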

Why only one warm-up acquisition of under 7 minutes? More warm-up scans could be warranted if you have the time. My scanner achieves a thermal steady state in about 15 minutes. But I also have very efficient water cooling, which means even a short delay between EPI runs, e.g. to re-shim, will cause a major departure from equilibrium. I determined that I could get sufficient stability in imaging signal after a 7-minute warm-up, so that's what I use. If you have reason to worry about the thermal stability of your passive shims in particular, then by all means warm up the scanner for 15-30 mins before running your QA. I've found it's not critical for my scanner and, as with all things QA, it's a tradeoff. More warm-up scans would take me over an hour for the entire Facility QA procedure.

The parameters for the "maximum performance" EPI acquisitions are in these figures (click to enlarge):


There are 40 descending slices acquired axially (perpendicular to the long axis of a doped water bottle) with the slice packet positioned at the bottle center. Provided the positioning is reproducible on the phantom, I don't think the particular slice position and orientation is as important as acquiring as many slices as possible in the TR of 2000 ms. We want to drive the scanner hard. The TE, at 20 ms, is comparatively low for fMRI but it permits a few more slices in the TR. I decided to use 2 mm slice thickness to drive the slice selection gradients about as hard as they ever get driven. But I decided to keep the matrix (64x64) and field-of-view (224x224 mm) at typical fMRI settings because, with the echo spacing set short, I could get a larger number of slices/TR than with higher in-plane resolution. It's just another one of the compromises.
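If you want to rough out the slices-per-TR tradeoff for yourself, a back-of-the-envelope calculation is easy to code. Everything here besides the matrix size and echo spacing - especially the fixed per-slice overhead for excitation, fat saturation, spoilers and so on - is a guessed figure, not a vendor number:

```python
def slices_per_tr(tr_ms, te_ms, esp_ms, n_lines, overhead_ms):
    """Back-of-the-envelope slices-per-TR for single-shot EPI: each slice
    costs roughly TE (getting to the center of k-space) plus the second
    half of the echo train plus a fixed per-slice overhead. The overhead
    value is a guess for illustration only."""
    per_slice_ms = te_ms + 0.5 * n_lines * esp_ms + overhead_ms
    return int(tr_ms // per_slice_ms)
```

With a guessed 10 ms overhead this lands in the same ballpark as the 40 slices I actually achieve at TE = 20 ms, and raising TE drops the count - which is precisely the compromise described above.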

A word about the echo spacing. My scanner will actually permit a minimum echo spacing of 0.43 ms for a 64x64 matrix over a 224 mm FOV. I test at 0.47 ms echo spacing, however, because I observed that my EPI data were too sensitive to electrical power instabilities at the shortest possible echo spacing (see Note 2). Backing off to 0.47 ms eliminated most of the acute power sensitivity yet maintains an aggressive duty cycle and permits me to disentangle other instabilities that could manifest in the ghosts. (Recall that the N/2 ghosts are exquisitely sensitive expressions of EPI quality, as covered here and here.)

In the third and final EPI time series of my Facility QA protocol I test an EPI acquisition representative of a typical fMRI scan. As before, I use a standard (product) pulse sequence:


All pretty basic stuff. Thirty-three slices acquired at an angle reminiscent of AC-PC on a brain, and other parameters as appropriate for whole brain fMRI. As with the first "max performance," or warm-up, time series, I don't actually record anything from this time series. It is inspected for obvious problems only. I've found that analyzing the performance of the second "max performance" time series is generally sufficient to detect chronic problems. Intermittent problems, such as spiking, are addressed in separate, dedicated tests (see below).

Why don't I just do three acquisitions at maximum performance? I could, I suppose, but I prefer to have at least one look at the scanner performing as it does for a majority of my users' scans. It gives me an opportunity to assess the severity of a (potential) problem detected in the earlier time series, hence to make a decision on whether to take the scanner offline immediately, or whether to re-check at a later time and try to minimize the impact on the users.

What isn't tested in my protocol?

It should be clear that exhaustive testing of every parameter is impractical. In the protocol above I am using only two image orientations and two RF flip angles in total, for example. It is quite possible that gradient spiking will show up earliest in one specific image orientation because of the particular way the imaging gradients are being driven. Even testing all the cardinal prescriptions - coronal, axial, sagittal - would increase the total time considerably yet there's no guarantee that spiking would always be caught (see Note 3).

As for the RF flip angle, if the RF amplifier develops a problem at high power settings and I test only at the relatively low powers used in EPI for fMRI, I may well miss a slowly degrading RF amp. I would hope to catch the degrading performance eventually in the measurements that I do make. Still, if you were especially worried about the stability of your RF amp you could add a high flip angle time series to the tests. You need to determine the priorities for your scanner based on its history and the way it gets used.

Some other things I'm not testing directly: gradient linearity, magnet homogeneity, eddy current compensation, mechanical resonances. Many of these factor into the EPI data that I do acquire, so I'm not completely blind to any of them. My Facility QA protocol is primarily aimed at temporal stability as it affects fMRI data. Your facility may require additional MRI-based tests, e.g. gradient linearity determined on an ADNI phantom. And, of course, your scanner should be getting routine QA performed by the vendor to ensure that it stays within the vendor's specification.


Ancillary data for Facility QA

Now let's shift to considering other data we might record, either because the data could reveal a problem directly or because it might help us diagnose a problem that manifests in the time series EPI data.

These are the fields presently recorded to my QA log:

Date & time of test - Self explanatory. Essential for proper interpretation!

Visual inspection of the penetration panel - Has someone connected an unauthorized device causing an RF noise problem, perhaps?

Visual inspection of the magnet room - Is anything out of place or otherwise obviously wrong?

System status prior to QA - Record whether the scanner was already on, or was started up prior to performing QA. Electronics can be funny that way.

Suite temperature and humidity - I have a desktop monitor that lives in the operator's room. Ideally I'd record in the magnet room with remote sensors, but measuring in the operator's room is a reasonable check on the suite's condition. A consistent temperature in the magnet room is important for general magnet stability. Humidity is critical for proper functioning of gradients in particular. Low humidity may cause spiking, but it can also increase the rate of component failures from static discharges. Furthermore, if you have an electrical equipment room that isn't at a near constant temperature, e.g. because people go into it frequently, then you will want to measure the temperature of that room separately. RF amplifiers are often air-cooled so changes in the surrounding air temperature tend to translate into RF amplifier instabilities.

Prevailing weather - I use the weather report from a nearby airfield. It gives barometric pressure, relative humidity, air temp and dew point, and the prevailing conditions (e.g. sunny, cloudy, rain, etc.). If you have a mini weather station of your own that's even better! Lest you think this information is overkill, more than one site has found that their magnet went out of specification when the sun was shining on the MRI suite. Extreme temperature may have direct effects, e.g. via passive shielding or building steelwork, or indirect effects, e.g. high electrical load for your building's air conditioning. Large, rapid changes in barometric pressure may affect magnetic field drift in some magnet designs, too.

(The following data may only be available via a service mode interface. Check with your local service engineer.)

Gradient coil ambient temp - Temperature of the gradient coil (or return cooling water) before commencing QA. The equilibrium temperature is a function of the cooling water temp to the gradient coil and should be consistent.

Gradient coil temps before/after each time series EPI acquisition - Useful to determine if you are generating excess heat, e.g. because of an increased resistance in a gradient circuit, or if the gradient water cooling has a problem, e.g. low pressure or flow rate.

Magnet temperatures - You may be able to record the temperatures of some of the various barriers between the liquid helium bath (at 4 K) and the MRI suite (290-293 K, or 17-20 C). Your scanner vendor is likely monitoring these numbers remotely, but it doesn't hurt to keep a check on things yourself, especially if your site is prone to periods of extreme vibration - earthquakes, passing freight trains - or you have just had someone accidentally stick a large ferrous object to the magnet. Internal magnet temps can be an early indication of a possible quench due to a softening vacuum shield, amongst other things.

Helium level - Another good indication of something going wrong inside the magnet, although with the refrigeration units (cold heads) on modern MRIs the helium level over time may actually be a better indication of the health of your helium recycling than of the magnet per se.

Linewidth - This is the post-shim water linewidth for your QA phantom. If the position of the phantom is reproducible in the magnet bore then the linewidth should be similarly reproducible.

Magnet center frequency (in MHz) - Together with the magnet temp(s) and helium level, relative stability of on-resonance frequency is a good indication of overall magnet health. Changes may occur with weather conditions or suite temperature, however, so be sure to consider all parameters together when performing diagnostics.

Room temp shim values - A phantom placed reliably in the magnet should yield reproducible shim values when an automated shimming routine is used. (Auto-shimming is the default on all modern scanners.) There are eight RT shims on my scanner: three linear shims (i.e. the gradients themselves), X, Y and Z, and five second-order shims, Z2, ZX, ZY, X2-Y2 and XY. Record them all. Changes in the RT shims may indicate that you have a problem with your phantom (a leak?) or the phantom holder, or they could be an indication that the passive shim trays  - thin strips of steel positioned between the magnet and the gradient set - are working loose due to vibration.

Service mode tests - I include the vendor's RF noise check and spike check routines because these are two relatively common problems and I prefer to diagnose them directly, not via EPI data, if at all possible. You may not have permission to run these tests, however. If not, you could either rely on analysis of the time series EPI data discussed above, or add further acquisitions designed to be maximally sensitive to spikes and RF noise (see Notes 3 and 4).

Additional RF coil tests - My 32-channel coil can be tested with a dedicated routine available under the service mode. I don't acquire any EPI data with this coil.

Service/maintenance work log - It is imperative to keep a record of any work performed on the scanner, and to refer to this log whenever you are interpreting your QA records.

Anything else? -  That's rather up to you. Electrical supply data can be very useful if you can get it. I can get minute-to-minute voltages for my (nominal) 480 V supply. I don't bother getting these reports for every Facility QA session we run, but I ask for them if I see anything strange in my test data.
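However you collect the ancillary data, keeping it in a consistent, machine-readable form pays off when you're trawling months of history for the onset of a problem. Here's a hypothetical sketch that appends one session's record to a CSV log; the field names are my invention for illustration - substitute whatever your facility actually tracks:

```python
import csv
import io

# Hypothetical field names - adapt them to what your facility records.
QA_FIELDS = ["timestamp", "suite_temp_C", "suite_humidity_pct",
             "helium_level_pct", "center_freq_MHz", "linewidth_Hz",
             "gradient_temp_C", "notes"]

def write_qa_record(fileobj, record):
    """Append one Facility QA session to a CSV log, writing the header
    row first if the log is empty."""
    writer = csv.DictWriter(fileobj, fieldnames=QA_FIELDS)
    if fileobj.tell() == 0:
        writer.writeheader()
    writer.writerow(record)

# Demo with an in-memory file; in practice you would use
# open("qa_log.csv", "a", newline="") instead.
log = io.StringIO()
write_qa_record(log, {"timestamp": "2013-04-05T08:00", "suite_temp_C": 20.1,
                      "suite_humidity_pct": 45, "helium_level_pct": 68.2,
                      "center_freq_MHz": 123.2551, "linewidth_Hz": 8.5,
                      "gradient_temp_C": 19.0, "notes": "nominal"})
```

One row per session, one column per field, and six months later you can plot any of them against your MRI-derived metrics in seconds.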


Establishing your baseline

Having determined the data you'll record it's time to define "normal" for your scanner and its environment. In my experience, six months of data allows me to characterize most of the variations. I want to know what the variance is, but I am also keen to know why it is what it is.

There are no shortcuts to obtaining a baseline: you have to acquire your Facility QA as often as you can. If you have a new facility or a new scanner then you probably have a lot of scanner access; your routine users haven't started getting in your way yet. It should be feasible to run once a day, five days a week for at least the first several weeks, then you can begin to reduce the frequency until you are running once a week or thereabouts.

Recently fixed/upgraded scanners should be tested more frequently, in part to check that there are no residual problems but also to redefine the baseline in case it has shifted. More on interpreting the data in the next post.


When to run Facility QA

You have your baseline and you know what normal looks like for your scanner. To science! Except you now have to decide how often to check on your scanner's status to ensure that all remains well. Or, if you're a realist, to determine when something starts to go wrong.

Many people will prefer to have a fixed time in which to run QA. It may be necessary to fix the time slot because of scanner and/or personnel schedules. Is this a good thing? Not necessarily. Some degree of scatter may catch problems that vary with scanner usage, or with time of day. Say you decide to do your Facility QA on a Saturday morning because it's when you have plenty of time. That's fine, but if your building is barely occupied and the electrical load is significantly lower than during working hours Mon-Fri then you may miss an instability that affects midweek scans. So if you opt for a fixed slot for QA, first establish in your baseline measurements that the time of day and the day of the week are insignificant drivers of the variance in the measurements you're making.

If you find that time of day or day of week is a significant factor in your scanner's performance then you may wish to try to rectify the source(s) of the differences first, if this is possible. If not then fixing the day and time of your Facility QA may be required in order to work around the instabilities that are beyond your control. Remember, the point of Facility QA is not to show that the scanner's performance is constant 24/7/365, rather it is to catch changes (usually deterioration) in its performance under fixed test conditions.
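Catching such changes needn't wait for elaborate analysis: a simple control-chart rule against your baseline goes a long way. A minimal sketch, with an arbitrary three-sigma threshold (in practice you would trend each recorded metric separately and tune the threshold to its known variance):

```python
from statistics import mean, stdev

def baseline(values):
    """Summarize a baseline period of QA measurements as (mean, std)."""
    return mean(values), stdev(values)

def check_measurement(value, base, n_sigma=3.0):
    """Return None if value sits within n_sigma of the baseline mean,
    otherwise a short warning string - a minimal control-chart rule."""
    m, s = base
    z = (value - m) / s
    if abs(z) <= n_sigma:
        return None
    return "deviation: %+.1f sigma from baseline" % z
```

It's crude, but applied to tSNR, ghost level, center frequency and the rest, it turns "hmm, that looks a bit off" into a number you can act on.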


Next post in this series: processing and interpreting your Facility QA data.


___________________________


Notes:

1.  If your facility, like mine, uses a customized pulse sequence for routine fMRI acquisitions, resist the temptation to use that sequence for Facility QA. Instead, use one of the product sequences (I use Siemens' ep2d_bold) and then set the parameters as close as you can to what you usually use in your everyday sequence. Why? Because the service engineer is going to want to know what sequence you used when you found the problem you're reporting. They will want to see the problem demonstrated on a standard sequence in case it's a coding issue. (Yes, it really does happen that physicists screw up their custom pulse sequences! ;-) So save yourself the extra time and effort and remain as close to "product" in your Facility QA as you can.

2.  At very short echo spacings the fraction of ramp-sampled data points approaches or exceeds the fraction of sampling that happens along the flat top of the readout gradients. Even tiny shifts of the gradient waveforms relative to the ADC periods will yield intense, unstable N/2 ghosts. (See the section entitled "Excessive ramp sampling" in this post.) A common cause of mismatch is the electrical supply at the instant the gradients are commanded to do something. Now, my facility has pretty good electrical power stability these days, but it's not perfect. (I don't have a separate, dedicated power conditioner for the scanner.) So if the voltage on the nominal 480 V, 3-phase supply changes with load elsewhere in the building, these changes pass through to the gradient amplifiers and may be detectable as periodically "swirling" N/2 ghosts. It is actually quite difficult to tie these swirling ghosts to the electrical power stability because other instabilities may dominate, depending on your facility. For example, in my old facility my scanner had its own external chiller comprising two refrigeration pumps that cycled depending on the heat load in the gradient set. When running EPI flat out the pumps would cycle every 200-300 seconds, and this cycling was visible as a small instability with the same period in the EPI signal. But now that I have a building chilled water supply rather than a separate chiller the water cooling is essentially constant (and highly efficient!), revealing the next highest level of instability underneath, which in my new facility is the voltage on the 480 V supply.

3.  Siemens offers a separate gradient "Spike check" routine that the service engineer can use. If you know the service mode password you can use it, too. I've found that the dedicated routine is hit and miss compared to EPI for detecting spikes, but the difference may simply be due to the amount of time spent testing. If an EPI time series is 6 minutes long there are many opportunities to catch spikes. The service mode spike check runs for only a few seconds (although it does sweep through all three gradients at many different amplitudes). Sometimes it takes many repetitions of the spike check to confirm spikes that I think I've detected in EPI.

4.   In addition to the spike check mentioned in note 3, the vendor will have an RF noise check that acquires periods of nothing, i.e. the receivers are simply opened to sample the environment in the absence of gradient and RF pulses. Different carrier frequencies and bandwidths are tested to span the full range used in MRI acquisitions. If you are unable to use dedicated routines for either spike or RF noise checking then don't despair: test EPI data can be used to check for significant problems. The process becomes heavily dependent on analysis, however, so I'll cover it in future posts, on processing your QA data. In my opinion, for catching problems the dedicated routines are preferable for their specificity, sensitivity and speed. The EPI test data can then be analyzed to confirm that all is well, rather than as the primary way to detect problems. I see this as an overlap between Facility QA and Study QA, so I'll revisit it in later posts.




i-fMRI: My initial thoughts on the BRAIN Initiative proposals

    So we finally have some grant awards on which to judge the BRAIN Initiative. What was previously a rather vague outline of some distant, utopian future can now be scrutinized for novelty, practicality, capability, etc. Let's begin!

    The compete list of awards across six different sections is here. The Next Generation Human Imaging section has selected nine diverse projects to lead us into the future. Here are my thoughts (see Note 1) based mostly on the abstracts of these successful proposals.


    Dissecting human brain circuits in vivo using ultrasonic neuromodulation

    See 1-R24-MH106107-01 for the abstract.

    Ultrasonic neuromodulation is a recent addition to the family of "minimally invasive" stimulation methods (TMS, tDCS) being used to prompt neurons (and other brain cells?) into doing something. In this case, ultrasound waves serve as an energy source to provoke some sort of neural response. The mechanism could be via localized heating, say, and one of the main goals of this project is to determine just how ultrasound interacts with brain tissue.

    Strictly speaking, transcranial ultrasound isn't an imaging method per se, rather it's a manipulation designed to allow imaging methods to see something different after the manipulation. Combining methods - here, ultrasonic neuromodulation with fMRI or EEG - should enable some unique experiments, e.g. to test network modularity. In this regard it's akin to TMS-fMRI. Knockout models have always been critically important in neuroscience and neurology, I see this project as a logical extension of those approaches.


    Path towards MRI with direct sensitivity to neuro-electro-magnetic oscillations

    See 1-R24-MH106048-01 for the abstract.

    This proposal extends prior attempts to use high field MRI to measure directly the electromagnetic activity associated with concerted neural firing. I generally refer to this family of methods as neuronal current imaging (NCI). To date, the only compelling demonstrations of NCI via MRI have involved bloodless preparations, because BOLD signal changes (as well as changes in cerebral blood volume, CBV) tend to overwhelm the tiny signal changes driven by electromagnetic fields associated with neurons working. This is still my biggest concern with this new proposal. I'm not saying it can't be done, only that BOLD is a ubiquitous weed that contaminates every fast imaging pulse sequence yet invented and applied at operating fields above a tesla or so. (My rule of thumb: in primary cortex, expect 1% change in BOLD per tesla of operating field.) CBV changes are also a huge concern when using amplitude changes in signals.

    Then there's sensitivity. For Lorentz force-based contrast, as used previously by this group, the desired effect increases with the magnetic field used to induce it. The problem, however, is that BOLD also scales not less than linearly with operating magnetic field. In sum, then, I see this proposal as interesting but so technically challenging that it is unlikely to displace other methods any time soon. For such a project, it seems to me that the farther they can get from high field magnets and conventional MRI sequences for spatial encoding, the better off they might be. The group at Los Alamos tried NCI using their ultralow field MRI scanner. They may have circumvented the BOLD contamination issue, but that still leaves the not inconsiderable sensitivity issue to address. (See Note 2.)


    Imaging in vivo neurotransmitter modulation of brain network activity in real time

    See 1-R24-MH106083-01 for the abstract.

    This is a curious proposal. It's also the one I have the least background knowledge about. The abstract is scant on details, but it sounds like they are proposing to build transmitting agents that can be inserted into the brain - circulating in the blood, perhaps - and thence to report on the neurotransmitter status nearby. It does sound rather "Innerspace" to me, I have to say.

    Photoacoustics are mentioned as part of one aim. This involves firing laser light at tissue such that an ultrasound pressure wave is generated from the rapid heating. The principles are well established. Whether they will be amenable to use in a "minimally invasive/non-invasive" manner, however, remains to be seen. Perhaps they can adapt magnetoacoustics to the task instead, to eliminate the laser light and its associated heating. I'll be watching this team with interest. I could see them making successful bench-top demonstrations and proofs of principle, but getting agents into brains and reporting signals out of brains will be a massive sensitivity and safety challenge, it seems to me.


    Magnetic particle imaging (MPI) for functional brain imaging in humans

    See 1-R24-MH106053-01 for the abstract.

    Swapping one method reliant on vascular changes (present-day fMRI) for another doesn't, at first blush, seem very ambitious. Critics of fMRI are always lambasting the indirect view of "neural activity" provided by blood flow and volume changes. But the rationale for this project rests on the large potential gain in SNR compared to BOLD-based fMRI. The claim is that MPI would offer more than two orders of magnitude greater sensitivity. This is likely true. However, there are some limitations to consider. First and foremost, MPI requires that the signaling agent - magnetic particles of iron oxide, or similar - be injected into the blood stream. This is immediately going to dissuade many people from volunteering for studies, and there is always the potential toxicity to consider. (The nice thing about hydrogen nuclear spin is that it's already everywhere in the brain and the blood, in the form of water. And it's non-toxic!) Perhaps a reduced subject pool is acceptable if it means that we can get better signals from those who do volunteer. Time will tell.

    The other issue is imaging speed. As acquired at present, MPI needs a certain amount of rastering - usually the sample is moved relative to a field-free point or field-free line - which makes the acquisition of a full image considerably slower than for fMRI. Based on my experience with neuroscientists and fMRI to date, unless and until one can get whole brain MPI in two seconds or less, it will be a hard sell. So that is where I would want to see the biggest developments from present technology if I was to view this as a true potential replacement for fMRI.
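    For a sense of the arithmetic behind that two-second target, here's a back-of-envelope frame-time estimate for a rastered field-free-line acquisition. Every number in it (line count, projection angles, time per line) is an assumption I've made up for illustration, not the spec of any real MPI scanner.

```python
# Illustrative frame-time arithmetic for a field-free-line MPI raster.
# The raster is parameterized as (translated lines) x (rotation angles);
# all numbers are assumptions for the sake of the estimate.

def mpi_frame_time(n_lines, n_angles, time_per_line_s):
    """Total time to cover one image volume by rastering a field-free line."""
    return n_lines * n_angles * time_per_line_s

t = mpi_frame_time(n_lines=40, n_angles=30, time_per_line_s=0.005)
print(f"Assumed frame time: {t:.1f} s")  # well short of a 2 s whole-brain target
```

    Under these made-up numbers a single volume takes several seconds, which is the sort of gap that would have to close before neuroscientists took MPI seriously as an fMRI replacement.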

    Still, I find the whole premise that MPI could replace fMRI unlikely, given that fMRI scanners also make rather good anatomical MRI scanners, permitting reasonably good localization of those functional blobs in situ. MPI needs supporting anatomical information, such as that obtained by a separate MRI, in order to make sense of its signals. At best I would think that MPI and MRI might be made complementary. I see the choice of one versus the other as a false dichotomy.

    There is one possibility that I am very keen to see tested, however. In fMRI we have a huge number of different motion sensitivities, from T1 effects to receive field biases to magnetic susceptibility gradients. It's a complicated mess. If MPI could be made somehow less motion-sensitive than fMRI - perhaps motion would just blur an image and cause false negatives, without the chance of lots of false positives - then it might find a deserving role in mapping brain function, even if it is "just another" vascular method as presently envisaged.


    Vascular interfaces for brain imaging and stimulation

    See 1-R24-MH106075-01 for the abstract.

    If proposal 1-R24-MH106083-01 alluded to the movie, "Innerspace," this one virtually hijacks the plot! The whole idea is to devise new imaging reporter systems that can be introduced via the vasculature "to deliver recording devices to the vicinity of neurons buried inside the brain parenchyma." It's invasive by definition, but that may not be the biggest obstacle by a long way. Getting the agents to go where they are required, to anchor for a while, and then to have sufficient power to transmit their signals to the surface of the head, are all truly massive difficulties.

    Still, with any luck proposals like this one will cross-fertilize with those using optogenetics, photoacoustics and other sensor systems and who knows, perhaps some sort of mini-machine might be devised that can be used in the vasculature without killing either Martin Short or Dennis Quaid.


    MRI corticography (MRCoG): micro-scale human cortical imaging

    See 1-R24-MH106096-01 for the abstract.

    Given that I have the most background knowledge on this proposal it isn't perhaps surprising that I might find it to be the most tractable of the nine. I would even go so far as to say that it is low risk. The premise is straightforward: given that large arrays of small coil loops have difficulty gaining depth penetration for the entire brain, don't aim for the entire brain. Aim to image just the cortex instead. Seen this way, the weak signals from deeper tissue are a contaminant to be eliminated - likely feasible - thereby facilitating smaller image fields-of-view and higher spatial resolution using essentially the same sort of spatial encoding as we use now. Granted there might be benefits to coupling these cortical coil arrays with faster and/or stronger gradients to push the resolution still further, but head gradient sets and even surface gradient sets are already out there.

    Limited ambition on the fMRI contrast front is perhaps my main criticism. We know from a lot of animal work, e.g. from the lab of Seong-Gi Kim, that once one attains laminar specificity, CBF- and CBV-based contrast attain much the same spatial localization. So, getting away from BOLD would help, but the intrinsic biological limits have already been established, I think. That said, it would be a truly massive step from what we can do today, and I don't see any reason why it can't be done for fMRI purposes.

    Magnetic susceptibility contrast mapping of axon fibers is included as a way to improve white matter tractography. This method benefits from higher field, so this entire project would be tailor-made for 7 T, although more limited performance of both the fMRI and tractography could be obtained at 3 T.


    Advancing MRI & MRS technologies for studying human brain function and energetics

    See 1-R24-MH106049-01 for the abstract.

    I'm too far behind on the latest physics of high field MRI to assess this proposal in any detail, but as far as I can gather the aim is to use some new (dielectric) materials to squeeze every last drop of SNR out of existing whole body scanners operating at 7 T and higher. With the SNR enhanced over what's possible with today's transmission and reception systems, the hope would be to facilitate even higher resolution using conventional spatial encoding methods. (If novel methods are being considered they're not mentioned in the abstract.) Overall, then, it looks to be a logical extension of the path that's taken us to 7+ T today. Is that a bad thing? Probably not. Attaining maximum performance out of our existing polarizing fields is a laudable aim on its own. We might as well exploit the polarizing field as far as we possibly can; these are expensive beasts!

    The real novelty would come after attaining the SNR gains. The hope would be to boost the performance of rare nuclei for imaging and spectroscopy. (Endogenous) 31-P and (exogenous) 17-O are the two nuclei mentioned in the abstract, but other nuclei would benefit and could become viable candidates for functional imaging in their own right. (Endogenous) 23-Na and (exogenous) 19-F come immediately to mind.
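    For context, here are approximate textbook receptivities of some of those nuclei relative to 1H at constant field. These are ballpark values I'm quoting from memory, so treat them as order-of-magnitude guides only; 17-O is omitted because it would be administered isotopically enriched, making its natural-abundance receptivity beside the point.

```python
# Approximate NMR receptivity relative to 1H = 1.0 at constant field.
# 19F, 23Na and 31P are all ~100% naturally abundant, so intrinsic
# sensitivity and receptivity coincide. Ballpark textbook values only.

RELATIVE_RECEPTIVITY = {
    "1H":  1.0,
    "19F": 0.83,
    "23Na": 0.093,
    "31P": 0.066,
}

for nucleus, r in sorted(RELATIVE_RECEPTIVITY.items(), key=lambda kv: -kv[1]):
    print(f"{nucleus:>4}: ~{r:.3f} of 1H sensitivity")
```

    The gap between 1H and everything else is exactly why any across-the-board SNR boost from better transmit/receive hardware matters most for these rare-nucleus applications.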


    Imaging brain function in real world environments & populations with portable MRI

    See 1-R24-MH105998-01 for the abstract.

    In this proposal the drive is towards smaller, lower field polarizing magnets, such that the smaller, lighter systems would be transportable and could be deployed in environments quite different from today's MRI suite. It's an interesting proposal in that in some ways we've been here before. Prepolarized MRI systems using pulsed electromagnets at room temperature (with water cooling) have been around for a couple of decades and have already produced images that are rather good. Historically, the claimed motivation was to get cheaper MRI, but it has turned out that better trumps cheaper and there simply hasn't been the demand for producing a commercial product (sadly, imho).

    Could this project reinvigorate the prepolarized MRI efforts as a side effect, then? I certainly hope so, because many of the problems faced by this proposal are common to prepolarized MRI systems aiming to do functional brain imaging, specifically the need to optimize functional contrast methods at magnetic fields that are generally lower than 1 tesla. BOLD could be used but it's a rather weak effect at low fields. CBF imaging is possible in principle, but arterial spin labeling of blood benefits from high field because the blood T1 increases with B0. So it would seem that CBV imaging (i.e. VASO and its ilk) would be the functional method of choice, if endogenous contrast is the goal. This could be done on prepolarized MRI systems with modest effort, no new magnet technology required.
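    To illustrate why blood T1 matters here, a minimal sketch of ASL label decay. The label decays roughly as exp(-t/T1) during its arterial transit, and the T1 values below are approximate literature figures for arterial blood that I've plugged in only for illustration.

```python
import math

# Sketch of why ASL benefits from high field: the inverted blood label
# decays as exp(-t/T1), and blood T1 lengthens with B0. The T1 values
# below are approximate literature figures, used only for illustration.

BLOOD_T1_S = {1.5: 1.4, 3.0: 1.65, 7.0: 2.1}  # approx. arterial blood T1 (s)

def label_remaining(transit_time_s, b0):
    """Fraction of the ASL label surviving a given arterial transit time."""
    return math.exp(-transit_time_s / BLOOD_T1_S[b0])

for b0 in (1.5, 3.0, 7.0):
    print(f"{b0} T: {label_remaining(1.5, b0):.2f} of label left after 1.5 s")
```

    The trend is the point: at low field more of the label has decayed by the time the blood reaches the tissue, which is why CBV-based contrast looks like the safer bet for a sub-tesla scanner.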

    To me, then, this proposal looks simultaneously ambitious and elementary. If another call goes out looking specifically for mobile MRI scanners, expect to see many more proposals with a lot more mature technology as their base.


    Imaging the brain in motion: the ambulatory micro-dose, wearable PET brain imager

    See 1-R24-MH106057-01 for the abstract.

    This sounds like a laudable goal but whenever I've been involved in discussions about doing PET the first question asked is "How far away is the cyclotron?" Some radionuclides are amenable to transport, so perhaps an ambulatory cyclotron-PET combination isn't implied, but what does seem clear is that only certain species would be suitable for taking out into the big wide world.
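    The transport question is simple decay arithmetic: A(t) = A0 · 2^(-t / t_half). The half-lives below are standard values for common PET radionuclides; the 60-minute transport time is purely an assumption for the sake of illustration.

```python
import math  # not strictly needed; decay uses 2**x directly

# Radioactive decay during transport from a remote cyclotron.
# Half-lives are standard values; the 60 min transport is an assumption.

HALF_LIFE_MIN = {"15O": 2.0, "11C": 20.4, "18F": 109.8}

def fraction_remaining(transport_min, half_life_min):
    """Fraction of initial activity left after the transport time."""
    return 2.0 ** (-transport_min / half_life_min)

for nuclide, t_half in HALF_LIFE_MIN.items():
    f = fraction_remaining(60.0, t_half)
    print(f"{nuclide}: {f:.3g} of activity left after a 60 min transport")
```

    An hour's transport leaves most of an 18-F dose intact but essentially none of a 15-O dose, which is why only certain species would be suitable for taking out into the big wide world.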

    Assuming that suitably long-lived radionuclides can be employed, and assuming that adequate imaging sensitivity can be achieved with the lower concentrations of radionuclides being considered, that just leaves the engineering challenge of building a portable, even wearable, PET scanner. I've no idea what they plan to do in this regard - the abstract focuses almost exclusively on the radionuclide issues - but one might think that lightweight disposable, "one-time use" technologies might be indispensable here. Way back in the last century we had this quaint photographic method that relied upon one-time use film to record pictures when a shutter was opened in the camera and the film was exposed to light. Perhaps something along these lines might replace the ring of scintillation crystals used in conventional PET scanners. Even so, to me it sounds like it would be a hefty piece of kit.

    UPDATE, 3rd Oct 2014: photo of concept via Julie Brefczynski-Lewis, https://twitter.com/practiCalfMRI/status/518102980399083520

    _________________________



    Notes:

    1.  I am a colleague of some of the successful principal investigators, and I know personally several more from other groups. I have no interests that might conflict, however. I may bend a few people out of shape, but that's a risk I'm prepared to take. These are just my current scientific opinions on what has been proposed. Nothing more, nothing less.

    2.  I got into ultralow field (ULF) MRI back in 2004 after reading a now largely discredited paper claiming to detect neuronal currents with MRI. My rationale was that if BOLD scales as a percent per tesla of operating field then reducing the operating field to the point where BOLD all but vanishes is a good start. But once we considered all the other contaminants we realized that CBV changes would persist and likely still be several orders of magnitude larger than anything we might hope to do with NCI. So we switched gears to trying to use the CBV change as a functional contrast mechanism at ULF. Even this less ambitious goal proved to be near-impossible with our setup; we had too much sensitivity to motion and, quite possibly, concomitant changes in cerebrospinal fluid (CSF) that offset our desired signal changes. So then we changed directions again and went after clinical goals instead. That's where we stand today. We haven't worked on functional imaging methods at ULF for several years now, and we have no plans to restart unless someone gives us a huge pot of money to rebuild the entire ULFMRI system to minimize subject motion. Sitting upright isn't gonna do it.

