practiCal fMRI: the nuts & bolts

COBIDAcq?


WARNING: this post contains sarcasm and some swearing.
(But only where absolutely necessary.)


COBIDAcq, pronounced "Koby-dack," is the Committee on Best Practice in Data Acquisition. It is based on the similarly dodgy acronym, COBIDAS: Committee on Best Practice in Data Analysis and Sharing. I suppose COBPIDAAS sounds like a medical procedure and CBPDAS is unpronounceable, so COBIDAS here we are.

Unlike COBIDAS, however, the COBIDAcq doesn't yet exist. Do we need it? The purpose of this post is to wheel out the idea and invite debate on the way we do business.


Why a new committee?

You know the old aphorism: "Act in haste, repent at leisure?" It's not just for US tax reform. We have a lot of errors made in haste in fMRI. You may have noticed. Some of the errors may be directly under an experimenter's local control, but many are distributed by influential people or through limitations in commercial products. Whatever the root causes, unless you think fMRI has already attained methods nirvana, there is ample reason to believe we could do a lot better than the status quo.

The COBIDAS approach is intended to "raise the standards of practice and reporting in neuroimaging using MRI," according to its abstract. I am still seeing only weak evidence that there has been wholesale adoption of the COBIDAS suggestions for reporting. (Feel free to pick up your latest favorite paper and score it against the COBIDAS report.) Thus, I'm not wholly convinced practice in neuroimaging will benefit as much as intended, except to help people disentangle what was done by others and avoid their mistakes at some much later stage, perhaps. What I'm after is intervention a lot earlier in the process.


Risks and systematic errors in an era of Big Data

A long time ago I wrote a post about taking risks in your experiment only if you could demonstrate to yourself they were essential to your primary goals. "New" rarely means "improved," and it often comes with harmful - often unknown - consequences. Rather, what new usually gets you is new failure modes, new bugs, a need to change approach, etc. So if you have a really good idea for a neuroscience experiment and you can achieve it with established methods, why insist on using new methods when they may not help but may cause massive damage? That is tantamount to placing your secondary goals - impressing a reviewer with yer fancy kit - ahead of your primary goals. Crazy!

There is a lot of energy presently going into data sharing and statistical power. This is great n' all, but what if the vast data sets being cobbled together have systematic flaws, potentially many different systematic flaws? How would you know? There are some QA metrics that attempt to capture some of the obvious problems - head motion or gradient spiking - but do you trust that these same metrics are going to catch more subtle problems, like an inappropriate parameter setting or a buggy reconstruction pipeline?

I'd like to propose that we redirect some of this enthusiasm towards improving our methods before massively increasing our sample size to the population of a small market town. Otherwise we are in danger of measuring faster-than-light neutrinos with a busted clock. No amount of repeat measures will tell you your clock is busted. Rather, you need a sanity check and a second clock.

Here are some examples of common problems I see in neuroimaging:
- Taking a tool that worked at one field strength and for one sort of acquisition and assuming it will work just as well at a different field strength or with a different acquisition, but with little or no explicit testing under the new circumstances.

- New and improved hardware, sequences or code that are still in their honeymoon phase, foisted into general use before rigorous testing. Only years later do people find a coding bug, or realize that widely used parameters cause problems that can be avoided with relative ease.

- Following others blindly. Following others is a great idea, as I shall suggest below, but you shouldn't assume they were paying full attention or were sufficiently expert to avoid a problem unless there is documented evidence to refer to. Maybe they got lucky and skirted an issue that you will run into.

And here's my final motivation. It's difficult enough for experienced people to determine when, and how, to use certain methods. Imagine if you were new to neuroimaging this year. Where on earth would you start? You might be easily beguiled by the shiny objects dangled in front of you. More teslas, more channels, shorter repetition times, higher spatial resolution.... If only we could use such simple measures to assess the likelihood of experimental catastrophe.


Ways to improve acquisition performance

I think there are three areas to focus on. Together they should identify, and permit resolution of, all but the most recalcitrant flaws in an acquisition.

1. Is it documented?

Without good documentation, most scientific software and devices are nigh on unusable. Good documentation educates experts as well as the inexperienced. But there's another role to consider: documenting for public consumption is one of the best ways yet devised for a developer to catch his errors. If you don't believe this to be true, you've never given a lecture or written a research paper! So, documentation should help developers catch problems very early, before they would have seen the light of day.

While we're talking documentation, black boxes are a bad idea in science. If it's a commercial product and the vendor doesn't tell us how it works, we need to open it up and figure it out. Otherwise we're conducting leaps of faith, not science.

2. How was it tested at the development stage?

Understandably, when scientists release a new method they want to present a good face to the world. It's their baby after all. When passing your creation to others to use, however, you need to inject a note of realism into your judgment and recognize that there is no perfectly beautiful creation. Test it a bit, see how it falls down. Assess how badly it hurts itself or things around it when it crashes through the metaphorical coffee table. Having done these tests, add a few explanatory notes and some test data to the documentation so that others can see where there might be holes still gaping, and so they can double-check the initial tests.

3. Has it been validated independently and thoroughly?

Today, the standard new methods pipeline can be represented in this highly detailed flow chart:

Developer  →  End user

Not so much a pipeline as quantum entanglement. This is a reeeeeally bad idea. It makes the end user the beta tester, independent tester and customer all in one. Like, when you're trying to complete a form online and you get one of those shibboleth messages, and you're like "What. The. Fuck! Did nobody bother to test this fucking form with Firefox on a Mac? Whaddayamean this site works best with Internet Explorer? I don't even own a PC, morons! How about you pay a high school student to test your fucking site before releasing it on the world?"

Okay, so when this happens I might not be quite that angry... Errr. Let's move on. After better documentation, independent validation is the single biggest area I think we need to see improvement in. And no, publishing a study with the new method does not count as validation. Generally, in science you are trying your hardest to get things to work properly, whereas during validation you are looking to see where and how things fail. There is a difference.


What to do now?

This is where you come in. Unless a significant fraction of end users take this stuff seriously, nothing will change. Maybe you're okay with that. If you're not okay with it, and you'd like more refined tools with which to explore the brain, let your suggestions flow. Do we need a committee? If so, should it be run through the OHBM as the COBIDAS has been? Or, can we form a coalition of the willing, a virtual committee that agrees on a basic structure and divides the work over the Internet? We have a lot of online tools at our disposal today.

I envisage some sort of merit badges for methods that have been properly documented, tested and then validated independently. There will necessarily be some subjectivity in determining when to assign a merit badge, but we're after better methods not perfect methods.

How might COBIDAcq work in practice? I think we would have to have some sort of formal procedure to initiate a COBIDAcq review. Also, it's harder to review a method without at least partial input from the developer, given that we might expect some of the documentation to come from them. In an ideal world, new method developers would eagerly seek COBIDAcq review, forwarding mountains of documentation and test data to expedite the next phase. Yeah, okay. Unrealistic. In the mean time, maybe we do things with as much democracy as we can muster: select for review the methods that are getting the most play in the literature.

One criticism I can envision runs along the line of "this will stifle innovation or prevent me from taking on a new method while I wait for you bozos to test it!" Not so. I'll draw a parallel with what the folks did for registered reports. Not all analyses have to be preregistered. If you don't know a priori what you'll do, you are still free to explore your data for interesting effects. So, if you choose to adopt a method outside of the scope of COBIDAcq, good luck with it! (Please still report your methods according to the COBIDAS report.) Maybe you will inadvertently provide some of the validation that we seek, in addition to discovering something fab about brains!

Nothing about this framework is designed to stop anyone, anywhere from doing precisely what they do now. The point of COBIDAcq is to create peer review of methods as early in their lifetime as possible, and to provide clear signaling that a method has been looked at in earnest by experts. Neuroscientists would then have another way to make decisions when selecting between methods.

Okay, that will do. I think the gist is clear. What say you, fMRI world?



Monitoring gradient cable temperature


While the gradient set is water-cooled, the gradient cables and gradient filters still rely upon air cooling in many scanner suites, such as mine. In the case of the gradient filters, the filter box on my Siemens Trio came with an opaque cover, which we replaced with clear plastic to allow easy inspection and temperature monitoring with an infrared (IR) thermometer:

The gradient filter box in the wall behind my Siemens Trio magnet. It's up at ceiling height, in the lowest possible stray magnetic field. The clear plastic cover is custom. The standard box is opaque white.


Siemens now has a smoke detector inside the gradient filter box, after at least one instance of the gradient filters disintegrating with excess heat. Still, a clear inspection panel is a handy thing to have.

The gradient cables between the filter box and the back of the magnet can also decay with use. If this happens, the load experienced by the gradient amplifier changes and this can affect gradient control fidelity. (More on this below.) The cables can be damaged by excess heat, and this damage leads to higher resistance which itself produces more heating. A classic positive feedback loop!


The Fluke 561 IR thermometer and a K type thermocouple, purchased separately.


Monitoring both the gradient filters and cables is as easy as purchasing an IR thermometer and some thermocouples. We have a Fluke model 561 IR thermometer (above), costing about $200, which has three nice features. Firstly, it can be used fairly safely inside the magnet room. The unit uses two AA batteries. There is almost no noticeable force on the unit until you take it right into the magnet bore. In well-trained hands it is perfectly safe to use around a 1.5 T or 3 T magnet. It will also perform flawlessly in the fringe field of a 3 T magnet.

The second feature is its main one: the IR sensor. A laser sight allows easy targeting. This permits quick surface thermometry of the cables, as shown in the photo below, and also of the gradient filters if you have a clear cover like I do. There is a surprising amount of temperature variation along the cables, I find, so having the ability to sweep along a cable can be useful.



The third feature is something you might well skip for simplicity, but I like it. The Fluke is compatible with K type thermocouples. They plug right into the top:


We have installed six thermocouples inside the gradient filter box, one per cable terminal. We used Kapton heat resistant tape to mount them. You can see the tape in the uppermost photo, as nearly horizontal brown bands on the white gradient cable sleeves. Monitoring is as simple as plugging in the desired thermocouple and pulling the trigger on the meter. The Fluke then displays the temperature of the thermocouple rather than of the IR sensor.

K-type thermocouples connected to the six cable terminals inside the filter box. The active ends, taped to the cables, can be seen in the uppermost photograph. We leave the thermocouples installed and simply tuck them out of the way when they are not in use.


Using gradient cable and filter thermometry

At the moment I only measure gradient cable temperatures when I'm running long diffusion scans and I want to be sure that I'm not breaking anything. But it would be very easy to incorporate these measurements into routine Facility QA. I would record the starting and ending temperatures of the thermocouples I have taped into place, for consistency. And of course I'd use a constant acquisition protocol; perhaps a diffusion imaging scan to increase the gradient duty cycle and really drive the system. (Right now my Facility QA consists of three ~6 min EPI runs, so only the readout gradient axis has a reasonably high duty cycle, while the other two channels aren't used enough to provoke much departure from room temperature.)
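If you wanted to make the record-keeping systematic, something as simple as the sketch below would do. To be clear, this is only an illustration I've dashed off for this post; the file name, channel labels and warning threshold are arbitrary choices, not anything my facility actually uses.

import csv, datetime

def log_cable_temps(readings, logfile="gradient_cable_temps.csv", warn_above=40.0):
    """readings: dict mapping thermocouple label -> temperature in deg C."""
    timestamp = datetime.datetime.now().isoformat(timespec="minutes")
    with open(logfile, "a", newline="") as f:
        writer = csv.writer(f)
        for label, temp in readings.items():
            # one row per thermocouple: time, channel, temperature
            writer.writerow([timestamp, label, temp])
            if temp > warn_above:
                print(f"WARNING: {label} is at {temp} C (threshold {warn_above} C)")

# Example usage after a QA EPI run (values invented for illustration):
# log_cable_temps({"X+": 24.5, "X-": 24.8, "Y+": 26.1, "Y-": 25.9, "Z+": 31.0, "Z-": 30.2})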

We had to have our gradient cables replaced once because of runaway resistances. For certain diffusion scans we could see cable temperatures over 60°C. But at the time we didn't have the thermocouples installed. We were only alerted to the possibility of a cable resistance/heating issue when our gradient control became unpredictable. We would occasionally see changing levels of distortion in EPI scans; sudden stretches or compressions unrelated to subject movement. (These aren't my data but there is a near identical example posted here.) Had we been monitoring the gradient cable temperatures weekly, we might well have seen a trend towards increasing cable temperatures before the intermittent distortion reported by users, and been in a position to alert the service engineer.

With any luck you will notice the gradient control issues as your first symptom that something is wrong. In the extreme, however, you may find that the gradient connectors decay under the extreme heat. (See the second and third photos in this earlier post on fires in MRI facilities.) By the time your filter connectors are turning to dust you will likely be experiencing occasional spiking events in EPI. It might be externally generated noise that is being conducted into the magnet room by virtue of insufficient RF filtering, or spikes might be generated at the hot connectors. Either way, it's a very good idea to catch the problem long before it gets to this stage!

In future, perhaps the scanner manufacturers will apply water cooling to the gradient cables, as they do already for the gradient and RF amplifiers, plus the gradient set itself, of course. The gradient filters may be at particular risk if, depending on their housing, there is limited airflow. On our unit the filters are designed to cool conductively via the colder equipment room air conditioning. The gradient cables in the magnet room rely upon the magnet room air, which in my facility is at 17°C. Even then, it is quite easy to get >40°C on the surface of the gradient cables with a 20 minute diffusion scan. It's worth looking into.

FMRI data modulators 3: Low frequency oscillations - part I



Low frequency oscillations (LFOs) may be one of the most important sources of signal variance for resting-state fMRI. Consider this quote from a recent paper by Tong & Frederick:
"we found that the effects of pLFOs [physiological LFOs] dominated many prominent ICA components, which suggests that, contrary to the popular belief that aliased cardiac and respiration signals are the main physiological noise source in BOLD fMRI, pLFOs may be the most influential physiological signals. Understanding and measuring these pLFOs are important for denoising and accurately modeling BOLD signals."

If true, it's strange that LFOs aren't higher on many lists of problems in fMRI. They seem to be an afterthought, if thought about at all. I suspect that nomenclature is partly responsible for the oversight. A lot of different processes end up in the bucket labeled "LFO." The term is used differently in different contexts, with the context most often defined by the methodology under consideration. Folks using laser Doppler flowmetry may be referring to something quite different than fMRI folks. Or not. Which rather makes my point. In this post I shall try to sort the contents of the LFO bucket, and in at least one later post, I shall dig more deeply into "systemic LFOs." These are the LFOs having a truly physiological origin, where the adjective is used according to its physiological definition:


The description I pulled up from the Google dictionary tells us the essential nature of systemic LFOs: at least some of them are likely to involve the blood gases. And I'll give you a clue to keep you interested. It's the CO₂ component that may end up being most relevant to us.


What exactly do we mean by low frequency oscillations anyway?

"Low frequency" generally refers to fluctuations in fMRI signal that arise, apparently spontaneously, with a frequency of around 0.1 Hz. The precise range of frequencies isn't of critical importance for this post, but it's common to find a bandwidth of 0.05 - 0.15 Hz under discussion in the LFO literature. I'll just say ~ 0.1 Hz and move on. I added "apparently spontaneously" as a caveat because some of mechanisms aren't all that spontaneous, it turns out.

For the purposes of this post we're talking about variations in BOLD signal intensity in a time series with a variation of ~ 0.1 Hz. There may be other brain processes that oscillate at low frequencies, such as electrical activity, but here I am specifically concerned with processes that can leave an imprint on a BOLD-contrasted time series. Thus, neurovascular coupling resulting in LFO is relevant, whereas low frequency brain electrical activity per se is not, because the associated magnetic fields (in the nanotesla range, implied from MEG) are far too small to matter.

Is LFO the lowest modulation of interest? No. There are physiological perturbations that arise at even lower frequencies. These are often termed very low frequency oscillations (VLFOs) because, well, we scientists are an imaginative bunch. These VLFOs generally happen below about 0.05 Hz. The biological processes that fluctuate once or twice a minute may well be related to the LFOs that are the focus here, but I am going to leave them for another day.


Categorizing LFOs:  How do they originate?

There is a lot of terminology in use, much of it confusing. After reading a few dozen papers on various aspects of LFOs, I decided I needed to sort things out in my own way. Different fields may use similar terms but may mean slightly different things by them. Generally, the nomenclature changes with the methodology under consideration. An LFO identified with transcranial Doppler ultrasound in a rat brain may not be the same as an LFO observed with optical imaging on a patient's exposed cortical surface during surgery. Attempting to map these observations directly onto LFOs observed in fMRI may therefore be misleading.

I finally decided on the four categories of LFO you find below. They are defined in an fMRI-centric way. My goal was to identify the irreducible parts, then try to figure out how different papers use varying nomenclature to discuss the specific mechanisms involved. Hopefully, this allowed me to separate processes in a useful manner from the perspective of an fMRI experiment, since much of the literature on physiological LFOs uses non-MRI methods. To help me relate the processes to fMRI specifically, I resorted to thought experiments. I will include a few in the footnotes so you can check my categorizations. Hopefully, if I have incorrectly characterized or omitted a process, it will be more apparent this way.

1. Instrumental limitations

These do not count as true biological LFOs according to my scheme. The most common way to produce variance around ~0.1 Hz in fMRI is through aliasing. We know that if we are acquiring a volume of EPI data every 2 seconds then we are below the Nyquist sampling frequency for normal human heart rates. Some fraction of the respiratory movements might also end up aliased into our frequency band of interest. By assuming an ideal acquisition method that acquires a volume of data not less than twice per heart beat, we begin to eliminate this source of LFO from our fMRI data. (SMS-EPI may permit sufficiently rapid sampling, depending on voxel size.) Which is why I think it is important to separate fundamentally biological processes from things that are fundamentally scanner-based. I contend that sampling rate is a scanner property, and it is only the interaction of the biology with an imperfect scanner that produces the LFO. Improvements in scanner design and/or pulse sequences will ameliorate these effects.
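To make the aliasing arithmetic concrete, here's a trivial sketch using an assumed heart rate of 66 beats per minute and TR = 2 s (numbers chosen purely for illustration):

f_cardiac = 1.1          # Hz, about 66 beats per minute (assumed)
TR = 2.0                 # s, one EPI volume every 2 seconds
f_s = 1.0 / TR           # sampling frequency = 0.5 Hz, so Nyquist = 0.25 Hz

# After sampling at f_s, the cardiac frequency folds down to:
f_alias = abs(f_cardiac - round(f_cardiac / f_s) * f_s)
print(f"A {f_cardiac} Hz cardiac rhythm aliases to {f_alias:.2f} Hz")   # ~0.10 Hz, i.e. into the LFO band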

An unstable patient table that deflects with a subject's breathing is clearly an instrumental limitation. A rock-solid patient bed eliminates mechanical deflections. Perturbation of B₀ from a subject's breathing is another instrumental limitation. There are a few potential solutions in principle. For example, we could use a field tracker that prevents modulation of the magnetic field over the head from chest motion. Or, if we had a pulse sequence other than EPI, with its low bandwidth in the phase encoding dimension, we could render respiration-induced modulations vanishingly small. (See Note 1.) The important point is that as scanner hardware and sequences are improved, we can expect to make significant advances in the amelioration of these pseudo-biological LFOs.

2. Cardiorespiratory mechanics

I apologize for the clunky term. Cardiopulmonary mechanics was another option. Not much better, huh? In this category are processes that derive from body mechanics; that is, the mechanical processes of physiology that originate outside of the brain. The two main sources are a pumping heart and a set of lungs oxygenating the blood. We seek the biological consequences in the brain that are produced by these oscillating organs. (I can't think of other body organs driving any pulsations but I await being enlightened in the comments section.) We have blood pressure waves and respiration-induced CSF pressure changes via the foramen magnum. These processes are independent of whether we are studying a person by fMRI or using any other method. See Note 2 for some thought experiments I used to derive this category.

The most important cardiorespiratory LFO I've seen in the literature is called the Mayer wave. The commonly accepted definition of a Mayer wave is an oscillation of arterial blood pressure about its resting mean: the pressure isn't constant from heart beat to heart beat. The fluctuations about the mean arterial BP occur with a frequency of ~ 0.1 Hz. Why the variation? It seems to be related to sympathetic nervous system activity. In lay terms, your "fight or flight" response isn't flat, but very slightly modulated.

The Mayer waves act at the speed of the arterial blood pressure wave. The effect on the BOLD signal happens as fast as the pressure wave passes through the vascular tree, which we know from a previous post can be estimated with the pulse transit time. At most it takes a few hundred milliseconds for the pressure wave to reach the toes from the aorta. We can expect differences of tens of milliseconds in arrival time across the brain, faster than the typical sampling rate of an fMRI experiment.

Can we measure it? The Mayer wave is a change in blood pressure, necessitating a good estimate of BP if we are to get a handle on it. We saw in an earlier post that measuring BP non-invasively in the scanner is non-trivial, however, so we shall have to leave isolation of Mayer waves to some future date. In the mean time, I am not unduly worried about Mayer waves as a major source of LFO because, as I shall claim below, there is likely a far more significant process afoot. 

I don't know enough about respiration-induced pulsation of CSF to estimate the importance of this mechanism at frequencies of ~ 0.1 Hz, except to say that any effects that do exist will be greatest around the brainstem, and will likely decrease the farther one gets from the foramen magnum. As with Mayer waves, I think it's safe to assume that we should worry about other mechanisms first, unless you are doing fMRI of brainstem structures, in which case you should hit the literature and keep this process top-of-mind.

3. Myogenic processes 

The third candidate LFO mechanism is vasomotion. Perturbations in the vascular tone - the tension in the smooth muscle of blood vessel walls - may vary over time. Some of the non-neural signaling mechanisms contributing to vasomotion are reviewed here. The effect is myogenic, meaning "in the muscle."

We assume that vasomotion occurs independently of the contents of the blood in the vessel. Many references also suggest that vasomotion occurs independently of nervous control. In other words, there would be some sort of local oscillatory signaling within the vessel wall that produces an idling rhythm. Additionally, however, vasomotion may be influenced by nervous system responses because the smooth muscles of the arterial walls are innervated. Indeed, this is how we get neurovascular coupling. Thus, some vasomotion might actually be responsible for the target signals in our resting state scans, as suggested very recently by Mateo et al. (See also this 1996 article from Mayhew et al.) So, for the purposes of this post, I shall consider vasomotion as a desirable property, at least for resting state fMRI, and leave the issue of any non-neural components of vasomotion for another day. As things stand, it would be nigh on impossible to separate, using current methods, the target vasomotion - that driven by neurovascular coupling - from any non-neural vasomotion that one might label as a contaminant.

4. Blood-borne agents

A fourth category of LFOs was suggested relatively recently. Mayer waves and vasomotion were observed long before fMRI came about. But it was the advent of resting state fMRI that seems to have precipitated the interest in this category. Blood constituents are not stationary. Instead, the concentration of blood gases - oxygen and carbon dioxide in particular - vary based on your rate and depth of breathing, your stress level, and so on. Anything traveling in the blood that either directly or indirectly mediates BOLD signal changes is therefore of concern, and is included in this category.

The spatial-temporal propagation of LFOs through the brain, arising from blood-borne agents, is naturally at the speed of the bulk blood flow. Whereas Mayer waves propagate through the brain with the velocity of the blood pressure wave, agents carried in the blood tend to move much more slowly. We usually use a mass displacement unit for cerebral blood flow (CBF), typically milliliters of blood per fixed mass of tissue per minute. But that's not very intuitive for this discussion. Instead, consider the average time taken for blood to transit the brain, from an internal carotid artery to a jugular vein. In normal people this journey takes 7-10 seconds. This is the timescale of relevance to LFOs produced by blood-borne agents.
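As a rough cross-check on that 7-10 second figure, the central volume principle relates the usual CBF units to a mean transit time. The CBV and CBF values below are generic textbook numbers I've assumed for illustration, not measurements from any study discussed here:

# Central volume principle: mean transit time = CBV / CBF
cbv = 4.5     # ml blood per 100 g tissue (assumed typical whole-brain value)
cbf = 55.0    # ml blood per 100 g tissue per minute (assumed typical value)

mtt_s = (cbv / cbf) * 60.0
print(f"Tissue mean transit time ~ {mtt_s:.1f} s")   # about 5 s
# The full artery-to-vein journey also includes the large vessels, consistent
# with the 7-10 s quoted above.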

The most important vasodilative agent carried in the blood is carbon dioxide. It is so important that I am dedicating the entire next post - part II - to it. I hadn't expected to be digging into CO₂ effects until later in this series, since I had anticipated that all the main LFO effects would be vascular, with no direct overlap to respiratory effects. Live and learn. It's a timely reminder of just how complex and interwoven are these physiologic confounds.


Summing up the categories

Okay, to summarize, we have instrumental limitations, which could be eliminated in principle, then three categories of LFO arising out of a subject's physiology. The latter three categories can be expected to occur regardless of the particular MRI scanner you use. These physiological mechanisms arise spontaneously; there is no need to evoke them. Thus they are likely ubiquitous in both resting and task fMRI experiments.

The pulsatile effects of cardiorespiratory mechanics don't seem to be amenable to independent measurement at the present time. We can possibly infer them from the fMRI data, but then we are forced to deal with the consequences of aliasing and any other instrumental limitations that produce signal variance derived from cardiac or lung motion, manifest via different pathways.

We also don't seem to have a way to separate in principle any non-neural vasomotion from that which may be driven by neurovascular coupling. Multi-modal, invasive measurements in animals, such as performed by Mateo et al., may be the only way to discriminate these processes.

That leaves blood-borne agents. Changes in oxygen tension may be important since, for a fixed metabolic rate of oxygen consumption, any process that alters the supply of oxygen in arterial blood necessarily produces a concomitant change in deoxyhemoglobin in venous blood. I am still investigating the potential importance of oxygen tension, but based on several converging lines of evidence, it appears that fluctuations in arterial CO₂ are the far bigger concern.

Coming up in Part II: Systemic LFOs arising from changes in arterial CO₂ (we think).


__________________________


Notes:

1. If you don't like my field tracker idea, try this out instead. Imagine we have an fMRI scanner that operates at a main magnetic field of 100 microtesla (μT). A 3 ppm field shift at 3 T equates to nearly 400 Hz; a staggeringly vast frequency shift that would cause horrendous distortions and translations in EPI. But a 3 ppm shift at B₀ = 100 μT corresponds to a frequency of just over 0.01 Hz, against a typical linewidth of ~20 Hz. The frequency shift caused by chest movement is negligible at this low field. Thus, an ultralow field MRI scanner is robust against the modulation of B₀ from chest movements. The corollary? High field, whole body scanners exhibit enhanced sensitivity to chest movement. (3 ppm at 7 T is a frequency of almost 1 kHz. Ouch.)
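For the record, here's a quick numerical check of the frequency shifts quoted in this note (my own back-of-the-envelope sketch):

GAMMA = 42.577e6    # proton gyromagnetic ratio, Hz per tesla

for B0 in (3.0, 7.0, 100e-6):        # field strengths in tesla
    f0 = GAMMA * B0                   # Larmor frequency, Hz
    shift = 3e-6 * f0                 # a 3 ppm shift at this field, Hz
    print(f"B0 = {B0:g} T: 3 ppm = {shift:.3g} Hz")
# 3 T    -> ~383 Hz   ("nearly 400 Hz")
# 7 T    -> ~894 Hz   ("almost 1 kHz")
# 100 uT -> ~0.013 Hz ("just over 0.01 Hz")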

2. Imagine we stopped the subject's heart and chest motions and instead replaced the biological functions of heart and lungs with a machine that scrubbed CO₂ and oxygenated the blood before recirculating it through the arteries. It does this in smooth, continuous fashion, without pulsations of any kind. If the machine delivers oxygenated blood to the brain at the same effective rate as the brain needs, all should be okay and the brain should continue to behave normally. But what would happen to the cardiorespiratory mechanical effects? If the machine is ideal, if it doesn't pulse at all, and there are no moving parts to produce any sort of pressure wave through the body, we would have successfully eliminated two sources of LFO.

An alternative way to think about LFOs arising from cardio-respiratory mechanics is to note that the pulsations are independent of the substances being manipulated. Pretending for a moment that the biology wouldn't mind, the mechanical effects across the brain would be the same if the heart was pumping olive oil instead of blood and the lungs were inspiring and expiring pure helium instead of 20% oxygen. The respiratory and cardiac mechanical processes would continue unabated, as would any LFOs produced in our fMRI data, because they arise from the pulsations inherent in the plumbing.


FMRI data modulators 3: Low frequency oscillations - part II


In the previous post, I laid out four broad categories of low frequency oscillation (LFO) that arise in fMRI data. The first three categories are mentioned quite often in fMRI literature, with aliasing of respiratory and cardiac pulsations being the best known of all “physiological noise” components. In this post, I am going to dig into the fourth category: blood-borne agents. Specifically, I want to review the evidence and investigate the possibility that non-stationary arterial CO₂ might be producing an LFO that is at least as important as aliased mechanical effects. At first blush, this is unsurprising. We all claim to know CO₂ is a potent vasodilator, so we can think of CO₂ in blood as a sort of changing contrast agent that perturbs the arterial diameter – producing changes in cerebral blood volume - whenever the arterial CO₂ concentration departs from steady state.

Why would arterial CO₂ fluctuate? Why isn't it constant? Simply put, we don't breathe perfectly uniformly. If you monitor your own breathing you'll notice all sorts of pauses and changes of pace. Much of it depends on what you're doing or thinking about, which of course gets right to the heart of the potential for fluctuations in CO₂ to be a confound for fMRI.

I had hoped to begin this post with a review of CO₂ transport in the blood, and from there to relay what I've found on the biochemical mechanism(s) underlying vasodilation caused by CO₂. But after several weeks of searching and background reading, I still don't have sufficient understanding of the biochemistry to give you a concise overview. The CO₂ transport mechanisms are quite well understood, it seems. But how a change in one or more components of CO₂ in arterial blood produces changes in the arterial smooth muscle wall, that is a more complicated story. For the purposes of this post, then, we shall have to content ourselves with the idea that CO₂ is, indeed, a potent vasodilator. The detailed biochemistry will have to wait for a later post. For those of you who simply can't wait, I suggest you read the review articles given in Note 1. They aren't aimed at an fMRI audience, so unless you are a biochemist or physiologist, you may not get the sort of intuitive understanding that I have been searching for.


First indications that arterial CO₂ might be an important source of LFO in fMRI data

The effects of respiration on BOLD data were recognized in the mid-nineties as an important consideration for fMRI experiments. By the late nineties, several groups began to investigate the effects of intentionally held breaths on BOLD signal dynamics, using as their basis the known vasodilatory effect of arterial CO₂. Other groups (e.g. Mitra et al., 1997) observed low frequency fluctuations in BOLD data that suggested a vasomotor origin, or found fluctuations in cerebral blood flow (CBF) measured by non-MR means (e.g. Obrig et al., 2000). It wasn't until 2004, however, that Wise et al. showed definitively how slow variations of arterial CO₂ concentration were related to, and likely driving, low frequency variations in BOLD time series data:
"PETCO₂-related BOLD signal fluctuations showed regional differences across the grey matter, suggesting variability of the responsiveness to carbon dioxide at rest."
“Significant PETCO₂-correlated fluctuations in [middle cerebral artery] MCA blood velocity were observed with a lag of 6.3 +/- 1.2 s (mean +/- standard error) with respect to PETCO₂ changes.”

The spatial-temporal dynamics observed by Wise et al. certainly fit a blood-borne agent. That is, we should expect lag variations dependent on the total arterial distance between the heart and the tissue of interest; in their case, the MCA.

A note about nomenclature, and an important assumption. Wise et al., and many others since, used the peak partial pressure of CO₂, a measure of concentration, that is known as the “end tidal” CO₂ - PETCO₂ - in the expired breath as an estimate of the partial pressure of CO₂ in the arterial blood, the PaCO₂. This assumption is based on the lung gases and arterial blood gases being in equilibrium. Clearly, there can be regional differences in blood gases all around the body, but to a first approximation we assume that PETCO₂ reflects PaCO₂.


How do systemic LFOs relate to BOLD signal changes in brain?

In 2000, Obrig et al. used functional near-infrared spectroscopy (fNIRS), comprising a single light source and detector pair placed on the occipital lobe, over visual cortex, to show that an intrinsic LFO of oxyhemoglobin could be detected with or without visual stimuli. (See Note 2 for a brief introduction to NIRS principles.) The LFO was attenuated by hypercapnia when subjects breathed 5% CO₂ in air, a result that matched earlier findings by Biswal et al. in 1997. Since the largest fraction of oxyhemoglobin is arterial, the reduction of LFO intensity when inhaling CO₂ suggests a connection between LFOs and arterial CO₂ concentration. Vasodilation is expected to increase CBV towards its ceiling and reduce the capacity for fluctuations. Intriguingly, Obrig et al. also reported that LFO could be detected in signals originating from deoxyhemoglobin at a magnitude about one tenth of that in oxyhemoglobin. These fluctuations apparently preceded the LFO in oxyhemoglobin by 2 seconds, although I would now interpret the deoxy- fluctuation as lagging the oxyhemoglobin by 9-10 sec instead. (Justification for reinterpretation of the Obrig result will become clear later.) The important point is that their data showed LFOs in signals from species found predominantly in arterial as well as venous compartments.

In 2010, Tong & Frederick published the first in a series of studies investigating the spatial and temporal characteristics of LFOs in fMRI data. Functional NIRS was recorded simultaneously with resting state fMRI. The time course of NIRS obtained from the right prefrontal cortex was used as a reference signal to explore the spatial-temporal relationship between NIRS and the whole-brain fMRI data on a voxel-wise basis. Two forms of NIRS data were used in separate analyses. Signal from oxyhemoglobin is expected to be positively correlated with fMRI signal, being dominated by changes in CBV and CBF. Signal from deoxyhemoglobin arises mostly in venous blood, and its concentration is expected to be inversely correlated with the fMRI data, assuming the standard BOLD model of activation. A NIRS time series was resampled and then compared to the fMRI data using shifts of the NIRS data over a range -7.2 to +7.2 seconds, with shift increments of half the TR for the fMRI, i.e. 0.72 sec. Correlations with a positive time shift indicate that an event in the fMRI precedes detection in NIRS data, while negative shifts indicate a lag in the fMRI. Here is an example from one subject, using the oxyhemoglobin signal from NIRS, with a small red circle depicting the approximate location of the NIRS probe being used to measure the reference signal:

Figure 4 from Tong & Frederick, 2010. (Click to enlarge.)

Two features are immediately apparent: there are widespread spatial correlations between NIRS obtained from a single location (at the red circle) to the fMRI detected over the entire brain, and these spatial correlations change with the time lag. It would have been eminently reasonable to expect correlations only at the spatial location sampled by NIRS; perhaps 1-2 cm of cortex. Yet we see correlations throughout the brain and a changing dependence on lag. Take, for example, the bright yellow signal in the superior sagittal sinus (SSS) seen in the left panel at time 0.0 s (green box). Staying with the sagittal view of the left panels, look at what happens to the SSS signal at successively later times. The bright yellow region seems to “flow” downward, from parietal to occipital, until at time 4.32 s there is just a small yellow dot remaining at the occiput. If you have the patience, you can divine similar flow patterns between other time windows for other parts of the brain, as described in the paper:

“From the sagittal view of the z-maps, the BOLD signal wave starts to appear at locations near the callosomarginal, frontopolar and parietooccipital arteries [-5.04 s]. As time progresses, the wave becomes widespread in the gray matter [e.g. -2.16 s], as it passes through capillary beds and then retreats towards the venous systems through several paths, including: 1) the superior cerebral vein to the superior sagittal sinus (also visible from the coronal view) [e.g. 1.44 s]; 2) the inferior sagittal sinus combining internal cerebral vein to the straight sinus; 3) through the transverse sinus (visible in the coronal view); 4) through the anterior and posterior spinal veins. The path the wave follows through the brain strongly resembles that of the cerebral vasculature.”

That last sentence is crucial. The period -5.04 s to +4.32 s, approximately 9 seconds, compares well with the time taken for full passage of blood through the brain. A blood-borne origin is implied. You can even see deep brain correlations occurring again from +5.04 s to +7.2 s in the figure above, while the spatial distribution at +7.2 s resembles that at -4.32 s. Beyond +5.04 s we may be observing correlations of the current LFO period as sampled by NIRS, with the subsequent LFO sampled by the fMRI, since there are usually patterns in how one breathes.
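For concreteness, the shifted-correlation analysis described above can be sketched as follows. This is my own construction, not the authors' code: I assume the NIRS reference has already been resampled onto the fMRI time grid, and I use whole-TR lags rather than the half-TR steps used in the paper.

import numpy as np

def lag_correlation_maps(bold, reference, lags):
    """bold: (n_voxels, n_vols) array; reference: (n_vols,) array; lags: integer shifts in TRs."""
    n_voxels, n_vols = bold.shape
    maps = {}
    for lag in lags:
        if lag >= 0:
            ref_seg, bold_seg = reference[:n_vols - lag], bold[:, lag:]
        else:
            ref_seg, bold_seg = reference[-lag:], bold[:, :n_vols + lag]
        # Pearson correlation of every voxel against the shifted reference
        ref_z = (ref_seg - ref_seg.mean()) / ref_seg.std()
        bold_z = (bold_seg - bold_seg.mean(axis=1, keepdims=True)) / bold_seg.std(axis=1, keepdims=True)
        maps[lag] = (bold_z * ref_z).mean(axis=1)    # one correlation map per lag
    return maps

# e.g. maps = lag_correlation_maps(bold_data, nirs_resampled, lags=range(-5, 6))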

With the NIRS setup over frontal lobe, Tong, Bergethon & Frederick (2011) found that breath holds causing brief hypercapnia produced the same sorts of spatially varying optimal lags with a NIRS signal as had resting state fMRI data, supporting the assignment of a blood-borne agent as the cause. So far so good.

The McLean group then did something inspired: they changed the position of the NIRS sensor to the periphery. This is sound logic if the LFO is systemic – literally, throughout the body – as they suspected it was. So, in their next experiment they added further NIRS sensors to their setup so that they could record from fingers and/or toes at the same time (Tong et al. 2012). This is how NIRS from a finger and toe compare:

Figure 1 from Tong et al., 2012. (Click to enlarge.)

There is a striking similarity in the time courses, except that the signal at the toe lags that detected at the finger. The differing hemodynamic delays in the periphery are nicely exemplified by a comparison of the lags between a finger and a toe versus between the two big toes:
“The LFO signal reaches the [left big] toe 2.16–4 s later than the finger (time delays: Tdelay = 3.07 ± 0.81 s). For three participants, NIRS data was also collected at the right big toe; the LFOs from the two toes had maximal correlations (rmax = 0.85 ± 0.09) with small time shifts between sides (Tdelay = -0.02 ± 0.57 s).”

The greater distance from the heart to toes than from the heart to fingers explains these results nicely. Naturally, the two big toes should exhibit comparable vascular transit times. This is exceedingly strong evidence of a systemic, blood borne perturbation of arterial blood volume.

From comparisons between sites in the periphery using NIRS alone, Tong et al. moved to comparing NIRS recorded from a fingertip to fMRI recorded simultaneously from the brain. These results were consistent with their earlier correlations produced with NIRS on the forehead:
“First, the voxels, which are highly correlated with NIRS data, are widely and symmetrically distributed throughout the brain, with the highest correlation appearing in the draining veins, although there is also significant correlation throughout the gray matter. This global signal confirms that a significant portion of the LFO signal in the brain is related to systemic blood circulation variations. Second, the dynamic pattern reflects the variable arrival times of the LFOs at different parts of the brain, just as it arrives at the finger and the toe with different time delays. This latter observation supports the contention that the LFO signal directly reflects bulk blood flow and confirms our previous, brain-only measurements.”

We know that aliased cardiac and respiratory frequencies are a major problem for fMRI with slow sampling, i.e. long TR. Here, however, the reference time course is from NIRS sampled well above the Nyquist frequencies of both processes, allowing Tong et al. to make an important inference:
“Another observation from the present results is that because the LFO used in [regressor interpolation at progressive time delays] RIPTiDe is derived by applying a bandpass filter (0.01 to 0.15 Hz) to the NIRS Δ[tHb], which has been sampled at a relatively high frequency (12.5 Hz), the heartbeat (~1 Hz) and respiratory (~0.2 Hz) signals have been fully sampled; therefore there is no aliasing of these signals into the LFO signal. Consequently, the LFOs we identified in the periphery, and those we identified in the brain with BOLD fMRI, are independent of the fluctuations from the cardiac pulsation (measured by pulse oximeter) and respiration (measured by respiration belt), which provides strong counterevidence to the contention that the non-neuronal LFO in BOLD is mainly the aliased signal from cardiac pulsation and respiration.”

It is striking to me that some amount of LFO is systemic. Tong et al. didn’t (dare?) venture a candidate blood-borne agent in their 2012 study, although they must have had strong suspicions. But, as we shall see momentarily, by 2014 they were suggesting arterial CO₂ as a good explanation. Let’s assume it is arterial CO₂, although the implication is the same whatever the agent: there is a mechanism for producing vasodilation in the walls of peripheral arteries, just as there is in cerebral arteries. Is that surprising? It isn’t something I would have assumed to be necessarily the case, but I’m not a physiologist. The brain, muscles and dermis could all have evolved quite different sensitivity to arterial CO₂, if there were unique implications for local metabolism. That seems not to be the case. Instead, there is a generalized sensitivity to arterial CO₂ that produces vasodilation. And one consequence of this generalized response is a systemic LFO that can be detected anywhere in the body, including in the brain.


Doing away with the extra hardware

Recording NIRS requires custom hardware. (See Note 3.) For their next trick, Tong & Frederick managed to do away with the need for the NIRS hardware altogether. In 2014, they presented a data-driven version of their RIPTiDe method for mapping lags:
“In this study, we applied a new data-driven method to resting state BOLD fMRI data to dynamically map blood circulation in the brain. The regressors used at each time point to track blood flow were derived from the BOLD signals themselves using a recursive procedure. Because this analytical method is based on fMRI data alone (either task or resting state), it can be performed independently from the functional analyses and therefore does not interfere with the fMRI results. Furthermore, it offers additional information about cerebral blood flow simultaneously recorded with the functional study.”

A bandwidth of 0.05 – 0.2 Hz was investigated in resting state data obtained at a TR of 400 ms (using MB-EPI) to ensure sampling of mechanical respiratory and cardiac fluctuations above the Nyquist frequency. Large blood vessels clear of brain tissue were identified in the raw data – for example, the carotid arteries or jugular veins passing through an inferior axial slice, or the superior sagittal sinus in a sagittal slice – and these vessels were used to define a seed region. The time course from a single voxel in a large vessel is designated the reference regressor: the regressor with zero lag. After voxelwise cross correlations with the reference regressor, a new time series regressor is determined. The time series selected has the highest cross correlation with the original (zero lag) regressor at a temporal offset of one TR. This “moves” the regressor through time by one TR, tracking the propagation of the fluctuations inherent in the original time series. The spatial origins of the new regressor don’t matter. The new regressor comprises the time series of all voxels that obey an appropriate threshold criterion. A second cross correlation is then performed, searching for voxels that give the highest correlation with the second regressor time series, but at a further offset of one TR (which is now two TRs away from the reference regressor). The process repeats until the number of voxels selected as the strongest cross correlation, offset by one TR, is less than some predefined number.

The iterative procedure can be applied in reverse; that is, the temporal offset between the reference regressor and the next time series is set to be –TR. A negative lag simply means that the cross correlation will be maximized for fluctuations in the search time series that precede fluctuations in the reference time series. Thus, one may iterate forwards (positive TR lags) or backwards (negative TR lags) in time, relative to the start point. Refinement of the seed selection can also be made based on the results of a first pass through the data. One can even use the time series corresponding to the highest number of voxels obtained in a first pass as the optimal seed regressor for a second analysis; a form of signal averaging. In part b of the figure below, a blue circle indicates that the number of voxels sharing fluctuations with a single voxel seed is quite small; only 200-300 voxels. A black circle indicates the set of voxels to be used in a second, optimized analysis. There is a set of 5000 voxels that have common fluctuations in the band 0.05 – 0.2 Hz.

Whether a single voxel seed or some optimized, averaged seed is used, once a full set of regressor waveforms has been produced recursively, the entire set is used in a GLM to produce z maps of the voxel locations for each lag. An example is shown in part c of this figure:

Figure 2 from Tong & Frederick, 2014. (Click to enlarge.)


Tong & Frederick tested their method in a variety of ways. The results were reassuringly robust to seed selection. This makes sense for a biological process – blood flow – that is evolving smoothly in time.

The dynamic maps produced by the data-driven method resemble those produced in earlier work using a NIRS reference signal:
“The LFOs are “piped” into the brain though big arteries (e.g., internal carotid artery) with no phase shift. They then follow different paths (arterioles, capillaries, etc.) as branches of the cerebral vasculature diverge. It is expected that each signal would evolve independently as it travels along its own path. The observation that some of them have evolved in a similar way, and at a similar pace, is probably due to the uniformity in the fundamental structures of the cerebral blood system, likely reflecting the self-invariant properties of fractal structures found throughout biological systems.”
A delay map - figure below - resembles cerebral circulation, as in earlier work using a NIRS reference. (There are also two compelling videos in the Supplemental Information to the paper.)

Figure 6 from Tong & Frederick, 2014. (Click to enlarge.)



Converging lines of evidence for arterial CO₂ as a cause of systemic LFO

Lag-based analyses of fMRI data provide good evidence that a blood-borne agent is inducing systemic fluctuations at a frequency of ~0.1 Hz. Rhythmic dilation and constriction of pial arterioles at 0.1 Hz has been observed propagating on the exposed cortical surface of a patient undergoing surgery (Rayshubskiy et al., 2014). This is further circumstantial evidence in support of a blood-borne agent of some kind. But mechanisms to explain the source of these LFOs remain speculative. What other evidence is there that variation in PaCO₂, specifically, produces a strong systemic LFO in fMRI data?

Adding to the circumstantial case is the recent work by Power et al. (2017). They were motivated to investigate the empirical properties of the mean global signal in resting state fMRI data, finding variance attributable separately to head motion and hardware artifacts, as well as to the physiological consequences of respiratory patterns. In the absence of large head motion and hardware artifacts, they conclude that most of the remaining variance in the mean global signal is due to respiratory fluctuations, that is, to variations in PaCO₂.

Having observed that common measures of head motion such as framewise displacement (FD) can reflect physiological (i.e. apparent head motion) as well as real head motion effects, Byrge & Kennedy (2017) investigated the spatial-temporal nature of artifacts following changes revealed in the FD trace. They term this the lagged BOLD structure:
“Our general approach is to ask whether there is any common structure in the BOLD epochs immediately following all similar instances of the nuisance signal – specifically, following all framewise displacements within a particular range of values – using a construction similar to a peri-event time histogram. If there is any systematic covariance shared by BOLD epochs that follow similar displacements (within and/or across subjects), such a pattern reflects residual displacement-linked noise that should not be present in a perfect cleanup – regardless of the underlying sources of that noise.”

 “Using this method, we find a characteristic pattern of structured BOLD artifact following even extremely small framewise displacements, including those that fall well within typical standards for data inclusion. These systematic FD-linked patterns of noise persist for temporally extended epochs – on the order of 20–30s – following an initial displacement, with the magnitude of signal changes varying systematically according to the initial magnitude of displacement.”
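In code, the construction they describe might look something like the sketch below; the inputs, window length and names are my assumptions rather than anything from the paper:

import numpy as np

def fd_triggered_average(global_signal, fd, fd_range, window_vols=40):
    """global_signal, fd: (n_vols,) arrays; fd_range: (low, high) displacement bin in mm."""
    low, high = fd_range
    events = np.where((fd >= low) & (fd < high))[0]
    # collect the BOLD epoch following each qualifying displacement
    epochs = [global_signal[t:t + window_vols]
              for t in events if t + window_vols <= len(global_signal)]
    # average across events: the lagged BOLD structure for this FD bin
    return np.mean(epochs, axis=0) if epochs else None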

When the FD is large – perhaps real head motion or an apparent head motion from a deep breath – the BOLD signal attains a negative maximum amplitude some 10-14 sec after the event. But when the FD is small – shallow breaths, perhaps – the BOLD signal produces a positive maximum amplitude at a similar latency. Moreover, the biphasic nature of the BOLD responses in each case also suggests differing mechanisms for differing features. In the case of large FD, there is an initial positive maximum in the BOLD response at a latency of 2-3 sec. But for small FD, the initial response is negative. Figure 1 from their paper is reproduced below. For expediency, you can focus on part (a). The rest of the figure shows that the lagged BOLD structure is observed consistently from two different sites (the rows), and remains in the data after standard preprocessing steps aimed at removing physiological artifacts (the columns). Note the opposite phases for the largest FD (bright yellow) and smallest FD ranges (dark blue):


Figure 1 from Byrge & Kennedy, 2017. (Click to enlarge.)

“The lagged BOLD patterns associated with respiration are not the same as the lagged patterns associated with displacements [i.e. head motion], but their similar temporal and parametric properties are suggestive of the possibility that respiratory mechanisms may underlie some of the displacement-linked lagged structure in the BOLD signal.”

There is another subtle result here. Compare, for example, the darkest blue trace – FD between zero and 0.05 mm – to the bright green trace – FD between 0.35 – 0.4 mm in part (a), above. Counter-intuitively, the smaller FDs produce larger subsequent fluctuations in BOLD than some framewise displacements having considerably greater magnitude! So much for eliminating the head motion! If you do that, you reveal another perturbation underneath.

There are several other intriguing results in the Byrge & Kennedy paper. For example, they assess the spatial distribution of the changes depicted in their Figure 1, finding that the structure is largely global. They also find relationships between the lagged BOLD structure and standard models of respiratory effects, especially respiratory volume per unit time (RVT), but minimal association with cardiac measures. Their conclusion is that there are large fluctuations in resting state BOLD data that can be attributed to respiratory effects, and changes in arterial CO₂ are the most plausible explanation. If you have the time, I suggest reading the paper in its entirety. It is extremely thorough and well-written.

Right, it’s time for me to close my case for the prosecution: a contention that variation in PaCO₂ is the proximal cause of systemic LFOs. I want to move on to the consequences of systemic LFOs, however they come about.


How do systemic LFOs affect resting functional connectivity?

A systemic LFO at around 0.1 Hz is a serious potential confound for resting state fMRI, given the common practice of low-pass filtering fMRI data for subsequent analysis. It is widely believed that BOLD fluctuations below about 0.15 Hz represent ongoing brain activity. How much overlap might exist between sLFO and intrinsic brain activity as represented in BOLD data?

In their first investigation into functional connectivity, Tong et al. (2013) used NIRS recorded in the periphery – fingers and toes – to assess the contribution of systemic LFOs in the band 0.01-0.15 Hz to brain networks derived from independent component analysis (ICA). They found that spatial maps of sLFO-correlated BOLD signals tended to overlap the maps of several typical resting-state networks that are often reported using ICA. A subsequent study (Tong & Frederick, 2014) using fMRI data with TR = 400 ms, to avoid aliasing of mechanical respiratory perturbations, found much the same thing. The mechanical respiratory effects could be separated from the systemic LFOs, and the sLFO dominated several prominent independent components.

The earlier studies showed that spatial patterns of sLFO were coincident with resting-state networks commonly reported in the literature. Just how coincidental were those findings? In 2015, Tong et al. inverted the process and set out to determine whether they could establish apparent connectivity in the brain using synthetic time series data having the spatial and temporal properties of systemic LFOs. First, they produced from each subject’s resting state fMRI data a lag map of correlations between a voxel’s time course and NIRS recorded in a finger. This 3D map represents the lag with the strongest correlation between the NIRS signal and each voxel’s BOLD signal. This map was applied to a synthetic BOLD “signal,” comprising sinusoids and white noise. At each voxel, the synthetic time series was scaled by the local signal intensity, and shifted in time using the real lag value produced for the 3D lag map. The spatial-temporal properties of the final synthetic time series thus follow the basic intensity and delay structures of real fMRI data, but are otherwise entirely arbitrary. An example lag map produced using the seed-based regression method is shown below, with the NIRS-based lag map in the inset. (The seed-based and NIRS-based methods generated similar results.) In parts B and C are exemplar synthetic time courses for three voxels with different lags:


Figure 3 from Tong et al., 2015. (Click to enlarge.)

Note that the synthetic data comprises sinusoids band-limited to 0.01-0.2 Hz, the same frequency range as the real data. The lag used at each voxel is derived from a biological measurement, that is, from correlations between real BOLD data and NIRS in the subject's finger (or, alternatively, from a seed-based regression). In this respect the lags constitute biological information, and that information is encoded into (applied to) the synthetic data. But the only way any neuronal relationships can end up in the synthetic data is if the lags happen to contain neurogenic information in the first place. In the case of a NIRS signal measured in the finger to determine the lag maps, we might concede an autonomic nervous system (ANS) response, perhaps. This is unlikely, however, because the temporal characteristics of the systemic LFOs imply a blood-borne agent, whereas nervous system control over vascular tone ought to be more efficient than waiting for blood to arrive. (See Note 4.) Still, let's allow the remote possibility of a neurogenic basis for the lags and consider the implications. If we eventually learn that the systemic LFOs derive from the ANS and not arterial CO₂ (or some other blood-borne agent), we will then have to consider a highly prominent, concerted ANS response obscuring whatever subtle, regional neural activity we might want to see hiding in the resting-state fMRI data.
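To make the construction concrete, here is a minimal sketch in Python (my own illustration, not the authors' code) of how a lag map plus one band-limited random waveform can be turned into synthetic data with realistic delay structure. The names (lag_map, mean_img) and the amplitude scalings are assumptions for illustration only.

```python
import numpy as np

def synthetic_lagged_data(lag_map, mean_img, n_vols=300, tr=2.0,
                          band=(0.01, 0.2), n_sines=20, seed=0):
    """Synthetic 4D data whose only temporal structure is one band-limited
    waveform, delayed at each voxel by lag_map (seconds), scaled by mean_img."""
    rng = np.random.default_rng(seed)

    # Dense time axis, padded so that positive and negative lags can be applied.
    pad = float(np.abs(lag_map).max()) + tr
    t_hi = np.arange(-pad, n_vols * tr + pad, 0.1)

    # One shared waveform: a sum of random sinusoids within the sLFO band.
    freqs = rng.uniform(band[0], band[1], size=n_sines)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_sines)
    waveform = np.sum([np.sin(2 * np.pi * f * t_hi + p)
                       for f, p in zip(freqs, phases)], axis=0)
    waveform /= waveform.std()

    t_samp = np.arange(n_vols) * tr
    data = np.zeros(lag_map.shape + (n_vols,))
    for idx in np.ndindex(lag_map.shape):
        shifted = np.interp(t_samp - lag_map[idx], t_hi, waveform)  # later lag, later arrival
        noise = rng.standard_normal(n_vols)                         # independent white noise
        data[idx] = mean_img[idx] * (1.0 + 0.01 * shifted + 0.005 * noise)
    return data
```

Feeding data like these into ICA or a seed-based correlation analysis, as described next, then tests how much apparent "connectivity" the delay structure alone can generate.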

Returning to the 2015 paper, the next step was to run group ICA or seed-based correlation analysis, two common approaches to obtain functional connectivity estimates, and assess any false "networks" produced from the synthetic data. These results were compared directly to the same ICA method applied to real fMRI data. In the next figure are eight groups of independent components obtained from spatial correlation with a literature template for resting-state networks. ICs from real data are in the left column; the middle and right columns are ICs derived from the synthetic data created using the seed-based regression and NIRS reference signals, respectively:


Figure 5 from Tong et al., 2015. (Click to enlarge.)

The good news is that the spatial correlation coefficients (see the numbers above the axial view of each IC) are lower in both sets of synthetic data than for real data. The bad news is that one can clearly recognize “networks” arising out of the synthetic data. (The red boxes highlight two instances of networks that couldn’t be isolated.)

There are also clear similarities between the default mode network returned from a seed-based analysis of real data (left) and that from synthetic data (right):


Figure 6 from Tong et al., 2015. (Click to enlarge.)

Not perfect correspondence, but remarkable consistency. Whether you choose to focus on the similarities or the dissimilarities, we can't escape an obvious conclusion: systemic LFOs can produce patterns that look like resting-state networks.


How to deal with systemic LFOs in fMRI

The initial approach to de-noising with RIPTiDe, by Frederick, Nickerson & Tong in 2012, used a reference NIRS waveform recorded from the forehead. In a subsequent study, RIPTiDe de-noising was applied based on the NIRS signal from subjects' fingers (Hocke et al. 2016). This reduces the chance of accidentally capturing neural activity in the NIRS waveform, and simplifies the setup. The NIRS-based method explained twice as much variance in resting-state fMRI data as de-noising methods requiring models of respiration or cardiac response functions. Furthermore, only a small, statistically insignificant correlation was found between NIRS and a respiratory variation model. Most signal power was not shared between NIRS and respiratory or cardiac variation models. These results suggest a different origin for sLFO signals than is captured by conventional respiratory belt or pulse oximetry traces, even though some respiratory models are designed to account for arterial CO₂ fluctuations. Whether multiple de-noising methods should be nested, and in what order, is a subject for a later date.

The use of a reference NIRS signal is still a major limitation, especially for data that are already sitting in repositories for which there may be no peripheral physiological data. The data-driven approach, using seeds developed from the fMRI data themselves, overcomes this. (One can still use a NIRS signal as the initial seed waveform, but it isn't required.) There is more work to do, but even if the original seed is defined in such a way as to capture neurogenic signals accidentally (or intentionally, if you opt for a gray matter seed), the smooth evolution of the regression procedure over several seconds, followed if desired by the definition of an optimal seed (which will likely represent large draining veins) and a second recursive procedure, should ensure that the final set of regressors doesn't contain neural activity. So, if you want my advice, I would urge you to read up on the latest developments on Blaise Frederick's github, and start tinkering.


Conclusions

Circumstantial evidence from several groups suggests that non-stationary arterial CO₂ is responsible for a systemic LFO in fMRI data. The overlap of this systemic LFO with neurogenic fluctuations of interest in resting-state fMRI suggests that a major physiologic “noise” component is being retained in most functional connectivity studies. Some studies may be partly removing sLFO through global signal regression (GSR), but given the spatial-temporal properties of the sLFO, GSR alone is unlikely to clean the data as well as a lag-based method. And there are statistical arguments against GSR anyway. As a compromise, you might consider dynamic GSR, which uses the lag-based properties to model propagation of LFO through the brain with a voxel-specific time delay prior to regression.

RapidTiDe, the accelerated version of the original RIPTiDe method, looks like a useful option for de-noising. The use of the fMRI data to derive seed-based lags and regressors for de-noising should be familiar to anyone who has used the popular CompCor method. No additional measurements are required for RapidTiDe. Most fMRI data should contain sufficient vascular information to permit good seed selection, which should enhance its appeal significantly. Even better, code is available now for you to run tests with!

There are alternative methods that may permit removal of systemic LFOs. I focused on lag-based methods in this post because they provide compelling spatial-temporal demonstrations of systemic LFOs. Collectively, these provide the strongest evidence I’ve found for working under the assumption that the sLFO is due to arterial CO₂. The next step, it seems to me, is to develop routine approaches aimed at accounting for sLFO in resting state fMRI data.

Finally, a quick note on using expired CO₂ traces to get at fluctuations of PaCO₂. Measurement of expired CO₂ is supposed to be the focus of the blog post after next, according to my original list of fluctuations and biases in fMRI data. Until very recently, I had been assuming we would need expired CO₂ measurements to account for changes in PaCO₂. That may still be true, but as my center has been setting up devices to measure expired CO₂, and as I've learned more about lag-based methods such as RIPTiDe, my enthusiasm has shifted towards the latter. There are two main reasons for my enthusiasm for RIPTiDe: 1. the data-driven results are striking, and 2. there are practical hurdles to good expired CO₂ data, and some of these hurdles may be insurmountable. The practical hurdles? For a start, you need a dedicated setup that involves using a mask or nasal cannula on your subject. Some people aren't going to like it. Next, getting robust, accurate expired CO₂ data is non-trivial, even when the mask or cannula fits perfectly. There are dead volumes in the hoses to consider, amplifier calibration and sensitivity issues, and other experimental factors. Not all breaths can be detected reliably, either. It can be quite difficult to discern very shallow respiration. (I thank Molly Bright and Daniel Bulte for the warnings. You were right!)

Even when all these practical hurdles have been addressed, there’s one final factor that can’t be circumvented: recording expired CO₂ only provides you with data some of the time. You have no knowledge about what’s happening during inspiration, or when the subject holds his breath for a few seconds. Everything the subject does has to be inferred retroactively from the expired breath. All of which suggests to me that other methods should be attempted first. I like the Tong and Frederick approach, especially a seed-based method that uses just the fMRI data. This tactic has worked well for regions of no interest in CSF and white matter, as with the CompCor method. So why not a lag-based method using a seed in the vasculature? Cleaning up systemic LFOs, especially if they are ever proven to arise from CO₂ in the blood, could massively improve the specificity of functional connectivity.

_______________________



Notes:

1.  Comprehensive reviews on CO₂ transport in blood, and on cerebrovascular response to CO₂:

Carbon dioxide transport
GJ Arthurs & M Sudhakar
Continuing Education in Anaesthesia Critical Care & Pain, Vol 5, Issue 6, Dec 2005, Pages 207–210
https://academic.oup.com/bjaed/article/5/6/207/331369
https://doi.org/10.1093/bjaceaccp/mki050

The cerebrovascular response to carbon dioxide in humans
A Battisti-Charbonney, J Fisher & J Duffin
J Physiol. 2011; 589 (Pt 12): 3039–3048. 
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3139085/
https://doi.org/10.1113/jphysiol.2011.206052

Some highlights: 
The mechanism by which CO₂ affects cerebrovascular resistance vessels is not fully understood. Increased CO₂ leads to increased [H+], which activates voltage-gated K+ channels. The resulting hyperpolarization of endothelial cells reduces intracellular calcium, which leads to vascular relaxation and hence vasodilation.

The mechanism of regulation of CBF is via pial arteriolar tone, since these provide the main resistance vessels.

The mechanism underlying this regulation appears independent of the decreased and increased arterial pH levels accompanying the elevated and lowered pCO₂, respectively, since CBF remains unchanged following metabolic acidosis and alkalosis. Rather, findings suggest that CBF is regulated by changes in pH of the cerebrospinal fluid (CSF) as the result of the rapid equilibration between CO₂ in the arterial blood and CSF. The lowered/elevated pH in the CSF then acts directly on the vasculature to cause relaxation and contraction, respectively. Thus, the action of pCO₂ on the vasculature is restricted to that of altering CSF pH, i.e., is void of other indirect effects as well as direct effects.
But vasodilation is also observed in the periphery where there is no CSF. So, even if this explanation is correct for the brain, I am still left wondering how arterial CO₂ causes vasodilation elsewhere in the body. Let me know if you find a good review, please!

pCO₂ and pH regulation of cerebral blood flow. 
S Yoon, M Zuccarello & RM Rapoport.
Front Physiol. 2012, 3:365.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3442265/
https://doi.org/10.3389/fphys.2012.00365


2.  Dual wavelength near-infrared spectroscopy (NIRS) can be used to estimate the oxyhemoglobin (HbO) and deoxyhemoglobin (Hb) concentrations simultaneously. The method works via differential absorption of light at two wavelengths. The wavelengths are selected to provide optimal absorption of the target chromophores - that is, different forms of hemoglobin - and to minimize absorption by water and other tissue components. The total hemoglobin (HbT) concentration can then be deduced using the Modified Beer-Lambert law. It is generally assumed that HbT provides an estimate of CBV, including arterioles, capillaries and venules. The Hb signal arises mostly from veins when the arterial blood is close to 100% saturated, as in normal subjects. The HbO signal arises from both arterial and venous compartments. There are several references and reviews on all this, but nothing I've found so far is a good introduction for a lay audience (like me). I'll keep an eye out.
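For reference, the estimation rests on the modified Beer-Lambert law applied at each wavelength; a schematic, differential form (my notation, with the constant scattering term cancelled) is:

```latex
% Modified Beer-Lambert law at wavelength \lambda_i (i = 1, 2), differential form.
% \Delta OD: change in optical density; \epsilon: extinction coefficients;
% d: source-detector separation; DPF: differential pathlength factor.
\Delta OD(\lambda_i) = \Big[ \epsilon_{\mathrm{HbO}}(\lambda_i)\,\Delta[\mathrm{HbO}]
  + \epsilon_{\mathrm{Hb}}(\lambda_i)\,\Delta[\mathrm{Hb}] \Big]\, d \cdot \mathrm{DPF}(\lambda_i)
```

With measured ΔOD at the two wavelengths and tabulated extinction coefficients, this is a 2 x 2 linear system for Δ[HbO] and Δ[Hb], and ΔHbT = Δ[HbO] + Δ[Hb].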


3.  While NIRS and pulse oximetry are based on the same phenomenon - the absorption of light by blood components - NIRS devices usually differ from pulse oximeters in the wavelength(s) used, as well as in the number and placement of sensors, signal processing (such as high pass filtering for pulse oximetry), and other application-specific considerations.


4.  Here I am ignoring the well-known feedback response to changes in arterial CO₂, since this is the chemoreflex responding to changes in PaCO₂ rather than the ANS providing feed-forward control over local vascular tone. The chemoreflex regulatory mechanism alters the respiration rate and volume of subsequent breaths, to push CO₂ concentration towards an equilibrium value. The total feedback loop can take multiple breathing cycles; tens of seconds. We will see the results of these feedback loops in the fMRI data. Indeed, these are exactly the systemic LFOs that are the focus of this post! So, the ANS is part of the reason for there being a non-stationary arterial CO₂. But it is indirect in the same way that the ANS is also involved in governing the heart rate. When the heart rate changes we don't claim a direct, neurogenic source of fluctuations in the fMRI data, even though we recognize the crucial role of the ANS in regulating the process. Some of you may be using heart rate variability as an emotional measure. Perhaps something similar can be done with changes in respiration. In any event, most fMRI experiments are aiming to see something cortical, something beyond the ANS, and so changes in PaCO₂ or in heart rate are at best uninteresting, at worst a nuisance.


Arterial carbon dioxide as an endogenous "contrast agent" for blood flow imaging


I nearly called this post Low Frequency Oscillations - part III since it closely follows the subject material I covered in the last two posts. But this is a slight tangent. Following the maxim "One scientist's noise is another scientist's signal," in this post I want to look at the utility of systemic LFO to map blood flow dynamics, an idea that was suggested in 2013 by Lv et al. based on the earlier work from Tong & Frederick that I reviewed last post. There is also at least one review of this topic, from 2017.

Let me first recap the last post. There is sufficient evidence, supported by multiple direct and indirect lines of inquiry, to suggest a blood-borne contrast mechanism that produces a prominent fluctuation at around 0.1 Hz in resting-state fMRI data. (Here, I assume a standard T₂*-weighted EPI acquisition for the resting-state fMRI data.) Furthermore, the same fluctuation can be found anywhere in the body. That is, the fluctuation is truly systemic. The best explanation to date is that non-stationary arterial CO₂ concentration, brought about by variations in breathing rate and/or depth, produces changes in arterial tone by virtue of the sensitivity of smooth muscle walls to the CO₂ dissolved in arterial blood. I shall assume such a mechanism throughout this post, while noting that the actual mechanism is less critical here than whether there is some utility to be exploited.

In the title I put "contrast agent" in quotes. That's because the CO₂ isn't the actual contrast agent, but a modulator of contrast changes. When the smooth muscle walls of an artery sense a changing CO₂ concentration, they either expand or contract locally, modulating the blood flow through that vessel. In the brain, a change in a blood vessel's diameter causes a concomitant change in cerebral blood volume (CBV), hence cerebral blood flow (CBF). There may be a local change in magnetic susceptibility corresponding to the altered CBV in the arteries and capillaries. But the altered CBF will definitely produce the well-known change in magnetic susceptibility in and around the venous blood that can be detected downstream of the tissue, i.e. the standard BOLD effect. The actual contrast we detect is by virtue of changes in T₂* (for gradient echo EPI), plus the possibility of some flow weighting of the arterial blood depending on the combination of flip angle (FA) and repetition time (TR) being used. As a shorthand, however, I shall refer to arterial CO₂ as the endogenous contrast agent because whenever an artery senses a change in CO₂ concentration, there will be a concomitant change in vessel tone, and we will see a cascade of signal changes arising from it. (See Note 1 for some fun with acronyms!)


Time shift analysis

Most published studies attempting to exploit systemic LFO have used fixed time shifts, or lags, in their analysis. You just need a few minutes' worth of BOLD fMRI data, usually resting state (task-free). The analysis is then conceptually straightforward:
  1. Define a reference, or "seed," time course;
  2. Perform cross correlations between the "seed" and the time course of each voxel, using a set of time shifts that typically spans a range of 15-20 seconds (based on the expected brain hemodynamics);
  3. Determine for each voxel the time shift that gives the largest cross correlation, and plot that time shift (the delay, in seconds) to produce a lag map.

There are experimental variables, naturally. The duration of the BOLD time series varies, but most studies to date have used the 5-8 min acquisition that's common for resting-state connectivity. Some studies filter the data before starting the analysis. Different studies also tend to choose different seeds. There are pros and cons for each seed category that I assess in the next section. Time shifts are usually increments of TR, e.g. the lag range might be over +/- 5 TRs for a common TR of 2 sec. And, in producing the final lag maps, some studies apply acceptance criteria to reject low correlations.
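Putting those three steps and these typical parameter choices together, here is a minimal sketch of a fixed-seed lag analysis in Python. It is my own illustration rather than code from any of the studies discussed; the inputs (data as a voxels-by-timepoints array, seed_tc as the reference time course) and the thresholds are assumptions.

```python
import numpy as np

def lag_map(data, seed_tc, tr=2.0, max_lag_s=8.0, min_r=0.3):
    """Per-voxel delay (s) of the maximum cross correlation with the seed.
    Positive delay means the voxel's signal lags the seed."""
    n_vox, n_vols = data.shape
    max_shift = int(round(max_lag_s / tr))
    shifts = np.arange(-max_shift, max_shift + 1)

    seed = (seed_tc - seed_tc.mean()) / seed_tc.std()
    vox = (data - data.mean(axis=1, keepdims=True)) / (data.std(axis=1, keepdims=True) + 1e-12)

    r = np.zeros((n_vox, len(shifts)))
    for j, s in enumerate(shifts):
        # Approximate correlation at shift s, computed over the overlapping samples.
        if s >= 0:
            a, b = vox[:, s:], seed[:n_vols - s]
        else:
            a, b = vox[:, :n_vols + s], seed[-s:]
        r[:, j] = a @ b / a.shape[1]

    best = np.argmax(r, axis=1)
    delays = shifts[best] * tr                  # delay in seconds
    delays[r.max(axis=1) < min_r] = np.nan      # acceptance criterion: reject weak correlations
    return delays
```

Note that published studies differ in their sign conventions for the lag, a point that comes up again below.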

Let's look at an example time shift analysis, from Siegel et al. (2016). The raw data were filtered with a pass-band of 0.009 - 0.09 Hz. For cross correlations, they used as their seed time course the global gray matter (GM) signal. Cross correlations were computed voxel-by-voxel for nine delays of TR = 2 sec increments, covering +/- 8 sec, followed by interpolation over the lag range. The time shift corresponding to the maximum cross correlation was assigned that voxel's lag value in the final map, as shown here:

Fig. 1 from Siegel et al. (2016).


Negative time shifts correspond to voxels whose maximum cross correlation leads the mean GM signal - the light blue regions in part (c) - and positive time shifts to voxels that lag the mean GM signal. Dark blue represents zero lag, i.e. mostly the GM region used as the seed time course.

What are we to make of the heterogeneity in the lag map in part (c) above? Some asymmetry we can understand, because this is from a stroke patient. Even so, there doesn't seem to be any clear anatomical distinction in the image. Certainly, some of the red-yellow voxels could represent large draining veins on the brain surface, but there are deeper brain regions that also show up red. What's going on? We need to explore the seed selection criteria in more detail.


How should we choose the seed time course?

With a fixed seed time course to be used as a voxel-wise regressor for the whole data set, there are essentially three situations to consider: a venous seed, an arterial seed, or a brain tissue seed.

Taking the venous seed first, the superior sagittal sinus (SSS) offers a robust BOLD effect and, being on the surface of the brain, can be identified and segmented reasonably easily. Here is a group average lag map produced by Tong et al. (2017) using a seed in SSS (top row) compared to a time-to-peak (TTP) map derived from the first pass kinetics of a bolus injection of gadolinium contrast agent (i.e. dynamic susceptibility contrast imaging):

Fig. 5 from Tong et al. (2017).

Notice how there are more late - that is, venous - voxels (red-yellow) in the lag map produced from the BOLD data (top row) compared to the DSC data (bottom row). The DSC is more heavily weighted towards early arrival, that is, towards the arterial side; more blue areas. And this makes sense because the DSC method is aimed at extracting a perfusion index. The kinetic model used in DSC imaging aims to map the blood arriving in the brain tissue, not the blood leaving the tissue. In other words, DSC is intentionally weighted towards the arterial side of the hemodynamics. The problem with the SSS signal is that it is already quite far removed from whatever happened in the brain upstream. After all, it is blood that has already transited brain tissue and is being directed down towards the jugular veins where it will leave the head entirely. Making strong correlations with arterial flow on the upstream side of the brain is thus a tricky proposition. It can be done, but the complications introduced by the brain tissue in between suggest caution.

What happens, then, if we select an arterial seed instead of a venous seed? Such a comparison was presented recently by Tong et al. (2018) using the MyConnectome data from Russ Poldrack: 90 resting-state fMRI scans collected over a two year period. The internal carotid arteries (ICA), the internal jugular veins (IJV) and the SSS were identified on T₁- and T₂-weighted anatomical scans, since these high-resolution 3D images cover the neck as well as the whole brain. Six time courses were used in time shift analyses: left and right ICA, left and right IJV, SSS, and the global mean brain signal (GS). The time shift range was +/- 15 sec, to ensure full passage of the blood through the head. On average, over the 90 sessions, the maximum cross correlations arose for ICA signals leading GS by between 2.8 and 3 seconds, while the SSS time course lagged the GS time course by 3.6 seconds, and the IJV signals lagged GS by around 4.3 seconds. (There was more scatter in left IJV data than in right IJV.) The accumulated delay from ICA to IJV was 7 to 7.5 sec, consistent with full passage of blood through the head.

Fig. 3 from Tong et al. (2018).

There was, however, an interesting finding. While there was good cross correlation between the ICA and other signals, the ICA was always negatively correlated with the GS, the SSS and IJV (see figure above). That is, the contrast change on the passage of the CO₂ was a signal decrease, not a signal increase as in the downstream regions. This must be a consequence of the particular form of BOLD contrast in the internal carotids. Tong et al. speculate that it is a small change in CBV producing a small extravascular (negative) BOLD signal change from the volume magnetic susceptibility difference between the artery (containing blood near 100% saturated with oxygen) and surrounding neck tissue. This is an interesting technical finding, and it has implications if we want to change the acquisition (see later), but it's also perfectly understandable as a conventional, albeit unusual, form of BOLD contrast.

So, using arterial seeds instead of venous seeds works in a test case. Great! What are the implications for using an arterial seed for perfusion mapping more generally? As with the venous seed, I am primarily concerned with the dynamics once the seed reaches the brain. Clearly, all the blood that is flowing through the internal carotid artery at any moment in time isn't destined for the same brain location or even the same tissue type. Some of the blood in our arterial reference signal ends up in GM, some in WM. The passage of blood is different through these two tissues, imposing different subsequent delay characteristics that are carried through to the venous blood. This is a well-known problem in arterial spin labeling (ASL), where the mean transit time (MTT) is known to differ between GM and WM, as well as with age, and with pathology. In ASL methods, one remedy is to use multiple post-labeling delays and measure a range of MTT rather than relying on a single delay and assuming the entire brain has the same response. Keep this point in mind because I will argue that a fixed lag analysis suffers from the same fundamental problems. Thus, while there are features of an arterial seed that "survive passage of the brain" into the venous system and the draining veins, the brain tissue adds complexity and ambiguity in the form of many potential sources for modulation of the dynamics along the way.

Which brings us to brain tissue as a seed time course. Some groups have used the global mean signal. I am against this on basic physiological grounds: we shouldn't combine the time courses of GM and WM because we know that in a healthy brain the blood flow in GM is 3-5 times higher than in WM. Using a combined GM + WM signal is tantamount to temporal smoothing.

An alternative is to use the GM signal only. This is better, but still not ideal because the GM is modulated by both the sLFO signals that we are trying to measure, plus all sorts of neurovascular modulations due to ongoing brain activity that are the focus of fMRI studies. With a GM seed there is the possibility of feedback effects across the entire GM from changes in arousal, through sympathetic nervous system responses. There will also be local fluctuations depending on the underlying brain activity. Doubtless, some of these fluctuations will be averaged away over the many minutes of a typical acquisition, but we can't assume they will average to zero. Thus, if we take as our reference time course a signal that has neurovascular effects already "baked in," our regression is going to be working simultaneously to assess systemic effects plus at least some fraction of ongoing brain activity. The neurovascular activity is considered "noise" in this interpretation! Lags in GM directly attributable to neural causes are around a second according to Mitra et al. This could be sufficient to cause regional variations that could appear as pathology when assessing patient groups.


Recursive time shift analysis

There is one approach that allows us to overcome many of the aforementioned limitations. And that is to move away from a single seed time course altogether. We need more temporal flexibility, a bit like using multiple transit delays in ASL to compensate for variations in MTT. For lag analysis, the recursive approach developed by Tong & Frederick is an elegant way to "ride along" with the systemic fluctuation as it propagates through the entire vascular system. The basic logic is to look upstream or downstream one TR at a time.

The time course from a single voxel in a large vessel is designated the reference regressor: the regressor with zero lag. After voxel-by-voxel cross correlations with the reference regressor, a new time series regressor is determined. It is the average of the time series of all voxels satisfying a particular cross correlation threshold. The new reference time series has the highest cross correlation with the original (zero lag) regressor at a temporal offset of one TR. This “moves” the regressor through time by one TR, tracking the propagation of the fluctuations inherent in the original time series. The spatial origins of the new regressor don’t matter. The new regressor simply comprises the time series of all voxels that obey an appropriate threshold criterion. A second cross correlation is then performed, searching for voxels that give the highest correlation with the second regressor time series, but at a further offset of one TR (which is now two TRs away from the original time series). The process repeats until the number of voxels selected as the strongest cross correlation, offset by one TR, is less than some predefined number. The algorithm appears in part a of the figure below.

The iterative procedure can be applied in reverse; that is, the temporal offset between the reference regressor and the next time series is set to be –TR. A negative lag simply means that the cross correlation will be maximized for fluctuations in the search time series that precede fluctuations in the reference time series. Thus, one may iterate forwards (positive TR lags) or backwards (negative TR lags) in time, relative to the start point. Refinement of the initial seed selection can also be made based on the results of a first pass through the data. One can even use the time series corresponding to the highest number of voxels obtained in a first pass as the optimal seed regressor for a second analysis; a form of signal averaging. The recursive approach is robust against initial seed conditions. That is, the recursive correlations tend to converge to a similar result whether one starts with mean GM signal, an SSS seed or almost any random seed. In part b of the figure below, a blue circle indicates that the number of voxels sharing fluctuations with a single voxel seed is quite small; only 200-300 voxels. A black circle indicates the set of voxels to be used in a second, optimized analysis. There is a set of 5000 voxels that have common fluctuations in the band 0.05 – 0.2 Hz.
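In code, the forward recursion might look something like the sketch below. This is my own simplified illustration of the idea, not the implementation on Blaise Frederick's github; the correlation threshold, minimum voxel count and array names are assumptions.

```python
import numpy as np

def corr_at_one_tr(vox, ref):
    """Pearson-style correlation of each voxel with ref delayed by one sample (TR)."""
    a, b = vox[:, 1:], ref[:-1]
    a = (a - a.mean(axis=1, keepdims=True)) / (a.std(axis=1, keepdims=True) + 1e-12)
    b = (b - b.mean()) / b.std()
    return a @ b / a.shape[1]

def recursive_regressors(data, seed_tc, r_thresh=0.6, min_voxels=100, max_steps=20):
    """Step forward one TR at a time, returning the regressor found at each lag."""
    regressors = [seed_tc]
    for step in range(max_steps):
        # Which voxels share the current regressor's fluctuations one TR later?
        r = corr_at_one_tr(data, regressors[-1])
        members = np.where(r > r_thresh)[0]
        if len(members) < min_voxels:
            break                                   # too few voxels: stop the recursion
        # Their average time course becomes the regressor for the next lag step.
        regressors.append(data[members].mean(axis=0))
    return regressors
```

Each regressor in the returned list corresponds to one further TR of lag; running the same loop with the shift reversed steps backwards in time, and the full set of regressors then feeds the GLM described next.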

Once a full set of regressor waveforms has been produced recursively, the entire set of regressor time courses is used in a GLM to produce a set of z maps of the voxel locations obtained at each time shift. The entire recursive procedure is shown in the figure below. Example z maps produced from the GLM appear in part c.


Fig. 2 from Tong & Frederick (2014).

 
To view the passage of the systemic flow through the brain, each z map in the set is normalized and can then be played as a movie, one frame for each TR increment assessed. In the movie below we see the z maps obtained at 2.5 frames per second (fps), i.e. TR = 0.4 sec, played back at 6.7 fps, for a changing three-plane view through the brain. The top row was produced with the optimal seed, the bottom row was produced with a local seed. As expected, the results of the recursive procedure converge to similar results regardless of the starting seed.


 (The original Supplemental Movie 1 can be downloaded here.)


The flow pattern in the movie is described by Tong & Frederick thus:
"The LFOs are “piped” into the brain though big arteries (e.g., internal carotid artery) with no phase shift. They then follow different paths (arterioles, capillaries, etc.) as branches of the cerebral vasculature diverge. It is expected that each signal would evolve independently as it travels along its own path. The observation that some of them have evolved in a similar way, and at a similar pace, is probably due to the uniformity in the fundamental structures of the cerebral blood system, likely reflecting the self-invariant properties of fractal structures found throughout biological systems."

An alternative way to view the data is as a lag map which plots the arrival time in seconds, relative to the mean arrival time assigned zero:



 
The regions fed by middle cerebral arteries appear in blue and have the earliest arrival times, while the venous drainage is colored red-yellow. Note also how symmetric the arrival times appear. For a normal, healthy brain, this is as we should expect.

At this point we can go back and revisit the issue of seed selection: fixed time shift analysis or recursive approach? Is there really a benefit to the recursive approach? Aso et al. recorded three 5-minute blocks of BOLD data under conditions of rest, a simple reaction time task (ITI of 6-24 sec), or 10 second breath holds with 90 sec normal breathing. The arrival time maps (for which they use a reversed sign convention; negative values are later arrival) for the three conditions are somewhat similar but have noticeable differences. This is the group averaged response (N=20) using the recursive time shift method:

Fig. 6D from Aso et al. (2017)

The distribution of arterial (early arriving) regions, displayed above in yellow-red, is clearly different even as the general patterns are preserved across conditions. The intra-class correlation coefficient is above 0.7. This fits with our general assumptions about BOLD data: there's a lot going on, and sorting out the parts is like unmaking a sausage!

The most striking result is in their comparison of the recursive procedure to a fixed SSS seed analysis. Here, they show maps of the intra-class correlation coefficient for the three conditions. The recursive analysis (right column) yields ICC values significantly greater than with the SSS seed analysis (left column):

Fig. 7 from Aso et al. (2017)

The recursive procedure maintains an ability to track the hemodynamics even as there are behavioral differences imposed on the time series. The SSS seed produces more variable results, consistent with the idea that low frequency fluctuations in a large venous vessel are quite different to the spatial-temporal spread imposed by brain tissue. The recursive method, while still biased towards the venous side of the brain due to greater BOLD sensitivity, does a better job of tracking the blood dynamics upstream, into the brain tissue.


Applications of time shift analysis

Assessing the blood flow patterns in normal brain is very interesting. The extensive work that went into establishing sLFO as a major source of BOLD variability is highly relevant to the many approaches that try to account for physiological variations as sources of "noise" in resting-state fMRI data in particular. And we've seen that the recursive procedure is able to find differences between rest, a simple task and breath holding. So far so good. What else can we do with it?

To date, I have found only three studies that have used the recursive analysis: the original Tong & Frederick paper and Aso et al., both reviewed in the previous section, and a paper by Donahue et al. that I review in the next section because it uses a gas challenge rather than normal breathing. Here, I'll quickly summarize the clinical applications of the fixed seed analysis.

The earliest reference I can find to clinical application is the work of Lv et al. mentioned in the introduction. In addition to the early work from Lv et al. on stroke patients, Amemiya et al., Siegel et al., Ni et al. and Khalil et al. also assessed stroke or chronic hypoperfusion. Chen et al. used time shift analysis to look at reperfusion therapy after acute ischemic stroke. Christen et al. looked at lags in moyamoya patients, and Satow et al. looked at idiopathic normal pressure hydrocephalus. All these studies observed interesting findings in the patient groups, and many compared the time shift analysis of BOLD data to other imaging methods (e.g. MRA, DWI, DSC) for validation. I would encourage you to read the studies if you are interested in the particular pathologies. But as a representative example, I'll dig into the study by Siegel et al. because they compared the time shift analysis of BOLD data to pulsed arterial spin labeling (PASL). (See Note 2.) In regions of hypoperfusion, the regional CBF measured by PASL was observed to decrease monotonically with the BOLD hemodynamic lag in patients at ~2 weeks after a stroke, as shown in part (b) below:

From Siegel et al. (2016).

But what changes might have persisted a year after the stroke? Would the CBF and time shift relationship be the same?
"These results raise the question of whether hypo-perfusion in the acute post-stroke period recovers in parallel with lag. To address this question, we measured change in lag (1 year minus 2 weeks) versus change in rCBF for all ROIs showing lag >0 subacutely. Although a significant relationship was present between recovery of lag, and recovery of rCBF (Pearson’s r = -0.12; P = 0.039), the variance explained by this relationship was small (r² = 0.015). This may be because overall, measures of perfusion did not change significantly between two weeks and one year post-stroke (two-week average = 85.7% of controls, one-year average = 86.4% of controls; paired t-test P = 0.3719). Thus, while a strong relationship between lag and rCBF is present sub-acutely, areas in which lag recovers do not necessarily return to normal perfusion."
The CBF remains depressed, relative to controls, but the lags resolve somewhat, as illustrated below. In this sample, four out of five patients have radically different lag maps at 1 year compared to 1-2 weeks post-stroke:

Part of Fig. 2 from Siegel et al. (2016). Lag maps for five patients at 1-2 weeks (left) and 1 year (right) after stroke.

That is very interesting. It implies that the net delivery of blood - recall that CBF has units of ml blood per 100 g tissue per minute - remains impoverished but the velocity of that blood through the ischemic region has normalized somewhat. Why might this be? If we consider CBF as a rough proxy for metabolic rate, then a simple explanation is that the metabolism of the tissue affected by the stroke is as low at 1 year as it was at 2 weeks. There is probably an infarct - cells that died in the hours after the stroke - creating a persistently lower demand for glucose (and oxygen) within the broader region affected by the stroke. The vascular control mechanisms themselves, on the other hand, appear to have recovered somewhat, so the blood dynamics appear more normal even as the regional CBF remains low. (See Note 3.)

This example illustrates that time shift analysis offers different, complementary information on a vascular disorder than PASL provides. Similar utility was found in the other clinical investigations where other forms of imaging, including DSC, diffusion imaging and MR angiography, were compared to the time shift analysis. There really does seem to be some unique information on offer in the time shift analysis. (See Note 4 for a bonus example, using caffeine.)


Can we increase sensitivity to blood dynamics?

The work presented so far has used standard BOLD data. Admittedly, some studies used multi-band EPI to shorten the TR, but the parameter settings were standard for a typical fMRI acquisition. That is, TE was set to generate sensitivity to T₂* changes, the flip angle was typically set to the Ernst angle, and so on. No special consideration was given to the venous bias in the acquisition. As a consequence, the data being analyzed for time lags is always likely to do better on the venous side of the brain than the arterial side, even with the more rigorous recursive time delay method. Does it have to be this way? Can we boost the sensitivity so that the recursive procedure can track arterial and venous dynamics with something approaching equal sensitivity? There are three broad approaches to ponder.

1. Change the arterial CO₂ concentration:

Rather than relying on the endogenous fluctuations of CO₂ during normal breathing, Donahue et al. used a transient hypercapnia challenge to boost arterial and venous changes simultaneously. They delivered alternating 3-minute periods of medical grade air or carbogen (5% CO₂ + 95% O₂) through a mask, a procedure that has been used extensively to study cerebrovascular reactivity. The 3-minute periods during which blood gases are controlled necessitate a change in the temporal lag search window. Donahue et al. assessed lags over the range -20 to +90 seconds relative to the boxcar that describes the five 3-minute periods, using the recursive time shift method with the boxcar as the initial time series.

As we might expect for long duration events, the resulting delay maps are more homogeneous with the carbogen challenge than we've seen using endogenous BOLD fluctuations. The inherent variability of the ongoing physiology is dominated by the response to carbogen. A delay map from a normal volunteer yields almost uniform time-to-peak (TTP) for GM, and a slightly delayed TTP for some WM regions:

Fig. 2 parts (b) and (c) from Donahue et al. (2016).

But the relatively flat normal brain response makes it easy to see changes due to major disruption of the blood supply. The reduced flow through certain arteries in moyamoya patients is immediately evident as changes in TTP:

Fig. 3 parts (e) and (f) from Donahue et al. (2016).


Do we have to use long gas challenges? The 3-minute periods used by Donahue et al. are well suited to major vascular pathology such as stroke and moyamoya disease, where the transit delays can be severely abnormal and conventional measures like ASL are limited. Note the delays of 20+ seconds in the moyamoya patients compared to controls in the last two figures. That sort of disruption would be invisible to ASL methods because the label decays with T₁. If we expect the blood dynamics of interest to be more subtle, such as might arise from a pharmaceutical or a foodstuff, might shorter respiratory challenges be used in order to preserve the dynamics over a shorter range of time delays? I don't see why not. Breath holding might provide an easier alternative than the delivery of gases, too. Another consideration is the pattern of challenge used. Regularizing respiratory responses into a boxcar might not be as informative as using some amount of temporal variability. Perhaps a breathing task that samples short and long breath holds, with changes of normal breathing pace and depth, or a gas delivery paradigm that is more "stochastic." There are plenty of options to test.

2. Change the true contrast agent:

In mapping systemic LFO with BOLD we're using the paramagnetic properties of deoxyhemoglobin on the venous side, and the weak diamagnetism of arterial blood plus perhaps a small amount of inflow weighting on the arterial side. What if we moved to an exogenous contrast agent instead? For example, we might try to measure the re-circulation of a gadolinium contrast agent once the agent is assumed to have attained a steady state blood concentration. The standard approach in DSC is to map the uptake kinetics on the first pass, immediately after contrast injection. At that point, the measurement is considered complete. But it would be interesting to look at the fluctuations arising from changes in respiration rate and depth - the CO₂ sensitivity should be the same - in the minutes afterwards. The blood signal would be fully relaxed by the gadolinium, eliminating BOLD. Essentially, we would have a CBV-weighted signal. This would probably shift the bias from the venous to the arterial side. Sensitivity might take a hit as a result, but that would likely depend on the signal level surrounding arteries and capillaries. The sensitivity to CBV changes could be quite high, given the presence of gadolinium in the blood.

3. Change the pulse sequence parameters:

This is probably where most of us would start: with a standard BOLD resting state approach, no respiratory challenge or exogenous contrast agent. What options do we have in the pulse sequence parameters? Many fMRI studies use the Ernst angle for GM when establishing the flip angle (FA) at the TR being used. Can we boost the arterial signal by increasing inflow sensitivity with higher FA and/or shorter TR? (For an excellent review on inflow effects see Gao & Liu (2012).) We might use MB-EPI to attain a sub-second TR yet maintain a 90 degree excitation for maximum T₁ weighting. On the other hand, MB-EPI has a rather complex excitation pattern along the slice dimension whereas conventional EPI can be applied as either interleaved or contiguous (descending or ascending) slice ordering. Multiband forces a form of spatial interleaving so that the spin history of blood moving along the slice direction is complicated. Still, it's worth a look.
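As a reminder of how quickly the Ernst angle falls as TR shortens, here is a quick calculation (my own illustration; the gray matter T₁ of 1.3 s at 3 T is an assumed, representative value):

```python
import numpy as np

t1 = 1.3                                    # assumed representative GM T1 at 3 T, in seconds
for tr in (2.0, 0.8, 0.4):                  # repetition times in seconds
    ernst = np.degrees(np.arccos(np.exp(-tr / t1)))   # Ernst angle: arccos(exp(-TR/T1))
    print(f"TR = {tr:.1f} s -> Ernst angle ~ {ernst:.0f} degrees")
```

At sub-second TR the Ernst angle is well below 90 degrees, so retaining a 90 degree excitation leaves the static tissue partially saturated and makes freshly inflowing blood relatively brighter; that is the inflow weighting being sought.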

For gradient echo EPI we generally aim to set TE ~ T₂* for maximum BOLD sensitivity. For lag mapping, shorter TE may reduce the venous bias while simultaneously boosting the SNR. Spin echo EPI is another possible option. SE-EPI is used to refocus extravascular BOLD arising from large veins (check out the recent paper by Ragot & Chen for a comprehensive analysis of SE-EPI BOLD), leaving the intravascular and small vessel extravascular BOLD responses. (The BOLD signal in SE-EPI is typically about half that for GE-EPI at 3 T.) Using spin echoes also changes the T₁ recovery dynamics, something which might help add inflow sensitivity to the final signal. Now, SE-EPI does generally reduce the brain coverage per unit time, because the minimum TE is longer for SE-EPI than for GE-EPI, but multiband approaches could render the coverage acceptable. It may even be the case that 180 degree refocusing at short TR is inefficient, as well as driving up SAR, so lower excitation and refocusing FAs would be worth exploring.

Another acquisition issue given only partial consideration in time shift analysis work so far is the duration of a standard BOLD acquisition. How long should we acquire? With ASL methods, a single CBF map typically requires about 4-5 mins of data to attain reasonable SNR. Over this time we assume (usually only implicitly) that the neural activity variations are averaged so that the CBF is a reasonable reflection of the subject's baseline perfusion. For highly aroused or highly caffeinated subjects this assumption could be challenged, but whatever is true for ASL measures should apply equally well (or equally badly) to time shift analysis of BOLD data. Until someone shows us differently, then, I would suggest at least 4 minutes of data.


Conclusions

This post has looked at a method to image vascular dynamics. My intent wasn't for fMRI applications per se, even though there is a lot of overlap when the starting point is resting-state fMRI data. Rather, it's a different interpretation of resting-state data that could be informative for comparison with other blood imaging methods such as ASL. That's what I'm going to be doing with it in the near term. If your interests are strictly on the neuronal side, however, and you think mapping sLFO has potential for de-noising purposes, I suggest you read the sections entitled "How do systemic LFOs affect resting functional connectivity?" and "How to deal with systemic LFOs in fMRI" in my last post, and then look at the papers by Jahanian et al., Erdogan et al., Anderson et al., and, of course, the recursive method paper by Tong & Frederick. I think the recursive approach has advantages over the fixed seed approach, as I've explained in this post. Code for the recursive lag method is available from Blaise Frederick's github. At least one person took the plunge after my last post.

I'm going to be comparing the recursive lag mapping method to pseudo-continuous ASL (PCASL) in 2019. I'll try to post regular updates as I progress.

_________________________



Notes:

1.  We have Blood Oxygenation-Level Dependent (BOLD) contrast, so shouldn't we simply define Arterial Blood Carbon Dioxide-Level Dependent contrast? That would give us ABCD-LD. Doesn't exactly trip off the tongue.

What about Arterial Blood CO₂-Level Dependent contrast, ABCO₂LD? Messy.

Or, how about ARterial CArbon DIoxide-LEvel DEpendent contrast, ARCADILEDE? Sounds better, but also sounds like a new drug for irritable bowel syndrome. ("Ask your doctor if ARCADILEDE is right for you!" *Side effects may include vacating a lucrative career in industry, multiple grant disappointments, and frequent criticism from Reviewer 3.)


2.  At some point I will do an "introduction to ASL" blog post because there doesn't seem to be as widespread an understanding of the method as I'd once dared to hope:


And there I was thinking you were all just being stubborn! There is sufficient evidence to suggest that a baseline CBF map, computed from a good ASL acquisition, can be a useful normalizing step for fMRI across populations when one expects systematic changes in perfusion, e.g. with aging, disease or on administration of drugs. I will be covering - eventually - this normalizing procedure in the blog post series on modulators of fMRI responses. But I'll do an intro to ASL before it, based on some work I'm doing separately with pseudo-continuous ASL (PCASL).


3.  If you're not familiar with CBF as a measure of perfusion, these last few sentences may appear contradictory. The choice of cerebral blood "flow" as the term describing the volume delivery of blood per unit time to a fixed volume of tissue - that is, perfusion - is a rather unfortunate one, since it is easily confused with the sort of laminar flow we think about for fluids in pipes. Perfusion - CBF - isn't a velocity but a rate of mass replacement. If you're confused, think about the difference of blood delivery that happens in normal GM and WM. GM has 3-5 times the metabolic demand of WM, so its CBF is around 3-5-fold higher. But the GM dynamics, as assessed by time shift analysis, aren't 3-5-fold faster. The mean transit time into WM is only a second or so longer than it is to GM. There's simply less volume replacement of blood happening in the WM tissue. How does that come about? Mostly, it's due to lower vascular density. There are fewer capillaries in the WM. The net speed of blood through GM and WM capillaries can therefore be essentially the same, but the GM perfusion is considerably higher by virtue of the greater density of capillaries.
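If it helps, the central volume principle ties these quantities together (a schematic relation, with CBV the blood volume per unit mass of tissue and MTT the mean transit time):

```latex
% Central volume principle (schematic). With CBV in ml blood per 100 g tissue
% and MTT in minutes, CBF comes out in ml / 100 g / min.
\mathrm{CBF} = \frac{\mathrm{CBV}}{\mathrm{MTT}}
```

With MTT nearly the same in GM and WM, a 3-5-fold higher capillary density (hence CBV) in GM yields the 3-5-fold higher CBF without requiring faster-moving blood.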


4.  For those of you who might be interested in pharmacological manipulations, a very recent study by Yang et al. shows that time shift analysis can detect changes in blood dynamics due to caffeine ingestion. This study again utilized 90 scans available from the MyConnectome project. Yang et al. compared the 45 scans obtained on days when the subject had consumed coffee to the 45 scans conducted caffeine-free. (I leave open the possibility that Russ was simply more grumpy sans coffee and that drove the results ;-) The analysis used the same procedure as described above for the arterial to venous seed comparison (see the third figure in this post, from Tong et al. (2018)). Using seeds in superior sagittal sinus (SSS), internal carotid arteries (ICA) and the global signal (GS), Yang et al. found the transit time from ICA to SSS was almost a second longer without caffeine, comprising a delay of approximately half a second between the ICA and GS and another half a second between the GS and the SSS:

Fig. 2 from Yang et al. (2018).

The response was reasonably uniform across the brain. The results were consistent with vasoconstriction, an expected response to caffeine.

In the discussion section of the paper the authors dig into the implications of slowed blood dynamics. In particular, they try to reconcile the slower dynamics with the reduced CBF that has been reported in earlier studies on caffeine consumption. (There are no ASL data in MyConnectome to make direct comparisons so the comparisons are necessarily between studies.) There is a suggestion that mapping CBF with ASL and blood dynamics with BOLD data will greatly enhance our understanding of the neural and vascular effects under a variety of conditions. Lots of complementary information!


Using multi-band (aka SMS) EPI on low-dimensional array coils


The CMRR's release notes for their MB-EPI sequence recommend using the 32-channel head coil for multiband EPI, and they caution against using the 12-channel head coil:

"The 32-channel Head coil is highly recommended for 3T. The 12-channel Head Matrix is not recommended, but it can be used for acceptable image quality at low acceleration factors."

But what does "low acceleration" mean in practice? And what if your only choice is a 12-channel coil? Following a couple of inquiries from colleagues, I decided to find out where the limits might be.

Let's start by looking at the RF coil layout, and review why the 12-channel coil is considered an inferior choice. Is it simply fewer independent channels, or something else? The figure below shows the layout of the 12-ch and 32-ch coils offered by Siemens:

From Kaza, Klose & Lotze (2011).

In most cases, the EPI slice direction will be transverse or transverse oblique (e.g. along AC-PC), meaning that we are slicing along the long axis of the magnet (magnet Z axis) and along the front-to-back dimension of the head coil. Along the long axis of the 12-ch coil the element layout barely changes: in any given X-Y plane the loops present much the same sensitivity pattern. At the very back of the coil the loops start to curve towards a point of convergence, but there is still no distinct spatial encoding along Z. Compare that situation to the 32-ch coil. It has five distinct planes of coils along the Z axis. With the 32-ch coil, then, we can expect the hardware - the layout of the loops - to provide a good basis for separating simultaneously acquired axial slices, whereas there is no such distinct spatial information available from the coil elements in the 12-channel coil. In the 12-channel coil, every loop detects a significant and nearly equal fraction of any given slice along Z.

There is an additional complication for the Siemens 12-channel coil. It is what Siemens call a "Total Imaging Matrix (TIM)" coil. This means that, depending on a software setting, the signals from the twelve loops can be combined in ways that can lead to better receive-field homogeneity, or higher SNR, or the whole ensemble can be left as a twelve-element array coil. The maximum amount of receive field heterogeneity - that is, the most distinct spatial information - is retained in the "Triple" mode, so that's what I'll use here.

Modern simultaneous multi-slice (SMS) sequences use a scheme called blipped CAIPI to assist in the separation of slices at the reconstruction stage. I've not found a didactic review that can get you up to speed on this method. (If you find one, please post a link in the comments!) The original paper by Setsompop et al. is about as accessible as I've found. For now, a brief summary should suffice. The figures below may or may not help you understand what's going on without reading the full paper!

From Setsompop et al. (2012).

The essential idea is to add a small amount of phase shift along the slice dimension, using small gradient episodes that look very similar to the blips used for phase encoding in-plane. Unlike the phase encoding blips, which are set to be fully refocused - zero net phase shift - at TE, with the blipped CAIPI scheme the phase is designed to refocus every two, three or four lines of k-space. The phase shift is designed to move the image in the field-of-view (FOV) by half, one third or one quarter of the FOV, respectively. But because the blipped CAIPI gradient is active along the slice direction, slices which differ in their Z offset will accrue a differing amount of total shift. This has the effect of encoding unique spatial information in two slices that differ in Z. Importantly, we have imparted a phase shift to the slices in deterministic fashion, meaning that we can then account for that shift at the processing phase, and return the signal to its correct location in the FOV once the slice separation procedure has been performed.
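The underlying trick is just the Fourier shift theorem. The toy Python example below (my own, not code from the blipped CAIPI paper) applies a phase that advances by a fixed increment per phase-encode line to the k-space of a simple image, producing a half-FOV shift, i.e. the MB=2 case. In blipped CAIPI the same sort of phase increment is produced by the Gz blips and scales with each slice's Z offset, so simultaneously excited slices end up at different apparent positions in the reconstructed FOV.

```python
import numpy as np

ny, nx = 64, 64
img = np.zeros((ny, nx))
img[8:24, 28:36] = 1.0                            # a simple block "object"

# Forward FFT to (centered) k-space.
ksp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))

R = 2                                             # target shift = FOV / 2 (the MB = 2 case)
lines = np.arange(ny)                             # phase-encode line index
phase = np.exp(1j * 2 * np.pi * lines / R)        # phase advances 2*pi/R per line
ksp_shifted = ksp * phase[:, None]

# Reconstruct: the object is displaced by ny/R pixels along the phase-encode axis.
img_shifted = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(ksp_shifted))))
print(np.allclose(img_shifted, np.roll(img, ny // R, axis=0)))   # True
```

Because the imposed shift is known exactly, it can be undone after the slices have been separated, which is the bookkeeping referred to above.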

What are the implications for doing MB-EPI on a 12-channel instead of a 32-channel coil? We've already seen there is considerable unique spatial information available along the Z axis from the coil geometry in the 32-channel coil. While blipped CAIPI may improve MB slice separation, it's not being relied upon to do all the work with the 32-ch coil. The hardware carries a lot of the burden. But with the 12-channel coil, the hardware offers almost no support at all. In that case, the work of separating slices is going to rely heavily on the blipped CAIPI scheme. We are fixing it in software, not hardware, as the old engineering joke puts it.


Phantom tests

Time to test it out in practice. The 32-channel coil offers our performance standard. The idea is to keep everything constant and vary only the receive coil, and compare. But what parameters to test? We can expect performance to drop off quickly as the MB factor goes up. While I have tested MB=6 with the 32-channel coil in an earlier blog post, in practice I generally don't like using greater than MB=5, to match the number of planes of independent coil loops in the axial direction. But let's not run before we can walk. Compared to no slice acceleration, a shift to MB=2 is already an impressive gain in speed on a 12-channel coil. To keep things reasonable I tested MB=2 and MB=3 first, deciding that I would only opt for larger MB factors if these tests suggested there is even more performance available. (tl;dr the limit is between 2 and 3, no further tests are forthcoming!)

Spatial resolution is the next consideration. In the absence of acceleration, we are generally limited to voxel dimensions of about 3.5 mm if we want to cover the majority of cortex in a TR of 2 seconds. Here, I decided to push to voxel dimensions of 2.5 mm, leaving the TR at 2 seconds. It's entirely permissible to take the TR reduction offered by MB without changing the voxel dimensions, of course, but I'm primarily interested in the generation and avoidance of artifacts, not the speed limit. The increased resolution in-plane makes the gradients work harder, may require partial Fourier to keep the TE reasonable, and is generally likely to trigger artifacts largely independent of the TR.

I used a spherical gel phantom (the FBIRN phantom) in the first set of comparisons. The slices were axial. An oblique axial prescription - say, AC-PC - is only likely to improve performance, since there is some unique coil information available by the time the slice direction is coronal (slices along the Y axis of the magnet). But I didn't have time to test this theory out.

Here are the tests: 
  1. MB=2, (2.5 mm)³ voxels, with Leak Block
  2. MB=2, (2.5 mm)³ voxels
  3. MB=3, (2.5 mm)³ voxels, with Leak Block
  4. MB=3, (2.5 mm)³ voxels
  5. MB=1, (2.5 mm)³ voxels, Interleaved slice order
Leak Block is the CMRR term for split-slice GRAPPA reconstruction. When Leak Block is not used, the slice separation is performed using a slice GRAPPA algorithm. See this post for more information on the differences. I acquired one run without MB acceleration (MB=1), noting that the slice order was interleaved even though I generally prefer to use contiguous slice ordering for conventional EPI (so that head motion in the slice direction doesn't cause prominent striping). An interleaved slice order is used for SMS, although the entire notion of interleaving is different for SMS than for regular multi-slice imaging.

I kept TR constant at 2000 ms throughout, and acquired 100 volumes for each test. Prescan normalization was enabled (with raw data also saved) for the 32-channel coil, but was disabled for 12ch and neck coils. See here, here and here for more information on the use of Prescan Normalization with receiver coils exhibiting pronounced receive field heterogeneity.

The 32-channel coil - our performance benchmark - produced good results consistently, as expected. I'm showing three views of the phantom data: 1. the in-plane dimension contrasted for the signal, 2. the in-plane dimension contrasted for ghosting and noise, and 3. the slice dimension reconstructed from the stack of slices. This will be the order for all comparisons to come. To keep the comparisons vaguely manageable, here I've omitted the MB=1 images from the 2x2 views. (You can find a bonus set of comparisons that includes MB=1 for the 32-channel coil in Note 1.)

32-channel coil, contrast set for visualization of signal. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
32-channel coil, contrast set for visualization of ghosts and noise. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
32-channel coil, contrast set for visualization of signal. Slice direction top-to-bottom, frequency encoding direction L-R.

As we should expect, MB=3 generates slightly more (and differently patterned) residual aliasing artifacts compared to MB=2, but these are visible only in the noise-contrasted view. The intensity is sufficiently weak that the artifacts aren't visible on the signal-contrasted views of the in-plane or the slice dimensions.

Here are the results for the 12-channel coil, with the contrast settings and 2x2 image layout as above:

12-channel coil, contrast set for visualization of signal. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
12-channel coil, contrast set for visualization of ghosts and noise. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
12-channel coil, contrast set for visualization of signal. Slice direction top-to-bottom, frequency encoding direction L-R.

If you look very carefully, you can see some artifacts overlapping the phantom in the signal-contrasted views for both the in-plane and slice dimensions for MB=3, whether using Leak Block or not. There are no such artifacts visible for the MB=2 data, however. It appears that we've found the performance threshold. Clearly, the artifact level would only increase for MB factors above 3. Is MB=3 usable? I would be inclined to use MB=2 without much reservation, but I would want to do many more tests with MB=3 before I commit. So let's take a look at the performance on a live human brain.


Brain tests

For the brain tests, I left all acquisition parameters as for the phantom data. Unlike the phantom, however, we now have considerable image contrast variations as well as the potential for movement. I didn't have time to acquire time series measurements for temporal SNR (tSNR) assessments, I'm afraid, so all I can offer is static views in which I've tried to pick the strongest examples of any artifacts I could find. It's not perfect, but it gives us a crude assessment.

The data obtained on the 32-channel coil are our standard:

32-channel coil, contrast set for visualization of signal. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
32-channel coil, contrast set for visualization of ghosts and noise. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
32-channel coil, contrast set for visualization of signal. Slice direction top-to-bottom, frequency encoding direction L-R.

The residual aliasing levels are comparable for both MB=2 and MB=3, with and without Leak Block. So now let's check the performance of the 12-channel coil:

12-channel coil, contrast set for visualization of signal. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
12-channel coil, contrast set for visualization of ghosts and noise. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
12-channel coil, contrast set for visualization of signal. Slice direction top-to-bottom, frequency encoding direction L-R.

As in the phantom, the residual aliasing is higher for MB=3 than for MB=2. But the artifact level is low and is only visible in the ghost-contrasted scan. There is no obvious banding artifact or residual aliasing overlapping signal regions. And, as on the phantom, Leak Block seems to make little difference.

Based on these initial tests, I would conclude that MB=2 is quite reasonable to use on the 12-channel coil, and MB=3 might be considered after a couple more tests, especially tSNR tests and artifact evaluations in the presence of subject motion.


MB-EPI on a neck coil

A few of us are interested in non-human imaging, e.g. using a neck coil for dog fMRI, or for scanning sea lion brains, like this:

Using the (human) neck coil to scan Cronut, a 3 year-old male California sea lion.

I reproduced the tests above to find out if the neck coil might be used for MB-EPI. You can see in the images below that the receive profile of the neck coil drops off quite quickly front and back. For humans, the usual procedure is to have the neck coil on the bed along with the 12-channel head coil, allowing a considerable boost of the receive field in one direction. But that's an experimental detail not directly related to the evaluation of MB-EPI. It is something to note if you are trying to position dog or sea lion brains in the most sensitive part of the coil.

You're familiar with the parameters and the layout of the final images by now. Here are the comparison images for MB=2 and MB=3, with and without Leak Block:

Neck coil, contrast set for visualization of signal. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
Neck coil, contrast set for visualization of ghosts and noise. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
Neck coil, contrast set for visualization of signal. Slice direction top-to-bottom, frequency encoding direction L-R.

There are clear artifacts in the MB=3 acquisitions, and Leak Block doesn't help. There are also subtle artifacts visible in-plane but not obviously in the slice direction for the MB=2 data. The artifact level is worse than for MB=2 on the 12-channel coil, but lower than that for MB=3 on the 12-channel coil. Based on these results, I would avoid using MB=3 on the neck coil, but I would be tempted to use MB=2 if the particular application called for it.

I'm afraid I don't have live sea lion brain data to show so I'm using the next best thing - a live human. I tested only MB=2 on the grounds that the phantom data for MB=3 weren't encouraging. I may go back and test MB=3 on a human at a later date. For now, I am comparing the MB=2 performance to the single band reference (SBRef) images acquired at the start of a run. These SBRef images are acquired with a different effective TR, that is, at an effective TR of (TR x MB), so the T₁-weighted contrast is quite different. Still, these images provide a reasonable basis for evaluating SMS artifacts:

Neck coil, contrast set for visualization of signal. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
Neck coil, contrast set for visualization of ghosts and noise. Phase encoding direction top-to-bottom, frequency encoding direction L-R.
Neck coil, contrast set for visualization of signal. Slice direction top-to-bottom, frequency encoding direction L-R.


Not too bad. No obvious artifacts are visible in any of the MB data, suggesting that MB=2 is likely acceptable for the neck coil.

In the next post, I'll look at MB-EPI for diffusion imaging on the same three coils.

_______________________


Notes:

1.  Here is a bonus set of comparisons for the MB=2 tests, for the 12-channel and neck coils compared to both MB=2 and MB=1 with the 32-channel coil. The MB=2 and MB=1 data quality is quite similar for the 32-channel coil, except that the slice coverage is much reduced for MB=1.

Phase encoding direction top-to-bottom, frequency encoding direction L-R. Leak Block used for MB=2. Contrast set for visualization of signal.
Phase encoding direction top-to-bottom, frequency encoding direction L-R. Leak Block used for MB=2. Contrast set for visualization of ghosts and noise.
Slice direction top-to-bottom, frequency encoding direction L-R. Leak Block used for MB=2. Contrast set for visualization of signal.



Using multiband-EPI for diffusion imaging on low-dimensional array coils


This is a continuation of the previous post looking at MB-EPI on a receive coil with limited spatial information provided by its geometry, such as the 12-channel TIM coil or the 4-channel neck coil on a Siemens Trio.

Simultaneous multi-slice (SMS), aka multi-band (MB), offers considerable time savings for diffusion-weighted imaging (DWI). Unlike in fMRI, where MB factors of 4 or more are quite common, in DWI few studies use MB factors greater than 3. While it may be feasible in principle to push the acquisition time even lower without generating artifacts using a large array coil like the Siemens 32-channel coil, we run into another consideration: heating. Heating isn't usually a concern for gradient echo MB-EPI used in conventional fMRI experiments. In fMRI, the excitation flip angles are generally 78° or less. But with DWI we have a double whammy. Not only do we want a large excitation flip angle to create plenty of signal, we also require a refocusing pulse that is, by convention, set at twice the flip angle of the excitation pulse. (The standard nomenclature is 90° for excitation and 180° for refocusing, but the actual angles may be lower than this in practice, for a variety of reasons I won't go into here.) Now the real kicker. The heat deposition, which we usually measure through the specific absorption rate (SAR), scales quadratically with flip angle. Thus, a single 180° refocusing pulse deposits as much heat as four 90° pulses! (See Note 1.) But wait! It gets worse! In using simultaneous multi-slice - the clue's in the name - we're not doing the equivalent of one excitation or refocusing at a time, but a factor MB of them. Some quick arithmetic gives you a feel for the issue. A diffusion scan run with 90° and 180° pulses, each using MB=3, will deposit fifteen times as much heat as a conventional EPI scan run at the same TR but with a single 90° pulse. On a 3 T scanner, this means we are quickly flirting with SAR limits when the MB factor goes beyond three. The only remedy is to extend TR, thereby undermining the entire basis for deploying SMS in the first place.
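If you want to check the arithmetic, here is a back-of-the-envelope sketch. The numbers are relative only and assume the simple flip-angle-squared scaling described above; real SAR calculations also depend on pulse shape, pulse duration, TR and the subject, so treat this as an illustration rather than a safety calculation.

```python
# Back-of-the-envelope relative SAR per TR, assuming heat deposition scales
# with (flip angle)^2 and with the number of simultaneously excited slices.
# Relative numbers only - real SAR depends on pulse shape, duration, load, etc.

def relative_sar(flip_angles_deg, mb=1):
    return mb * sum(a**2 for a in flip_angles_deg)

baseline = relative_sar([90], mb=1)              # conventional EPI, one 90 per TR
diffusion_mb3 = relative_sar([90, 180], mb=3)    # 90 + 180 pair, MB=3

print(diffusion_mb3 / baseline)                  # 15.0 - fifteen times the heat per TR
```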

But let's not get ahead of ourselves. With a low-dimensional array such as the Siemens 12-channel TIM coil we would be delighted to get MB to work at all for diffusion imaging. The chances of flirting with the SAR limits are a distant dream.


Phantom tests for diffusion imaging

The initial tests were on the FBIRN gel phantom. I compared MB=3 and MB=2 for the 32-channel, 12-channel and neck coils using approximately the same slice coverage throughout. The TR was allowed to increase as needed in going from MB=3 to MB=2. Following CMRR's recommendations, I used the SENSE1 coil combine option throughout. I also used the Grad. rev. fat suppr. option to maximize scalp fat suppression, something that we have found is important for reducing ghosts in larger subjects (especially on the 32-channel coil, which has a pronounced receive bias around the periphery). For the diffusion weighting itself, I opted to use the scheme developed for the UK Biobank project, producing two shells at b=1000 s/mm² and b=2000 s/mm², fifty directions apiece. Four b=0 images are also included, one per twenty diffusion images. (For routine use we now actually use ten b=0 images, one every ten DW images, for a total of 111 directions.) The nominal spatial resolution is (2 mm)³. The TE is 94.8 ms, which is the minimum value attainable at the highest b value used.

There are over a hundred images we could inspect, and you would want to check all of them before you committed to a specific protocol in a real experiment because there might be some strange interaction between the eddy currents from the diffusion-weighting gradients and the MB scheme. For brevity, however, I will restrict the comparisons here to examples of the b=0, 1000 and 2000 scans. I decided to make a 2x2 comparison of a single band reference image (SBRef), a b=0 image (the b=0 scan obtained after the first twenty DW scans), and the first b=1000 and b=2000 images in the series. While only a small fraction of the entire data set, these views are sufficient to identify the residual aliasing artifacts that tell us where the acceleration limit sits.

First up, the results from the 32-channel coil, which is our performance benchmark. No artifacts are visible by eye for any of the b=0, b=1000 or b=2000 scans at either MB=2 or MB=3:

32-channel coil, MB=3. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image
32-channel coil, MB=2. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.


For the 12-channel coil, however, we can see residual aliasing in the b=0 scan for MB=3:

12-channel coil, MB=3. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.
12-channel coil, MB=2. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.

The SNR is too low to see the residual aliasing in the b=1000 and b=2000 scans at MB=3, but given the effects at b=0 we should assume they're there. However, I should note that the artifact I've shown above was the most prominent example I could find, suggesting that MB=3 on the 12-channel coil might be acceptable under some circumstances. The MB=2 data appear to be reasonably artifact-free throughout.

If you're wondering what the rendered slice dimension looks like, I'm sparing you an even lengthier post. However, I found that artifacts were more prominent in-plane than through-plane. This is different to the situation with MB-EPI for fMRI, where artifacts are either similarly visible in each view, or banding may be visible in the slice dimension when there are no in-plane artifacts visible. Could it be the use of the SENSE1 coil combination option, perhaps? I will try to learn more about the differences in further tests when I get the time. It's also interesting that contrasting at the noise level didn't lead to different conclusions than simply contrasting at the signal level, so again I've given just the one contrast to keep the post manageable.

Back to data. The neck coil exhibits clear residual aliasing for MB=3 at b=0 and, if you have a good eye, you can see artifacts at b=1000, too:

4-channel neck coil, MB=3. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.

I was also able to find some residual aliasing in a few of the b=0 scans at MB=2, but they tended to be at one end of the phantom. This suggests a magnetic field homogeneity effect, because it's hard to shim the flat ends of a cylinder well. As always, I've shown the clearest artifact example I could find in the figure below. Most of the slices for b=0 didn't resemble a tennis ball:

4-channel neck coil, MB=2. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.

Does the presence of some artifacts in b=0 images at just MB=2 mean that multi-band is unacceptable on the neck coil? Not necessarily. It just means we'd have to be extra careful before using it for real at MB=2, and I don't think I would want to risk MB=3 on that coil. But we would want to look very carefully at regions of poor magnetic field homogeneity for artifacts in brain data. Talking of brains....


Brain tests

Time constraints didn't permit me to acquire brain data for MB=3, only for MB=2, on all three coils. I do have an earlier comparison for MB=3 and MB=2 for the 32ch and 12ch coils, however, which I'll come back to at the end. Let's first look at MB=2 on all three coils.

The acquisition parameters for brain testing are the same as for the phantom tests. Again, I compare a 2x2 matrix comprising the SBRef image, an example b=0 image, and the first b=1000 and b=2000 scans from the full DW acquisition. There will be two views apiece: the in-plane view, which is as the data are acquired, and then a rendered view of the slice dimension to see how slice leakage or banding artifacts might manifest, just in case the heterogeneity of brain introduces something new. Finally, to see how artifacts and SNR propagate through to final results, I'll compare a simple tensor analysis of the full data set. I'll show the apparent diffusion coefficient (ADC) map, a trace image, a fractional anisotropy (FA) map in grayscale, and then a color FA image. The tensor analysis option on the scanner was used to produce these processed images.

Here are the MB=2 brain results for the 32-channel coil:

32-channel coil, MB=2. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.
32-channel coil, MB=2. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.
32-channel coil, MB=2. TL: ADC image. TR: tensor trace image. BL: FA map. BR: Color FA image.


No problems in the 32-channel coil data. Nor, it turns out, for data from the 12-channel coil:

12-channel coil, MB=2. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.
12-channel coil, MB=2. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.
12-channel coil, MB=2. TL: ADC image. TR: tensor trace image. BL: FA map. BR: Color FA image.


This result is encouraging for the use of MB=2 on the 12-channel coil. At the end I'll show color FA images that suggest it might be permissible to go as far as MB=3 on the 12-channel coil, but you would definitely want to run your own phantom tests first. Then, in brains, I would pay particularly close attention to regions of high magnetic field heterogeneity - the usual suspects: frontal lobes, temporal lobes, deep brain - because of the suggestion above, in the phantom data, of a shim-related residual aliasing for MB=3.

How did the neck coil do?

4-channel neck coil, MB=2. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.
4-channel neck coil, MB=2. TL: Single band reference image. TR: first b=0 image (21st acquisition in the series). BL: first b=1000 image. BR: First b=2000 image.
4-channel neck coil, MB=2. TL: ADC image. TR: tensor trace image. BL: FA map. BR: Color FA image.


Overall, not too bad. There are no artifacts visible to the naked eye, which is encouraging for those of us who might want to (or be forced to) use the neck coil for non-human brains. But there is a clear degradation of signal-to-noise overall, compared to the other two coils. This three-way comparison of the color FA maps shows it pretty well:



Although the slices don't match exactly (sorry!), it's still possible to discern the noisier data produced in going from the 32-channel to the 12-channel coil, and from the 12-channel to the neck coil. There is a more mottled - less smooth - appearance as you move left to right. It's not a surprise to see SNR degrade in the order shown. Not only does the number of receive channels decrease, but the "filling factor" - the amount of coil filled with brain versus space - also decreases from 32ch to 12ch to neck coil. Still, there are no discrete artifacts, suggesting that MB=2 is acceptable on the 12-channel coil. Likewise, the neck coil produces reasonable-looking results on a real brain in spite of the subtle residual aliasing artifacts we saw in the phantom data. If the lower SNR was a problem, a simple tactic would be to increase the voxel size from (2 mm)³ to, say, (2.2 mm)³ for a 33% SNR boost.
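Where does the 33% come from? Image SNR scales approximately linearly with voxel volume, all else being equal, so the gain is just the ratio of the two voxel volumes. A one-line check:

```python
# SNR scales (approximately) with voxel volume, other acquisition parameters fixed.
v1 = 2.0 ** 3      # (2 mm)^3 voxel volume
v2 = 2.2 ** 3      # (2.2 mm)^3 voxel volume
print(v2 / v1)     # ~1.33, i.e. roughly a 33% SNR boost
```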

Is the SNR worse for MB=2 than it would be for un-accelerated DWI? That's a test I didn't do, but will. It would be an interesting comparison, because the total acquisition time would be above 12 minutes without MB, rendering the entire experiment more susceptible to motion. It is entirely possible that the SNR would end up lower without MB! Except that an earlier test I ran, comparing MB=2 and MB=3 on the 12-channel and 32-channel coils only, suggests that MB does indeed reduce the SNR, even as it shortens the total acquisition time:





In sum, then, I think there is considerable scope to use MB for diffusion imaging on coils with limited spatial information or a limited number of channels. The MB factor limits for diffusion imaging appear to be quite similar to those I found for fMRI applications in the last post.

__________________________


Notes:

1.  SAR also scales quadratically with B₀, so don't assume that anything you can do at 3 T will be approximately twice as demanding if run at 7 T. The actual demands will be (7x7)/(3x3) = 5.4 times greater! You might be able to use your 7 T to reheat your coffee between subject scans!




Restraining the 32-channel coil


There has been a move towards custom head restraint in recent years. These devices are tailored to fit the subject in such a way that any movement of the head can be transmitted to the coil. It is therefore imperative to make sure that the RF coil is also well restrained.

On Siemens Trio and Prisma scanners, the 32-channel head coil is a special case. It was designed independently of the standard head coils. Restraint on the bed is thus a bit of an afterthought. Sticky pads on the base of the coil are designed to prevent movement through friction, but there are gaps on all four sides and no specific mechanism - slots, grooves, etc. - to lock the coil into a particular position. On my Trio, I was in the habit of putting the 32-channel coil all the way back to the frame of the bed, assuming that the most likely direction of motion from a subject would be backwards. Problem solved, right? No. By putting the coil all the way back when using custom head restraint, I actually put stress on the front two coil cables, and this led to intermittent receive RF artifacts. A more refined fix was necessary.

My engineer built a simple frame (see photos below) that fits snugly into the rear portion of the bed frame and forces the coil onto protrusions that hold the standard (12-channel) coils properly. It also shims out the left and right gaps so there is no chance of side to side motion, either. With this device in place, the coil can only go one way: up. 

There has been some debate in the literature about the utility of custom head restraint for motion mitigation, with one group finding benefits while another found it made things worse. I note that both groups were using 32-channel coils on a Prisma, so proper head coil restraint may be one reason for the different outcomes. I am now working on a fix for Prisma scanners and will do a separate post on the solution once it's been tested. (ETA April-May.) Until then, if you use a 32-channel coil on any Siemens scanner, my advice is to use additional restraint and make sure your coil is in a reliable, stable position.

 

The coil restraint shim is put into position before the 32-channel coil.


Coil restraint shim in position.







A core curriculum for fMRI?


Blimey. Judging by the reaction to my earlier Tweet, there's something to be done here. And it makes sense, because fMRI has been around for thirty years yet seems to be as ad hoc today as it was at its genesis. We're half the age of modern electronic computing, twice as old as Facebook and Twitter. FMRI predates the NCSA Mosaic web browser, for goodness' sake. Let that sink in for a minute.

This is something I've been pondering for a long time. In 2010 I thought I might write a textbook to capture what I saw as the fundamental knowledge that every fMRIer would need to know to set up and run an experiment. I started writing this blog as a way to draft chapters for the textbook. The textbook idea got, uh, shelved as early as 2011, once I realized that a blog is better-suited to delivering content on a subject that is inherently dynamic. Try embedding a movie in a textbook! But then the blog, in its original guise, sorta ran out of steam a few years ago, too. And the main reason why I ran out of steam is directly relevant to this post. I was getting into areas about which I know little to nothing, in an attempt to be able to write blog posts relevant to fMRI research. Take the post series on "fMRI data modulators," my rather clunky term for "that which causes your fMRI time series to vary in ways you probably don't want." I was having to try to teach myself respiratory and vascular physiology from scratch. The last time I sat in a formal biology class I was 13. Recently, I've encountered machine learning, glucose metabolism, vascular anatomy and a slew of other areas about which I know almost nothing. Where does one start????

On the assumption that I'm not alone, it would seem there's a trade to be made. If I can teach k-space to a psychologist, surely a statistician can teach an anatomist about normal distributions, a biochemist can teach an electrical engineer about the TCA cycle, and so on. With very few exceptions, all of us could really use better foundational knowledge somewhere. We are all impostors!

No doubt your first reaction is "Sounds lovely, but nobody has the time!" I respectfully disagree. You are likely already spending the time. My suggestion is to determine whether there might be a more efficient way for you to spend your time, by joining a pool of like-minded "teachers" who will cover the things you can't or won't cover. So, here is a throwaway list of things to consider before you pass judgment:

  • We are all trying to do/learn/teach the same things! There ought to be a more efficient way to do it.
  • The core concepts needed to understand and run fMRI experiments change relatively slowly. The shelf life of the fundamentals should last a decade or more. Updates can be infrequent.
  • Most of us have limited teaching resources. Few, if any institutions can cover all the core areas well.
  • My students become your postdocs. Wouldn’t you want them to arrive with a solid base?
  • Today’s students are tomorrow’s professors. If we are to improve teaching overall, we have to start at the bottom, not at the top.
  • A lot of topical problems (including poor replication, double-dipping, motion sensitivity, physiologic nuisance fluctuations) could be reduced at source by people setting up and executing better experiments through deeper knowledge. Crap in, crap out.
  • We are all super busy, yet the effort to contribute to a distributed syllabus could be a wash, perhaps even a net reduction, because you won't have to BS your way through stuff you don't really understand yourself.
  • I want to learn, too! It’s hard to determine an efficient path through new areas! I need a guide.

 

That just leaves the final step: doing it. Until he regrets his offer, Pradeep Raamana has generously offered use of his Quality Conversations forum to commence organizing efforts. I envision a first meeting at which we attempt to define all the main areas that comprise a "core syllabus" for fMRI. This would include, at the very least, NMR physics, MRI physics, various flavors of physiology, some biochemistry, neuroanatomy, basic statistics, machine learning, experimental design and models, scanner design, etc. If we can identify 6-8 umbrella areas then I'd look to create teams for each who would actually determine what they consider to be core, or fundamental, to their domain. Most likely, it's the stuff with a very long shelf life. We're not trying to be topical, the goal is to give everyone practicing fMRI a basic common framework. We want to define the equivalents of the Periodic Table in chemistry, Newtonian mechanics in physics, eukaryotic cell structure in biology, etc.

Doable? Drop your thoughts below.


Core curriculum: An introduction


After much delay, I am finally going to start developing the core curriculum I suggested in December 2021. At that time, I imagined recruiting a group of 10-15 domain experts to provide the bulk of material under each separate discipline. That might have worked. Indeed, it could still work if an appropriate group such as the OHBM education committee decides to have a go. But I'm going to try something different. To borrow a phrase from blockchain folks, I want to be permissionless. I'm going to try to collate publicly available material myself, with occasional assistance from others if and when I get proper stuck. Trying to do it all myself should provide me with an interesting set of learning experiences, I hope, and it should also help guarantee that anyone, anywhere with access to YouTube can participate.

So, how's this gonna go? Not sure - it's an experiment. I have the following main disciplines listed and as of now I plan on tackling them in this order (although I may well start on some of the later ones before finishing the earlier ones). I'm just gonna start and see what happens. I will aim for one post a week, equivalent to 1-2 hours of learning. As I go, I will do my best to organize the collection - for example, all will have Core curriculum somewhere in the title, plus appropriate labels - and once there are enough of them I'll create a main page with links; a virtual table of contents.

Likely major themes, in likely order:

  • How to learn from videos
  • Mathematics
  • Physics
  • Engineering
  • Biology
  • Biophysics
  • Image processing & analysis
  • Statistics
  • Psychology
  • Experimental design
  • Practical issues

Why this order? The logic is to try to build concepts on concepts. It's hard to understand most important engineering concepts without a decent understanding of some physics, which itself requires some decent understanding of certain mathematics, and so on. And, as noted in my Dec 2021 post, the goal here is to cover material that is non-volatile over decades. It's about the fundamental concepts, not the state-of-the-art. 

Right, enough preamble. Time to get going! 

----

 

 

Infrequently asked questions:

Q: Where's your Twitter?  A: Gawn, all gawn. Got X'd out.

Q: Can we comment or make suggestions?  A: Yup. I'll do my best to answer comments to the posts, and my email still works.

Q: What do you mean by "non-volatile over decades?"  A: I'm taking my inspiration from the established sciences. Consider chemistry. Any chemist trained in a university anywhere in the world understands the Periodic Table and why the first row transition elements are different from the noble gases. They also understand carbon valence, pH, catalysis and hopefully some thermodynamics. These subjects are all fundamental to the field of chemistry and are unchanged whether they are learned in England, Sri Lanka or Venezuela. They also haven't changed fundamentally since I learned about them in the 1980s. 

Q: Why Blogger and not Substack or some newer platform?  A: Inertia. There's a dozen years of history on this site and a lot of it still applies. Indeed, I hope some of it will be getting re-used in the core curriculum! 

Q: Are you going to go back to more topical tips?  A: I don't have plans to, but if there's something important to cover then I may. However, I won't be going back to writing the series on fMRI artifacts or physiological confounds, at least not at this time. I'm focused on the fundamentals right now. Seeing way too many un(der)prepared folk still coming into neuroimaging.


Core curriculum: How to learn from videos


 

Make coffee, fire up YouTube, click, watch, go about your day. Not so fast! To actually learn the material you'll see, you will need a minimum of the lecture itself, some sort of reading around the lecture (which could be reviewing a transcript or supporting documents), and then some questions to answer on that material. So, as this excellent didactic lecture from an anesthesiologist makes clear, questioning is key:

 

  https://www.youtube.com/watch?v=d7IPiNE4_QE

 

I don't have banks of questions ready to pepper you with at the end of each video, I'm afraid, although I will try to come up with a few questions as homework.

If you want to take it to the pro level, try to explain what you've just learned to a novice. Nothing makes you learn something like having to teach it:


  https://www.youtube.com/watch?v=_f-qkGJBPts

 

Nobody to discuss it with or teach it to? Try preparing a one or two slide summary as if you are about to give a presentation on it. Practice presenting the summary just like you would any other presentation. Dry runs are almost as good as the real thing.

A major benefit of using videos on YouTube is that you can stop and rewind as much as you like! (If you didn't know, the left and right arrow keys take you back or forward 5 seconds in YouTube videos. Saves you from the imprecision of the progress bar.) You can watch the video then listen to the words, then watch again, as you like.

Also consider taking notes as you go. No need to worry about missing something. Just pause and/or rewind as needed. One of my better high school teachers used to berate us if we claimed to be studying without a pen in hand. He claimed reading alone was almost useless. We had to read and write to learn. Perhaps you have some tips to share in the comments. Maybe even a link to a good video on how to learn from videos:

 

https://www.youtube.com/watch?v=fRo26gpgvV4

 

Whatever you do, set up a system for yourself and don't just be a passive viewer.

________________


HOMEWORK: Some people are of the opinion that taking notes during a lecture is a bad idea. I reviewed at least one video telling me as much. And yet 99% of my undergraduate classes were exactly that: someone droned on at the front, writing on a chalk board (no dry erase back in them days!), while we scribbled as fast as we could. For a technical subject like neuroimaging (or chemistry), what is a major benefit of writing notes during a lecture? What is a potential cost of writing notes instead of just listening and perhaps trying to summarize afterwards in a debrief?


Core curriculum: Mathematics I - Linear algebra


 

What is linear algebra? To get us going, I'm going to use the excellent lecture series by 3Blue1Brown and do my best to add some MRI-related questions after each video. Hopefully the connections won't be too cryptic. Don't worry if you can't answer my questions. It's more important that you understand the lectures. No doubt you'll find other material on YouTube and web pages to clarify things.

Let's start with a couple of definitions. While you'll find many examples online, for our purposes we can assume that a linear system is one where the size of the output or outputs scales in proportion to the input or inputs. The take-home pay of a worker paid an hourly rate is linear. They might receive their base amount, say 40 hours per week, plus some amount of overtime at twice their hourly rate. The total is still a linear combination of the base and overtime amounts.

Non-linear systems don't have this simple proportionality. Gravity is the classic physics example. The strength of the interaction between two massive objects changes as the reciprocal of the squared distance between them, that is, as 1/r². Finding yourself dangling ten meters above the earth is very different from finding yourself ten meters further from the earth when you're already at an altitude of 1000 km. In the first case you are about a second away from impacting the ground. In the second case you are in orbit, and your more immediate health concerns are lack of oxygen and your temperature.
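If you prefer to see the distinction in code, here is a toy sketch (the function names and numbers are mine, purely for illustration): doubling the input of a linear system doubles the output, while the 1/r² system does something else entirely.

```python
# A toy check of linearity: doubling the input should double the output.
# (Function names here are illustrative, not from any MRI software.)
def pay(hours, rate=20.0):          # linear: output scales with input
    return rate * hours

def gravity(r, GMm=1.0):            # non-linear: 1/r^2 dependence
    return GMm / r**2

print(pay(80) / pay(40))            # 2.0  -> linear
print(gravity(20) / gravity(10))    # 0.25 -> not 2.0, so non-linear
```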

And what about the term algebra? It's just fancy speak for using symbols to represent the relationships between things that vary. We're going to be interested in changes at different positions in space - points in an image - and so we shall eventually use matrices to perform linear algebra. But we have to build up to a matrix from its skinnier cousin, the vector.


1. Vectors: Essence of linear algebra

 


Q: We will use both a physicist's and a computer scientist's view of vectors at different points in the fMRI process. Given what you know today, can you guess where these different viewpoints might come up? Hint: fMRI is based on MRI, which is a physical measurement technique, while fMRI is typically the analysis of a time series of a certain type of dynamic MRI scan.

 

Q: Changes of basis are quite common in MRI. Even the way we usually label image axes involves a change of basis. The magnet bore direction is labeled the z-axis, while left-to-right is the x-axis and up-down is the y-axis. We refer to this assignment as the lab (or magnet) frame of reference. Now consider an axial MR image of a person's brain. An axial slice lies in the x-y plane in the magnet basis (or lab frame if you prefer). Yet we don't generally label the image with (x,y) dimensions. Instead we use (L-R, A-P) where L-R is left-to-right and A-P means anterior-to-posterior. This is an anatomical basis. How might an anatomical basis be more useful than using a magnet basis in MRI?


3. Linear transformations and matrices:

 


Q: We usually label images using a basis (or reference frame) related to the subject's anatomy, i.e. with the (orthogonal) axes labeled head-to-foot (HF), left-to-right (LR) and anterior-posterior (AP). This means if a subject's head isn't perfectly straight in the magnet - let's say, the head is rotated 20 degrees to the left - the brain still appears straight in the 2D image. But here's the thing. The MRI hardware is controlled using the (x,y,z) "lab" reference frame. The anatomical and lab bases can be related to each other through a rotation matrix. Can you write down what a rotation matrix might look like that relates the subject's anatomical reference frame to the scanner's lab (x,y,z) reference frame?
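If you'd like a starting point - a hint rather than the full answer, and my own toy example rather than anything from the video - here is the familiar rotation by an angle theta in a 2D plane, which you can extend to the third dimension yourself:

```python
import numpy as np

# Hint, not the full answer: a rotation by angle theta in a 2D plane.
# For a head rotated 20 degrees about the magnet z-axis, the (x, y)
# coordinates transform like this; the z coordinate is unchanged.
theta = np.deg2rad(20)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

anat_vec = np.array([0.0, 1.0])     # e.g. the anterior-posterior axis
print(R @ anat_vec)                 # where that axis points in the lab frame
```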

________________



Core curriculum: Mathematics II - Linear algebra (cont.)



Continuing the series on linear algebra using the lectures from 3Blue1Brown, we are getting into some of the operations that will become mainstays of fMRI processing later on. It's entirely possible to do the processing steps in rote fashion as an fMRI practitioner, but understanding the foundations should help you recognize the limits of different approaches.


4. Matrix multiplication as composition

In this video we see how to treat more than one transformation on a space, and how the order of transformations is important.

 



Q: While brains come in all shapes and sizes, we often seek to interpret neuroimaging results in some sort of "average brain" space, or template. We need to account for the variable position and size of anatomical structures. However, we also have the variability of where that brain was located in the scanner, e.g. because of different amounts and types of padding, operator error, and so on. When do you think it makes the most sense to correct for translations and rotations in the scanner: before or after trying to put individual brain anatomy into an "average brain" space? Or does it not matter?
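Whichever answer you prefer, it's easy to convince yourself that the order of transformations matters in general. A quick numpy check, using a toy 2D rotation and shear of my own:

```python
import numpy as np

# Order matters: rotate-then-shear is not the same as shear-then-rotate.
rot90 = np.array([[0, -1],
                  [1,  0]])          # 90-degree rotation
shear = np.array([[1, 1],
                  [0, 1]])           # horizontal shear

print(shear @ rot90)   # rotate first, then shear
print(rot90 @ shear)   # shear first, then rotate - a different matrix
```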


 5. Three-dimensional linear transformations

 Now we're going to move on from 2D to 3D spaces. Same basic rationale, just more numbers to track!

 


6. The determinant 

 



7. Inverse matrices, column space and null space

 


 

Perhaps it's not fully clear why we might need the inverse matrix. It turns out to be the way to achieve the equivalent of division using matrices. To cement this insight, let's look at the concept of an inverse matrix for solving an equation without division. Leaving aside the slightly goofy intro, it's a useful tutorial on the mechanics of determining an inverse matrix.
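Here is a minimal numpy sketch of the same idea - a toy 2x2 system of my own, solved by multiplying both sides by the inverse of the coefficient matrix:

```python
import numpy as np

# Solving A x = b "without division": multiply both sides by the inverse of A.
# (In practice np.linalg.solve is preferred over forming the inverse
# explicitly, but the idea is the same.)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.inv(A) @ b
print(x)                       # [1., 3.]
print(np.allclose(A @ x, b))   # True
```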



________________



 



Core curriculum - Mathematics: Linear algebra III


 

Now we start to think about transformations between dimensions, e.g. taking a 2D vector into a 3D space. Non-square matrices come up frequently in engineering and research applications, including fMRI analysis, so you'll want a good understanding of their meaning. 

 

 A8. Non-square matrices

Let's look at a simple physical interpretation of changing the number of dimensions.



We previously saw how to invert a square matrix. But how do we invert a non-square matrix?
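One common answer - and I'm jumping ahead of the video here - is the Moore-Penrose pseudoinverse, which returns the least-squares "best" solution when an exact inverse doesn't exist. A minimal numpy sketch with a toy overdetermined system of my own:

```python
import numpy as np

# An overdetermined system: 3 equations, 2 unknowns, so A is non-square.
# The pseudoinverse gives the least-squares solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.1, 1.9, 3.2])

x = np.linalg.pinv(A) @ b
print(x)                  # intercept and slope of the least-squares fit
```

Incidentally, this least-squares machinery is essentially what sits underneath fitting a general linear model to an fMRI time series, which is one reason non-square matrices will keep coming up.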



 

________________



Core curriculum - Mathematics: Linear algebra IV


 

Before getting back to the lectures from 3Blue1Brown, try this part review, part preview:



Now let's get back into the meaning with a little more detail.

 

A9. The dot (or scalar) product 

The dot product is a way to estimate how much two vectors interact in a common dimension. If the vectors are orthogonal to each other, they don't interact in a common dimension so their dot product is zero. This is like asking how much north-south movement is involved in an east-west heading: none. But if two vectors are perfectly parallel then this is equivalent to the two vectors lying on the number line and we can use our standard (scalar) multiplication rules. In between, we use a little trigonometry to determine their (dot) product.
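To make the geometry concrete, a quick numpy check using toy vectors of my own:

```python
import numpy as np

# The dot product of orthogonal vectors is zero; for parallel vectors it is
# just the product of their lengths.
north = np.array([0.0, 1.0])
east = np.array([1.0, 0.0])
northeast = np.array([1.0, 1.0]) / np.sqrt(2)

print(np.dot(north, east))        # 0.0 - no shared direction
print(np.dot(north, north))       # 1.0 - fully shared
print(np.dot(north, northeast))   # ~0.707 = cos(45 degrees)
```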

 


Still lacking an intuition? This excellent summary from Better Explained (slogan: "Learn Right, Not Rote") should do the trick.


A10. The cross (or vector) product

Both the dot and cross products affect dimensionality. With the dot product, we find how much two vectors interact in one dimension. The cross product of two vectors is perpendicular to them both, telling us how much rotation arises in a third dimension.





A useful real-world example of the cross product is computing the torque vector. Torque is the rotating force generated by pulling or pushing on a lever, such as a wrench or a bicycle crank. The lever moves in one plane but produces a rotation about an axis orthogonal to that plane.
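Here is the torque example as a minimal numpy sketch (arbitrary toy numbers of my own):

```python
import numpy as np

# Torque = r x F: push on a lever lying along x with a force along y and the
# resulting torque points along z, perpendicular to both.
r = np.array([0.2, 0.0, 0.0])    # 20 cm wrench along x
F = np.array([0.0, 50.0, 0.0])   # 50 N force along y

torque = np.cross(r, F)
print(torque)                    # [ 0.  0. 10.] - 10 N*m about the z-axis
```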

 

 

Torque is also fundamental to the origins of the MRI signal. We will encounter it later in the physics section. Can you take a guess how torque might be relevant to the MRI signal? Hint: it has to do with the interaction of a nuclear magnet (the protons in H atoms) with an applied magnetic field.

This article from Cuemath covers the rules for computing dot and cross products. And here are a couple of useful visualizations:

 


 

________________



 

 


Core curriculum - Mathematics: Linear algebra V


 

With some understanding of basic matrix manipulations, we're ready to begin using matrices to solve systems of linear equations. In this post, you'll learn a few standard tools for solving small systems - systems defined by a small number of equations - by hand. Naturally, the larger systems found in fMRI will be solved by computer, but you should understand what's going on when you push the buttons.


A11. Elementary row operations and elimination

 
This is just your standard algebraic manipulation for solving multiple simultaneous equations - e.g. dividing both sides of an equation by some constant to simplify it - but with the equations represented as matrices:

 


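You can always check your by-hand eliminations numerically. Here is a minimal sketch on a toy 2x2 system of my own, applying one row operation at a time to the augmented matrix:

```python
import numpy as np

# Gaussian elimination by hand on the augmented matrix for:
#   2x +  y = 5
#    x + 3y = 10
M = np.array([[2.0, 1.0,  5.0],
              [1.0, 3.0, 10.0]])

M[1] = M[1] - 0.5 * M[0]   # eliminate x from row 2:   [0, 2.5, 7.5]
M[1] = M[1] / 2.5          # scale row 2:              [0, 1, 3]  -> y = 3
M[0] = M[0] - 1.0 * M[1]   # eliminate y from row 1:   [2, 0, 2]
M[0] = M[0] / 2.0          # scale row 1:              [1, 0, 1]  -> x = 1
print(M)
```
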
A12. Cramer's Rule for solving small linear systems

According to Wikipedia:

In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand sides of the equations.
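In code, the rule looks like this - the same kind of toy 2x2 system, with b swapped into one column of A at a time:

```python
import numpy as np

# Cramer's rule for:
#   2x +  y = 5
#    x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

Ax = A.copy(); Ax[:, 0] = b      # b replaces the x column
Ay = A.copy(); Ay[:, 1] = b      # b replaces the y column

x = np.linalg.det(Ax) / np.linalg.det(A)
y = np.linalg.det(Ay) / np.linalg.det(A)
print(x, y)                      # 1.0 3.0
```
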

 



________________




Core curriculum - Mathematics: Linear algebra VI


 

A13. Eigenvectors and Eigenvalues

Let's end this section on linear algebra with a brief exploration of eigenvectors and their eigenvalues. An eigenvector is simply one that is unchanged by a linear transformation except for being scaled by some constant. That constant factor (a scalar) is called the eigenvector's eigenvalue. If the eigenvalue is negative then the vector is flipped in direction as well as scaled.

 Curious about the terminology? Eigen means "proper" or "characteristic" in German. So if you're struggling to understand or remember what eigenvectors are all about, perhaps it helps to rename them "characteristic vectors" instead.
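If you want to check your by-hand answers numerically as you go, numpy will find eigenvalues and eigenvectors for you. A toy 2x2 example of my own:

```python
import numpy as np

# np.linalg.eig returns (eigenvalues, eigenvectors); eigenvalues here are 3 and 2.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

vals, vecs = np.linalg.eig(A)
v = vecs[:, 0]                         # eigenvector paired with vals[0]
lam = vals[0]
print(np.allclose(A @ v, lam * v))     # True: A only rescales the eigenvector
```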

Here's a nice introduction to the concepts. Pay close attention to the symmetry arguments. It turns out eigenvectors represent things like axes of rotational symmetry:

 

 

And with some of the insights under your belt, here's a tutorial on the mechanics of finding eigenvalues and eigenvectors:

 


________________






Coffee Break with practiCal fMRI


 A new podcast on YouTube


We all know the best science at a conference happens either during the coffee breaks or in the pub afterwards. This being the case, practiCal fMRI and a guest sit down for coffee (or something stronger) to discuss some aspect of functional neuroimaging in what we hope is an illuminating, honest fashion. It's not a formal presentation. It's not even vaguely polished. It’s simply a frank, open discussion like you might overhear during a conference coffee break.

In the inaugural Coffee Break, I sit down with Ravi Menon to discuss two recent papers refuting the existence of a fast neuronal response named DIANA that was proposed in 2022. Ravi was a co-author on one of the two refutations. (The other comes from the lab of Alan Jasanoff at MIT.) We then digress into a brief discussion about the glymphatic system and sleep, and finally some other bits and pieces of shared interest. I've known Ravi for three decades and it's been a couple of years since we had a good natter, so we actually chatted on for another hour after I stopped recording. Sorry you don't get to eavesdrop on that conversation. It was all science, zero gossip and the subject of expensive Japanese whisky versus Scotch and bourbon did not feature, honest guv.

 


All the links to the papers and some items mentioned in our discussion can be found in the description under the video on YouTube. 

What's next for Coffee Break? I have a fairly long list of subject matter and potential guests. I'm hoping to follow some sort of slightly meandering theme, but no promises. I'm also hoping to get new episodes out about once every couple of weeks. But again, no promises.

(PS The series of posts on the core fMRI syllabus will resume shortly with a new branch on biology, starting with basic cell biology.)

______________

Core curriculum - Cell biology: taxonomy


 

Most of the biology we need to learn can be treated as orthogonal to the mathematics, whereas the mathematics underlies all the physics and engineering to come. As a change of pace, then, I'm going to start covering some of the biology so I can jump back and forth between two separate tracks. One track will involve Mathematics, then Physics, then Engineering; the other will cover Cell Biology, Anatomy, Physiology and then Biochemistry.

 

Let's begin with a simple overview of cell structure:

 https://www.youtube.com/watch?v=0xe1s65IH0w

The owner prohibits embedding this video in other media so you'll have to click through the link to watch.


Next, a little more detail on what's in a typical mammalian cell:


All well and good, but we are primarily interested in the types of cells found in neural tissue, whether central nervous system (CNS) or peripheral nervous system (PNS):


A little more taxonomy before we get into the details of neurons and astrocytes. In this video, we start to encounter the chemical and electrical signaling properties in cells, something we will get into in more detail in a later post. Still, it's timely to introduce the concepts.


As we move towards the neural underpinnings of fMRI signals, we need to know a lot more about neurons and astrocytes. Let's do neurons first.


While this next video repeats a lot of what you've already seen, there is enough unique information to make it worth watching.


Finally, a little more taxonomy that relates types of neurons to parts of the body, something that could be very important for fMRI when we are considering an entire organism.


To conclude this introduction to cell biology and types of neural cells, let's look at glial cells in more detail.



 Another simple introduction, to reinforce the main points:


And a nice review to wrap up.


We will look far more closely at astrocytes in a later video, once we've learned more about blood flow and control. For now, just remember that those astrocyte end feet are going to be extremely important for the neurovascular origin of fMRI signals.

 

That will do for this primer. The next post in this series will concern the resting and action potentials, signaling and neurotransmission.

_________________



Can we separate real and apparent motion in QC of fMRI data?


 

A few years ago, Jo Etzel and I got into a brief but useful investigation of the effects of apparent head motion in fMRI data collected with SMS-EPI. The shorter TR (and smaller voxels) afforded by SMS-EPI generated a spiky appearance in the six motion parameters (three translations, three rotations) produced by a rigid body realignment algorithm for motion correction, such as MCFLIRT in FSL. The apparent head motion is caused by magnetic susceptibility variations of the subject's chest as he/she breathes, leading to a change in the magnetic field across the head which, in turn, adds a varying phase to the phase-encoded axis of the EPI. This varying phase then manifests as a translation in the phase-encoded axis. It's not a real motion, it's pseudo-motion, but unfortunately it is a real image translation that adds to any real head motion. I should emphasize here that this additive apparent head motion arises in conventional multi-slice EPI, too, but it's generally only when the TR gets short, as is often the case with SMS-EPI, that the apparent head motion can be visualized easily (as a spiky, relatively high frequency fluctuation in the six motion parameter traces). In EPI sampled at a conventional TR of 2-3 sec, there are only a small handful of data points (volumes) per breath for an average breathing rate of 12-16 breaths/minute, and this leads to aliasing of most of the apparent head motion. It may still be possible to see the spiky respiration frequency riding on the six motion parameters, but it's not as consistently visible as it is when TR is much less than 2 seconds.

Once we'd satisfied ourselves we'd understood the problem fully, I confess I let the matter drop. After all, we have tools like MCFLIRT that try to apply a correction to all sources of head motion simultaneously, whether real or apparent. But now I'm wondering if we might be able to evaluate the real and apparent motion contributions separately, with a view to devising improved QC measures that can emphasize real head motion over the apparent head motion when it comes to making decisions on things like data scrubbing. Jo has been dealing with the appropriate framewise displacement (FD) threshold to use when including or excluding individual volumes. (See also this paper.)

Let's review one of the motion traces from my second 2016 blog post on this issue:

These traces come from axial SMS-EPI with SMS factor (aka MB factor) of 6. The x axes are in seconds, corresponding to TR = 1 sec. (The phase-encoded axis is anterior-posterior, which is the magnet Y direction.) On the left is a subject restrained with only foam; on the right, the same subject's head is restrained with a printed head case. During each run the subject was asked to take a deep breath and sigh on exhale every 30 seconds or so. We clearly see the deep breath-then-sigh episodes in both traces, regardless of the type of head restraint used. Yet it is also clear the apparent head motion, which is the high frequency ripple, dominates the Y, Z and roll traces on the left plot. On the right plot, the dominant effect of apparent head motion manifests in the Y trace, with a much reduced effect in the roll axis. Already we are seeing a slight distinction between the translations and rotations for apparent head motion. It looks like apparent head motion contributes more to translations than rotations, which makes sense given the physical origin of the problem. In which case, can we assume that by extension real head motion will dominate the rotations?

For now, let's assume that the deep breath-then-exhale episodes are producing considerable real head motion, in addition to the large apparent head motion spike from exaggerated chest movement. The left plot above shows that pitch, yaw and roll all characterize the six deep breaths readily. They are also visible in Z and X, but with considerably reduced magnitude. There's no clear effect in the Y trace which is dominated by the aforementioned apparent head motion. So far so good! When the head can actually move in the foam restraint, we have clear biases towards rotations for real head motion and translations for apparent head motion. 

What about the right plot? Real head motion is far harder to achieve because of the printed head case restraint. But we assume the apparent head motion has basically the same magnitude because it arises from chest motion, not head motion. So we might think of this condition as a low (or lowest) real motion condition. As with the foam restraint on the left, we again see the Y translations dominated by apparent head motion. The roll axis also displays considerable apparent head motion. And, as for the foam restraint, the roll and pitch axes display something that may be real or apparent head motion for each of the deep breath-then-exhale periods. We can't be sure whether the head (or the entire head case, or even the entire RF coil!) was really moving during each breath, but let's assume it was. If so, then for good mechanical head restraint we have the same rough biases as for foam restraint in our motion traces: real motion dominates rotations, apparent motion manifests mostly as translations.
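One way to put numbers on that ripple is to ask what fraction of each motion parameter's fluctuation power sits near the respiratory frequency. Here's a minimal sketch; the 0.2-0.4 Hz band and the function name are my choices rather than anything standard:

import numpy as np

def respiratory_fraction(params, tr=1.0, band=(0.2, 0.4)):
    """Fraction of each motion parameter's fluctuation power falling in a
    nominal respiratory band. params: (n_volumes, 6) array of realignment
    parameters; the fraction is computed per column, whatever the order."""
    params = params - params.mean(axis=0)              # remove the mean
    freqs = np.fft.rfftfreq(len(params), d=tr)
    power = np.abs(np.fft.rfft(params, axis=0)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum(axis=0) / power[1:].sum(axis=0)   # ignore the DC bin

# Example usage with MCFLIRT output:
# params = np.loadtxt("func_mcf.par")
# print(respiratory_fraction(params, tr=1.0))

Run on the two data sets above, I'd expect the Y translation to show a much larger respiratory fraction than the rotations, which is the bias we're trying to exploit.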

Jo sees a similar distinction between real and apparent head motion in the motion parameter plots of her 2023 blog post. In her top plot, which she suggests is a low real motion condition, the apparent motion dominates the Y and Z translations and the roll trace, exactly as in my example above. Her second plot exhibits considerable real head motion. The apparent head motion is still visible as ripples on the Y and Z translation traces, but now it's clear that the biggest changes arise in the three rotations, and these changes are probably real head motion. Again, we have real motion dominating the rotations while apparent motion manifests more in the translations.

Finally, let's consider Frew et al., who looked at head motion in pediatric subjects. Here's Figure 3 from their paper:


Using framewise displacement (FD), they show a transition from FD dominated by translations to FD dominated by rotations when considering low, medium and high (real) head motion subjects. Rotations and translations are both affected significantly in the medium movement group. Still, the trend here suggests that we might consider rotations alone as an index of real head motion if, as suggested above, apparent head motion contributes mostly to translations.

So, what might we do to separately evaluate real and apparent head motion? This is where you come in. I only have one starting idea, and that's to shift to considering FD using only rotations, rather than rotations and translations, when setting thresholds for the purposes of QC and scrubbing. Based on what I've presented here, we might be able to set a threshold for FD(rotations only) that captures most of the real head motion with a much reduced dependency on the apparent head motion. This measure could help avoid mischaracterizing large apparent head motions as events to reject when they are inherently fixable with MCFLIRT and similar. (Real head motion, by contrast, produces a big spin history effect and likely introduces non-linear distortions into the images, neither of which realignment can fix.) Whether the reverse is true - that is, whether FD(translations only) captures most of the apparent head motion and a reduced contribution from real head motion - I leave as an exercise for another day, but my suspicion is that it is not. Put another way, I think the focus should be on using the rotations to capture and evaluate real head motion. Pooling translations and rotations in measures like FD may be complicating the picture for us.
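To make the idea concrete, here's a minimal sketch of an FD calculation that can be restricted to rotations only. It follows the common convention of converting rotations to millimetres of arc on a 50 mm radius sphere; the function name, the column ordering assumption and the example threshold are mine, for illustration:

import numpy as np

def framewise_displacement(params, radius=50.0, use="all"):
    """Power-style FD from a (n_volumes, 6) array of motion parameters.
    Assumes columns [rot_x, rot_y, rot_z, trans_x, trans_y, trans_z] with
    rotations in radians and translations in mm (MCFLIRT .par convention)."""
    d = np.abs(np.diff(params, axis=0))            # frame-to-frame changes
    rot_mm = d[:, :3] * radius                     # arc length on a 50 mm sphere
    trans_mm = d[:, 3:]
    if use == "rotations":
        fd = rot_mm.sum(axis=1)
    elif use == "translations":
        fd = trans_mm.sum(axis=1)
    else:
        fd = rot_mm.sum(axis=1) + trans_mm.sum(axis=1)
    return np.concatenate([[0.0], fd])             # FD is zero for the first volume

# Hypothetical usage: scrub on rotation-only FD to de-emphasize pseudo-motion
# params = np.loadtxt("func_mcf.par")
# fd_rot = framewise_displacement(params, use="rotations")
# keep = fd_rot < 0.2                              # threshold (mm) chosen for illustration only

Comparing FD(rotations only) against conventional FD on runs with a deliberate breathing challenge, like the deep breath-and-sigh task above, would be a quick way to test whether the rotation-only measure really does preferentially track real head motion.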

_________________________


Core curriculum - Cell biology: cell membranes and the resting potential


A lot of the important functions of neurons (and glia) happen at their cell membranes. In the case of neurons, in addition to the membrane around the cell body (the soma), we also need to understand what happens along the neuronal processes (aka neurites): the dendrites (inputs) and the neuron's axon (the output). 

Let's begin this section by reviewing the structure of the cell membrane.

 


 

Transport across the cell membrane was introduced above. There are several different mechanisms of membrane transport, each supporting particular behaviors of a cell.



The sodium-potassium pump is one of the most important membrane transport mechanisms for neural signaling. Let's take a closer look.

 



The cell's resting membrane potential was mentioned in the last two videos. The resting potential is an important starting point for understanding neuronal signaling via action potentials. For the last part of this post, we will look in more detail at the origins of the electrical potentials and electrochemical gradients across a cell membrane at rest.
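As a supplement to the videos, here's a minimal sketch of the Nernst and Goldman-Hodgkin-Katz (GHK) equations that underlie the resting potential. The ion concentrations and relative permeabilities below are typical textbook values for a mammalian neuron, not measurements:

import numpy as np

R, T, F = 8.314, 310.0, 96485.0        # gas constant, body temperature (K), Faraday constant

def nernst(conc_out, conc_in, z=1):
    """Equilibrium (Nernst) potential in mV for one ion species."""
    return 1000.0 * (R * T) / (z * F) * np.log(conc_out / conc_in)

def ghk(p, conc_out, conc_in):
    """Goldman-Hodgkin-Katz potential in mV for K+, Na+ and Cl- together.
    The Cl- terms are inverted because of its negative charge."""
    num = p["K"] * conc_out["K"] + p["Na"] * conc_out["Na"] + p["Cl"] * conc_in["Cl"]
    den = p["K"] * conc_in["K"] + p["Na"] * conc_in["Na"] + p["Cl"] * conc_out["Cl"]
    return 1000.0 * (R * T) / F * np.log(num / den)

# Typical textbook concentrations (mM) and relative permeabilities at rest
out = {"K": 5.0, "Na": 145.0, "Cl": 110.0}
inn = {"K": 140.0, "Na": 15.0, "Cl": 10.0}
perm = {"K": 1.0, "Na": 0.05, "Cl": 0.45}

print(f"E_K  = {nernst(out['K'], inn['K']):6.1f} mV")
print(f"E_Na = {nernst(out['Na'], inn['Na']):6.1f} mV")
print(f"Resting potential (GHK) = {ghk(perm, out, inn):6.1f} mV")

With these values the potassium equilibrium potential comes out near -90 mV, sodium near +60 mV, and the combined resting potential around -65 mV. The large inward Na+ gradient and the dominance of K+ permeability at rest are exactly what the action potential exploits, as we'll see in the next post.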






In the next post we can start to look at cell signaling. Specifically, we are most interested in a neuron's action potential, which is the main way neurons communicate with each other.

_________________


Core curriculum - Cell biology: the neuron's action potential


The last post reviewed the origins and properties of the resting membrane potential. We are most interested in the membrane potential of neurons because they have an activated state that leads to signaling between neurons. Signaling from one neuron is achieved via an action potential that travels from the cell body (soma) down its axon to synapses with other neurons. There are several good summary videos available online. Try them all to reinforce your knowledge.






Finally, in this post we get our first real look at synapses, excitatory and inhibitory neurotransmitters, and the graded potentials they produce:


Now that you've seen the electrochemical action potential, in the next couple of posts we can dig more into neuron-neuron signaling, including synapses and the role of chemical neurotransmitters.

_________________


BONUS: a speedy review. All familiar stuff now, right?




Core curriculum - Cell biology: synapses and neurotransmitters


The action potential from one neuron may or may not trigger further action potentials in neurons it connects to via synapses. A typical neuron with its single axon may make thousands of synapses to the dendrites of these "downstream" neurons. The locations of the synapses matter, in the sense that position relative to the downstream neuron's cell body provides a sort of weighted importance to any one synapse, as does the type of synapse. For fMRI we don't need to get too deep into the details of these connections, but we do need a basic understanding of the differences between excitatory and inhibitory connections. For the most part, whether a connection is excitatory or inhibitory is determined by the type of neurotransmitter released at the synapse.
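Before getting to the videos, here's a toy sketch of the basic arithmetic: excitatory inputs (EPSPs) push the post-synaptic membrane potential toward threshold, inhibitory inputs (IPSPs) push it away. The numbers are illustrative only, not a biophysical model:

# Toy summation of post-synaptic potentials toward threshold (values illustrative only)
V_REST = -65.0       # mV, resting membrane potential
THRESHOLD = -55.0    # mV, spike threshold

# Each synaptic event nudges the membrane potential: EPSPs depolarize (+),
# IPSPs hyperpolarize (-). Amplitudes here are arbitrary.
events = [+3.0, +4.0, -2.0, +5.0, -1.0, +4.0]

v = V_REST
for i, psp in enumerate(events, start=1):
    v += psp
    label = "EPSP" if psp > 0 else "IPSP"
    print(f"event {i} ({label}): V = {v:6.1f} mV")
    if v >= THRESHOLD:
        print("threshold reached -> action potential fires")
        break

In a real neuron the summation is both spatial and temporal, and the weight of each input depends on where the synapse sits relative to the soma, as noted above.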

First, let's get an overview of types of synapse and neurotransmitter, and the difference between excitatory and inhibitory neurotransmission:


Next, a little more detail and some context: 


In case it wasn't already clear, here's a nice explanation linking the pre-synaptic neuron's electrical potential to neurotransmitter release at the synapse:


Categorizing any one neurotransmitter as excitatory or inhibitory is a reflection of its usual effect on the electrochemical potential in post-synaptic neurons. The actual effect on any one post-synaptic neuron - whether that neuron is rendered closer to or farther away from its threshold voltage - can depend on the location of the synapse as well as the neurotransmitter(s) released in the synaptic cleft. Still, we can usefully categorize neurotransmitters according to their broadly different functions around the body:


In case you're interested in the structure of these neurotransmitters - perhaps because you are researching the effects of exogenous compounds ("drugs") on brain activity - here's a little more biochemistry:


Most of the videos above have focused on the neurotransmitter in the synaptic cleft. Naturally, the receptors on the post-synaptic neuron are critical to signaling. So let's take a slightly closer look at receptor types: 



And finally, a little more detail on the importance of synaptic location, not just type, in determining the type of action produced by a neural circuit:



That should suffice as a basic introduction to neurotransmission for the bulk of fMRI experiments, where we are looking at the collective effects of millions of neurons and billions of synapses in any given voxel. Additional videos suggested by YouTube should provide good branches for those of you wanting more detail.

At this point, I want to shift to looking at the axon structure and its myelin sheath because this is an important distinction at the level of the fMRI voxel. We will tend to categorize any given voxel as containing mostly white matter (myelinated axons) or mostly gray matter (cell bodies). We will look at these in turn.

_________________


Functional connectivity, ha ha ha.


If you do resting-state fMRI and you do any sort of functional connectivity analysis, you should probably read this new paper from Blaise Frederick:

https://www.nature.com/articles/s41562-024-01908-6

I've been banging the drum on systemic LFOs for some time. Here's another example of how failing to think through the physiology of the whole human can produce misleading changes in so-called FC in fMRI data. That said, I don't think Blaise has the full story here, either. For one thing, the big dips in his Fig 1b suggest that something is being partially offset by the on-resonance adjustment that is conducted automatically at the start of each EPI time series, so I have a residual concern that magnetic susceptibility effects are contributing here somewhere. (Perhaps the magnetic susceptibility effects are what's left to drift higher after RIPTiDe correction, as in Fig 6b, for example.) The point is that not having independent measures of things like arousal, proper models of physiologic noise components like sLFOs, or a full understanding of what's happening in the scanner hardware (including head support) during the experiment can lead to the assumption that things are neural when better explanations are available.

---

Link added on 6/23/2024: Blaise Frederick discussing systemic LFOs on "Coffee Break!"


 

Could MSM be a useful tracer for determining CSF flux in the human brain?


A few years ago I was involved in a project to develop a better chemical shift reference for in vivo MR spectroscopy (Kaiser et al. 2020). As often happens in science, life, logistics and money conspired to change the directions of those involved, and this project was put on the shelf to gather dust. We no longer have either the people or the capabilities to pursue it further. Perhaps someone else would be able to take it on and see whether there are more uses than as a chemical shift reference.

One of the angles we were considering in 2020 was the possibility of using MSM as a tracer for measuring CSF flux in the brain. Various approaches have been developed using MRI, but they are all rather difficult. One involves an intrathecal injection of a gadolinium contrast agent and then looking for signal losses depicting where the Gd contrast diffuses to (Iliff et al. 2013). Negative contrast is always a complication for MRI because signal voids often arise from imperfections in the magnetic field. Another method uses arterial spin labeling (ASL) and long post-labeling delays to assess the amount of water passing from the vascular compartment to the tissue compartment, i.e. through the blood-brain barrier, as an index of what is assumed to represent the inflowing part of the glymphatic system (Gregori et al. 2013, Ohene et al. 2019). These methods have low sensitivity and are highly prone to motion artifacts. A third approach uses low b-value diffusion-weighted imaging to try to differentiate CSF from other water compartments (Harrison et al. 2018). But again the method is inherently sensitive to bulk motion, and it's not entirely clear to me how well the signals represent the CSF-to-interstitial-fluid flux versus other microscopic compartments. So, would MSM simultaneously offer positive contrast and improved sensitivity? And would its clearance give an indication of the CSF flux through brain tissue?

MSM is methylsulfonylmethane, the common name for what a chemist would call DMSO2 (dimethyl sulfone). It is labeled by the FDA as GRAS: "generally recognized as safe." As such, there are few regulations for its use, and so you can find it in everything from dog food to ointments for a bad knee. You may well be consuming it and not have a clue. But the good news is you can buy pills of MSM for your experiments. There's no special permission needed; you can get them at your local pharmacy. (A word of caution: the amount listed on the package may not match what is actually in the pills! Do your own assay!) Then, once you've got this past your IRB, you can give subjects acute or chronic doses and see what happens to the MSM level in the brain.

MSM is a small, polar molecule which probably distributes throughout biological tissues with approximately the same concentration profile as water. The more water content in the tissue, the higher the MSM concentration is likely to be after a few hours. But this is a guess. What we do know is that entry into the brain is rapid. We can see MSM in a brain spectrum within 10 minutes following an oral dose. The MSM signal then remains fairly stable for several hours, which is a property we wanted for our chemical shift reference. 



But what is driving the clearance rate? In our early tests we observed a half-life in normal brain of about 3 days. This was for a single acute dose. In later tests (not included in the 2020 paper) we saw about the same washout time for a single 6 g dose as for a single 2 g dose. We also had a subject (me!) take a 1 g dose every day for 30 days to ensure a steady-state concentration, then observed the washout. Again, a half-life of about 3 days.
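For intuition, here's a minimal sketch of what a first-order (exponential) washout with a 3-day half-life implies for the residual brain MSM signal. The simple clearance model is an assumption for illustration, not something we established:

import numpy as np

T_HALF_DAYS = 3.0                      # approximate half-life observed in normal brain
k = np.log(2) / T_HALF_DAYS            # first-order rate constant (per day)

def remaining_fraction(t_days):
    """Fraction of the initial brain MSM signal left after t_days, assuming
    simple first-order clearance."""
    return np.exp(-k * t_days)

for t in (1, 3, 7, 14):
    print(f"day {t:>2}: {100 * remaining_fraction(t):5.1f} % of the initial signal remains")

Fitting the same model to serial spectra would also give a simple way to compare clearance rates between conditions (sleep deprivation, exercise) or between individuals.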


 

For clearance, we assume the MSM partitions and clears down its concentration gradient. Presumably the MSM distributes into the brain via the blood. Once we stop giving new oral MSM, the blood concentration falls to near zero, and presumably clearance of the MSM from the body then occurs via the kidneys. If the routes out of brain tissue include the blood and perhaps CSF clearance, then what matter are the concentration gradients between brain tissue and blood and, perhaps, between brain tissue and CSF.

This is where the idea of using MSM as a CSF flux (glymphatic system) tracer comes in. If the half-life is around 3 days in normal brain, does the rate of clearance change with sleep deprivation, bouts of vigorous exercise or other challenges to an individual? What about differences between individuals? Do older subjects clear MSM more slowly than younger subjects on average? Women faster than men? Is the density of aquaporin channels a prime determinant of the clearance rate from brain, or is MSM able to diffuse across all membranes at approximately the same rate? And is CSF flux through brain tissue an important determinant of the clearance rate, or incidental to it? We were never able to test these ideas.

As a practical matter, MSM can be observed easily in a 1H MR spectrum. Its chemical shift of pi (3.142 ppm) and sharp line make it easy to fit separately from brain metabolites. We also never tested the ability of chemical shift imaging (CSI) to observe MSM, but there's every reason to think that a CSI method which can reliably image the NAA, creatine and choline singlet peaks will be able to map MSM perfectly well, too.
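As a rough illustration of why the fitting is easy, here's a toy simulation of Lorentzian peaks at typical chemical shifts for creatine, MSM and choline. The positions, linewidths and amplitudes are nominal values chosen for illustration, not fitted data:

import numpy as np

def lorentzian(ppm_axis, center, fwhm, amplitude):
    """Simple Lorentzian lineshape on a ppm axis."""
    hwhm = fwhm / 2.0
    return amplitude * hwhm**2 / ((ppm_axis - center)**2 + hwhm**2)

ppm = np.linspace(2.8, 3.4, 2000)

# Nominal singlet positions (ppm), linewidths (ppm) and arbitrary amplitudes
peaks = {"creatine (CH3)": (3.03, 0.03, 1.0),
         "MSM":            (3.14, 0.015, 0.6),
         "choline":        (3.20, 0.03, 0.8)}

spectrum = sum(lorentzian(ppm, c, w, a) for c, w, a in peaks.values())

for name, (c, w, a) in peaks.items():
    print(f"{name}: {c:.2f} ppm")
# e.g. inspect with matplotlib: plt.plot(ppm, spectrum)

The sharp MSM singlet sits between the creatine and choline resonances, so any routine fitting package that resolves those two peaks should handle MSM without trouble.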

So, there you have it. A free idea for someone to explore and perhaps exploit for the purposes of assessing CSF clearance, sleep, dementias and so on.

_______________