Deconstructing the most sensationalistic recent findings in Human Brain Imaging, Cognitive Neuroscience, and Psychopharmacology




    "Individual risk attitudes are correlated with the grey matter volume in the posterior parietal cortex suggesting existence of an anatomical biomarker for financial risk-attitude," said Dr Tymula.

    This means tolerance of risk "could potentially be measured in billions of existing medical brain scans."1

    -Gray matter matters when measuring risk tolerance

    Let's pretend that scientists have discovered a neural biomarker that could accurately predict a person's propensity to take financial risks in a lottery. Would it be ethical to release this information to policy makers? That seems to be the conclusion of a new paper published in the Journal of Neuroscience (Gilaie-Dotan et al., 2014):
    The results will also provide a simple measurement of risk attitudes that could be easily extracted from abundance of existing medical brain scans, and could potentially provide a characteristic distribution of these attitudes for policy makers.

    If we accept this line of thinking, it's not much of a stretch to imagine that financial institutions, employers, consumer reporting agencies, and dating services could use this information in a discriminatory, preemptive fashion to screen out potentially risky applicants. Or perhaps casinos, lotteries, and predatory lending companies could target these individuals with personalized ads.

    Conversely, investment firms could vie for traders with the largest right posterior parietal cortices, since they would have the highest tolerance for risk.

    Or am I being alarmist about the breach of ethics involved in releasing protected medical information to outside entities? Although the authors subtly deter extrapolation to this invasive scenario by using phrases like "characteristic distribution" and "risk attitudes of populations" (as opposed to risk attitudes of individuals), they're pretty clear about the promise of their gray matter measure to inform policy (Gilaie-Dotan et al., 2014):
    Our finding suggests the existence of a simple biomarker for risk attitude, at least in the midlife [sic] population we examined in the northeastern United States. ...  If generalized to other groups, this finding will also imply that individual risk attitudes could, at least to some extent, be measured in many existing medical brain scans, potentially offering a tool for policy makers seeking to characterize the risk attitudes of populations.

    Now let's all take a step back and evaluate whether this is currently feasible. The short answer is no (in my view, at least).1A

    First, we have to be somewhat skeptical of the study's major conclusion. Voxel-based morphometry (VBM) was used to quantify cortical volume from structural MRIs.2 Gray matter volume in a small chunk of the right posterior parietal cortex (PPC) was the only place in the entire cerebral cortex that correlated with individual attitudes toward financial risk. In humans, right lateralized PPC has been strongly implicated in visuospatial attention.

    Doesn't it seem more plausible that a region like the orbitofrontal cortex (OFC), which has been activated in numerous functional neuroimaging studies of decision making and risk, would show such an association? Studies in primates have demonstrated that economic risk is coded by single neurons in the OFC (O'Neill & Schultz, 2014), and in rats risk preference can be differentiated by OFC neuronal responses (Roitman & Roitman, 2010).

    The authors do cite an extensive literature on the role of parietal neurons in decision making, but fMRI studies have observed effects of risk preference in left PPC, and uncertainty in bilateral PPC (Huettel et al., 2005, 2006).

    But what is the purpose of having a larger gray matter volume in PPC in relation to financial risk attitude? Does it allow for a higher "computational capacity" that can accommodate greater risk tolerance? We don't actually know, as Gilaie-Dotan et al. (2014) explain:
    We do not know precisely how GM volume translates to the neural level. It is possible that volume differences reflect synaptogenesis and dendritic arborization (Kanai and Rees, 2011), but to-date there is no clear evidence of correlation between GM volume measured by VBM and any histological measure, including neuronal density (Eriksson et al., 2009).

    In contrast to the neural correlate of risk attitude, a participant's attitude toward ambiguity was not associated with structural differences anywhere in the cortex (Gilaie-Dotan et al., 2014). How were these attitudes (or preferences) measured? Experimental economics methods were used to estimate individual preferences for risk (uncertainty with known probabilities) and ambiguity (uncertainty with unknown probabilities).

    Participants played a game where they could choose between lotteries that varied in monetary value and in the degree of either risk or ambiguity. In the example trial below, the participant chooses either this option, where they stand a 38% chance of winning $18, or the reference option that offers a 50% chance of winning $5.



    Modified from Fig. 1A (Gilaie-Dotan et al., 2014).


    There were five reward levels ($5, $9.50, $18, $34, and $65), each fully crossed with three probabilities of winning and three levels of ambiguity around the winning probability, as shown below.


    Figure 1 (Levy et al., 2012). Risky and ambiguous stimuli. A) In risky stimuli the red and blue areas of each image are proportional to the number of red and blue chips. Three outcome probabilities were used: 13, 25 and 38%. B) In ambiguous stimuli the central part of the image is obscured with a gray occluder. In the gray area the number of chips of each color is unknown, and thus the probability of drawing a chip of a certain color is not precisely known. Three levels of ambiguity were used, where 25, 50 or 75% of the image is occluded.
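    As a sanity check, the option set implied by this design can be enumerated in a few lines. This is only a sketch of the design as described; the trial repetition counts are not specified here.

```python
# Sketch of the stimulus design described above: five reward levels,
# each crossed with three winning probabilities (risky options) or
# three ambiguity levels (ambiguous options).
rewards = [5.00, 9.50, 18.00, 34.00, 65.00]
win_probabilities = [0.13, 0.25, 0.38]   # risky: probability is known
ambiguity_levels = [0.25, 0.50, 0.75]    # ambiguous: fraction of image occluded

risky_options = [(p, v) for v in rewards for p in win_probabilities]
ambiguous_options = [(a, v) for v in rewards for a in ambiguity_levels]

print(len(risky_options), len(ambiguous_options))  # 15 15
```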


    Using a maximum likelihood procedure, each participant's choice data were fit with a logistic choice function, yielding estimates of risk attitude (α) and ambiguity attitude (β) for each person. These estimates were then entered into multiple regression analyses to determine the neuroanatomical correlates of risk and ambiguity.3
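    The fitting step can be sketched in code. The exact choice function Gilaie-Dotan et al. used is not reproduced here; the model below is one common parameterization from this group's earlier work (subjective value SV = (p − β·A/2)·v^α with a logistic choice rule), fit to synthetic choices by maximum likelihood. All parameter values and the simulated data are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def subjective_value(p, A, v, alpha, beta):
    """SV = (p - beta*A/2) * v**alpha; alpha < 1 is risk averse,
    beta > 0 is ambiguity averse (one standard parameterization)."""
    return (p - beta * A / 2.0) * v ** alpha

def neg_log_likelihood(params, trials, choices):
    """Negative log-likelihood of choosing the variable lottery over the
    fixed reference (50% chance of $5) under a logistic choice rule."""
    alpha, beta, gamma = params
    sv_var = subjective_value(trials[:, 0], trials[:, 1], trials[:, 2],
                              alpha, beta)
    sv_ref = subjective_value(0.5, 0.0, 5.0, alpha, beta)
    p_var = 1.0 / (1.0 + np.exp(-gamma * (sv_var - sv_ref)))
    p_var = np.clip(p_var, 1e-9, 1.0 - 1e-9)
    return -np.sum(choices * np.log(p_var) +
                   (1.0 - choices) * np.log(1.0 - p_var))

# Each trial: (winning probability, ambiguity level, dollar value).
# Ambiguous options are treated as p = 0.5 plus an ambiguity penalty.
values = [5.0, 9.5, 18.0, 34.0, 65.0]
risky = [(p, 0.0, v) for p in (0.13, 0.25, 0.38) for v in values]
ambig = [(0.5, a, v) for a in (0.25, 0.50, 0.75) for v in values]
trials = np.tile(np.array(risky + ambig), (4, 1))  # 4 repeats -> 120 trials

# Simulate a mildly risk- and ambiguity-averse chooser, then refit.
rng = np.random.default_rng(0)
sv = subjective_value(trials[:, 0], trials[:, 1], trials[:, 2], 0.8, 0.6)
sv0 = subjective_value(0.5, 0.0, 5.0, 0.8, 0.6)
choices = (rng.random(len(trials)) < 1 / (1 + np.exp(-2.0 * (sv - sv0))))

fit = minimize(neg_log_likelihood, x0=[1.0, 0.0, 1.0],
               args=(trials, choices.astype(float)), method="Nelder-Mead")
alpha_hat, beta_hat, gamma_hat = fit.x
```

    With real data the recovered α and β would then be the per-subject regressors for the VBM analysis.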

    Two populations of subjects were tested. The first was a group of 21 individuals who participated in the fMRI study of Levy et al. (2010) at NYU; thus the first analysis was entirely post hoc, and 7 more people were added later to make the total n=28 (mean age = 25).4

    The second group, which served as a validation sample, consisted of 33 healthy subjects from the University of Pennsylvania (mean age = 21.34).5 A region of interest (ROI) analysis used spheres of six different sizes centered on the right PPC peak, which were compared to control ROI spheres in primary motor/primary somatosensory (M1/S1) areas. The right PPC finding replicated at p<.05 or p<.01, whereas there was no correlation between risk attitudes and gray matter volume in the M1/S1 control area.

    If you're wondering, like me, whether any other part of the cortex showed a relationship to either risk or ambiguity in Group #2, one sentence in the Results assures us that no other regions were implicated in risk with a standard VBM whole-brain analysis.

    Unlike the sweeping conclusions about the policy implications of their results (which were mentioned three times), the authors were appropriately cautious about causality, saying it's not possible to determine whether a big PPC causes higher risk tolerance, or having a higher risk tolerance leads to an increase in PPC gray matter volume. They also warn against assuming any relationship between genetics and risk attitudes. Finally, they acknowledge that the results may not generalize beyond their populations of students at northeastern universities who are in their early to mid-20s, a time when the prefrontal cortex isn't fully developed.

    I suspect we'll soon see studies that examine risk attitude and gray matter volume across the life span, given the interest of these researchers in Separating Risk and Ambiguity Preferences Across the Life Span: Novel Findings and Implications for Policy (PDF).


    ADDENDUM (Sept 28 2014): The first author, Dr. Gilaie-Dotan, has commented to clarify that voodoo correlations were not used in the paper. I have added the legend for the correlation plot in Fig. 2 at the bottom of the post, which states that it is shown for illustrative purposes only and should not be used for inference. She also explains additional aspects of the data presented in Fig. 4 of the paper (not shown here).


    Footnotes
    1 It's impossible that there are "billions of existing medical brain scans" because the entire world population is currently 7.19 billion. Dr. Tymula could have been quoted in error, but this exact phrase appeared in both ScienceDaily and the original University of Sydney press release. In the Yale press release on the study, the number was downgraded to millions:
    "Based on our findings, we could, in principle, use millions of existing medical brains scans to assess risk attitudes in populations," said Levy. "It could also help us explain differences in risk attitudes based in part on structural brain differences."
    It's commendable that the title of the Yale press release (Brain structure could predict risky behavior) was more circumspect than the one given to the J Neurosci article itself.

    1A ADDENDUM (Sept 16 2014): The billions [i.e. millions] of existing medical brain scans are not all high-resolution T1-weighted anatomical images (1 × 1 × 1 mm3) acquired using a 3T Siemens Allegra scanner equipped with a custom RF coil. In other words, most may not have the anatomical resolution to measure such a small brain area.

    2 Gray matter volume in the whole cerebral cortex was quantified, but you'll notice that no subcortical structures (e.g., striatum, nucleus accumbens, cerebellum) were measured.

    3 More methodological details:
    The age and gender of the participants and global GM volume (following ANCOVA normalization) were included in the design matrix as covariates of no interest, and were thus regressed out. F contrasts were applied first with p < 0.001 uncorrected as the criterion to detect voxels with significant correlation to individual’s risk attitudes. Whole-brain correction procedures were then applied...

    4 The authors stated that this did not affect the outcome.

    5 Oddly, these two groups of young people (mean ages of 25 and 21 yrs) were called "midlife" adults three times in the paper.


    References

    Gilaie-Dotan, S., Tymula, A., Cooper, N., Kable, J., Glimcher, P., & Levy, I. (2014). Neuroanatomy Predicts Individual Risk Attitudes. Journal of Neuroscience, 34(37), 12394-12401. DOI: 10.1523/JNEUROSCI.1600-14.2014

    Huettel SA, Song AW, McCarthy G. (2005). Decisions under uncertainty: probabilistic context influences activation of prefrontal and parietal cortices. J Neurosci. 25(13):3304-11.

    Huettel SA, Stowe CJ, Gordon EM, Warner BT, Platt ML. (2006). Neural signatures of economic preferences for risk and ambiguity. Neuron 49(5):765-75.

    Levy, I., Rosenberg Belmaker, L., Manson, K., Tymula, A., & Glimcher, P. (2012). Measuring the Subjective Value of Risky and Ambiguous Options using Experimental Economics and Functional MRI Methods. Journal of Visualized Experiments (67). DOI: 10.3791/3724

    Levy I, Snell J, Nelson AJ, Rustichini A, Glimcher PW. (2010). Neural representation of subjective value under risk and ambiguity. J Neurophysiol. 103(2):1036-47.

    O'Neill M, Schultz W. (2014). Economic risk coding by single neurons in the orbitofrontal cortex. J Physiol Paris. Jun 19. pii: S0928-4257(14)00025-4.

    Roitman JD, Roitman MF. (2010). Risk-preference differentiates orbitofrontal cortex responses to freely chosen reward outcomes. Eur J Neurosci. 31(8):1492-500.


    ADDENDUM (Sept 28 2014): Here is the legend for Fig 2 (Bottom).
    To demonstrate that the observed correlations were not driven by outliers, for each individual, GM volume of the PPC cluster (top) is plotted on the x-axis against risk attitude on the y-axis. Note that this should not be used for inference as it is not independent of the whole-brain analysis and is presented for visualization purposes only. No other regions were found to be correlated with risk attitudes.








    For immediate release — SEPTEMBER 26, 2014

    Research from the UCL lab of Professor Geraint Rees has proven that the recent craze for suggesting that rats have “regrets” or show “disappointment” is solely due to the size of the left temporal-parietal junction (TPJ) in the human authors of those papers (Cullen et al., 2014). This startling breakthrough was part of a larger effort to associate every known personality trait, political attitude, and individual difference with the size of a unique brain structure.

    Cullen and colleagues recruited 83 healthy behavioral neuroscientists and acquired structural brain images using a 1.5-T Siemens Sonata MRI scanner. The participants completed the Individual Differences in Anthropomorphism Questionnaire (IDAQ), along with 698 other self-report measures. Factor analysis of the IDAQ yielded a two-factor solution: anthropomorphism of 1) non-human animals, and 2) non-animals (technology and nature).






    Voxel-based morphometry (VBM) was used to quantify gray matter volume from the structural MRIs. To do this, the authors constructed a “mentalizing mask” to divine which regions of interest (ROIs) would yield the best results.




    Based on the intuitions of Psychic Love Doctor Anabella (and results from previous studies on theory of mind and social cognition), six 12 mm spheres were drawn in the left and right medial prefrontal cortices (x y z MNI coordinates = ±10, 51, 34), the temporal poles (±43, 8, −34), and the posterior superior temporal sulcus/TPJ (±52, −56, 23).

    Separate analyses were done using another “mentalizing mask” with different coordinates as well as an anatomically-based mask. But the authors went with the Psychic Love Doctor mask after all. They also did a whole brain analysis, by the way.


    “You’ll Never Believe What Happened Next.”

    A tiny little cluster of 24 voxels in the left TPJ correlated with scores on the animal IDAQ scale. This means that the neuroscientists responsible for studies on regret (Steiner & Redish, 2014) and disappointment (Shabel et al., 2014) in rats had the largest L TPJs, by far. Besides publishing in Nature Neuroscience and Science, respectively, these participants were most inclined to attribute human mental states to non-human animals.



    Fig. 1 (Cullen et al., 2014). The region where grey matter volume showed a correlation with anthropomorphism of non-human animals is shown overlaid on a T1-weighted MRI anatomical image. The cross hair identifies the cluster at the left temporoparietal junction (−45,−54, 27) showing a statistically significant (P < 0.05 FWE-corrected for volume examined) positive correlation with anthropomorphism of non-human animals as measured by the animal IDAQ.


    However, readers of io9 and theNewerYork will be sorely disappointed that no areas of the brain were correlated with anthropomorphization of robots.

    What does this mean for the future of neuroscience research? Given the prestigious outlets that publish papers in the hot new field of Anthropomorphic Neuroscience, here's what I envision:  transcranial direct current stimulation (tDCS) labs will be overrun with modest scientists who study spatial memory, hoping a stimulating, L TPJ-induced portrayal of rats as taxi drivers will land them in the pages of Nature.


    -----

    Disclaimer: Although this post is based on a real study, some of the details are fictionalized. I leave it to the discerning reader to separate fact from fiction. My sincerest apologies to all the authors.


    Further Reading

    Of Mice and Women: Animal Models of Desire, Dread, and Despair – are they really adequate stand-ins for the human condition?

    Post-modern Anthropomorphism – rat “regret” author A. David Redish, Ph.D. on the use of human cognitive terms for non-human animal behavior.

    Rats Regret Making the Wrong Decision – accessible summary.

    Scientists Discover “Dimmer Switch” For Mood Disorders – strains credulity to go from rat “disappointment” to a depression dimmer switch in humans.

    Not tonight dear, I had zymosan A injected into my hind paw – Hypoactive Sexual Desire Disorder, in rats. You decide.

    Liberals Are Conflicted and Conservatives Are Afraid – discusses the Colin Firth study on political orientation and brain structure (Kanai, Feilden, Firth & Rees, 2011).


    References

    Cullen, H., Kanai, R., Bahrami, B., & Rees, G. (2014). Individual differences in anthropomorphic attributions and human brain structure. Social Cognitive and Affective Neuroscience, 9(9), 1276-1280. DOI: 10.1093/scan/nst109

    Shabel, S., Proulx, C., Piriz, J., & Malinow, R. (2014). GABA/glutamate co-release controls habenula output and is modified by antidepressant treatment. Science, 345(6203), 1494-1498. DOI: 10.1126/science.1250469

    Steiner, A., & Redish, A. (2014). Behavioral and neurophysiological correlates of regret in rat decision-making on a neuroeconomic task. Nature Neuroscience, 17(7), 995-1002. DOI: 10.1038/nn.3740




    Contact UCL Media Relations for a high resolution photo of Prof Rees




    September 30 is the last day of the fiscal year for the US government. So it's no coincidence that President Obama's BRAIN Initiative1 ended the year with a bang. The NIH BRAIN Awards were announced on the last possible day of FY2014, coinciding with the White House BRAIN Conference. A total of $46 million was disbursed among 58 awards involving over 100 scientists.


    I watched most of the conference live stream. The entire video is now available for viewing on YouTube (and conveniently embedded at the bottom of this post). Below are a few idiosyncratic highlights.

    I missed the early announcements (e.g., that the correct hashtag was #WHBRAIN) and introduction of the first speaker, a female graduate student. Next was John Holdren, senior advisor to the President on science and technology issues. My notes from his talk consisted of a series of buzz words and phrases, befitting a politician:

    “grand challenge”
    “moon shots”
    “game-changing innovations”
    “dynamic understanding of how the brain works”
    “at the speed of thought”
    “new generation tools and technology”
    quoting Obama: “Americans can accomplish anything we set our minds to.”

    The first year budget is $100 million, with another $300 million allocated so far.  A recurrent theme was the need for a sustained commitment to funding. Holdren (and others) mentioned the 12 year strategy for NIH, BRAIN 2025, which focuses on technologies, cells, and circuits.

    The disconnect with reality came when he mentioned the burden of brain disorders and the prospect of curing them:
    “Imagine if no family had to grapple with the helplessness and heartache of watching a loved one with Parkinson's or traumatic brain injury. Imagine if Alzheimer's or ALS or chronic depression were eradicated in our lifetimes.” [NOTE: Holdren is 70]

    Ultimately we'd all like to eradicate these diseases, but that's not going to happen by 2025. Is it a good idea to mislead the public about the immediate clinical treatments arising from the NIH BRAIN Awards? How do we educate the public about the importance of basic science and technology development? DARPA is taking a different approach with their fast-tracking of deep brain stimulation treatments in humans. Their goals are even more ambitious: over a 5 year period, conduct clinical trials in human patients with 7 specified psychiatric and neurological disorders, some of which have never been treated with DBS.

    Moving right along to the first panel, Cori Bargmann and Mark Schnitzer both did a fine job of discussing advances in circuits/networks and engineering/technology (see Storify below). The next panelists were clinician/researchers Geoffrey Manley on traumatic brain injury and Kerry Ressler on post-traumatic stress disorder. Ressler was bullish on new PTSD therapies, suggesting that it might be the most tractable psychiatric disorder. Manley, on the other hand, had a sobering assessment of TBI treatments derived from cellular neurobiology, noting that the field is on its 32nd or 33rd failed clinical trial.2

    This is probably not what the White House wanted to hear, particularly since this panel was brought on to slyly connect the NIH BRAIN Awards to clinical disorders. But this is exactly what people need to hear to understand the utter complexity of trying to cure brain disorders, or at least treat them more effectively.


    Further Reading

    NEW! Indispensable coverage of Next Generation Human Imaging 
    (by @practiCal fMRI):
         i-fMRI: My initial thoughts on the BRAIN Initiative proposals

    A Tale of Two BRAINS: #BRAINI and DARPA's SUBNETS

    BRAIN Initiative Funding Opportunities at NIH

    Humble BRAIN 2025

    And the DARPA deep brain stimulation awards go to...


    Footnotes

    1 The BRAIN Initiative badge should be awarded by President Obama to research supported by his $100+ million Brain Research through Advancing Innovative Neurotechnologies Initiative. This bold research effort will include advances in nanotechnology and purely exploratory efforts to record from thousands of neurons simultaneously. Recipients of BRAIN Awards from NIH, DARPA, and NSF are free to use this fictitious badge made by me.

    2 The failure of a very promising clinical trial of progesterone for TBI ("based on 17 years of work with 200 positive papers in pre-clinical models") was recently announced, although I couldn't find the announcement itself. Here's the listing in ClinicalTrials.gov.










    Two Croatian academics with an anti-neuro ax to grind have written a cynical history of neuroword usage through the ages (Muzur & Rinčić, 2013). Actually, I believe the authors were being deliberately sarcastic (at times), since the article is rather amusing.1
    Placing that phenomenon of "neuroization" of all fields of human thought and practice into a context of mostly unjustified and certainly too high – almost millenarianistic – expectations of the science of the brain and mind at the end of the 20th century, the present paper tries to analyze when the use of the prefix neuro- is adequate and when it is dubious.

    Ključne riječi [keywords]:
    brain; neuroscience; word coinage

    Amir Muzur and Iva Rinčić are both on the Faculty of Medicine at the University of Rijeka, in the Department of Humanities and Social Sciences in Medicine. Their interests include the history of bioethics, bioethics and sociology, the history of medicine, and neuroscience.

    The pre-BRAIN Initiative paper2 begins with a reminder of President George Bush Senior's proclamation of the Decade of the Brain:
    Let aside the fact that a new decade did not begin in 1990 but a year later, with such pathos, George Bush Senior started an unprecedented avalanche of expectations, pompousness, and grants which will be lasting up today. The motives of launching the "Decade of the brain" were inspired by increasing awareness and fear of the treath [sic] of Alzheimer’s disease and neural sequels of drugs and AIDs, more than by the declared fascination by brain function.

    Neurocriticism

    The authors did intend to seriously critique the excesses of “neuroization” (since the title of the paper includes the word “Neurocriticism” after all), although it can be tricky to determine exactly when they're going over the top:
    Scientists researching the brain cherish the idea that their work is extremely important, unique, and indispensable. They often venture into other fields and sciences without feeling any inferiority complex, convinced that their knowledge on human brain be sufficient to understand and interprete [sic] everything.  ...  Modern neuroscientists are like ancient alchemists, believing they are up to discover the most important secrets of the life elixir and the philosophers’ stone. Is not the hyperproduction of new names for (psudo)disciplines [sic] also a result of that arrogance?

    A short primer of neuro-disciplines

    Muzur and Rinčić (2013) then present their history of neurowords from 1681 to 2006, focusing on those that have become legitimate (or pseudo-legitimate) fields of study, some of which they characterize as “awkward caricatures” (e.g., neuroeconomics and neuromarketing).3
    Neuromarketing – the application of neuroimaging methods to product marketing (studying consumers’ sensorimotor, cognitive, and affective response to marketing stimuli) – was coined by Ale Smidts in 2002.

    In the same year, it seems that two more new neuro-terms were coined: neuroethics, meaned [sic] for the neuroscience of ethics and the ethics of neuroscience (four years later, in May 2006, a Neuroethics society came to be at a conference in Asilomar in California), and neuroesthetics, as the study of the neural bases for the contemplation and creation of a work of art.

    Neuroeconomics studies the neural underpinnings of making decisions, taking risks, and evaluating rewards. Probably the first to formulate the name was Paul Glimcher in 2003.4

    The article confirms that the recent fad for “neuroization” is not justified. And not surprisingly, it ends on a pessimistically snarky (and utterly hyperbolic) note, putting all neuroscientists in their place:
    In fact, nothing crucial has been discovered in neuroscience for quite a while, and the premordial entrapment in the mind-body problem still lasts: why, then, that explosion of "interest" in the brain at the end of the 20th and at the beginning of the 21st centuries? Is not it a contemporary variation of a historical periodical millenaristic movement, invoking a panacea for a society in general crisis? Neuro- seems to provide not only a desperate ultimate attempt at being original in science where everything has been said and done, but, morover [sic], a guaranty of attracting attention and simulating importance.


    Further Reading

    I've written my own idiosyncratic history of neurowords in Journomarketing of Neurobollocks, which told Steven Poole he didn't invent neurobabble, neurobollocks, or neurotrash (and reminisced about the 2006 neuroword contest hosted by Neurofuture).

    Befitting a blog that started as its own made-up neuroword, here are some selections from the archives:

    Neuroetiquette and Neuroculture

    Neurokitchen Design?

    Neurocoaching?

    Neuroleadership?

    Neuro-Gov

    NeuroPsychoEconomics!

    The Luxury Of Neurobranding


    Footnotes

    1 Though an expert in Croatian humor I am not.

    2 A significantly shorter version of this paper was presented at the 9th Lošinj Days of Bioethics, Mali Lošinj, Croatia, May 16-19, 2010.

    3 Interestingly, they note that neuropolitics was probably coined by Timothy Leary in 1977 and neurotheology even earlier, by Aldous Huxley in his 1962 utopian novel Island.

    4 The sources for these neuroword origins are included in the footnotes of the paper:
    50 http://en.wikipedia.org/wiki/Neuromarketing

    51 A. Roskies, "Neuroethics for the new millennium," Neuron 35 (2002): 21-23.

    52 http://en.wikipedia.org/wiki/Neuroesthetics#cite_note-0; cf. also "The statement on neuroesthetics" by Semir Zeki ( http://www.neuroesthetics.org/statement-on-neuroesthetics.php)

    53 Paul W. Glimcher, Decisions, Uncertainty, and the Brain: The Science of Neuroeconomics (Cambridge, MA: The MIT Press, 2003).
    However, in my own coverage of neurowords, I found that neuroeconomics has been around since the late 1990s.


    Reference

    Amir Muzur, Iva Rinčić. Neurocriticism: a contribution to the study of the etiology, phenomenology, and ethics of the use and abuse of the prefix neuro-. JAHR European Journal of Bioethics, Vol. 4, No. 7 (May 2013), pp. 545-555.



    What happens in the brain during a highly immersive reading experience? According to the fiction feeling hypothesis (Jacobs, 2014), narratives with highly emotional content cause a deeper sense of immersion by engaging the affective empathy network to a greater extent than neutral narratives. Emotional empathy, in this case the ability to identify with a fictional character via grounded metarepresentations of ‘global emotional moments’ (Hsu et al., 2014), relies on a number of brain regions, including ventromedial prefrontal cortex (PFC), dorsomedial PFC, anterior insula (especially in the right hemisphere), right temporal pole, left and right posterior temporal lobes, inferior frontal gyrus, and midcingulate cortex.

    A group of researchers in Germany used text passages from the Harry Potter series to test the fiction feeling hypothesis, specifically that readers will experience a greater sense of empathy for and identification with the protagonists when the content is suspenseful and scary (Hsu et al., 2014). This would be accompanied by greater activations in specific brain regions during an fMRI scan.

    The experimental stimuli were 80 passages from the Harry Potter novels. The authors selected 40 ‘fear-inducing’ and 40 ‘neutral’ passages, each about 4 lines long.1  These were screened and rated by a set of independent participants. Unfortunately, the authors did not provide any examples, so I'm going to have to improvise here.

    Given that I've not read any of the Harry Potter books (or seen the movies), I'm not the best person to run a popular blog serial on NeuroReport's Harry Potter and the _______ books. Or to launch an academic publishing franchise on fMRI studies of epic fantasy novels.2

    But here's a sampler anyway, based on Ayn Rand’s Harry Potter and the Prisoners of Collectivism: 3

    He felt the unnatural cold begin to steal over the street. Light was sucked from the environment right up to the stars, which vanished. The cold was biting deeper and deeper into Harry’s flesh [and lighting up his pain matrix in an eerie glow against the dark and lonely night].

    Then, around the corner, gliding noiselessly, came Dementors, ten or more of them, visible because they were of a denser darkness than their surroundings, with their black cloaks and their scabbed and rotting hands. Could they sense fear [and an overactive amygdala] in the vicinity? ...

    Suddenly he heard them: Marxists.
    . . .

    “Only together, collectively, can we achieve anything of lasting significance,” he heard one of them say. Harry moaned in pain[his anterior cingulate and insular cortices writhing from such cognitive dissonance and social exclusion].

    “The fortunate owe it to society to contribute to those who cannot work,” another chanted. Harry closed his eyes and collapsed [his ventral posterolateral thalamic nuclei and somatosensory cortex no longer able to endure the intolerable battering].

    My poorly written additions in maroon prefigure the focus of the study: empathy for pain. I'm not exactly sure why this was so (for either literary or scientific reasons). At any rate, Hsu et al. (2014) made the following predictions:
    we expected (i) higher immersion ratings for fear-inducing passages, which often describe pain or personal distress, as compared with neutral passages, and (ii) significant correlations of immersion ratings with activity in the affective empathy network, particularly AI [anterior insula] and mCC [mid-cingulate cortex], associated with pain empathy for fear-inducing, but not for neutral, passages.

    AI and mCC have been implicated in the affective component of personally felt pain, as well as in empathy for another person's pain (Jackson et al., 2006). So the expected result would be greater activations in AI and mCC for the Fearful vs. Neutral comparison. They didn't do this exact contrast, but they did look for differential correlations between “immersion ratings” and BOLD responses for Fear > fixation (a low-level control condition) and Neutral > fixation.

    A separate group of individuals (not the ones who were scanned) rated the Fearful and Neutral passages for immersion by rating their subjective experience, ‘I forgot the world around me while reading’ on a scale from 1 (totally untrue) to 7 (totally true). Although the difference between Fear (mean = 3.75) and Neutral (mean = 3.18) was statistically significant, the level of immersion wasn't all that impressive, being below the midpoint even for the scary texts.

    The major fMRI result was a cluster in the mid-cingulate cortex (corrected cluster-level P = 0.037) that showed a higher correlation between immersion ratings and BOLD for Fear than for Neutral.


    Fig. 1B (modified from Hsu et al., 2014). The mid-cingulate gyrus showing a significant correlation difference between passage immersion ratings and BOLD response in the Fear versus Neutral conditions, cross-hair highlighting the peak voxel [8 14 39].


    No such relation was observed in the anterior insula, which was explained by postulating that “motor affective empathy” was more prominent than “sensory affective empathy”:
    Craig [12] considered mCC to be the limbic motor cortex and the site of emotional behavioural initiation, whereas AI is the sensory counterpart. With respect to our stimuli from Harry Potter series, in which behavioural aspects of emotion are particularly vividly described, the motor component of affective empathy (i.e. mCC) might predominate during emotional involvement, and facilitate immersive experience.

    This is obviously a post-hoc explanation, one that's hard to judge in the absence of actual exemplars of the experimental stimuli. Although the results were a bit underwhelming, I was happy the authors did not venture out on a rickety and hyperbolic limb, as the NYT did (gasp!) in Can ‘Neuro Lit Crit’ Save the Humanities? and Next Big Thing in English.


    Footnotes

    1 The Fearful and Neutral passages were matched for many factors that can affect reading:
    ...numbers of letters, words, sentences and subordinate sentence per passage, the number of persons or characters (as the narrative element), the type of intercharacter interaction and the incidence of supranatural events (i.e. magic) involved in text passages across the emotional categories.

    2 Perhaps Neuroskeptic is more qualified for that...

    3 Also from Mallory Ortberg at The Toast, we have Ayn Rand’s Harry Potter and The Order of Psycho-epistemology :
    “You’re a prefect? Oh Ronnie! That’s everyone in the family!”

    Ron looked nervously at Harry. Harry betrayed nothing. You can be a wizard, Ron remembered, and you can be a man; it is good to be both, if you can, but if you must choose, it is better to be a man and not a wizard than a wizard and not a man.

    Further Reading

    Professor of Literary Neuroimaging:  “An unfocused and rambling article in the New York Times the other day was excited about the potential use of neuroimaging to revive the gloomy state of university literature departments. It also tried to convey the importance of evolutionary psychology in explaining fiction.”


    References

    Hsu CT, Conrad M, & Jacobs AM (2014). Fiction feelings in Harry Potter: haemodynamic response in the mid-cingulate cortex correlates with immersive reading experience. Neuroreport PMID: 25304498

    Jackson PL, Rainville P, Decety J. (2006). To what extent do we share the pain of others? Insight from the neural bases of pain empathy. Pain 125:5-9.

    Jacobs AM. (2014). Neurocognitive Model of Literary Reading.






    Nightmares can seem very real at times, but then we wake up and realize it was all a bad dream. Now imagine having a vivid nightmare with all the reality of waking life and then... it turns out you're actually awake through it all!

    This happened to an 11 year old Italian boy who reported frightening auditory and visual hallucinations of Voldemort, the archenemy of Harry Potter, for three straight days. These hallucinations began after a bout of sore throat and fever (38°C).  As Vita et al. (2008) report:
    The day after the resolution of fever, he began to present hallucinations. Hallucinations occurred in the afternoon, after watching TV. They were polymodal: he saw and heard Voldemort (an evil character of the Harry Potter saga). He did not realize his hallucinations were not real; he was extremely frightened, and he cried and searched his parents for protection. The episode lasted several hours, and was not associated with modification of vigilance or consciousness. ... Two days later, a new hallucinatory episode occurred: again, he saw Voldemort, who appeared threatening, and he fought against him. A further episode, with the same features, occurred the following day. He interacted with the characters of the hallucination, and on one occasion, he wore a sword and helmet to fight against Voldemort. When asked to recall the hallucinations, the boy said that they appeared real to him.

    Neurological exam, EEG, and CSF cultures for bacteria, viruses, and fungi were all negative. CSF titers of antibodies were normal, and there was no evidence of autoantibodies. However, an MRI scan showed abnormal signs in the boy's brainstem. Several small lesions were observed in the pons, in the vicinity of a region implicated in REM sleep.



    Fig. 1 (modified from Vita et al., 2008). MRI after the onset of hallucinations. Small areas of signal hyperintensity (lesions) are indicated by the arrows.


    The etiology and phenomenology of the boy's condition seem consistent with peduncular hallucinosis, “a rare form of visual hallucination often described as vivid, colorful visions of people and animals.” The exact cause is unknown, but most cases have been related to lesions in the midbrain, thalamus, or brainstem (Dogan et al. 2013; Penney & Galarneau, 2014; Talih, 2013). In some instances the patients are aware that the hallucinations are not real, but other cases present as a psychiatric disorder and can include auditory or tactile hallucinations, in addition to visual.

    Here, Vita et al. (2008) speculate that dreaming and REM sleep have become dissociated: the boy was literally dreaming while awake. Fortunately, his nightmarish condition disappeared after treatment with immunoglobulins. The exact diagnosis was unclear, but it might have been a transient demyelinating syndrome, which involves the loss of myelin, the white matter insulation that surrounds axons.

    The authors cited a model of REM sleep in which GABA-containing “REM-on” neurons inhibit GABAergic “REM-off” neurons located in the ventrolateral periaqueductal gray matter (vlPAG) and lateral pontine tegmentum (LPT), and vice versa.





    Turns out the lesions (shown in gray stippling below) could include some of these neurons, especially those in the REM-off areas (vlPAG and LPT).


    Fig. 1 (modified from Vita et al., 2008). Schematic of the REM-on and REM-off areas in the pons. Gray stippling indicates the lesions. REM-on region in black, REM-off regions in white.1


    The authors speculated that transient dysfunction of REM-off cells, caused by the inflammatory demyelinating syndrome, resulted in weaker inhibition of REM-on cells, allowing a dream-like state to ooze into wakefulness.
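    That flip-flop arrangement can be made concrete with a toy firing-rate simulation. This is my own illustrative sketch, not a model from Vita et al. (2008); all the numbers (drives, inhibition strength, the "gain" knob standing in for lesion damage) are made-up parameters chosen only to show the switch-like behavior:

    ```python
    def simulate(rem_off_gain, steps=2000, dt=0.01, tau=0.1):
        """Toy mutual-inhibition ('flip-flop') model of REM-on vs REM-off cells.

        rem_off_gain scales the output of the REM-off population (vlPAG/LPT);
        values < 1 stand in for damage from the demyelinating lesions.
        Returns the steady-state activity of the REM-on population.
        """
        relu = lambda x: max(0.0, x)
        r_on, r_off = 0.0, 1.0          # start in a wake-like state
        drive_on, drive_off = 1.0, 1.5  # tonic drives; wakefulness favors REM-off
        w = 2.0                         # strength of the mutual inhibition
        for _ in range(steps):
            inh_on = w * rem_off_gain * r_off   # REM-off suppresses REM-on...
            inh_off = w * r_on                  # ...and vice versa
            r_on += dt / tau * (-r_on + relu(drive_on - inh_on))
            r_off += dt / tau * (-r_off + relu(drive_off - inh_off))
        return r_on

    healthy = simulate(rem_off_gain=1.0)   # intact REM-off output
    lesioned = simulate(rem_off_gain=0.3)  # weakened REM-off output
    print(f"REM-on activity -- healthy: {healthy:.2f}, lesioned: {lesioned:.2f}")
    ```

    With intact inhibition the wake state is stable and the REM-on population stays silent; weaken the REM-off output and the circuit flips to the REM-on state even under a waking drive, the toy analogue of a dream state oozing into wakefulness.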




    Luckily the boy won out over Voldemort in the end, assisted by a team of doctors at Catholic University in Rome.


    Footnote

    1  Detailed figure legend:
    D: scheme of the REM-on and REM-off areas in the pons. In black: the REM-on region (locus subceruleus-α [sLCα]). In white: the REM-off region: ventrolateral periaqueductal gray (vlPAG) and lateral pontine tegmentum (LPT). In gray the REM modulatory regions: in rostrocaudal order, pedunculopontine tegmentum (PPT), laterodorsal tegmentum (LDT), dorsal raphe nucleus (DRN), and locus ceruleus (LC). Gray dotted areas: sites of the inflammatory lesions.

    References

    Dogan VB, Dirican A, Koksal A, Baybas S. (2013). A case of peduncular hallucinosis presenting as a primary psychiatric disorder. Ann Indian Acad Neurol. 16(4):684-6.

    Penney L, Galarneau D. (2014). Peduncular hallucinosis: a case report. Ochsner J. 14(3):450-2.

    Talih FR. (2013). A probable case of peduncular hallucinosis secondary to a cerebral peduncular lesion successfully treated with an atypical antipsychotic. Innov Clin Neurosci. 10(5-6):28-31.

    Vita MG, Batocchi AP, Dittoni S, Losurdo A, Cianfoni A, Stefanini MC, Vollono C, Della Marca G, & Mariotti P (2008). Visual hallucinations and pontine demyelination in a child: possible REM dissociation? Journal of clinical sleep medicine : JCSM : official publication of the American Academy of Sleep Medicine, 4 (6), 588-90 PMID: 19110890




    In the mirror we see our physical selves as we truly are, even though the image might not live up to what we want, or what we once were. But we recognize the image as “self”. In rare instances, however, this reality breaks down.

    In Black Swan, Natalie Portman plays Nina Sayers, a ballerina who auditions for the lead in Swan Lake. The role requires her to dance the part of the innocent White Swan (for which she is well-suited), as well as her evil twin the Black Swan — which is initially outside the scope of her personality and technical abilities. Another dancer is favored for the role of the Black Swan. Nina's drive to replace her rival, and her desire for perfection, lead to mental instability (and a breathtaking performance). In her hallucinations she has become the Black Swan.1

    The symbolic use of mirrors to depict doubling and fractured identity was very apparent in the film:
    Perhaps Darren Aronofsky [the director's] intentions for the mirror was its power to reveal hidden identities. If you noticed the scenes where Nina saw herself in the mirror, it reflected the illusion of an evil. The mirror presented to her the darkness within herself that metaphorically depicted the evolution into the black swan.

    How can the recognition of self in a mirror break down?


    Alterations in mirror self-recognition

    There are at least seven main routes to dissolution or distortion of self-image:
    1. psychotic disorders
    2. dementia
    3. right parietal-ish or otherwise right posterior cortical strokes and lesions
    4. the ‘strange-face in the mirror' illusion
    5. hypnosis
    6. dissociative disorders (e.g., depersonalization, dissociative identity disorder)
    7. body image issues (e.g., anorexia, body dysmorphic disorder)

    Professor Max Coltheart and colleagues have published extensively on the phenomenon of mirrored-self misidentification, defined as “the delusional belief that one’s reflection in the mirror is a stranger.” They have induced this delusion experimentally by hypnotizing highly suggestible participants and planting the suggestion that they would see a stranger in the mirror (Barnier et al., 2011):
    Following a hypnotic suggestion to see a stranger in the mirror, high hypnotizable subjects described seeing a stranger with physical characteristics different to their own. Whereas subjects' beliefs about seeing a stranger were clearly false, they had no difficulty generating sensible reasons to explain the stranger's presence. The authors tested the resilience of this belief with clinically inspired challenges. Although visual challenges (e.g., the hypnotist appearing in the mirror alongside the subject) were most likely to breach the delusion, some subjects maintained the delusion across all challenges.


    Ad campaign for the Exelon Patch (rivastigmine, a cholinesterase inhibitor) used to treat Alzheimer's disease. Photographer Tom Hussey did a series of 10 award-winning portraits depicting Alzheimer's patients looking at their younger selves in a mirror (commissioned by Novartis).


    Mendez et al. (1992) published a retrospective study of 217 patients with Alzheimer's disease. They searched the medical records for caregiver reports of disturbances in person identification of any kind. The most common type was transient confusion about family members that resolved when reminded of the person's identity (found in 33 patients). The charts of five patients contained reports of mirror misidentification, which was always associated with paranoia and delusions. Although not exactly systematic, this fits with other studies reporting that 2–10% of Alzheimer's patients have problems recognizing themselves in a mirror.

    A thorough investigation of the topic was actually published 50 years ago, but largely neglected because it was in French. Connors and Coltheart (2011) translated the 1963 paper of Ajuriaguerra, Strejilevitch, & Tissot into English. The Introduction is quite eloquent:
    The vision of our image in the mirror is a discovery that is perpetually renewed, one in which our being is isolated from the world, from the objects surrounding it, and assumes, despite the fixed quality of reflected images, the significance of multiple personal and potential expressions. The image reflected by the mirror furnishes us not only with that which is, but also how our real image might be changed. It therefore inextricably combines awareness, indulgence and critique.

    They examined how 30 hospitalized dementia patients interacted with mirrors in terms of (1) recognition of their own reflection; (2) use of reflected space; and (3) identifying body parts. The patients sat in front of a mirror and answered the following questions:
    • What is this?
    • Who is that?
    • How old would you say that person is?
    • How do you think you look?
    Then the experimenter stood behind them and asked questions about himself (e.g., “who is that man?”), and showed them objects in the mirror (e.g., an orange or a pipe ... very funny).

    Eight patients did not recognize themselves in the mirror:
    • Three didn't understand the concept of a mirror. They didn't pay attention to any reflections until directed to do so, and then they became transfixed. They also failed to recognize photos of themselves or their caretakers.
    • Another three eventually admitted it might be themselves when prodded several times.
    Those six individuals had severe Alzheimer's disease.
    • The final two recognized themselves the second time, and displayed considerably more anxiety. This sounds terribly frightening:
    These patients were attentive to their own reflections and those of the researchers, whom they identified. The first patient seemed a bit anxious; she began by touching herself, then laughed, then proclaimed “that is not quite me, it sort of looks like me, but it's not me.” When she was shown her photo head-on and then from the side, she immediately identified herself when the photo was head-on but from the side said “that's not quite me.”
    These two individuals were in an earlier state of dissolution and likely had more awareness of what was happening to them.

    Other patients with mirrored-self misidentification show greater sparing of cognitive abilities. Chandra and Issac (2014) presented brief case summaries of five mild to moderate dementia patients with “mirror image agnosia, a new observation involving failure to recognize reflected self-images.” This is obviously not a new observation, but the paper includes two videos, one of which is embedded below.
    Sixty-two-year-old female was brought to the hospital with features of forgetfulness and getting lost in less familiar environment. ... She was then shown the mirror 45 cm × 45 cm. She could identify it as a mirror. She showed unusual attraction to the mirror and ignored the physician and people around. She would go to the mirror and converse with her own image as if the image is another person but could correctly identify the reflected face of her daughter in law and the resident but she was asking her own reflection for the name and communicated to others saying that ‘here is a woman who does not know her name’.



    Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported


    LAST BUT NOT LEAST we have the Strange-face-in-the-mirror illusion (Caputo, 2010). When gazing upon one's reflected face in a dimly lit room, after a minute or two...
    The participants reported that apparition of new faces in the mirror caused sensations of otherness when the new face appeared to be that of another, unknown person or strange `other' looking at him/her from within or beyond the mirror. All fifty participants experienced some form of this dissociative identity effect, at least for some apparition of strange faces and often reported strong emotional responses in these instances.

    try this if you dare, 
    on halloween night...


    Further Reading

    The strange-face-in-the-mirror illusion – Mind Hacks, with 271 comments.

    Visual perception during mirror gazing at one's own face in schizophrenia – The strange-face-in-the-mirror illusion with schizophrenics (seems a little mean to me)

    Mirrors in film – a list

    Reflections and Mirrors in film – discussion board




    Footnote

    1 As an aside, Natalie Portman (who has published in NeuroImage) won the 2011 Best Actress Academy Award for this performance. Her male counterpart, Colin Firth (who has published in Current Biology) won the Best Actor Award.


    References

    Ajuriaguerra, J. de, Strejilevitch, M., & Tissot, R. (1963). A propos de quelques conduites devant le miroir de sujets atteints de syndromes démentiels du grand âge [On the behaviour of senile dementia patients vis-à-vis the mirror]. Neuropsychologia, 1, 59–73.

    Barnier AJ, Cox RE, Connors M, Langdon R, & Coltheart M (2011). A stranger in the looking glass: developing and challenging a hypnotic mirrored-self misidentification delusion. The International journal of clinical and experimental hypnosis, 59 (1), 1-26 PMID: 21104482

    Chandra SR, & Issac TG (2014). Mirror image agnosia. Indian journal of psychological medicine, 36 (4), 400-3 PMID: 25336773

    Connors MH, & Coltheart M (2011). On the behaviour of senile dementia patients vis-à-vis the mirror: Ajuriaguerra, Strejilevitch and Tissot (1963). Neuropsychologia, 49 (7), 1679-92 PMID: 21356221

    Mendez MF, Martin RJ, Smyth KA, & Whitehouse PJ (1992). Disturbances of person identification in Alzheimer's disease. A retrospective study. The Journal of nervous and mental disease, 180 (2), 94-6 PMID: 1737981





  • 11/01/14--02:06: Fright Week: Fear of Mirrors


    When I was a kid, I watched this scary TV show called One Step Beyond. It was kind of like The Twilight Zone, except the stories were more haunting and supernatural.

    An especially frightening episode was called The Clown. Everyone loves the circus. Everyone loves a clown.1





    John Newland, the show's narrator: "Laughter is an international language, and the clown, the prince of laughter."

    "Look, a clown!"

    A jealous husband behaves in a physically and verbally abusive fashion towards his young wife any time she's near another man. Why, he's even jealous of Pippo the Clown, a simple and silent entertainer who brings balloons and joy to the diner patrons.

    Mr. Abusive sees the clown touching his wife's blond hair and freaks out. He grabs Pippo's scissors and cuts off a chunk of her hair. The wife screams and runs away into the carnival campgrounds, which are conveniently nearby. Pippo acts in a menacing fashion and scares the husband away.

    The wife wanders around the carnival grounds and into the clown's tent, where she cries into a wig. Pippo returns and tries to fix her hair and cheer her up. She eventually starts laughing and hugs the clown.

    Then the obnoxious lout hears laughter and enters the trailer, finding his wife with the clown. "You dirty cheap one, I've had it..." He grabs her and slaps her and throws her down to the ground.

    Pippo gets defensive and angry and starts choking the husband, who grabs those handy scissors and stabs........ HIS WIFE! Killing her!

    Pippo picks her up, husband drops the scissors and slips away, and guess who becomes the leading murder suspect. The simple clown, who keeps trying to revive the dead girl by making her laugh.

    The Strong Man: "Help, help, somebody help, the clown's killed a dame!" [it's 1960]

    The husband wanders around in a daze, stopping in front of a pawn shop with a mirror in the window.




    Mr. Killer glances away from the mirror for a moment and guess who appears, trying to strangle him.




    He whips around to see the clown and.... there's no one there!!




    This happens a few more times, where the clown appears in the mirror, the guy turns around and there's nobody there...




    Now this was very scary and horrifying when I was a small child. I was afraid to look at a mirror for weeks. The thought of seeing Pippo the Clown standing behind me, strangling me, was terrifying. For a brief period I had Spectrophobia (also known as Catoptrophobia), a fear of mirrors:
    Generally, an individual that deals with Spectrophobia has been traumatized in an event where they believe they have seen or heard apparitions or ghosts. The individual could also become traumatized by horror films, television shows, or by nightmares. This fear could be the result of a trauma involving mirrors. It could also be the result of the person’s superstitious fear of being watched through the mirror.

    "Traumatized" is a bit excessive... I got over it. Watching the episode today, I see how campy and cheesy it is, with its soundtrack of "vampy" music as a stand-in for the wife's sex appeal. Her aura of youthful innocence was over the top, and the husband comes off as a creepy pedophile.2







    And fortunately, I never developed a fear of clowns...




    But I have to say, I didn't make it through the OCULUS Trailer, not on Halloween night. And I think I'll have to try the ‘strange-face in the mirror' illusion another night.


    I hope you enjoyed Fright Week. Check out the other spooky posts:

    The Stranger in the Mirror

    The Waking Nightmare of Lord Voldemort



    Footnotes

    1 Everyone knows about coulrophobia, the very common fear of clowns.

    2 The Flaming Nose TV Blog informs us that the actors playing the husband and wife were 40 and 18 years old, respectively. No wonder he comes off as an abusive pedophile... The strangling clown gif is also from the Flaming Nose.




    “Research on the brain is surging,” declared the New York Times the other day:

    Yet the growing body of data — maps, atlases and so-called connectomes that show linkages between cells and regions of the brain — represents a paradox of progress, with the advances also highlighting great gaps in understanding.

    So many large and small questions remain unanswered. How is information encoded and transferred from cell to cell or from network to network of cells? Science found a genetic code but there is no brain-wide neural code; no electrical or chemical alphabet exists that can be recombined to say “red” or “fear” or “wink” or “run.” And no one knows whether information is encoded differently in various parts of the brain.

    Yet we still understand so little, they say. And most people don't care.

    The Public Find Brain Science Irrelevant and Anxiety-provoking, based on the outcome of a small qualitative study of 48 London residents (O'Connor & Joffe, 2014):

    The Brain Is Something That Goes Wrong

    Though the brain was ordinarily absent from participants’ mental landscapes, there was one route by which this habitual inattention could be ruptured. The second theme articulates the finding that for many, neurological pathology was the only aspect of brain research that held clear personal relevance. This foregrounding of pathology constituted the brain as a vulnerable, anxiety-provoking organ and anchored brain research in the domain of medicine.

    So people may not care about the brain, unless something in theirs is broken. Then they'll find it's important that doctors know how to fix it. And perhaps realize this knowledge comes from basic research.

    This adds new meaning to the Public Health Relevance Statement required for NIH grant applications (see p. I-65 of this PDF):
    For NIH and other PHS agencies applications, using no more than two or three sentences, describe the relevance of this research to public health. In this section, be succinct and use plain language that can be understood by a general, lay audience. If the application is funded, this public health relevance statement will be combined with the project summary (above) and will become public information.

    Anyone can look up grants at NIH RePORTER and read the Public Health Relevance Statement for each. Not that most people will be doing this. But what might they find for a basic science grant that studies invertebrates? Say the central pattern generating circuits found in the crustacean stomatogastric ganglion, which controls the rhythmic muscle contractions that grind and move food through the gut? Here's one:
    Public Health Relevance Statement: Mental illness may result from relatively minor imbalances in circuit parameters that nonetheless result in significantly disordered functions. To understand what kinds of circuit parameters when perturbed lead to mental illness, it is necessary to understand how different neuronal excitability and synaptic strengths are in normal healthy brains, and how individual neuronal processes compensate for each other.

    I chose this example because the Principal Investigator, Dr. Eve Marder, has done such groundbreaking work on neuromodulation and circuit dynamics over the duration of her illustrious career. Last year she was awarded the $500,000 Gruber Neuroscience Prize for Pioneering Contributions to the Understanding of Neural Circuitry:
    ...Early in her career, Marder revealed that the STG was not "hard-wired" to produce a single pattern of output, but that it was a remarkably plastic circuitry that could change both its parameters and function in response to various neuromodulators while still maintaining its morphologic connectivity. This discovery marked a paradigm shift in how scientists viewed the architecture and function of neural circuits, including those in the human brain.
    . . .

    More recently, Marder's research has focused on how neural circuits maintain stability, or homeostasis, over long periods of time despite constantly reconfiguring themselves. This research has broad implications for the study of many neurological diseases linked to dysfunctional neural circuitry, such as schizophrenia, depression, epilepsy, post-traumatic stress disorder (PTSD), and chronic pain.

    What if PIs were required to provide a detailed description of how their findings will actually lead to new treatments? It's one thing to say “our findings will have broad implications for the study of many neurological diseases” but quite another to explain exactly how this will happen, even if you're studying humans (not to mention if you're studying a system of 30 neurons in the crab gut). The down side here is that the public might expect too much: “Hey, why haven't you cured Alzheimer's yet? Haven't we, the taxpayers, given you billions of dollars?”

    On the other hand, politicians are falling all over each other saying, “I'm not a scientist, but...” before going ahead and making ignorant policy decisions and second-guessing independent peer review of grants. So it's critical that neuroscientists can communicate the “broader implications” of their work and yes, how their research may eventually lead to improved treatments for brain diseases.

    For that reason, I've been pondering the relative translational potential of neural engineering, pharmacological, and regenerative medicine approaches to neurological and psychiatric disorders... We'll see what (if anything) I can come up with, at least from a comparative perspective.

    Cheesy Bench to Bedside Image Credit: UAMS


    Photo illustration by Andrea Levy for The Chronicle Review


    Inflammatory title, isn't it. Puzzled by how it could possibly happen? Then read on!

    A few days ago, The Chronicle of Higher Education published a piece called Neuroscience Is Ruining the Humanities. You can find it in a Google search and at reddit, among other places. The url is http://chronicle.com/article/Neuroscience-Is-Ruining-the/150141/ {notice the “Neuroscience-Is-Ruining” part.}

    Oh wait. Here's a tweet.


    At some point along the way, without explanation, the title of the article was changed to the more mundane The Shrinking World of Ideas. The current take-home bullet points are:
    • We have shifted our focus from the meaning of ideas to the means by which they’re produced.
    • When professors began using critical theory to teach literature they were, in effect, committing suicide by theory.

    The author is essayist Arthur Krystal, whose 4,000+ word piece can be summarized as “postmodernism ruined everything.” In the olden days of the 19th century, ideas mattered. Then along came the language philosophers and some French historians in the 1920s/30s, who opened the door for Andy Warhol and Jacques Derrida and what do you know, ideas didn't matter any more. That's fine, he can express that opinion, and normally I wouldn't care. I'm not going to debate the cultural harms or merits of postmodernism today.

    What did catch my eye was this: “...what the postmodernists indirectly accomplished was to open the humanities to the sciences, particularly neuroscience.”

    My immediate response: “that is the most ironic thing I've ever heard!! there is no truth [scientific or otherwise] in postmodernism!” Meaning: scientific inquiry was either irrelevant to these theorists, or something to be distrusted, if not disdained. So how could they possibly invite Neuroscience into the Humanities Building?

    Let's look at Krystal's extended quote (emphasis mine):
    “...By exposing the ideological codes in language, by revealing the secret grammar of architectural narrative and poetic symmetries, and by identifying the biases that frame "disinterested" judgment, postmodern theorists provided a blueprint of how we necessarily think and express ourselves. In their own way, they mirrored the latest developments in neurology, psychology, and evolutionary biology. [Ed. warning: non sequitur ahead.] To put it in the most basic terms: Our preferences, behaviors, tropes, and thoughts—the very stuff of consciousness—are byproducts of the brain’s activity. And once we map the electrochemical impulses that shoot between our neurons, we should be able to understand—well, everything. So every discipline becomes implicitly a neurodiscipline, including ethics, aesthetics, musicology, theology, literature, whatever.”

    I'm as reductionist as the next neuroscientist, sure, but Krystal's depiction of the field is either quite the caricature, or incredibly naïve. Ultimately, I can't tell if he's actually in favor of "neurohumanities"...
    In other words, there’s a good reason that "neurohumanities" are making headway in the academy. Now that psychoanalytic, Marxist, and literary theory have fallen from grace, neuroscience and evolutionary biology can step up. And what better way for the liberal arts to save themselves than to borrow liberally from science?

    ...or opposed:
    Even more damning are the accusations in Sally Satel and Scott O. Lilienfeld’s Brainwashed: The Seductive Appeal of Mindless Neuroscience, which argues that the insights gathered from neurotechnologies have less to them than meets the eye. The authors seem particularly put out by the real-world applications of neuroscience as doctors, psychologists, and lawyers increasingly rely on its tenuous and unprovable conclusions. Brain scans evidently are "often ambiguous representations of a highly complex system … so seeing one area light up on an MRI in response to a stimulus doesn’t automatically indicate a particular sensation or capture the higher cognitive functions that come from those interactions." 1

    Then he links to articles like Adventures in Neurohumanities and Can ‘Neuro Lit Crit’ Save the Humanities? (in a non-critical way) 2  before meandering back down memory lane. They sure don't make novelists like they used to!

    So you see, neuroscience hasn't really ruined the humanities.3 Have the humanities ruined neuroscience? Although there has been a disturbing proliferation of neuro- fields, I think we can weather the storm of Jane Austen neuroimaging studies.


    Footnotes

    1 Although I haven't always seen eye to eye with Satel and Lilienfeld, here Krystal clearly overstates the extent of their dismissal of the entire field (which has happened before).

    2 Read Professor of Literary Neuroimaging instead.

    3 The author of the Neurocultures Manifesto may disagree, however.

    link via @vaughanbell

    Hipster Neuroscience (12/08/14)


    According to Urban Dictionary,
    Hipsters are a subculture of men and women typically in their 20's and 30's that value independent thinking, counter-culture, progressive politics, an appreciation of art and indie-rock, creativity, intelligence, and witty banter.  ...  Hipsters reject the culturally-ignorant attitudes of mainstream consumers, and are often be seen wearing vintage and thrift store inspired fashions, tight-fitting jeans, old-school sneakers, and sometimes thick rimmed glasses.

    by Trey Parasuco November 22, 2007 

    Makes them sound so cool. But we all know that everyone loves to complain about hipsters and the endless lifestyle/culture/fashion pieces written about them.





    And they're so conformist in their nonconformity.

    Recently, Jonathan Touboul posted a paper at arXiv to model The hipster effect: When anticonformists all look the same:
    The hipster effect is this non-concerted emergent collective phenomenon of looking alike trying to look different. Uncovering the structures behind this apparent paradox ... can have implications in deciphering collective phenomena in economics and finance, where individuals may find an interest in taking positions in opposition to the majority (for instance, selling stocks when others want to buy). Applications also extend to the case of neuronal networks with inhibition, where neurons tend to fire when others are silent, and reciprocally.

    You can find great write ups of the paper at Neuroecology and the Washington Post:
    There are two kinds of people in this world: those who like to go with the flow, and those who do the opposite — hipsters, in other words. Over time, people perceive what the mainstream trend is, and either align themselves with it or oppose it.
    ...

    What if this world contained equal numbers of conformists and hipsters? No matter how the population starts out, it will end up in some kind of cycle, as the conformists try to catch up to the hipsters, and the hipsters try to differentiate themselves from the conformists.

    But there aren't equal numbers of conformists and hipsters. And this type of cycle doesn't apply to neuroscience research, which is always moving forward in terms of trends and technical advances (right)?
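The cycle described above is easy to see in a toy simulation. What follows is my own mean-field simplification, not Touboul's actual delayed-dynamics model, and every parameter is invented: conformists copy a delayed view of the majority style, hipsters adopt the opposite, and the population never settles.

```python
import random

def simulate(hipster_frac=0.7, delay=4, steps=80, noise=0.05, seed=0):
    """Mean-field toy: x tracks the fraction of the population wearing
    style "1". Everyone reacts to the trend as it looked `delay` steps
    ago; conformists adopt the majority style, hipsters the opposite."""
    rng = random.Random(seed)
    x = [0.6] * (delay + 1)  # seed the trend slightly off-balance
    for _ in range(steps):
        trend = x[-1 - delay]                  # delayed perception of the trend
        majority = 1.0 if trend > 0.5 else 0.0
        # conformists chase the majority, hipsters flee it
        target = (1 - hipster_frac) * majority + hipster_frac * (1 - majority)
        x.append(min(1.0, max(0.0, target + rng.uniform(-noise, noise))))
    return x

series = simulate()
# count how often the trend crosses the 50% line: with a hipster majority
# and delayed information, it oscillates instead of converging
flips = sum(1 for a, b in zip(series, series[1:]) if (a - 0.5) * (b - 0.5) < 0)
```

In this toy version, dropping `hipster_frac` below 0.5 makes the same dynamics lock onto a stable majority instead of cycling, which is the qualitative contrast the paper formalizes.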



    It may be the Dream of the 1890s in Portland, but it's BRAIN 2015 all the way (RFA-MH-15-225):

    BRAIN Initiative: Development and Validation of Novel Tools to Analyze Cell-Specific and Circuit-Specific Processes in the Brain (U01)


    Although hipsters are in their 20s and 30s, the august NIH crowd (and its advisors) has set the BRAIN agenda that everyone else has to follow. When the cutting-edge tools (e.g., optogenetics) become commonplace, you have to do amazing things with them like create false memories in mice, or else develop methods like DREADD2.0: An Enhanced Chemogenetic Toolkit or Ultra-Multiplexed Nanoscale In Situ Proteomics for Understanding Synapse Types.

    The BRAIN Initiative wants to train the hipsters and other "graduate students, medical students, postdoctoral scholars, medical residents, and/or early-career faculty" in Research Tools and Methods and Computational Neuroscience. This will "complement and/or enhance the training of a workforce to meet the nation’s biomedical, behavioral and clinical research needs."

    But this is an era when the average age of first-time R01 Principal Investigators is 42 1 and post-docs face harsh realities:
    Research in 2014 is a brutal business, at least for those who want to pursue academic science as a career. Perhaps the most telling line comes from the UK report: of 100 science PhD graduates, about 30 will go on to postdoc research, but just four will secure permanent academic posts with a significant research component. There are too many scientists chasing too few academic careers.

    How do you respond to these brutal challenges? I don't have an answer.2  But many young neuroscientists may have to start pickling their own vegetables, raising their own chickens, and curing their own meats.



    Footnotes

    1  The average age of first-time Principal Investigators on NIH R01 grants has risen from 36 in 1980 to 42 in 2001, where it remains today (see this PPT). So this has been going on for a while.

    2  Or at least, not an answer that will fit within the scope of this post. Some obvious places to start are to train fewer scientists, enforce a reasonable retirement age, and increase funding somehow. And decide whether all research should be done by 20 megalabs, or else reduce the $$ amount and number of grants awarded to any one investigator.





    Source: Alyssa L. Miller, Flickr.


    For nearly 9 years, this blog has been harping on the blight of overblown press releases, with posts like:

    Irresponsible Press Release Gives False Hope to People With Tourette's, OCD, and Schizophrenia

    Press Release: Press Releases Are Prestidigitation

    New research provides fresh evidence that bogus press releases may depend largely on our biological make-up

    Save Us From Misleading Press Releases

    etc.


    So it was heartening to see a team of UK researchers formally evaluate the content of 462 health-related press releases issued by leading universities in 2011 (Sumner et al., 2014). They classified three types of exaggerated claims and found that 40% of the press releases contained exaggerated health advice, 33% made causal statements based on correlational results, and 36% extrapolated from animal research to humans.

    A fine duo of exaggerated health advice and causal statements based on correlational results recently caught my eye. Here's a press release issued by Springer, the company that publishes Cognitive Therapy and Research:

    Don’t worry, be happy: just go to bed earlier

    When you go to bed, and how long you sleep at a time, might actually make it difficult for you to stop worrying. So say Jacob Nota and Meredith Coles of Binghamton University in the US, who found that people who sleep for shorter periods of time and go to bed very late at night are often overwhelmed with more negative thoughts than those who keep more regular sleeping hours.

    The PR issues health advice (“just go to bed earlier”) based on correlational data: “people who sleep for shorter periods of time and go to bed very late at night are often overwhelmed with more negative thoughts.” But does staying up late cause you to worry, or do worries keep you awake at night? A survey can't distinguish between the two.

    The study by Nota and Coles (2014) recruited 100 teenagers (or near-teenagers, mean age = 19.4 ± 1.9) from the local undergraduate research pool. They filled out a number of self-report questionnaires that assessed negative affect, sleep quality, chronotype (morning person vs. evening person), and aspects of repetitive negative thinking (RNT).

    RNT is a transdiagnostic construct that encompasses symptoms typical of depression (rumination), anxiety (worry), and obsessive-compulsive disorder (obsessions). Thus, the process of RNT is considered similar across the disorders, but the content may differ. The undergraduates were not clinically evaluated so we don't know if any of them actually had the diagnoses of depression, anxiety, and/or OCD. But one can look at whether the types of symptoms that are endorsed (whether clinically relevant or not) are related to sleep duration and timing. Which is what the authors did.

    Shorter sleep duration and a later bedtime were indeed associated with more RNT. However, when accounting for levels of negative affect, the sleep variables no longer showed a significant correlation.2 Not a completely overwhelming relationship, then.
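That vanishing correlation is exactly the pattern you'd expect if a third variable drives both measures. Here's a minimal sketch with synthetic data (the coefficients, sample size, and seed are invented for illustration, not taken from Nota & Coles):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of both
    (first-order partial correlation via residuals)."""
    design = np.column_stack([np.ones_like(z), z])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic data: negative affect drives BOTH late bedtimes and repetitive
# negative thinking, so the raw bedtime-RNT correlation is confounded.
rng = np.random.default_rng(0)
affect = rng.normal(size=500)
bedtime = 0.7 * affect + rng.normal(size=500)   # arbitrary units
rnt = 0.7 * affect + rng.normal(size=500)

raw = np.corrcoef(bedtime, rnt)[0, 1]            # sizeable
controlled = partial_corr(bedtime, rnt, affect)  # near zero
```

A sizeable raw correlation that shrinks toward zero once the confound is held constant is consistent with either causal story, which is the point: the survey design can't adjudicate.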

    But as expected, the night owls reported more RNT than the non-night owls.1

    Here's how the findings were interpreted in the Springer press release and, conspicuously, by the authors themselves (the study of Sumner et al., 2014 also observed this pattern). Note the exaggerated health advice and causal statements based on correlational results.

    “Making sure that sleep is obtained during the right time of day may be an inexpensive and easily disseminable intervention for individuals who are bothered by intrusive thoughts,” remarks Nota.

    The findings also suggest that sleep disruption may be linked to the development of repetitive negative thinking. Nota and Coles therefore believe that it might benefit people who are at risk of developing a disorder characterized by such intrusive thoughts to focus on getting enough sleep.

    “If further findings support the relation between sleep timing and repetitive negative thinking, this could one day lead to a new avenue for treatment of individuals with internalizing disorders,” adds Coles. “Studying the relation between reductions in sleep duration and psychopathology has already demonstrated that focusing on sleep in the clinic also leads to reductions in symptoms of psychopathology.”

    As they mentioned, we already know that many psychiatric disorders are associated with problematic sleep, and that improved sleep is helpful in these conditions. Telling people who suffer from debilitating and uncontrollable intrusive thoughts to “just go to bed earlier” isn't particularly helpful. Not only that, such advice can be downright irritating.

    Here's a news story from Yahoo that plays up the “sleep reduces worry” causal relationship even more:
    This Sleep Tweak Could Help You Worry Less

    Can the time you hit the hay actually influence the types of thoughts you have? Science says yes.

    Are you a chronic worrier? The hour you’re going to sleep, and how much sleep you’re getting overall, may exacerbate your anxiety, according to a new study published in the journal Cognitive Therapy and Research.

    The great news here? By tweaking your sleep habits you could actually help yourself worry less. Really.

    Great! So internal monologues of self-loathing (“I'm a complete failure”, “No one likes me”) and deep anxiety about the future (“My career prospects are dismal”, “I worry about my partner's terrible diagnosis”) can be cured by going to bed earlier!

    Even if you could forcibly alter your chronotype (and I don't know if this is possible), what do you do when you wake up in the middle of the night haunted by your repetitive negative thoughts?


    Further Reading


    Alexis Delanoir on the RNT paper and much more in Depression And Stress/Mood Disorders: Causes Of Repetitive Negative Thinking And Ruminations

    Scicurious, with an amusingly titled piece: This study of hype in press releases will change journalism


    Footnotes

    1 Chronotype was dichotomously classified as evening type vs. moderately morning-type / neither type (not a lot of early birds, I guess). And only 75 students completed questionnaires in this part of the study.

    2 It's notable that the significance level for these correlations was not corrected for multiple comparisons in the first place.


    References

    Nota, J., & Coles, M. (2014). Duration and Timing of Sleep are Associated with Repetitive Negative Thinking. Cognitive Therapy and Research DOI: 10.1007/s10608-014-9651-7

    Sumner, P., Vivian-Griffiths, S., Boivin, J., Williams, A., Venetis, C., Davies, A., Ogden, J., Whelan, L., Hughes, B., Dalton, B., Boy, F., & Chambers, C. (2014). The association between exaggeration in health related science news and academic press releases: retrospective observational study, BMJ, 349 (dec09 7) DOI: 10.1136/bmj.g7015





    Ho ho ho!

    “Laughter consists of both motor and emotional aspects. The emotional component, known as mirth, is usually associated with the motor component, namely, bilateral facial movements.”

    -Yamao et al. (2014)

    The subject of laughter has been under an increasing amount of scientific scrutiny.  A recent review by Dr. Sophie Scott and colleagues (Scott et al., 2014) emphasized that laughter is a social emotion. During conversations, voluntary laughter by the speaker is a communicative act. This contrasts with involuntary laughter, which is elicited by external events like jokes and funny behavior.

    One basic idea about the neural systems involved in the production of laughter relies on this dual process theme:
    The coordination of human laughter involves the periaqueductal grey [PAG] and the reticular formation [RF], with inputs from cortex, the basal ganglia, and the hypothalamus. The hypothalamus is more active during reactive laughter than during voluntary laughter. Motor and premotor cortices are involved in the inhibition of the brainstem laughter centres and are more active when suppressing laughter than when producing it.


    Figure 1 (Scott et al., 2014). Voluntary and involuntary laughter in the brain.


    An earlier paper on laughter and humor focused on neurological conditions such as pathological laughter and gelastic epilepsy (Wild et al., 2003). In gelastic epilepsy, laughter is the major symptom of a seizure. These gelastic (“laughing”) seizures usually originate from the temporal poles, the frontal poles, or from benign tumors in the hypothalamus (Wild et al., 2003). Some patients experience these seizures as pleasant (even mirthful), while others do not:
    During gelastic seizures, some patients report pleasant feelings which include exhilaration or mirth. Other patients experience the attacks of laughter as inappropriate and feel no positive emotions during their laughter. It has been claimed that gelastic seizures originating in the temporal regions involve mirth but that those originating in the hypothalamus do not. This claim has been called into question, however...

    In their extensive review of the literature, Wild et al. (2003) concluded that the “laughter‐coordinating centre” must lie in the dorsal midbrain, with intimate connections to PAG and RF. Together, this system may comprise the “final common pathway” for laughter (i.e., coordinating changes in facial muscles, respiration, and vocalizations). During emotional reactions, prefrontal cortex, basal temporal cortex, the hypothalamus, and the basal ganglia transmit excitatory inputs to PAG and RF, which in turn generate laughter.


    Can direct cortical stimulation produce laughter and mirth?

    It turns out that the basal temporal cortex (wearing a Santa hat above) plays a surprising role in the generation of mirth, at least according to a recent paper by Yamao et al. (2014). Over a period of 13 years, they recorded neural activity from the cortical surface of epilepsy patients undergoing seizure monitoring, with the purpose of localizing the aberrant epileptogenic tissue. They enrolled 13 patients with implanted subdural grids to monitor for left temporal lobe seizures, and identified induced feelings of mirth in two patients (resulting from electrical stimulation in specific regions).

    Obviously, this is not the typical way we feel amusement and utter guffaws of delight, but direct stimulation of the cortical surface goes back to Wilder Penfield as a way for neurosurgeons to map the behavioral functions of the brain. Of particular interest is the localization of language-related cortex that should be spared from surgical removal if at all possible.

    The mirth-inducing region (Yamao et al., 2014) encompasses what is known as the basal temporal language area (BTLA), first identified by Lüders and colleagues in 1986. The region includes the left fusiform gyrus, about 3-7 cm from the tip of the temporal lobe. Stimulation at high intensities produces total speech arrest (inability to speak) and global language comprehension problems. Low stimulation intensity produces severe anomia, an inability to name things (or places or people). Remarkably, however, Lüders et al. (1991) found that “Surgical resection of the basal temporal language area produces no lasting language deficit.”

    With this background in mind, let's look at the results from the mirthful patients. The location of induced-mirth (shown below) is the white circle in Patient 1 and the black circles in Patient 2.  In comparison, the locations of stimulation-induced language impairment are shown in diamonds. Note, however, that mirth was co-localized with language impairment in Patient 2.



    Fig. 1 (modified from Yamao et al., 2014). The results of high-frequency electrical cortical stimulation. “Mirth” (circles) and “language” (diamonds) electrodes are shown in white and black colors for Patients 1 and 2, respectively. Note that mirth was elicited at or adjacent to the electrode associated with language impairment.  R = right side. The view is of the bottom of the brain.


    How do the authors interpret this finding?
    ...the ratio of electrodes eliciting language impairment was higher for the mirth electrodes than in no-mirth electrodes, suggesting an association between mirth and language function. Since the BTLA is actively involved in semantic processing (Shimotake et al., 2014 and Usui et al., 2003), this semantic/language area was likely involved in the semantic aspect of humor detection in our cases.

    Except there was no external humor to detect, as the laughter and feelings of mirth were spontaneous. After high-frequency stimulation, one patient reported, “I do not know why, but something amused me and I laughed.” The other patient said, “A familiar melody that I had heard in a television program in my childhood came to mind; its tune sounded funny and amused me.”

    The latter description sounds like memory-induced nostalgia or reminiscence, which can occur with electrical stimulation of the temporal lobe (or TL seizures). But most of the relevant stimulation sites for those déjà vu-like experiences are not in the fusiform gyrus, which has been mostly linked to higher-level visual processing.

    The authors also found that stimulation of the left hippocampus consistently caused contralateral (right-sided) facial movement that led to laughter.

    I might have missed it, but one thing we don't know is whether stimulation of the right fusiform gyrus would have produced similar effects. Another thing to keep in mind is that these little circles are only one part of a larger system (see Scott et al. figure above). Presumably, the stimulated BTLA sites send excitatory projections to PAG and RF, which initiate laughter. But where is mirth actually represented, if you can feel amused and laugh for no apparent reason? By bypassing higher-order regions1, laughter can be a surprising and puzzling experience.


    Footnote

    1 Like, IDK, maybe ventromedial PFC, other places in both frontal lobes, hypothalamus, basal ganglia, and more "classically" semantic areas in the left temporal lobe...


    link originally via @Neuro_Skeptic:



    References

    Lüders, H., Lesser, R., Hahn, J., Dinner, D., Morris, H., Wyllie, E., & Godoy, J. (1991). Basal temporal language area. Brain, 114 (2), 743-754. DOI: 10.1093/brain/114.2.743

    Scott, S., Lavan, N., Chen, S., & McGettigan, C. (2014). The social life of laughter. Trends in Cognitive Sciences, 18 (12), 618-620. DOI: 10.1016/j.tics.2014.09.002

    Wild, B., et al. (2003). Neural correlates of laughter and humour. Brain, 126 (10), 2121-2138. DOI: 10.1093/brain/awg226

    Yamao, Y., Matsumoto, R., Kunieda, T., Shibata, S., Shimotake, A., Kikuchi, T., Satow, T., Mikuni, N., Fukuyama, H., Ikeda, A., & Miyamoto, S. (2014). Neural correlates of mirth and laughter: A direct electrical cortical stimulation study. Cortex. DOI: 10.1016/j.cortex.2014.11.008







    Traumatic Brain Injury (TBI) is a serious public health problem that affects about 1.5 million people per year in the US, with direct and indirect medical costs of over $50 billion. Rapid intervention to reduce the risk of death and disability is crucial. The diagnosis and treatment of TBI is an active area of preclinical and clinical research funded by NIH and other federal agencies.

    But during the White House BRAIN Conference, a leading neurosurgeon painted a pessimistic picture of current treatments for acute TBI. In response to a question about clinical advances based on cellular neurobiology, Dr. Geoffrey Manley noted that the field is on its 32nd or 33rd failed clinical trial. The termination of a very promising trial of progesterone for TBI had just been announced (the ProTECT III, Phase III Clinical Trial “based on 17 years of work with 200 positive papers in preclinical models”), although I couldn't find any notice at the time (Sept 30 2014).

    Now, the results from ProTECT III have been published in the New England Journal of Medicine (Wright et al., 2014). 882 TBI patients from 49 trauma centers were enrolled in the study and randomized to receive progesterone, thought to be a neuroprotective agent, or placebo within 4 hours of major head injury. The severity of TBI fell in the moderate to severe range, as indicated by scores on the Glasgow Coma Scale (which rates the degree of impaired consciousness).

    The primary outcome measure was the Extended Glasgow Outcome Scale (GOS-E) at six months post-injury. The trial was stopped at 882 patients (out of a planned 1140) because there was no way that progesterone would improve outcomes:
    After the second interim analysis, the trial was stopped because of futility. For the primary hypothesis comparing progesterone with placebo, favorable outcomes occurred in 51.0% of patients assigned to progesterone and in 55.5% of those assigned to placebo. 
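For a rough sense of why 51.0% vs. 55.5% reads as futility, here's a back-of-envelope two-proportion comparison on the reported rates. The equal arm sizes are my assumption (882 patients split evenly), and the trial's actual stopping rule used a pre-planned, covariate-adjusted interim analysis, not this naive test:

```python
import math

# Favorable-outcome rates reported for ProTECT III; arm sizes assumed to be
# an even split of the 882 enrolled patients (an approximation).
n1 = n2 = 441
p_prog, p_plac = 0.510, 0.555
pooled = (p_prog * n1 + p_plac * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p_prog - p_plac) / se   # negative: the point estimate favors placebo
```

The resulting z-statistic is around -1.3: not even nominally significant, and pointing in the wrong direction for the drug, so continuing to 1140 patients had essentially no chance of rescuing the hypothesis.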

    Analysis of subgroups by race, ethnicity, and injury severity showed no differences between them, but there was a suggestive (albeit non-significant) sex difference.



    Modified from Fig. 2 (Wright et al., 2014). Adjusted Relative Benefit in Predefined Subgroups. Note the red box around the p value for sex differences.


    Squares to the left of the dotted line indicate that placebo performed better than progesterone in a given patient group, while values to the right favor progesterone. The error bars show confidence intervals, which indicate that nearly all groups overlap with 0 (representing zero benefit for progesterone). The red box indicates a near-significant difference between men and women, with women actually faring worse with progesterone than with placebo. You may quibble about conventional significance, but women on average deteriorated with treatment, while men were largely unaffected.

    This was a highly disappointing outcome for a well-conducted study that built on promising results in smaller Phase II Clinical Trials (which were backed by a boatload of preclinical data). The authors reflect on this gloomy state of affairs:
    The PROTECT III trial joins a growing list of negative or inconclusive trials in the arduous search for a treatment for TBI. To date, more than 30 clinical trials have investigated various compounds for the treatment of acute TBI, yet no treatment has succeeded at the confirmatory trial stage. Many reasons for the disappointing record of translating promising agents from the laboratory to the clinic have been postulated, including limited preclinical development work, poor drug penetration into the brain, delayed initiation of treatment, heterogeneity of injuries, variability in routine patient care across sites, and insensitive outcome measures.

    If that isn't enough, a second failed trial of progesterone was published in the same issue of NEJM (Skolnick et al., 2014). This group reported on negative results from an even larger pharma-funded trial (SyNAPse, which is the tortured acronym for Study of a Neuroprotective Agent, Progesterone, in Severe Traumatic Brain Injury). The SyNAPse trial enrolled the projected number of 1180 patients across 21 countries, all with severe TBI. The percentage of patients with favorable outcomes at six months was 50.4% in the progesterone group and 50.5% in the placebo group.
    The negative result of this study, combined with the results of the PROTECT III trial, should stimulate a rethinking of procedures for drug development and testing in TBI.

    This led Dr. Lee H. Schwamm (2014) to expound on the flawed culture of research in an Editorial, invoking the feared god of false positive findings (Ioannidis, 2005) and his minions: small effect sizes, small n's, too few studies, flexibility of analysis, and bias. Schwamm pointed to problematic aspects of the Phase II Trials that preceded ProTECT III and SyNAPse, including modest effect sizes and better-than-expected outcomes in the placebo group.


    Hope for the Future

    “And you have to give them hope.”
    --Harvey Milk


    When the going gets tough in research, who better to rally the troops than your local university press office? The day after Dr. Manley's presentation at the BRAIN conference on Sept. 30, the University of California San Francisco issued this optimistic news release:

    $17M DoD Award Aims to Improve Clinical Trials for Traumatic Brain Injury

    An unprecedented, public-private partnership funded by the Department of Defense (DoD) is being launched to drive the development of better-run clinical trials and may lead to the first successful treatments for traumatic brain injury, a condition affecting not only athletes and members of the military, but also millions among the general public, ranging from youngsters to elders.

    Under the partnership, officially launched Oct. 1 with a $17 million, five-year award from the DoD, the research team, representing many universities, the Food and Drug Administration (FDA), companies and philanthropies, will examine data from thousands of patients in order to identify effective measures of brain injury and recovery, using biomarkers from blood, new imaging equipment and software, and other tools.
    . . .

    “TBI is really a multifaceted condition, not a single event,” said UCSF neurosurgeon Geoffrey T. Manley, MD, PhD, principal investigator for the new award... “TBI lags 40 to 50 years behind heart disease and cancer in terms of progress and understanding of the actual disease process and its potential aftermath. More than 30 clinical trials of potential TBI treatments have failed, and not a single drug has been approved.”

    The TED (TBI Endpoints Development) Award is meant to accelerate research to improve TBI diagnostics, classification, and patient selection for clinical trials. Quite a reversal of fortune in one day.

    Out of the ashes of two failed clinical trials, a phoenix arises. Hope for TBI patients and their families takes wing.


    Further Reading (and viewing)

    White House BRAIN Conference (blog post)

    90 min video of the conference

    Brief Storify (summary of the conference)

    ClinicalTrials.gov listings for SyNAPSe and ProTECT III.


    References

    Schwamm, L. (2014). Progesterone for Traumatic Brain Injury — Resisting the Sirens' Song. New England Journal of Medicine, 371 (26), 2522-2523. DOI: 10.1056/NEJMe1412951

    Skolnick, B., Maas, A., Narayan, R., van der Hoop, R., MacAllister, T., Ward, J., Nelson, N., & Stocchetti, N. (2014). A Clinical Trial of Progesterone for Severe Traumatic Brain Injury. New England Journal of Medicine, 371 (26), 2467-2476. DOI: 10.1056/NEJMoa1411090

    Wright, D., Yeatts, S., Silbergleit, R., Palesch, Y., Hertzberg, V., Frankel, M., Goldstein, F., Caveney, A., Howlett-Smith, H., Bengelink, E., Manley, G., Merck, L., Janis, L., & Barsan, W. (2014). Very Early Administration of Progesterone for Acute Traumatic Brain Injury. New England Journal of Medicine, 371 (26), 2457-2466 DOI: 10.1056/NEJMoa1404304


    The Incredible Grow Your Own Brain (Barron Bob)


    Using super absorbent material from disposable diapers, MIT neuroengineers Ed Boyden, Fei Chen, and Paul Tillberg went well beyond the garden variety novelty store "Grow Brain" to expand real brain slices to nearly five times their normal size.

    Boyden, E., Chen, F. & Tillberg, P. / MIT / Courtesy of NIH

    A slice of a mouse brain (left) was expanded nearly five-fold in each dimension by adding a water-soaking salt. The result — shown at smaller magnification (right) for comparison — has its anatomical structures essentially unchanged. (Nature - E. Callaway)


    As covered by Ewen Callaway in Nature:
    Blown-up brains reveal nanoscale details

    Material used in diaper absorbent can make brain tissue bigger and enable ordinary microscopes to resolve features down to 60 nanometres.

    Microscopes make living cells and tissues appear bigger. But what if we could actually make the things bigger?

    It might sound like the fantasy of a scientist who has read Alice’s Adventures in Wonderland too many times, but the concept is the basis for a new method that could enable biologists to image an entire brain in exquisite molecular detail using an ordinary microscope, and to resolve features that would normally be beyond the limits of optics.

    The technique, called expansion microscopy, involves physically inflating biological tissues using a material more commonly found in baby nappies (diapers).

    . . .

    “What we’ve been trying to do is figure out if we can make everything bigger,” Boyden told the meeting at the NIH in Bethesda, Maryland. To manage this, his team used a chemical called acrylate that has two useful properties: it can form a dense mesh that holds proteins in place, and it swells in the presence of water.

    Sodium polyacrylate (via Leonard Gelfand Center, CMU)


    Acrylate, a type of salt also known as waterlock, is the substance that gives nappies their sponginess. When inflated, Boyden's tissues grow about 4.5 times in each dimension.




    Just add water

    Before swelling, the tissue is treated with a chemical cocktail that makes it transparent, and then with the fluorescent molecules that anchor specific proteins to the acrylate, which is then infused into tissue. Just as with nappies, adding water causes the acrylate polymer to swell. After stretching, the fluorescent-tagged molecules move further away from each other; proteins that were previously too close to distinguish with a visible-light microscope come into crisp focus. In his NIH presentation, Boyden suggested that the technique can resolve molecules that had been as close as 60nm before expansion.
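For what it's worth, the numbers hang together: features 60 nm apart before a 4.5× linear expansion end up 270 nm apart, comfortably within reach of a conventional visible-light microscope. A quick sanity check (the diffraction-limit framing is my gloss, not a figure from the article):

```python
# Sanity check on the expansion-microscopy numbers reported in the article.
expansion = 4.5        # linear expansion factor, per dimension
claimed_nm = 60        # smallest pre-expansion separation said to be resolvable

effective_nm = claimed_nm * expansion  # separation the microscope actually sees
volume_factor = expansion ** 3         # tissue volume grows as the cube
```

The cube is the catch: a 4.5-fold linear expansion means the specimen occupies roughly 91 times its original volume, which is why the tissue must first be locked into a transparent, swellable polymer mesh.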

    Most scientists thought it was cool, but there were some naysayers: “This is certainly highly ingenious, but how much practical use it will be is less clear,” notes Guy Cox, a microscopy specialist at the University of Sydney, Australia.

    Others saw nothing new with the latest brain-transforming gimmick. Below, Marc Schuster displays his 2011 invention, the inflatable brain.



    “An inflatable brain makes a great prop for your Zombie Prom King costume,” says Schuster, author of The Grievers.


    Link via Roger Highfield.







    The Boston Marathon bombings of April 15, 2013 killed three people and injured hundreds of others near the finish line of the iconic footrace. The oldest and most prominent marathon in the world, Boston attracts over 20,000 runners and 500,000 spectators. The terrorist act shocked and traumatized and unified the city.

    What should the survivors do with their traumatic memories of the event? Many with disabling post-traumatic stress disorder (PTSD) receive therapy to lessen the impact of the trauma. Should they forget completely? Is it possible to selectively “alter” or “remove” a specific memory? Studies in rodents are investigating the use of pharmacological manipulations (Otis et al., 2014) and behavioral interventions (Monfils et al., 2009) to disrupt the reconsolidation of a conditioned fear memory. Translating these interventions into clinically effective treatments in humans is an ongoing challenge.

    The process of reconsolidation may provide a window for altering unwanted memories. When an old memory is retrieved, it enters a transiently labile state during which it is susceptible to change before being consolidated and stored again (Nader & Hardt, 2009). There's some evidence that the autonomic response to a conditioned fear memory can be lessened by an “updating” procedure during the reconsolidation period (Schiller et al., 2010).1 How this might apply to the recollection of personally experienced trauma memories is uncertain.


    Remembering the Boston Bombings

    Can you interfere with recall of a traumatic event by presenting competing information during the so-called reconsolidation window? A new study by Kredlow and Otto (2015) recruited 113 Boston University undergraduates who were in Boston on the day of the bombings. In the first testing session, participants wrote autobiographical essays recounting the details of their experience, prompted by specific questions. In principle, this procedure re-activated the traumatic memory, rendering it vulnerable to updating during the reconsolidation window (~6 hours).

    The allotted time for the autobiographical essay was 4 min. After that, separate groups of subjects read either a neutral story, a negative story, or a positive story (for 5 min). The fourth group did not read a story. Presentation of a story that is not one's own would presumably “update” the personal memory of the bombings.

    A second session occurred one week later. The participants were again asked to write an autobiographical essay for 4 min, under the same conditions as Session #1. They were also asked about their physical proximity to the bombings, whether they watched the marathon in person, feared for anyone's safety, and knew anyone who was injured or killed. Nineteen subjects were excluded for various reasons, leaving the final n=94.

    One notable weakness is that we don't know anything about the mental health of these undergrads, except that they completed the 10-item Positive and Negative Affect Schedule (PANAS-SF) before each session. And they were “provided with mental health resources” after testing (presumably links to resources, since the study was conducted online).

    In terms of proximity, 10% of the participants were within one block of the bombings (“Criterion A” stressor), placing them at risk for developing PTSD. Most (95%) feared for someone's safety and 12% knew someone who was injured or killed (also considered Criterion A). But we don't know if anyone had a current or former PTSD diagnosis.

    The authors predicted that reading the negative stories during the “autobiographical reconsolidation window” would yield the greatest reduction in episodic details recalled from Session #1 (S1) to Session #2 (S2), relative to the No-Story condition. This is because the negative story and the horrific memories are both negative in valence [although I'm not sure of what mechanism would account for this effect].2
    Specifically, we hypothesized that learning a negative affective story during the reconsolidation window compared to no interference would interfere with the reconsolidation of memories of the Boston Marathon bombings. In addition, we expected the neutral and positive stories to result in some interference, but not as much as the negative story.

    The essays were coded for the number of memory details recalled in S1 and S2 (by 3-5 raters3), and the main measure was the number of details recalled in S2 for each of the four conditions. Other factors taken into account were the number of words used in S1, and time between the Boston Marathon and the testing session (both of which influenced the number of details recalled).

    The results are shown in Table 1 below. The authors reported comparisons between Negative Story vs. No Story (p<.05, d = 0.62), Neutral Story vs. No Story (p=.20, d = 0.39), and Positive Story vs. No Story (p=.83, d = 0.06). The effect sizes are “medium-ish” for both the Negative and Neutral comparisons, but only “significant” for Negative.
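    For readers unfamiliar with the effect sizes quoted above, Cohen's d is just the difference in group means scaled by the pooled standard deviation. A minimal sketch, with made-up numbers (not the study's actual data) chosen to land near the reported d = 0.62:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical group summaries (NOT the study's data): equal SDs, n=24 per cell.
d = cohens_d(mean1=20.0, sd1=5.0, n1=24, mean2=16.9, sd2=5.0, n2=24)
print(round(d, 2))  # 0.62 -- a mean difference of ~0.6 pooled SDs
```

    By convention, d ≈ 0.5 is a “medium” effect and d ≈ 0.8 a “large” one, which is why the Negative (0.62) and Neutral (0.39) comparisons read as “medium-ish” despite only one reaching significance in this small sample.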


    I would argue that the comparison between Negative Story and Neutral Story, which was not reported, is the only way to evaluate the valence aspect of the prediction, i.e. whether the reduction in details recalled was specific to reading a negative story rather than to reading any story at all. I'm also not sure why they didn't run an omnibus ANOVA in the first place.


    Nonetheless, Kredlow and Otto (2015) suggest that their study...
    ...represent[s] a step toward translating reconsolidation interference work to the clinic, as, to our knowledge, no published studies to date have examined nonpharmacological reconsolidation interference for clinically-relevant negative memories. Additional studies should examine reconsolidation interference paradigms, such as this one, in clinical populations.

    If this work were indeed extended to clinical populations, I would suggest conducting the study under more controlled conditions (in the lab, not online), which would also allow close monitoring of any distress elicited by writing the autobiographical essay (essentially a symptom provocation design). As the authors acknowledge, it would be especially important to evaluate not only the declarative, detail-oriented aspects of the traumatic memories, but also any change in their emotional impact.


    Further Reading

    Brief review of memory reconsolidation

    Media’s role in broadcasting acute stress following the Boston Marathon bombings

    Autobiographical Memory for a Life-Threatening Airline Disaster

    I Forget...


    Footnotes

    1 But this effect hasn't replicated in other studies (e.g., Golkar et al., 2012).

    2 Here, the authors say:
    ...some degree of similarity between the original memory and interference task may be required to achieve interference effects. This is in line with research suggesting that external and internal context is an important factor in extinction learning and may also be relevant to reconsolidation. As such, activating the affective context in which a memory was originally consolidated may facilitate reconsolidation interference.
    This is a very different strategy than the “updating of fear memories” approach, where a safety signal occurs before extinction. But conditioned fear (blue square paired with mild shock) is very different from episodic memories of a bombing scene.

    3 Details of the coding system:
    A group consensus coding system was used to code the memories. S1 and S2 memory descriptions for each participant were compared and coded for recall of memory details. One point was given for each detail from the S1 memory description that was recalled in the S2 memory description. Each memory pair was coded by between three to five raters until a consensus between three raters was reached. Raters were blind to participant randomization, but not to each other's ratings. Consensus was reached in 83% of memory pairs.

    References

    Kredlow MA, Otto MW (2015). Interference with the reconsolidation of trauma-related memories in adults. Depression and Anxiety 32(1):32-37. PMID: 25585535

    Monfils MH, Cowansage KK, Klann E, LeDoux JE. (2009). Extinction-reconsolidation boundaries: key to persistent attenuation of fear memories. Science 324:951-5.

    Nader K, Hardt O. (2009). A single standard for memory: the case for reconsolidation. Nat Rev Neurosci. 10:224-34.

    Otis JM, Werner CT, Mueller D. (2014). Noradrenergic Regulation of Fear and Drug-Associated Memory Reconsolidation. Neuropsychopharmacology. [Epub ahead of print]

    Schiller D, Monfils MH, Raio CM, Johnson DC, Ledoux JE, & Phelps EA (2010). Preventing the return of fear in humans using reconsolidation update mechanisms. Nature 463: 49-53.




    “It is feasible to recruit and retain a cohort of female participants to perform a functional magnetic resonance imaging [fMRI] task focused on making decisions about sex, on the basis of varying levels of hypothetical sexual risk, and to complete longitudinal prospective diaries following this task. Preliminary evidence suggests that risk level differentially impacts brain activity related to sexual decision making in these women [i.e., girls aged 14-15 yrs], which may be related to past and future sexual behaviors.”

    -Hensel et al. (2015)

    Can the brain activity of adolescents predict whether they are likely to make risky sexual decisions in the future?  I think this is the goal of a new pilot study by researchers at Indiana University and the Kinsey Institute (Hensel et al., 2015). While I have no reason to doubt the good intentions of the project, certain aspects of it make me uncomfortable.

    But first, I have a confession to make. I'm not an expert in adolescent sexual health like first author Dr. Devon Hensel. Nor do I know much about pediatrics, adolescent medicine, health risk behaviors, sexually transmitted diseases, or the epidemiology of risk, like senior author Dr. J. Dennis Fortenberry (who has over 300 publications on these topics).  His papers include titles such as Time from first intercourse to first sexually transmitted infection diagnosis among adolescent women and Sexual learning, sexual experience, and healthy adolescent sex. Clearly, these are very important topics with serious personal and public health implications. But are fMRI studies of a potentially vulnerable population the best way to address these societal problems?

    The study recruited 14 adolescent girls (mean age = 14.7 yrs) from health clinics in lower- to middle-income neighborhoods. Most of the participants (12 of the 14) were African-American, most did not drink or do drugs, and most had not yet engaged in sexual activity.  However, the clinics served areas with “high rates of early childbearing and sexually transmitted infection” so the implication is that these young women are at greater risk of poor outcomes than those who live in different neighborhoods.

    Detailed sexual histories were obtained from the girls upon enrollment (see below). They also kept a diary of sexual thoughts and behaviors for 30 days.




    Given the sensitive nature of the information revealed by minors, it's especially important to outline the informed consent procedures and the precautions taken to protect privacy. Yes, a parent or guardian gave their approval, and the girls completed informed consent documents that were approved by the local IRB. But I wanted to see more about this in the Methods. For example, did the parent or guardian have access to their daughters' answers and/or diaries, or was that private? This could have influenced the willingness of the girls to disclose potentially embarrassing behavior or “verboten” activities (prohibited by parental mores, church teachings, legal age of consent,1 etc.). 

    I don't know, maybe the standard procedures are obvious to those within the field of sexual health behavior, but they weren't to me.

    Turning to more familiar territory, the experimental design for the neuroimaging study involved presentation of four different types of stimuli: (1) faces of adolescent males; (2) alcoholic beverages; (3) restaurant food; (4) household items (e.g., frying pan). My made-up examples of the stimuli are shown below.



    Each picture was presented with information that indicated the item's risk level (“high” or “low”):
    • Adolescent male faces: number of previous sexual partners and typical condom use (yes/no)
    • Alcoholic beverages: number of alcohol units and whether there was a designated driver (yes/no)
    • Food: calorie content and whether the restaurant serving the food had been cited in the past year for health code violations (yes/no)
    • Household items: whether the object could be returned to the store (yes/no)

    For each picture, participants rated how likely they were to: (1) have sex with the male, (2) drink the beverage, (3) eat the food, or (4) purchase the product (1 = very unlikely to 4 = very likely). There were 35 exemplars of each category, and each stimulus was presented in both “high” and “low” risk contexts. So oddly, the pizza was 100 calories and from a clean restaurant on one trial, compared to 1,000 calories and from a roach-infested dump on another trial.

    The faces task was adapted from a study in adult women (Rupp et al., 2009), where the participants gave a mean likelihood rating of 2.45 for sex with low-risk men vs. 1.41 for high-risk men (significantly less likely for the latter). The teen girls showed the opposite result: 2.85 for low-risk teen boys vs. 3.85 for high-risk teen boys (significantly more likely). The “bad boy” effect?

    But the actual values were quite confusing. At one point the authors say they omitted the alcohol condition: “The present study focused on the legal behaviors (e.g., sexual behavior, buying item, and eating food) in which adolescents could participate.”

    But in the Fig. 1 legend, they say the opposite (that the alcohol condition was included):
    Panel (A) provides the average likelihood of young women's endorsing low- and high-risk decisions in the boy, alcohol, food, and household item (control) stimulus categories.

    Then they say that the low-risk male faces were rated as the most unlikely (i.e., least preferred) of all stimuli.  But Fig. 1 itself shows that the low-risk food stimuli were rated as the most unlikely...



    Regardless of the precise ratings, the young women were more drawn to all stimuli when they were in the high-risk condition. The authors tried to make a case for more "risky" sexual choices among participants with higher levels of overt or covert sexual reporting, but the numbers were either impossibly low (for behavior) or thought-crimes only (for dreams/fantasy). So it's really hard to see how brain activity of any sort could be diagnostic of actual behavior at this point in their lives.

    And the neuroimaging results were confusing as well. First, the less desirable low-risk stimuli elicited greater responses in cognitive and emotional control regions:
    Neural activity in a cognitive-affective network, including prefrontal and anterior cingulate (ACC) regions, was significantly greater during low-risk decisions.

    But then, we see that the more desirable high-risk sexual stimuli elicited greater responses in cognitive/emotional control regions:
    Compared with other decisions, high-risk sexual decisions elicited greater activity in the anterior cingulate, and low-risk sexual decision elicited greater activity in regions of the visual cortex. 

    This pattern went in the opposite direction from what was seen in adult women (Rupp et al., 2009), and it implicated a different region of the ACC. It's difficult to draw comparisons, though, because the adult and adolescent groups diverged in age, demographic characteristics, and sexual experience.


    Figure adapted from Hensel et al., 2015 (left) and Rupp et al., 2009 (right).


    So is it feasible to use fMRI to understand teen girls' sexual decision making? Maybe, from the point of view of logistics and subject compliance, which is no mean feat. But is it necessary, or even informative? Certainly not, in my view. It's not clear what neuroimaging will add to the picture, beyond the participants' fully disclosed sexual histories. Finally, is it ethical to use brain imaging to understand teen girls' sexual decision making? While the future predictive value of the fMRI data is uncertain, linking a biomarker to sensitive sexual information requires extra protection, especially when it is collected from a potentially vulnerable adolescent population.


    Footnote

    1 In the state of Indiana, it is illegal for an individual 18 years of age or older to have sex with one of the participants in the present study. So if a young woman engaged in sexual activity with an 18 year old senior, he could potentially go to jail. Not that this was necessarily the case for anyone here.


    References

    Hensel, D., Hummer, T., Acrurio, L., James, T., & Fortenberry, J. (2015). Feasibility of Functional Neuroimaging to Understand Adolescent Women's Sexual Decision Making. Journal of Adolescent Health. DOI: 10.1016/j.jadohealth.2014.11.004

    Rupp, H., James, T., Ketterson, E., Sengelaub, D., Janssen, E., & Heiman, J. (2009). The role of the anterior cingulate cortex in women's sexual decision making. Neuroscience Letters, 449 (1), 42-47 DOI: 10.1016/j.neulet.2008.10.083



    The Neurocritic (the blog) began 9 years ago today.

    I've enjoyed the journey immensely and look forward to the years to come. Below, a track by Nodes of Ranvier (the band, not the myelin sheath gaps).






    Node of Ranvier



    And now a word from our sponsors,  Episode 3979 of Sesame Street...

    The Number 9



    The Letter k



    Thank you for watching! (and reading).



     ...or should I say braindoggle...


    I've been reading The Future of the Brain, a collection of Essays by the World's Leading Neuroscientists edited by Gary Marcus and Jeremy Freeman. Amidst the chapters on jaw-dropping technical developments, Big Factory Science, and Grand Neuroscience Initiatives, one stood out for its contrarian stance (and personally reflective tone). Here's Professor Leah Krubitzer, who heads the Laboratory of Evolutionary Biology at University of California, Davis:

    “From a personal rather than scientific standpoint, the final important thing I've learned is don't be taken in by the boondoggle, don't get caught up in technology, and be very suspicious of "initiatives." Science should be driven by questions that are generated by inquiry and in-depth analysis rather than top-down initiatives that dictate scientific directions. I have also learned to be suspicious of labels declaring this the "decade of" anything: The brain, The mind, Consciousness. There should be no time limit on discovery. Does anyone really believe we will solve these complex, nonlinear phenomena in ten years or even one hundred? Tightly bound temporal mandates can undermine the important, incremental, and seemingly small discoveries scientists make every day doing critical, basic, nonmandated research. These basic scientific discoveries have always been the foundation for clinical translation. By all means funding big questions and developing innovative techniques is worthwhile, but scientists and the science should dictate the process.”

    ...although it should be said that a bunch of scientists did at least contribute to the final direction taken by the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies℠)...


    An AS @ UVA Project
    by Meagan Hess
    May 2004



    Top image: vintage spoof Monopoly game issued during the 1936 US presidential campaign.






    What do schizophrenia, bipolar disorder, major depression, addiction, obsessive compulsive disorder, and anxiety have in common? A loss of gray matter in the dorsal anterior cingulate cortex (dACC) and bilateral anterior insula, according to a recent review of the structural neuroimaging literature (Goodkind et al., 2015). These two brain regions are important for executive functions, the top-down cognitive processes that allow us to maintain goals and flexibly alter our behavior in response to changing circumstances. The authors modestly concluded they had identified a “Common Neurobiological Substrate for Mental Illness.”

    One problem with this view is that the specific pattern of deficits in executive functions, and their severity, differ across these diverse psychiatric disorders. For instance, students with anxiety perform worse than controls in verbal selection tasks, while those with depression actually perform better (Snyder et al., 2014). Another problem is that gray matter volume in the dorsolateral prefrontal cortex, a key region for working memory (a core impairment in schizophrenia and to a lesser extent, in major depression and non-psychotic bipolar disorder), was oddly unaffected in the meta-analysis.

    The NIMH RDoC movement (Research Domain Criteria) aims to explain the biological basis of psychiatric symptoms that cut across traditional DSM diagnostic categories. But I think some of the recent research that uses this framework may carry the approach too far (Goodkind et al., 2015):
    Our findings ... provide an organizing model that emphasizes the import of shared endophenotypes across psychopathology, which is not currently an explicit component of psychiatric nosology. This transdiagnostic perspective is consistent...with newer dimensional models such as the NIMH’s RDoC Project.

    However, not even the Director of NIMH believes this is true:
    "The idea that these disorders share some common brain architecture and that some functions could be abnormal across so many of them is intriguing," said Thomas Insel, MD...

    [BUT]

    "I wouldn't have expected these results. I've been working under the assumption that we can use neuroimaging to help classify the different forms of mental illness," Insel said. "This makes it harder."

    Anterior Cingulate and Anterior Insula and Everyone We Know

    The dACC and anterior insula are ubiquitously activated1 in human neuroimaging studies (leading Micah Allen to dub them the ‘everything’ network), and comprise either a salience network or a task-set network (or even two separate cingulo-opercular systems) in resting-state functional connectivity studies. But the changes reported in the newly published work were structural in nature. They were based on a meta-analysis of 193 voxel-based morphometry (VBM) studies that quantified gray matter volume across the entire brain in psychiatric patient groups and compared it to controls.

    Goodkind et al., (2015) included a handy flow chart for how they selected the papers for their review.



    I could be wrong, but it looks like 34 papers were excluded because they found no differences between patients and controls. This would of course bias the results towards greater differences between patients and controls. And we don't know which of the six psychiatric diagnoses were included in the excluded batch. Was there an over-representation of null results in OCD? Anxiety? Depression?


    What Does VBM Measure, Anyway?

    Typically, VBM measures gray matter volume, which in the cortex is determined by surface area (which can vary due to differences in folding patterns) and by thickness (Kanai & Rees, 2011). These can be differentially related to some ability or characteristic. For example, Song et al. (2015) found that having a larger surface area in early visual cortex (V1 and V2) was correlated with better performance in a perceptual discrimination task, while larger cortical thickness was actually correlated with worse performance. Other investigators warn that volume really isn't the best measure of structural differences between patients and controls, and that cortical thickness is better (Ehrlich et al., 2012):
    Cortical thickness is assumed to reflect the arrangement and density of neuronal and glial cells, synaptic spines, as well as passing axons. Postmortem studies in patients with schizophrenia showed reduced neuronal size and a decrease in interneuronal neuropil, dendritic trees, cortical afferents, and synaptic spines, while no reduction in the number of neurons or signs of gliosis could be demonstrated.
    This leads us to the huge gap between dysfunction in cortical and subcortical microcircuits and gross changes in gray matter volume.
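    The area/thickness distinction above matters because VBM's volume measure conflates the two. A toy illustration with made-up numbers (not measurements from any study):

```python
# Toy numbers (NOT measurements): cortical volume = surface area x thickness,
# so morphologically very different cortical patches can yield the same
# VBM-style volume estimate.
patch_a = {"area_mm2": 1200.0, "thickness_mm": 2.0}   # thin but highly folded
patch_b = {"area_mm2": 800.0,  "thickness_mm": 3.0}   # thick but less folded

vol_a = patch_a["area_mm2"] * patch_a["thickness_mm"]
vol_b = patch_b["area_mm2"] * patch_b["thickness_mm"]
print(vol_a, vol_b)  # both 2400.0 mm^3 -- identical volume, opposite morphology
```

    Since Song et al. (2015) found area and thickness correlating with performance in opposite directions, a volume measure that multiplies the two can mask, or even cancel out, the underlying structural differences.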


    Psychiatric Disorders Are Circuit Disorders

    This motto tells us that mental illnesses are disorders of neural circuits, in line with the funding priorities of NIMH and the BRAIN Initiative. But structural MRI studies tell us nothing about the types of neurons that are affected. Or how their size, shape, and synaptic connections might be altered. Basically, volume loss in dACC and anterior insula could be caused by any number of reasons, and by different mechanisms across the disorders under consideration. Goodkind et al., (2015) state:
    Our connection of executive functioning to integrity of a well-established brain network that is perturbed across a broad range of psychiatric diagnoses helps ground a transdiagnostic understanding of mental illness in a context suggestive of common neural mechanisms for disease etiology and/or expression.

    But actually, we might find a reduction in the density of von Economo neurons in the dACC of individuals with early-onset schizophrenia (Brüne et al., 2010), but not in persons with other disorders. Or a reduction in the density of GAD67 mRNA-expressing neurons in ACC cortical layer 5 in schizophrenia, but not in bipolar disorder. On the other hand, we could see something like an alteration in the synapses onto parvalbumin inhibitory interneurons (due to stress) that cuts across multiple diagnoses.2

    And it's not always the case that bigger is better: smaller cortical volumes can also be associated with better performance (Kanai & Rees, 2011).

    As Kanai and Rees (2011) noted in their review:
    ...a direct link between microstructures and macrostructures has not been established in the human brain. A histological study directly compared whether histopathological measurements of resected temporal lobe tissue correlated with grey matter density as used in typical VBM studies. However, none of the histological measures — including neuronal density — showed a clear relationship with the grey matter volume. 

    So where do we go from here? Bridging the technological gulf between exceptionally invasive methods (like optogenetics and chemogenetics in animals) and non-invasive ones (TMS, MRI in humans) is a minor funding priority of the BRAIN Initiative. Another more manageable strategy for the present would be a comprehensive review of imaging, genetic, and post-mortem neuroanatomical studies of brains from people who lived with schizophrenia, bipolar disorder, major depression, addiction, obsessive compulsive disorder, and anxiety. This has been done most extensively (perhaps) for schizophrenia (e.g., Meyer-Lindenberg, 2010; Arnsten, 2011). Certain types of electrophysiological studies in primate prefrontal cortex may provide another bridge, although this has been disputed.

    Goodkind and colleagues have indeed uncovered some “biological commonalities that may have been underappreciated in prior work,” but it's also clear there are “some fairly obvious distinctions between schizophrenia and bipolar disorder” at a clinical level (to give one example). In the rush to cut up psychiatric nosology along the RDoC dotted lines, let's not forget the limitations of current methods that are designed to do the carving.

    Further Reading

    Other comprehensive reviews:

    Large-scale brain networks and psychopathology: a unifying triple network model

    Does the salience network play a cardinal role in psychosis? An emerging hypothesis of insular dysfunction

    Salience processing and insular cortical function and dysfunction


    Critiques of phrenology-like VBM studies:

    Now Is That Gratitude?

    Should Policy Makers and Financial Institutions Have Access to Billions of Brain Scans?

    Anthropomorphic Neuroscience Driven by Researchers with Large TPJs

    Liberals Are Conflicted and Conservatives Are Afraid


    Great discussion of a failure to replicate VBM studies (at Neuroskeptic):


    Failed Replications: A Reality Check for Neuroscience?


    Footnotes

    1 To quote Russ Poldrack:
    In Tal Yarkoni's recent paper in Nature Methods, we found that the anterior insula was one of the most highly activated part of the brain, showing activation in nearly 1/3 of all imaging studies!
    2 Links to recent J Neurosci articles via @prerana123 and @MyCousinAmygdala.


    References

    Brüne M, Schöbel A, Karau R, Benali A, Faustmann PM, Juckel G, Petrasch-Parwez E. (2010). Von Economo neuron density in the anterior cingulate cortex is reduced in early-onset schizophrenia. Acta Neuropathol. 119(6):771-8.

    Ehrlich S, Brauns S, Yendiki A, Ho BC, Calhoun V, Schulz SC, Gollub RL, Sponheim SR. (2012). Associations of cortical thickness and cognition in patients with schizophrenia and healthy controls. Schizophr Bull. 38(5):1050-62.

    Goodkind, M., Eickhoff, S., Oathes, D., Jiang, Y., Chang, A., Jones-Hagata, L., Ortega, B., Zaiko, Y., Roach, E., Korgaonkar, M., Grieve, S., Galatzer-Levy, I., Fox, P., & Etkin, A. (2015). Identification of a Common Neurobiological Substrate for Mental Illness. JAMA Psychiatry DOI: 10.1001/jamapsychiatry.2014.2206

    Kanai, R., & Rees, G. (2011). The structural basis of inter-individual differences in human behaviour and cognition. Nature Reviews Neuroscience, 12 (4), 231-242. DOI: 10.1038/nrn3000

    Song C, Schwarzkopf DS, Kanai R, Rees G. (2015). Neural population tuning links visual cortical anatomy to human visual perception. Neuron 85(3):641-56.

    Snyder HR, Kaiser RH, Whisman MA, Turner AE, Guild RM, Munakata Y. (2014). Opposite effects of anxiety and depressive symptoms on executive function: the case of selecting among competing options. Cogn Emot. 28(5):893-902.


    Fig. 3 (Meyer-Lindenberg, 2010). Schematic summary of putative alterations in dorsolateral prefrontal cortex circuitry in schizophrenia.
