Deconstructing the most sensationalistic recent findings in Human Brain Imaging, Cognitive Neuroscience, and Psychopharmacology


    09/30/15: Good Brain / Bad Brain

    'Wiring diagrams' link lifestyle to brain function

    Human Connectome Project finds surprising correlations between brain architecture and behavioural or demographic influences.

    The brain’s wiring patterns can shed light on a person’s positive and negative traits, researchers report in Nature Neuroscience. The finding, published on 28 September, is the first from the Human Connectome Project (HCP), an international effort to map active connections between neurons in different parts of the brain.

    What are some of these surprising conclusions about the living human brain?

    Good Brain / Bad Brain

    Smith et al. (2015):
    “We identified one strong mode of population co-variation: subjects were predominantly spread along a single 'positive-negative' axis linking lifestyle, demographic and psychometric measures to each other and to a specific pattern of brain connectivity.”

    Well. This sounds an awful lot like the Hegemony of the Western Binary as applied to resting state functional connectivity to me...

    And hey, looks like IQ, years of education, socioeconomic status, the ability to delay reward, and life satisfaction give you a good brain.

    “You can distinguish people with successful traits and successful lives versus those who are not so successful,” [Marcus Raichle] says.

    The authors used canonical correlation analysis (CCA) to estimate how 280 demographic and behavioral subject measures and patterns of brain connectivity co-varied in a similar way across subjects (Smith et al., 2015):
    “This analysis revealed a single highly significant CCA mode that relates functional connectomes to subject measures (r = 0.87, P < 10−5 corrected for multiple comparisons across all modes estimated).”

    And who is not so “successful” (at least according to their chaotic and disconnected brains)?

    Regular pot smokers:  “...one of the negative traits that pulled a brain farthest down the negative axis was marijuana use in recent weeks.”  Cue up additional funding for NIDA:  “...the finding emphasizes the importance of projects such as one launched by the US National Institute on Drug Abuse last week, which will follow 10,000 adolescents for 10 years to determine how marijuana and other drugs affect their brains.”

    But what about wine coolers??

    Why am I asking this?? Because in the subject measures, it was a little obvious that malt liquor was considered separately from beer/wine coolers. {Who drinks wine coolers? Who drinks malt liquor?}

    In terms of alcohol content, the distinction is silly these days, since you can buy craft beers like Boatswain Double IPA (8.4% alcohol) for $2.29 at Trader Joe's. Unless those questions were retained as a code for race and socioeconomic status...


    I'm getting way off track here. My point is that presenting correlational HCP data in a binary manner without any sort of social context isn't a very flattering thing to do.

    “I am my connectome,” says Sebastian Seung. What about the 460 participants in the study? What about you?


    Reardon S (2015). 'Wiring diagrams' link lifestyle to brain function. Nature News. doi:10.1038/nature.2015.18442

    Smith SM, Nichols TE, Vidaurre D, Winkler AM, Behrens TE, Glasser MF, Ugurbil K, Barch DM, Van Essen DC, Miller KL. (2015). A positive-negative mode of population covariation links brain connectivity, demographics and behavior. Nat Neurosci. 2015 Sep 28. doi: 10.1038/nn.4125.

    “As a black woman interested in feminist movement, I am often asked whether being black is more important than being a woman; whether feminist struggle to end sexist oppression is more important than the struggle to end racism or vice versa. All such questions are rooted in competitive either/or thinking, the belief that the self is formed in opposition to an other... Most people are socialized to think in terms of opposition rather than compatibility. Rather than seeing anti-racist work as totally compatible with working to end sexist oppression, they often see them as two movements competing for first place.”

    bell hooks, Feminist Theory: From Margin to Center


    My previous Good Brain / Bad Brain post may have been a little out there, so here are four brief comments.

    (1) HCP database.  The entire Human Connectome Project database (ConnectomeDB) is an amazing resource that's freely available (more details in Van Essen et al., 2013, 2015).

    (2) Good reporting / bad reporting.  Smith et al. (2015) are to be commended for such an impressive body of work.1  But I still think it was remiss to report a population along a judgmental good/bad binary axis in a cursory manner. The correlation/causation conundrum needs more of a caveat than:
    These analyses were driven by and report only correlations; inferring and interpreting the (presumably complex and diverse) causalities remains a challenging issue for the future.
    ...or else you're confronted with press coverage like this:
    Are some brains wired for a lifestyle that includes education and high levels of satisfaction, while others are wired for anger, rule-breaking, and substance use?

    “Wired” implies “born that way,” with no effects of living in poverty in a shitty neighborhood.

    Oh, and my flippant observation about the wine cooler/malt liquor axis wasn't actually a major player in the canonical correlation analysis. But race and ethnicity information was indeed collected (but not used: “partly because the race measure is not quantitative, but consists of several distinct categories”).

    (3) Ethics!  This brings up the larger issue of ethics. A whole host of personal participant information (e.g., genomics from everyone, including hundreds of identical twins) is included in the package. From Van Essen et al. (2013):
    The released HCP data are not considered de-identified, insofar as certain combinations of HCP Restricted Data (available through a separate process) might allow identification of individuals as discussed below. It is accordingly important that all investigators who agree to Open Access Data Use Terms consult with their local IRB or Ethics Committee to determine whether the research needs to be approved or declared exempt. If needed and upon request, the HCP will provide a certificate stating that an investigator has accepted the HCP Open Access Data Use Terms. Because HCP participants come from families with twins and non-twin siblings, there is a risk that combinations of information about an individual (e.g., age by year; body weight and height; handedness) might lead to inadvertent identification, particularly by other family members, if these combinations were publicly released.


    Important Notice to Recipients and System Administrators of HCP Connectome In A Box Hard Drives

    Thank you for acquiring a Connectome-in-a-Box that contains HCP image data.  This provides an easy and efficient way to transfer large HCP datasets to other labs and institutions wanting to process lots of data, especially when multiple investigators are involved. With it comes a need to insure compliance with HCP’s Data Use Terms as well as any institutional requirements.

    And any participant in the study can look at the results and infer, because of their regular cannabis use and their father's history of heavy drinking, that they must have a “bad brain.” Do the investigators have an obligation to counsel them on what this might mean (and what they should do)? Yeah, stop smoking cigarettes and pot, but there's not much they can do about their father's substance abuse or their fluid intelligence.

    (4) Biology.  Finally, I'm not sure what the finding means biologically. Across a population, there's a general mode of functional connectivity while participants lie in a scanner with nothing to do. That falls along an axis of “positive” and “negative” traits. And this pattern of correlated hemodynamic activity across 30 node-pair edges means....... what, exactly?

    Every person's connectome is unique (“I am my connectome” for the thousandth time).2  But this mantra more commonly refers to the fine-grained structural connectome. You know, the kind that will live forever and be uploaded to a computer (see Amy Harmon's article on The Neuroscience of Immortality, which caused quite a splash).

    What is the relationship between resting state functional connectivity and the implementation of thought and behavior via neural codes? This must be exceptionally unique for each person. We know this because even in lowly organisms like flies, neurons in an olfactory region called the mushroom bodies show a striking degree of individuality in neural coding across animals.3 
    At the single-cell level, we show that uniquely identifiable MBONs [mushroom body output neurons, n=34] display profoundly different tuning across different animals, but that tuning of the same neuron across the two hemispheres of an individual fly was nearly identical.

    In other words, a fly's unique olfactory experience shapes the response properties of a tiny set of neurons, even for animals reared under the same conditions. “In several cases, we even recorded on the same day from progeny of the same cross, raised in the same food vial” (Hige et al., 2015).

    I never know what to do with information like this, especially in the context of human brains, good and bad.....  Maybe: Are some fly MBONs wired for a wild lifestyle of apple cider vinegar?


    1 Or maybe the result was a massive case of confirmation bias, as suggested in a private comment to me.

    2 See this book review for an opposing view.

    3 fly paper via @fly_papers (also @debivort and @neuroecology).

    4 Also see Neurocriminology in prohibition-era New York.


    Is ketamine a destructive club drug that damages the brain and bladder? With psychosis-like effects widely used as a model of schizophrenia? Or is ketamine an exciting new antidepressant, the “most important discovery in half a century”?

    For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another (though there are some exceptions).1

    Ketamine is a dissociative anesthetic and PCP-derivative that can produce hallucinations and feelings of detachment in non-clinical populations. Pharmacologically it's an NMDA receptor antagonist that also acts on other systems (e.g., opioid). Today I'll focus on a recent neuroimaging study that looked at the downsides of ketamine: anhedonia, cognitive disorganization, and perceptual distortions (Pollak et al., 2015).

    Imaging Phenomenologically Distinct Effects of Ketamine

    In this study, 23 healthy male participants underwent arterial spin labeling (ASL) fMRI scanning while they were infused with either a high dose (0.26 mg/kg bolus + slow infusion) or a low dose (0.13 mg/kg bolus + slow infusion) of ketamine 2 (Pollak et al., 2015). For comparison, the typical dose used in depression studies is 0.5 mg/kg (Wan et al., 2015). Keep in mind that the number of participants in each condition was low, n=12 (after one was dropped) and n=10 respectively, so the results are quite preliminary.

    ASL is a post-PET and BOLD-less technique for measuring cerebral blood flow (CBF) without the use of a radioactive tracer (Petcharunpaisan et al., 2010). Instead, water in arterial blood serves as a contrast agent, after being magnetically labeled by applying a 180 degree radiofrequency inversion pulse. Basically, it's a good method for monitoring CBF over a number of minutes.

    ASL sequences were obtained before and 10 min after the start of ketamine infusion. Before and after the scan, participants rated their subjective symptoms of delusional thinking, perceptual distortion, cognitive disorganization, anhedonia, mania, and paranoia on the Psychotomimetic States Inventory (PSI). The study was completely open label, so it's not like they didn't know they were getting a mind-altering drug.

    Behavioral ratings were quite variable (note the large error bars below), but generally the effects were larger in the high-dose group, as one might expect.

    The changes in Perceptual Distortion and Cognitive Disorganization scores were significant for the low-dose group, with the addition of Delusional Thinking, Anhedonia, and Mania in the high-dose group. But again, it's important to remember there was no placebo condition, the significance levels were not all that impressive, and the n's were low.

    The CBF results (below) show increases in anterior and subgenual cingulate cortex and decreases in superior and medial temporal cortex, similar to previous studies using PET.

    Fig 2a (Pollak et al., 2015). Changes in CBF with ketamine in the low- and high-dose groups overlaid on a high-resolution T1-weighted image.

    Did I say the n's were low? The Fig. 2b maps (not shown here) illustrated significant correlations with the Anhedonia and Cognitive Disorganization subscales, but these were based on 10 and 12 data points, when outliers can drive phenomenally large effects. One might like to say...
    For [the high-dose] group, ketamine-induced anhedonia inversely related to orbitofrontal cortex CBF changes and cognitive disorganisation was positively correlated with CBF changes in posterior thalamus and the left inferior and middle temporal gyrus. Perceptual distortion was correlated with different regional CBF changes in the low- and high-dose groups.
      ...but this clearly requires replication studies with placebo comparisons and larger subject groups.
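    On the outlier point, a quick toy simulation (synthetic numbers, nothing to do with the actual dataset) shows how a single extreme subject can manufacture a sizeable correlation out of pure noise when n = 12:

```python
# Toy demo: one outlier among 11 noise points can dominate Pearson's r.
import math
import random

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

random.seed(1)
x = [random.gauss(0, 1) for _ in range(11)]  # 11 subjects of pure noise
y = [random.gauss(0, 1) for _ in range(11)]
r_noise = pearson_r(x, y)    # whatever the noise happens to give

x.append(6.0)                # one subject far out on both measures
y.append(6.0)
r_outlier = pearson_r(x, y)  # n = 12, now dominated by that one subject
```

    The outlier-inflated r will generally clear conventional significance thresholds at n = 12, which is exactly why brain-behavior correlations built on 10 or 12 data points deserve suspicion.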

    Nonetheless, the fact remains that ketamine administration in healthy participants caused negative effects like anhedonia and cognitive disorganization at doses lower than those used in studies of treatment-resistant depression (many of which were also open label). Now you can say, “well, controls are not the same as patients with refractory depression” and you'd be right (see Footnote 1). “Glutamatergic signaling profiles” and symptom reports could show a variable relationship, with severe depression at the low end and schizophrenia at the high end (with controls somewhere in the middle).

    A recent review of seven placebo-controlled, double-blind, randomized clinical trials of ketamine and other NMDA antagonists concluded (Newport et al., 2015):
    The antidepressant efficacy of ketamine ... holds promise for future glutamate-modulating strategies; however, the ineffectiveness of other NMDA antagonists suggests that any forthcoming advances will depend on improving our understanding of ketamine’s mechanism of action. The fleeting nature of ketamine’s therapeutic benefit, coupled with its potential for abuse and neurotoxicity, suggest that its use in the clinical setting warrants caution.

    The mysterious and paradoxical ways of ketamine continue...

    So take it in don't hold your breath
    The bottom's all I've found
    We can't get higher than we get
    On the Long Way Down

    Further Reading

    Ketamine for Depression: Yay or Neigh?

    Warning about Ketamine in the American Journal of Psychiatry

    Chronic Ketamine for Depression: An Unethical Case Study?

    still more on ketamine for depression

    Update on Ketamine in Palliative Care Settings

    Ketamine - Magic Antidepressant, or Expensive Illusion? - by Neuroskeptic

    Fighting Depression with Special K - by Scicurious


    1 One exception is the present study, which discussed the divergent anhedonia results (compared to  previous findings of reduced anhedonia in depression). Another example is the work of Dr. John H. Krystal, which includes papers in both the schizophrenia and the treatment-resistant depression realms. However, most of the papers discuss only one and not the other. One notable exception (schizophrenia-related) said this:
    ...it is important to note that studies examining its effects on glutamateric pathways in the context of mood symptoms (178) may be highly informative for developing our understanding of its relevance to schizophrenia (111). Briefly, emerging models in this area postulate that ketamine may act as anti-depressant by promoting synaptic plasticity via intra-cellular signaling pathways, ultimately promoting brain-derived neurotrophic factor expression via synaptic potentiation (179) and in turns synaptic growth (178). In that sense, acute NMDAR antagonism may promote synaptic plasticity along specific pathways impacted in mood disorders, such as ventral medial PFC (180, 181, p. 916). Conversely, when administered to patients diagnosed with schizophrenia, NMDAR antagonists seem to worsen their symptom profile (182), perhaps by “pushing” an already aberrantly elevated glutamatergic signaling profile upward. Collectively such dissociable effects of ketamine may imply that along distinct circuits there may be an inverted-U relationship between ketamine’s effects and symptoms: depressed patients may be positioned on the low end of the inverted-U (178) and schizophrenia patents may be positioned on the higher end (183). Both task-based and resting-state functional connectivity techniques are well positioned to interrogate such system-level effects of NMDAR antagonists in humans.

    2 Low-dose ketamine: target plasma level of 50–75 ng/mL was specified (in practice this approximated a rapid bolus of an average of 0.12 mg/kg over 20 s followed by a slow infusion of 0.31 mg/kg/h).

    High-dose ketamine: target plasma level of 150 ng/mL was specified (in practice this approximated a rapid bolus of 0.26 mg/kg over 20 s followed by a slow infusion of 0.42 mg/kg/h).
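    As a back-of-the-envelope comparison with the 0.5 mg/kg antidepressant dose, the totals work out roughly like this. This is my own arithmetic, and the 75 kg body weight and 30-minute infusion window are assumptions for illustration, not figures from the paper:

```python
# Rough total ketamine delivered under each regimen (bolus + infusion).
# The 75 kg weight and 30 min infusion duration are illustrative assumptions.
def total_dose_mg(weight_kg, bolus_mg_per_kg, rate_mg_per_kg_per_h, minutes):
    return weight_kg * (bolus_mg_per_kg + rate_mg_per_kg_per_h * minutes / 60.0)

weight = 75.0
low  = total_dose_mg(weight, 0.12, 0.31, 30)  # = 20.625 mg
high = total_dose_mg(weight, 0.26, 0.42, 30)  # = 35.25 mg
depression_study = weight * 0.5               # single 0.5 mg/kg dose = 37.5 mg
```

    Under these assumptions even the high imaging dose totals slightly less than a standard antidepressant infusion, though what matters clinically is the plasma level over time, not just the total.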


    Petcharunpaisan S, Ramalho J, Castillo M. (2010). Arterial spin labeling in neuroimaging. World J Radiol. 2(10):384-98.

    Pollak T, De Simoni S, Barimani B, Zelaya F, Stone J, Mehta M. (2015). Phenomenologically distinct psychotomimetic effects of ketamine are associated with cerebral blood flow changes in functionally relevant cerebral foci: a continuous arterial spin labelling study. Psychopharmacology. DOI: 10.1007/s00213-015-4078-8

    Wan LB, Levitch CF, Perez AM, Brallier JW, Iosifescu DV, Chang LC, Foulkes A, Mathew SJ, Charney DS, Murrough JW. (2015). Ketamine safety and tolerability in clinical trials for treatment-resistant depression. J Clin Psychiatry 76(3):247-52.


    Scene from Sssssss (1973).

    When Dr. Stoner needs a new research assistant for his herpetological research, he recruits David Blake from the local college.  Oh, and he turns him into a snake for sh*ts and giggles.

    Movie Review by Jason Grey

    Horror movies where people turn into snakes are relatively common (30 by one count), but clinical reports of delusional transmogrification into snakes are quite rare. This is in contrast to clinical lycanthropy, the delusion of turning into a wolf.

    What follows are two frightening tales of unresolved mental illness, minimal followup, and oversharing (plus mistaking an April Fool's joke for a real finding).

    THERE ARE NO ACTUAL PICTURES OF SNAKES in this post [an important note for snake phobics].

    The first case of ophidianthropy was described by Kattimani et al. (2010):
    A 24 year young girl presented to us with complaints that she had died 15 days before and that in her stead she had been turned into a live snake. At times she would try to bite others claiming that she was a snake. ... We showed her photos of snakes and when she was made to face the large mirror she failed to identify herself as her real human self and described herself as snake. She described having snake skin covering her and that her entire body was that of snake except for her spirit inside.  ...  She was distressed that others did not understand or share her conviction. She felt hopeless that nothing could make her turn into real self. She made suicidal gestures and attempted to hang herself twice on the ward...

    The initial diagnosis was severe depressive disorder with psychotic features. A series of drug trials was unsuccessful (Prozac and four different antipsychotics), and a course of 10 ECT sessions had no lasting effect on her delusions. The authors couldn't decide whether the patient should be formally diagnosed with schizophrenia or a more general psychotic illness. Her most recent treatment regime (escitalopram plus quetiapine) was also a failure because the snake delusion persisted.

    “Our next plan is to employ supportive psychotherapy in combination with pharmacotherapy,” said the authors (but we never find out what happened to her). Not a positive outcome...

    Scene from Sssssss (1973).

    Ophidianthropy with paranoid schizophrenia, cannabis use, bestiality, and history of epilepsy

    The second case is even more bizarre, with a laundry list of delusions and syndromes (Mondal, 2014):
    A 23 year old, married, Hindu male, with past history of  ... seizures..., personal history of non pathological consumption of bhang and alcohol for the last nine years and one incident of illicit sexual intercourse with a buffalo at the age of 18 years presented ... with the chief complains of muttering, fearfulness, wandering tendency ... and hearing of voices inaudible to others for the last one month. ... he sat cross legged with hands folded in a typical posture resembling the hood of a snake. ... The patient said that he inhaled the breath of a snake passing by him following which he changed into a snake. Though he had a human figure, he could feel himself poisonous inside and to have grown a fang on the lower set of his teeth. He also had the urge to bite others but somehow controlled the desire. He said that he was not comfortable with humans then but would be happy on seeing a snake, identifying it belonging to his species. ... He says that he was converted back to a human being by the help of a parrot, which took away his snake fangs by inhaling his breath and by a cat who ate up his snake flesh once when he was lying on the ground. ...  the patient also had thought alienation phenomena in the form of thought blocking, thought withdrawal and thought broadcasting, delusion of persecution, delusion of reference, delusion of infidelity [Othello syndrome], the Fregoli delusion, bizarre delusion, nihilistic delusion [Cotard's syndrome], somatic passivity, somatic hallucinations, made act [?], third person auditory hallucinations, derealization and depersonalisation. He was diagnosed as a case of paranoid schizophrenia as per ICD 10.


    He was given the antipsychotic haloperidol while being treated as an inpatient for 10 days. Some of his symptoms improved, but others did not. “Long term follow up is not available.”

    The discussion of this case is a bit... terrifying:
    Lycanthropy encompasses two aspects, the first one consisting of primary lupine delusions and associated behavioural deviations termed as lycomania, and the second aspect being a psychosomatic problem called as lycosomatization (Kydd et al., 1991).
    Kydd, O.U., Major, A., Minor, C (1991). A really neat, squeaky-clean isolation and characterization of two lycanthropogens from nearly subhuman populations of Homo sapiens. J. Ultratough Molec. Biochem. 101: 3521-3532.  [this is obviously a fake citation]
    Endogenous lycanthropogens responsible for lycomania are lupinone and buldogone which differ by only one carbon atom in their ring structure; their plasma level having a lunar periodicity with peak level during the week of full moon. Lycosomatization likely depends on the simultaneous secretion of suprathreshold levels of both lupinone and the peptide lycanthrokinin, a second mediator, reported to be secreted by the pineal gland, that “initiates and maintains the lycanthropic process” (Davis et al., 1992). Thus, secretion of lupinone without lycanthrokinin results in only lycomania. In our patient these molecular changes were not investigated.

    oh my god, the paper by Davis et al. on the Psychopharmacology of Lycanthropy (and "endogenous lycanthropogens") was published in the April 1, 1992 issue of the Canadian Medical Association Journal. There is no such thing as lupinone and buldogone.

    Fig. 1 (Davis et al., 1992): Structural formulas of endogenous lycanthropogens.

    I know the authors are non-native English speakers, but where was the peer review for the Asian Journal of Psychiatry??  We might as well return to the review for Sssssss, which was more thorough.

       David Blake -
    Our hapless victim.  David is a college student who gets recruited by Dr. Stoner to help out at his farm, and be his latest test subject.  He's a nice guy, and there really is not much to say about him, as he's pretty bland until he starts growing scales.

       Dr. Carl Stoner - The villain of our piece.  He's a snake researcher looking for new grant money, and a new test subject.  He actually means well enough, and is looking to advance humanity, but in classic horror movie fashion, he plays God and things go too far.

       Kristine Stoner - The doctor's daughter, who is also interested in snakes.  Especially David's.  She's smart, and kind, and again a bit of a blank slate beyond those traits.  Loyal to a fault with her father.

       Dr. Daniels - A minor character, but Stoner's chief rival, and the man who holds the purse strings.  The two doctors have an antagonistic relationship, but there seems to be an undercurrent of past friendship as well, overshadowed by Daniels' position.  Or I'm reading too much into things.

    Sssssss has a score of 13% on Rotten Tomatoes. We don't have a similar rating system for journal articles, but there's always PubMed Commons and PubPeer...

    Further Reading

    People Who Change into Snakes in Movies - from California Herps

    Snake me up before you go-go: An unusual case of ophidianthropy - by Dr Mark Griffiths

    Psychopharmacology of Lycanthropy

    Werewolves of London, Ontario


    Davis WM, Wellwuff HG, Garew L, Kydd OU. (1992). Psychopharmacology of lycanthropy. CMAJ. 1992 Apr 1;146(7):1191-7.

    Kattimani S, Menon V, Srivastava MK, Mukharjee A. (2010). Ophidianthropy: the case of a woman who ‘Turned into a Snake’. Psychiatry On-Line.

    Mondal G, Nizamie S, Mukherjee N, Tikka S, Jaiswal B. (2014). The ‘snake’ man: Ophidianthropy in a case of schizophrenia, along with literature review. Asian Journal of Psychiatry 12:148-149. DOI: 10.1016/j.ajp.2014.10.002


    Ryan Reynolds in Buried (2010)

    The pathological fear of being buried alive is called taphophobia.1  This seems like a perfectly rational fear to me, especially if one is claustrophobic and enjoys horror movies and Edgar Allan Poe short stories. Within a modern medical context, however, it is simply not possible that a person will be buried while still alive.

    But this wasn't always the case. In the 19th century, true stories of premature burial were common, appearing in newspapers and medical journals of the day. Tebb and Vollum (1896) published a 400 page tome (Premature burial and how it may be prevented: with special reference to trance, catalepsy, and other forms of suspended animation) that was full of such examples:

    The British Medical Journal, December 8, 1877, p. 819, inserts the following:


    "A correspondent at Naples states that the Appeal Court has had before it a case not likely to inspire confidence in the minds of those who look forward with horror to the possibility of being buried alive. It appeared from the evidence that some time ago a woman was interred with all the usual formalities, it being believed that she was dead, while she was only in a trance. Some days afterwards, the grave in which she had been placed being opened for the reception of another body, it was found that the clothes which covered the unfortunate woman were torn to pieces, and that she had even broken her limbs in attempting to extricate herself from the living tomb. The Court, after hearing the case, sentenced the doctor who had signed the certificate of decease, and the mayor who had authorised the interment, each to three months' imprisonment for involuntary manslaughter."

    To avoid this fate worse than death, contraptions known as “safety coffins” were popular, with air tubes, bells, flags, and/or burning lamps (Dossey, 2007). Some taphophobes went to great lengths to outline specific instructions for handling their corpse, to prevent such an ante-mortem horror from happening to them. Some might even say these directives were a form of “overkill”...

    From the Lancet, August 20, 1864, p. 219.


    "Amongst the papers left by the great Meyerbeer, were some which showed that he had a profound dread of premature interment. He directed, it is stated, that his body should be left for ten days undisturbed, with the face uncovered, and watched night and day. Bells were to be fastened to his feet. And at the end of the second day veins were to be opened in the arm and leg. This is the gossip of the capital in which he died. The first impression is that such a fear is morbid. No doubt fewer precautions would suffice, but now and again cases occur which seem to warrant such a feeling, and to show that want of caution may lead to premature interment in cases unknown. An instance is mentioned by the Ost. Deutsche Post of Vienna. A few days since, runs the story, in the establishment of the Brothers of Charity in that capital, the bell of the dead-room was heard to ring violently, and on one of the attendants proceeding to the place to ascertain the cause, he was surprised at seeing one of the supposed dead men pulling the bell-rope. He was removed immediately to another room, and hopes are entertained of his recovery."

    Here's a particularly gruesome one:

    From the Daily Telegraph, January 18, 1889.

    "A gendarme was buried alive the other day in a village near Grenoble. The man had become intoxicated on potato brandy, and fell into a profound sleep. After twenty hours passed in slumber, his friends considered him to be dead, particularly as his body assumed the usual rigidity of a corpse. When the sexton, however, was lowering the remains of the ill-fated gendarme into the grave, he heard moans and knocks proceeding from the interior of the 'four-boards.' He immediately bored holes in the sides of the coffin, to let in air, and then knocked off the lid. The gendarme had, however, ceased to live, having horribly mutilated his head in his frantic but futile efforts to burst his coffin open."

    Doesn't that sound like fun? Wouldn't you like to experience this yourself? Now you can!

    Taphobos, an immersive coffin experience (by James Brown)

    How does it work?

    The game uses a real life coffin, an Oculus Rift, a PC and some microphones. One player gets in the coffin with the Rift on, together with a headset + microphone. The other player plays on a PC again with mic + headset, this player will play a first person game where they must work with the buried player to uncover where the coffin is and rescue the trapped player before their oxygen runs out. This is all powered by the Unity engine.

    But why?? (Brown, 2015):

    This work is intended to explore “uncomfortable experiences and interactions” as part of academic research in the Human Computer Interaction field (HCI) from an MSc by Research in Computer Science student, James Brown. The player inside the coffin will experience various emotions as they are put in and then try to get out of the confined space. Claustrophobia as well as the fear of being buried alive “taphophobia” may well affect players of the game and they must cope with these emotions as they play.

    Further Reading


    Buried Alive! (October 31, 2011)


    1 Also spelled taphephobia. From the Greek taphos, or grave.


    Brown J. (2015). Taphobos: An Immersive Coffin Experience. British HCI 2015, July 13-17, 2015, Lincoln, United Kingdom.

    Dossey L. (2007). The undead: botched burials, safety coffins, and the fear of the grave. Explore (NY). 3:347-54.

    Tebb W, Vollum EP. (1896). Premature burial and how it may be prevented: with special reference to trance, catalepsy, and other forms of suspended animation. Swan Sonnenschein & Co., Lim.: London.  {archive.org}


    Credit: Image courtesy of Aalto University

    Is it possible to be “addicted” to food, much like an addiction to substances (e.g., alcohol, cocaine, opiates) or behaviors (gambling, shopping, Facebook)? An extensive and growing literature uses this terminology in the context of the “obesity epidemic”, and looks for the root genetic and neurobiological causes (Carlier et al., 2015; Volkow & Baler, 2015).

    Fig. 1 (Meule, 2015). Number of scientific publications on food addiction (1990-2014). Web of Science search term “food addiction”.

    Figure 1 might lead you to believe that the term “food addiction” was invented in the late 2000s by NIDA. But this term is not new at all, as Adrian Meule (2015) explained in his historical overview, Back by Popular Demand: A Narrative Review on the History of Food Addiction Research. Dr. Theron G. Randolph wrote about food addiction in 1956 (he also wrote about food allergies).

    Fig. 2 (Meule, 2015). History of food addiction research.

    Thus, the concept of food addiction predates the documented rise in obesity in the US, which really took off in the late 80s to late 90s (as shown below).1

    Prevalence of Obesity in the United States, 1960-2012

    Survey years   1960-62   1971-74   1976-80   1988-94   1999-2000   2007-08   2011-12
    Prevalence     12.8%     14.1%     14.5%     22.5%     30.5%       33.8%     34.9%

    Sources: Flegal et al. 1998, 2002, 2010; Ogden et al. 2014

    One problem with the “food addiction” construct is that you can live without alcohol and gambling, but you'll die if you don't eat. Complete abstinence is not an option.2

    Another problem is that most obese people simply don't show signs of addiction (Hebebrand, 2015):
    ...irrespective of whether scientific evidence will justify use of the term food and/or eating addiction, most obese individuals have neither a food nor an eating addiction.3 Obesity frequently develops slowly over many years; only a slight energy surplus is required to in the longer term develop overweight. Genetic, neuroendocrine, physiological and environmental research has taught us that obesity is a complex disorder with many risk factors, each of which have small individual effects and interact in a complex manner. The notion of addiction as a major cause of obesity potentially entails endless and fruitless debates, when it is clearly not relevant to the great majority of cases of overweight and obesity.

    Still not convinced? Surely, differences in the brains of obese individuals point to an addiction. The dopamine system is altered, right? So this must mean they're addicted to food? Well, think again, because the evidence for this is inconsistent (Volkow et al., 2013; Ziauddeen & Fletcher, 2013).

    An important new paper by a Finnish research group has shown that D2 dopamine receptor binding in obese women is no different from that in lean participants (Karlsson et al., 2015). In contrast, μ-opioid receptor (MOR) binding is reduced, consistent with lowered hedonic processing. After the women had bariatric surgery (resulting in a mean weight loss of 26.1 kg, or 57.5 lbs), MOR binding returned to control values, while D2 receptor binding remained unchanged.

    In the study, 16 obese women (mean BMI=40.4, age 42.8) had PET scans before and six months after undergoing the standard Gastric Bypass procedure (Roux-en-Y Gastric Bypass) or the Sleeve Gastrectomy. A comparison group of non-obese women (BMI=22.7, age 44.9) was also scanned. The radiotracer [11C]carfentanil measured MOR availability and [11C]raclopride measured D2R availability in two separate sessions. The opioid and dopamine systems are famous for their roles in neural circuits for “liking” (pleasurable consumption) and “wanting” (incentive/motivation), respectively (Castro & Berridge, 2014).

    The pre-operative PET scans in the obese women showed that MOR binding was significantly lower in a number of reward-related regions, including ventral striatum, dorsal caudate, putamen, insula, amygdala, thalamus, orbitofrontal cortex and posterior cingulate cortex. Six months after surgery, there was an overall 23% increase in MOR availability, which was no longer different from controls.

    Fig. 1 (modified from Karlsson et al., 2015). Top: μ-opioid receptors are reduced in obese participants pre-operatively (middle), but after bariatric surgery (right) they recover to control levels (left). Bottom: D2 receptors are unaffected in the obese participants.

    Karlsson et al. (2015) suggest that:
    The MOR system promotes hedonic [pleasurable] aspects of feeding, and this can make obese individuals susceptible to overeating in order to gain the desired hedonic response from food consumption, which may further promote pathological eating. We propose that at the initial stages of weight gain, excessive eating may cause perpetual overstimulation of the MOR system, leading to subsequent MOR downregulation.  ...  However, bariatric surgery-induced weight loss and decreased food intake may reverse this process.

    The unchanging striatal dopamine D2 receptor densities in the obese participants are in stark contrast to what is seen in individuals who are addicted to stimulant drugs, such as cocaine and methamphetamine (Volkow et al., 2001). Drugs of abuse are consistently associated with decreases in D2 receptors.

    Fig. 1 (modified from Volkow et al., 2001). Ratio of the Distribution Volume of [11C]Raclopride in the Striatum (Normalized to the Distribution Volume in the Cerebellum) in a Non-Drug-Abusing Comparison Subject and a Methamphetamine Abuser.

    So the next time you see a stupid ass headline like, “Cheese really is crack. Study reveals cheese is as addictive as drugs”, you'll know the writer is on crack.

    Further Reading - The Scicurious Collection on Obesity

    Overeating and Obesity: Should we really call it food addiction?

    No, cheese is not just like crack

    Dopamine and Obesity: The D2 Receptor

    Dopamine and Obesity: The Food Addiction?

    Cheesecake-eating rats and food addiction, a commentary


    1 Not surprisingly, papers on the so-called obesity epidemic lagged behind the late-80s-to-mid-90s rise in prevalence.

    Number of papers on "obesity epidemic" in PubMed (1996-2015)

    2 Notice in Fig. 2 that anorexia is considered the opposite: an addiction to starving.

    3 Binge eating disorder (BED) might be another story, and I'll refer you to an informative post by Scicurious for discussion of that issue. You do not have to be obese (or even overweight) to have BED.


    Carlier N, Marshe VS, Cmorejova J, Davis C, Müller DJ. (2015). Genetic Similarities between Compulsive Overeating and Addiction Phenotypes: A Case for "Food Addiction"? Curr Psychiatry Rep. 17(12):96.

    Castro, D., & Berridge, K. (2014). Advances in the neurobiological bases for food ‘liking’ versus ‘wanting’. Physiology & Behavior, 136, 22-30. DOI: 10.1016/j.physbeh.2014.05.022

    Karlsson, H., Tuulari, J., Tuominen, L., Hirvonen, J., Honka, H., Parkkola, R., Helin, S., Salminen, P., Nuutila, P., & Nummenmaa, L. (2015). Weight loss after bariatric surgery normalizes brain opioid receptors in morbid obesity. Molecular Psychiatry. DOI: 10.1038/mp.2015.153

    Meule A (2015). Back by Popular Demand: A Narrative Review on the History of Food Addiction Research. The Yale Journal of Biology and Medicine, 88(3):295-302. PMID: 26339213

    Volkow ND, Baler RD. (2015). NOW vs LATER brain circuits: implications for obesity and addiction. Trends Neurosci. 38(6):345-52.

    Volkow ND, Wang GJ, Tomasi D, Baler RD. (2013). Obesity and addiction: neurobiological overlaps. Obes Rev. 14(1):2-18.

    Ziauddeen H, Fletcher PC. (2013). Is food addiction a valid and useful concept? Obes Rev. 14(1):19-28.


    There's a new article in Trends in Cognitive Sciences about how neuroscientists can incorporate social media into their research on the neural correlates of social cognition (Meshi et al., 2015). The authors outlined the sorts of social behaviors that can be studied via participants' use of Twitter, Facebook, Instagram, etc.: (1) broadcasting information; (2) receiving feedback; (3) observing others' broadcasts; (4) providing feedback; (5) comparing self to others.

    Meshi, Tamir, and Heekeren / Trends in Cognitive Sciences (2015)

    More broadly, these activities tap into processes and constructs like emotional state, personality, social conformity, and how people manage their self-presentation and social connections. You know, things that exist IRL (this is an important point to keep in mind for later).

    The neural systems that mediate these phenomena, as studied by social cognitive neuroscience types, are the Mentalizing Network (in blue below), the Self-Referential Network (red), and the Reward Network (green).

    Fig. 2 (Meshi et al., 2015). Proposed Brain Networks Involved in Social Media Use. (i) mentalizing network: dorsomedial prefrontal cortex (DMPFC), temporoparietal junction (TPJ), anterior temporal lobe (ATL), inferior frontal gyrus (IFG), posterior cingulate cortex/precuneus (PCC). (ii) self-referential network: medial prefrontal cortex (MPFC) and PCC. (iii) reward network: ventromedial prefrontal cortex (VMPFC), ventral striatum (VS), ventral tegmental area (VTA).

    The article's publication was announced on social media:

    I anticipated this day in 2009, when I wrote several satirical articles about the neurology of Twitter.  I proposed that someone should do a study to examine the neural correlates of Twitter use:
    It was bound to happen. Some neuroimaging lab will conduct an actual fMRI experiment to examine the so-called "Neural Correlates of Twitter" -- so why not write a preemptive blog post to report on the predicted results from such a study, before anyone can publish the actual findings?

    Here are the conditions I proposed, and the predicted results (a portion of the original post is reproduced below).

    A low-level baseline condition (viewing "+") and an active baseline condition (reading the public timeline [public timeline no longer exists] of random tweets from strangers) will be compared to three active conditions:

    (1) Celebrity Fluff

    (2) Social Media Marketing Drivel

    (3) Friends on your Following List

    ... The hemodynamic response function to the active control condition will be compared to those from Conditions 1-3 above. Contrasts between each of these conditions and the low-level baseline will also be performed.

    The major predicted results are as follows:
    Fig. 2A. (Mitchell et al., 2006). A region of ventral mPFC showed greater activation during judgments of the target to whom participants considered themselves to be more similar.

    • Reading the stream of Celebrity Fluff will activate the frontal eye fields to a much greater extent than the control condition, as the participants will be engaged in rolling their eyes in response to the inane banter.
    Figure from Paul Pietsch, Ph.D. The frontal eye fields are in a stamp-sized zone at the posterior end of the middle frontal gyri.

    • Reading the stream of Social Media Marketing Drivel will tax the neural circuits involved in generating a feeling of disgust, including the anterior insula, ventrolateral prefrontal cortex-temporal pole, and putamen-globus pallidus (Mataix-Cols et al., 2008).
    Fig. 1A (Jabbi et al., 2008). Coronal slice (y = 18) showing the location of the ROI (white) previously shown to be involved in the experience and observation of disgust.

    In conclusion, we predict that the observed patterns of brain activity will be dependent on the nature of the Twitter material being read. These distinct neural networks are expected to reflect the cognitive, emotional, and visceral processes underlying the rapidly changing content of digital media, which ultimately results in "rewiring" of the brain.

    Back to the present post...

    Not too far off, eh?

    Although the TICS piece mentioned that seven social media neuroscience articles have been published to date1 (none quite like that one), it didn't review them. Bloggers have covered some of these (e.g., The Facebook Brain and More Friends on Facebook Does NOT Equal a Larger Amygdala) and related topics like social media use and personality, Facebook neuromarketing, metaphorical Facebook cells, Twitter psychosis (interview), “internet addiction”, textmania, and the lack of evidence that social network sites “damage social relationships” or cause depression.

    After discussing the many ways in which social media data can be used as a proxy for real-world behavior, Meshi et al. mentioned some conspicuous differences between online and offline behavior (e.g., online disinhibition as illustrated by trolls, overly disclosive trainwreck LiveJournals, and TMI). This brings us to the “What the Internet is doing to our brains” brigade of unsupported scaremongering:

    Social networking websites are causing alarming changes in the brains of young users, an eminent scientist has warned.

    Sites such as Facebook, Twitter and Bebo are said to shorten attention spans, encourage instant gratification and make young people more self-centred.

    The claims from neuroscientist Susan Greenfield will make disturbing reading for the millions whose social lives depend on logging on to their favourite websites each day.

    Susan Greenfield

    No history of social media neuroscience is complete without the unsubstantiated claims of Baroness Susan Greenfield, an extremely prominent British neuroscientist, author, and broadcaster: 'My fear is that these technologies are infantilising the brain into the state of small children who are attracted by buzzing noises and bright lights, who have a small attention span and who live for the moment.' Although she declares the dangers of digital Mind Change far and wide, such statements are not backed by careful peer-reviewed studies.

    Susan Greenfield: I am not some greedy harridan

    She is concerned that those who live only in the present, online, don’t allow their malleable brains to develop properly. “It’s not going to destroy the planet but is it going to be a planet worth living in if you have a load of breezy people who go around saying yaka-wow. Is that the society we want?”

    A team of British psychologists, neuroscientists, bloggers, and science writers have been trying for ages to rebut the Baroness, asking her to produce reliable evidence for her dire assertions (see Appendix).

    The neuroscience of social media isn't just emerging. It's been with us for over ten years.


    1 One of these seven references is not a peer-reviewed paper; it's an abstract for a conference that starts in a few days. I found it here: Facebook Network Structure and Brain Reactivity to Social Exclusion.

    And there are actually more publications than that. One was covered in a post by Mo Costandi, Shared brain activity predicts audience preferences. There was a review article on Social Rewards and Social Networks in the Human Brain. There's a very recent paper on cortisol and Facebook behaviors in teens (Meshi et al. likely hadn't seen it). But oddly, the 2012 TICS commentary by Stafford and Bell (Brain network: social media and the cognitive scientist) wasn't cited.


    Meshi D, Tamir DI, Heekeren HR (2015). The Emerging Neuroscience of Social Media. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2015.09.004

    Appendix: The “Rational UK Neuroscientists and Writers vs. Susan Greenfield” Collection

    Breezy People Mind Hacks (Vaughan Bell) remember #yakawow?

    Does the internet rewire your brain? Mind Hacks/BBC Future (Tom Stafford)

    The brain melting internet Mind Hacks (Vaughan Bell)

    The elusive hypothesis of Baroness Greenfield The Lay Scientist (Martin Robbins)

    Mind Change: Susan Greenfield has a big idea, but what is it? The Lay Scientist (Martin Robbins)

    Twitter Vs Dr. Susan Greenfield Neurobonkers (Simon Oxenham)

    A little more conversation Speaking Out (Sophie Scott)

    Susan Greenfield's Dopamine Disaster Neuroskeptic

    Is the internet changing our brains? BPS Research Digest (Christian Jarrett)

    Chilling warning to parents from top neuroscientist Bad Science (Ben Goldacre)

    Digital tech, the BMJ, and The Baroness Mind Hacks (Vaughan Bell)

    An open letter to Baroness Susan Greenfield BishopBlog (Dorothy Bishop)

    On Greenfield Counterbalanced (Pete Etchells)

    Facebook will destroy your children's brains The Lay Scientist (Martin Robbins)
    Social media sites like Facebook and Twitter have left a generation of young adults vulnerable to degeneration of the brain, we can exclusively reveal for about the fifth time. Symptoms include self-obsession, short attention spans and a childlike desire for constant feedback, according to a 'top scientist' with no record of published research on the issue.
    . . . 

    The scientist believes that use of the internet – and computer games – could 'rewire' the brain, causing neurons to establish new connections and pathways. "Rewiring itself is something that the brain does naturally all the time," the professor said, "but the phrase 'rewiring the brain' sounds really dramatic and chilling, so I like to use it to make it seem like I'm talking about a profound and unnatural change, even though it isn't."


    What is happiness, and how do we find it? There are 93,290 books on happiness at Amazon.com. Happiness is Life's Most Important Skill, an Advantage and a Project and a Hypothesis that we can Stumble On and Hard-Wire in 21 Days.

    The Pursuit of Happiness is an Unalienable Right granted to all human beings, but it also generates billions of dollars for the self-help industry.

    And now the search for happiness is over! Scientists have determined that happiness is located in a small region of your right medial parietal lobe. Positive psychology gurus will have to adapt to the changing landscape or lose their market edge. “My seven practical, actionable principles are guaranteed to increase the size of your precuneus or your money back.”

    The structural neural substrate of subjective happiness is the precuneus.

    A new paper has reported that happiness is related to the volume of gray matter in a 222.8 mm3 cluster of the right precuneus (Sato et al., 2015). What does this mean? Taking the finding at face value, there was a correlation (not a causal relationship) between precuneus gray matter volume and scores on the Japanese version of the Subjective Happiness Scale.1

    Fig. 1 (modified from Sato et al., 2015). Left: Statistical parametric map (p < 0.001, peak-level uncorrected for display purposes). The blue cross indicates the location of the peak voxel. Right: Scatter plot of the adjusted gray matter volume as a function of the subjective happiness score at the peak voxel. [NOTE: Haven't we agreed not to show regression lines through scatter plots based on the single voxel where the effect is the largest??]
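The bracketed complaint deserves a demonstration. Below is a minimal Python sketch (toy numbers, no real data; subject and voxel counts are made up for illustration) of why plotting the single voxel with the largest effect inflates the apparent brain-behavior correlation: when you search thousands of voxels for the strongest correlation, the "peak" correlation is substantial even if gray matter volume is pure noise.

```python
import numpy as np

# Toy demonstration of selection bias ("circularity"): search many voxels
# for the strongest brain-behavior correlation, then report that same peak.
# All gray matter "volumes" here are pure noise, so any large peak
# correlation is entirely an artifact of the search.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 51, 10_000   # 51 subjects, as in Sato et al.

happiness = rng.standard_normal(n_subjects)                # behavioral scores
gray_matter = rng.standard_normal((n_subjects, n_voxels))  # noise "volumes"

# Voxel-wise Pearson correlations with the happiness score
gm_z = (gray_matter - gray_matter.mean(0)) / gray_matter.std(0)
h_z = (happiness - happiness.mean()) / happiness.std()
r = gm_z.T @ h_z / n_subjects

peak = np.abs(r).max()
print(f"median |r| across voxels: {np.median(np.abs(r)):.2f}")
print(f"|r| at the 'peak voxel':  {peak:.2f}")
```

With 51 subjects and 10,000 null voxels, the peak |r| typically lands above 0.5 purely by selection, which is why the scatter plot (and its regression line) should come from data independent of the voxel search.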

    “The search for happiness: Using MRI to find where happiness happens,” said one deceptive headline. Should we accept the claim that one small region of the brain is entirely responsible for generating and maintaining this complex and desirable state of being?

    NO. Of course not. And the experimental subjects were not actively involved in any sort of task at all. The study used a static measure of gray matter volume in four brain Regions of Interest (ROIs): left anterior cingulate gyrus, left posterior cingulate gyrus, right precuneus, and left amygdala. These ROIs were based on an fMRI activation study in 26 German men (mean age 33 yrs) who underwent a mood induction procedure (Habel et al., 2005). The German participants viewed pictures of faces with happy expressions and were told to “Look at each face and use it to help you to feel happy.” The brain activity elicited by happy faces was compared to activity elicited by a non-emotional control condition. Eight regions were reported in their Table 1.

    Table 1 (modified from Habel et al., 2005).

    Only four of those regions were selected as ROIs by Sato et al. (2015). One of these was a tiny 12 voxel region in the paracentral lobule, which was called precuneus by Sato et al. (2015).

    Image: John A Beal, PhD. Dept. of Cellular Biology & Anatomy, Louisiana State University Health Sciences Center Shreveport.

    Before you say I'm being overly pedantic, we can agree that the selected coordinates are at the border of the precuneus and the paracentral lobule. The more interesting fact is that the sadness induction of Habel et al. (2005) implicated a very large region of the posterior precuneus and surrounding regions (1562 voxels). An area over 100 times larger than the Happy Precuneus.

    Oops. But the precuneus contains multitudes, so maybe it's not so tragic. The precuneus is potentially involved in very lofty functions like consciousness and self-awareness and the recollection of autobiographical memories. It's also a functional core of the default-mode network (Utevsky et al., 2014), which is active during daydreaming and mind wandering and unconstrained thinking.

    But it seems a bit problematic to use hand picked ROIs from a study of transient and mild “happy” states (in a population of German males) to predict a stable trait of subjective happiness in a culturally distinct group of younger Japanese college students (26 women, 25 men).

    Cross-Cultural Notions of Happiness

    Isn't “happiness” a social construct (largely defined by Western thought) that varies across cultures?

    Should we expect “the neural correlates of happiness” (or well-being) to be the same in Japanese and Chinese and British college students? In the Chinese study, life satisfaction was positively correlated with gray matter volume in the right parahippocampal gyrus but negatively correlated with gray matter volume in the left precuneus... So the participants with the largest precuneus volumes in that study had the lowest well-being.

    What does a bigger (or smaller) size even mean for actual neural processing? Does a larger gray matter volume in the precuneus allow for a higher computational capacity that can generate greater happiness?? We have absolutely no idea: “...there is no clear evidence of correlation between GM volume measured by VBM and any histological measure, including neuronal density” (Gilaie-Dotan et al., 2014).

    Sato et al. (2015) concluded that their results have important practical implications: Are you happy? We don't have to take your word for it any more!
    In terms of public policy, subjective happiness is thought to be a better indicator of happiness than economic success. However, the subjective measures of happiness have inherent limitations, such as the imprecise nature of comparing data across different cultures and the difficulties associated with the applications of these measures to specific populations, including the intellectually disabled. Our results show that structural neuroimaging may serve as a complementary objective measure of subjective happiness.

    Finally, they issued the self-help throw down: “...our results suggest that psychological training that effectively increases gray matter volume in the precuneus may enhance subjective happiness.”

    Resting-state functional connectivity of the default mode network associated with happiness is so last month...

    adapted from Luo et al. (2015)

    Further Reading

    Are You Conscious of Your Precuneus?

    Be nice to your Precuneus – it might be your real self…

    Your Precuneus May Be the Root of Happiness and Satisfaction

    The Precuneus and Recovery From a Minimally Conscious State


    1 The Subjective Happiness Scale is a 4-item measure of global subjective happiness (Lyubomirsky & Lepper, 1999).


    Habel, U., Klein, M., Kellermann, T., Shah, N., & Schneider, F. (2005). Same or different? Neural correlates of happy and sad mood in healthy males. NeuroImage, 26(1), 206-214. DOI: 10.1016/j.neuroimage.2005.01.014

    Sato, W., Kochiyama, T., Uono, S., Kubota, Y., Sawada, R., Yoshimura, S., & Toichi, M. (2015). The structural neural substrate of subjective happiness. Scientific Reports, 5. DOI: 10.1038/srep16891

  • 11/29/15--21:16: Carving Up Brain Disorders

    Neurology and Psychiatry are two distinct specialties within medicine, both of which treat disorders of the brain. It's completely uncontroversial to say that neurologists treat patients with brain disorders like Alzheimer's disease and Parkinson's disease. These two diseases produce distinct patterns of neurodegeneration that are visible on brain scans. For example, Parkinson's disease (PD) is a movement disorder caused by the loss of dopamine neurons in the midbrain.

    Fig. 3 (modified from Goldstein et al., 2007). Brain PET scans superimposed on MRI scans. Note decreased dopamine signal in the putamen and substantia nigra (S.N.) bilaterally in the patient.

    It's also uncontroversial to say that drugs like L-DOPA and invasive neurosurgical interventions like deep brain stimulation (DBS) are used to treat PD.

    On the other hand, some people will balk when you say that psychiatric illnesses like bipolar disorder and depression are brain disorders, and that drugs and DBS (in severe intractable cases) may be used to treat them. You can't always point to clear cut differences in the MRI or PET scans of psychiatric patients, as you can with PD (which is a particularly obvious example).

    The diagnostic methods used in neurology and psychiatry are quite different as well. The standard neurological exam assesses sensory and motor responses (e.g., reflexes) and basic mental status. PD has sharply defined motor symptoms including tremor, rigidity, impaired balance, and slowness of movement. There are definitely cases where the symptoms of PD should be attributed to another disease (most notably Lewy body dementia)1, and other examples where neurological diagnosis is not immediately possible. But by and large, no one questions the existence of a brain disorder.

    Things are different in psychiatry. Diagnosis is not based on a physical exam. Psychiatrists and psychologists give clinical interviews based on the Diagnostic and Statistical Manual (DSM-5), a handbook of mental disorders defined by a panel of experts with opinions that are not universally accepted. The update from DSM-IV to DSM-5 was highly controversial (and widely discussed).

    The causes of mental disorders are not only biological, but often include important social and interpersonal factors. And their manifestations can vary across cultures.

    Shortly before the release of DSM-5, the former director of NIMH (Dr. Tom Insel) famously dissed the new manual:
    The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure.

    In other words, where are the clinical tests for psychiatric disorders?

    For years, NIMH has been working on an alternate classification scheme, the Research Domain Criteria (RDoC) project, which treats mental illnesses as brain disorders that should be studied according to domains of functioning (e.g., negative valence). Dimensional constructs such as acute threat (“fear”) are key, rather than categorical DSM diagnoses. RDoC has been widely discussed on this blog and elsewhere: it's the best thing since sliced bread, it's necessary but very oversold, or it's ill-advised.

    What does this have to do with neurology, you might ask? In 2007, Insel called for the merger of neurology and psychiatry:
    Just as research during the Decade of the Brain (1990-2000) forged the bridge between the mind and the brain, research in the current decade is helping us to understand mental illnesses as brain disorders. As a result, the distinction between disorders of neurology (e.g., Parkinson's and Alzheimer's diseases) and disorders of psychiatry (e.g., schizophrenia and depression) may turn out to be increasingly subtle. That is, the former may result from focal lesions in the brain, whereas the latter arise from abnormal activity in specific brain circuits in the absence of a detectable lesion. As we become more adept at detecting lesions that lead to abnormal function, it is even possible that the distinction between neurological and psychiatric disorders will vanish, leading to a combined discipline of clinical neuroscience.

    Actually, Insel's view dates back to 2005 (Insel & Quirion, 2005)...2
    Future training might begin with two post-graduate years of clinical neuroscience shared by the disciplines we now call neurology and psychiatry, followed by two or three years of specialty training in one of several sub-disciplines (ranging from peripheral neuropathies to public sector and transcultural psychiatry). This model recognizes that the clinical neurosciences have matured sufficiently to resemble internal medicine, with core training required prior to specializing.

    ...and was expressed earlier by Dr. Joseph B. Martin, Dean of Harvard Medical School (Martin, 2002):
    Neurology and psychiatry have, for much of the past century, been separated by an artificial wall created by the divergence of their philosophical approaches and research and treatment methods. Scientific advances in recent decades have made it clear that this separation is arbitrary and counterproductive. .... Further progress in understanding brain diseases and behavior demands fuller collaboration and integration of these fields. Leaders in academic medicine and science must work to break down the barriers between disciplines.

    Contemporary leaders and observers of academic medicine are not all equally ecstatic about this prospect, however. Taylor et al. (2015) are enthusiastic advocates of a move beyond “Neural Cubism”, to increased integration of neurology and psychiatry. Dr. Sheldon Benjamin agrees that greater cross-discipline training is needed, but wants the two fields to remain separate. But Dr. Jose de Leon thinks the psychiatry/neurology integration is a big mistake that revives early 20th century debates (see table below, in the footnotes).3

    I think a distinction can (and should) be made between the research agenda of neuroscience and the current practice of psychiatry. Neuroscientists who work on such questions assume that mental illnesses are brain disorders and act accordingly, by studying the brain. They study animal models and brain slices and genes and humans with implanted or attached electrodes and humans in scanners. And they study the holy grail of neural circuits using DREADDs and optogenetics. This doesn't invalidate the existence of social, cultural, and interpersonal factors that affect the development and manifestation of mental illnesses. As a non-clinician, I have less to say about medical practice. I'm not grandiose enough to claim that neuroscience research (or RDoC, for that matter) will transform the practice of psychiatry (or neurology) in the near future. [Though you might think differently if you read Public Health Relevance Statements or articles in high profile journals.]

    Basic researchers may not even think about the distinction between neurology and psychiatry. Is the abnormal deposition of amyloid-β peptide in Alzheimer's disease (AD) an appropriate target for treatment? Are metabotropic glutamate receptors an appropriate target in schizophrenia? These are similar questions, despite the fact that one disease is neurological and the other psychiatric. There are defined behavioral endpoints that mark treatment-related improvements in either case. It's very useful to measure a change in amyloid burden4 using florbetapir PET imaging in AD [there's nothing similar in schizophrenia], but the most important measure is cognitive improvement (or a flattening of cognitive decline).

    Does Location Matter?

    In response to the pro-merger cavalcade, a recent meta-analysis asked whether the entire category of neurological disorders affects different brain regions than the entire category of psychiatric disorders (Crossley et al., 2015). The answer was why yes, the two categories affect different brain areas, and for this reason neurology and psychiatry should remain separate.

    I thought this was an odd question to begin with, and an even odder conclusion. It's not surprising that disorders of movement, for example, involve different brain regions than disorders of mood or disorders of thought. From my perspective, it's more interesting to look at where the two categories overlap, with an eye to specific comparisons (not global lumping). For instance, are compulsive and repetitive behaviors in OCD associated with alterations in some of the subcortical circuits implicated in movement disorders? Why yes.

    But let's take a closer look at the technical details of the study.

    Crossley et al. (2015) searched for structural MRI articles that observed decreases in gray matter in patients compared to controls. The papers used voxel-based morphometry (VBM) to quantify regional gray matter volumes across the entire brain. For inclusion, disorders needed to have at least seven published studies to be entered into the analysis. A weighted method was used to control for number of published studies (e.g., AD and schizophrenia were way over-represented in their respective categories), and 7 papers were chosen at random for each disorder. The papers were either in the brainmap.org VBM database or found via electronic searches. The x y z peak coordinates were extracted from each paper and entered into the GingerALE program, which performed a meta-analysis via the activation likelihood estimation (ALE) method (see these references: [pdf], [pdf], [pdf] ).
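    For readers unfamiliar with ALE, the core computation is fairly simple: each reported peak is modeled as a 3-D Gaussian probability blob, blobs within a study are combined into a modeled activation map, and the maps are combined across studies as a probabilistic union. Here's a toy sketch of that logic (this is not GingerALE itself; the grid size, coordinates, and smoothing width are all made up for illustration):

    ```python
    import numpy as np

    def gaussian_ma(shape, foci, sigma=2.0):
        """Modeled activation map for one study: union of Gaussians at each focus."""
        grid = np.indices(shape).reshape(3, -1).T   # all voxel coordinates
        ma = np.zeros(shape).ravel()
        for f in foci:
            d2 = ((grid - np.asarray(f)) ** 2).sum(axis=1)
            p = np.exp(-d2 / (2 * sigma ** 2))
            ma = 1 - (1 - ma) * (1 - p)             # probabilistic union over foci
        return ma.reshape(shape)

    def ale(shape, studies, sigma=2.0):
        """ALE map = probabilistic union of modeled activation maps across studies."""
        ale_map = np.zeros(shape)
        for foci in studies:
            ale_map = 1 - (1 - ale_map) * (1 - gaussian_ma(shape, foci, sigma))
        return ale_map

    # two toy "studies" reporting peaks near voxel (5, 5, 5)
    studies = [[(5, 5, 5)], [(6, 5, 5), (1, 1, 1)]]
    ale_map = ale((12, 12, 12), studies)
    print(ale_map[5, 5, 5] > ale_map[0, 0, 0])      # convergent peak scores higher
    ```

    Voxels where peaks from many studies converge end up with high ALE values, which are then tested against a null distribution of randomly placed foci.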

    They found that the basal ganglia, insula, lateral and medial temporal cortex, and sensorimotor areas were affected to a greater extent in neurological disorders. Meanwhile, anterior and posterior cingulate, medial frontal cortex, superior frontal gyrus, and occipital cortex were more affected in psychiatric disorders.


    The authors also looked at network differences, with networks based on previous resting state fMRI studies. Some of these results were uninformative. For example, psychiatric disorders affect visual networks more than neurological disorders do. That was because neurological disorders affect visual regions much less than expected (based on the total number of affected voxels).

    Another finding was that abnormalities in the cerebellum occurred less often than expected in neurological disorders. But this is obviously not the case in cerebellar ataxia, which affects (you guessed it) THE CEREBELLUM. So I'm not sure how useful it is to make global statements about cerebellar involvement in neurological disorders.

    ALE map (FDR pN < 0.05) from 16 VBM studies of ataxia.

    ALE map above was based on 16 papers in the BrainMap database (from a search including 'Ataxia', 'Friedreich ataxia', or 'Spinocerebellar Ataxia'). Gray matter decreases are seen in the cerebellum.

    It was sort of interesting to see all the neurological disorders lumped together and compared to all the psychiatric disorders (the coarsest carving imaginable), but I guess I'm more of a splitter. But an integrative one who also looks for commonalities and overlap. The intersection of neurology and psychiatry is a fascinating topic that could fill many future blog posts.


    1 Comedian Robin Williams, who died by suicide, was initially thought to have depression and/or Parkinson's disease (PD). However, an autopsy ultimately diagnosed Lewy body dementia (‘diffuse Lewy body disease’). PD isn't purely a motor disorder, either. Symptoms can include cognitive changes, depression, and dementia.

    2 That's a fascinating history that may be covered at another time. For now, here's the table from de Leon (2015).

    3 It's interesting to see the prediction for 2015: we should be in the age of diagnostic biomarkers by now...

    4 That article came to a surprising conclusion:
    If these data support a regional association between amyloid plaque burden and metabolism, it is for the somewhat heretical inversion of the amyloid hypothesis. That is, regional amyloid plaque deposition is protective, possibly by pulling the more toxic amyloid oligomers out of circulation and binding them up in inert plaques, or via other mechanisms...


    Benjamin S. (2015). Neuropsychiatry and neural cubism. Acad Med. 90(5):556-8.

    Crossley, N., Scott, J., Ellison-Wright, I., & Mechelli, A. (2015). Neuroimaging distinction between neurological and psychiatric disorders. The British Journal of Psychiatry, 207(5), 429-434. DOI: 10.1192/bjp.bp.114.154393

    David, A., & Nicholson, T. (2015). Are neurological and psychiatric disorders different? The British Journal of Psychiatry, 207 (5), 373-374. DOI: 10.1192/bjp.bp.114.158550

    de Leon J. (2015) Is psychiatry only neurology? Or only abnormal psychology? Déjà vu after 100 years. Acta Neuropsychiatr. 27(2):69-81.

    Insel TR, & Quirion R (2005). Psychiatry as a clinical neuroscience discipline. JAMA, 294 (17), 2221-4 PMID: 16264165

    Martin JB. (2002). The integration of neurology, psychiatry, and neuroscience in the 21st century. Am J Psychiatry 159(5):695-704.

    Taylor JJ, Williams NR, George MS. (2015). Beyond neural cubism: promoting a multidimensional view of brain disorders by enhancing the integration of neurology and psychiatry in education. Acad Med. 90(5):581-6.


    Recent technological developments in neuroscience have enabled rapid advances in our knowledge of how neural circuits function in awake behaving animals. Highly targeted and reversible manipulations using light (optogenetics) or drugs have allowed scientists to demonstrate that activating a tiny population of neurons can evoke specific memories or induce insatiable feeding.

    But this week we learned these popular and precise brain stimulation and inactivation methods may produce spurious links to behavior!! And that “controlling neurons with light or drugs may affect the brain in more ways than expected”! Who knew that rapid and reversible manipulations of a specific cell population might actually affect (gasp) more than the targeted circuit, suggesting that neural circuits do not operate in isolation??

    Apparently, a lot of people already knew this.

    Here's the dire Nature News report:
    ...stimulating one part of the brain to induce certain behaviours might cause other, unrelated parts to fire simultaneously, and so make it seem as if these circuits are also involved in the behaviour.

    According to Ölveczky, the experiments suggest that although techniques such as optogenetics may show that a circuit can perform a function, they do not necessarily show that it normally performs that function. “I don’t want to say other studies have been wrong, but there is a danger to overinterpreting,” he says.

    But the paper in question (Otchy et al., 2015) was not primarily about that problem. The major theme is shown in the figure above: the difference between acute manipulations using a drug (muscimol) to transiently inactivate a circuit versus the chronic effects of permanent damage (which show remarkable recovery).1 In the songbird example, acute inactivation of the nucleus interface (NIf) vocal control area (and its “off-target” attachments) warped singing, but the “chronic” lesion did not.2

    In an accompanying commentary, Dr. Thomas C. Südhof asked:
    How should we interpret these experiments? Two opposing hypotheses come to mind. First, that acute manipulations are unreliable and should be discarded in favour of chronic manipulations. Second, that acute manipulations elicit results that truly reflect normal circuit functions, and the lack of changes after chronic manipulations is caused by compensatory plasticity. 

    But not so fast! said Südhof (2015), who then stated the obvious. “Many chronic manipulations of neural circuits (both permanent genetic changes and physical lesions) do actually produce major behavioural changes.” [as if no one had ever heard of H.M. or Phineas Gage or Leborgne before now.]

    The acute/chronic conundrum is nothing new in the world of human neurology. But centuries of crudely observing accidents of nature, with no control over which brain regions are damaged, and no delineation of precise neural mechanisms for behavior, don't count for much in our store of knowledge about acute vs. chronic manipulations of neural circuits.

    Let's take a look at a few examples anyway.

    In his 1876 Lecture on the Prognosis of Cerebral Hæmorrhage, Dr. Julius Althaus discussed recovery of function:
    Do patients ever completely recover from an attack of cerebral hæmorrhage?
    This question used formerly to be unhesitatingly answered in the affirmative.
    . . .

    The extent to which recovery of function may take place depends—

    1. Upon the quantity of blood which has been effused.  ...

    2. Upon the portion of the brain into which the effusion has taken place. Sensation is more easily re-established than motion; and hæmorrhage into the thalamus opticus seems to give better prospects of recovery than when the blood tears up the corpus striatum.  ...


    In his 1913 textbook of neurology (Organic and Functional Nervous Diseases), Dr. Moses Allen Starr discussed aspects of paralysis from cortical disease, and the uniqueness of motor representations across individuals: “Every artisan, every musician, every dancer, has a peculiar individual store of motor memories. Some individuals possess a greater variety of them than others. Hence the motor zone on the cortex is of different extent in different persons, each newly acquired set of movements increasing its area.”

    In 1983, we could read about Behavioral abnormalities after right hemisphere stroke and then Recovery of behavioral abnormalities after right hemisphere stroke.

    More recently, there's been an emphasis on connectome-based approaches for quantifying the effects of focal brain injuries on large-scale network interactions, and how this might predict neuropsychological outcomes. So the trend in human neuroscience is to acknowledge the impact of chronic lesions on distant brain regions, rather than the current contention [in animals, of course] that “acute manipulations are probably more susceptible to off-target effects than are chronic lesions.”

    But I digress...

    Based on two Nature commentaries about the Otchy et al. paper, I was expecting “ah ha, gotcha, optogenetics is a fatally flawed technique.” This Hold Your Horses narrative fits nicely into a recap of neurogaffes in high places. One of the experiments did indeed use an optogenetic manipulation, but the issue wasn't specific to that method.

    Ultimately, the neuroblunder for me wasn't the Experimental mismatch in neural circuits (or a failure of optogenetics per se), it was the mismatch between the-problem-as-hyped and a lack of historical context for said problem.


    1 Here's a figure from the other experiment, which involved acute vs. chronic inactivation of motor cortex in rats. Basically, the tiny injection of muscimol impaired lever-pressing behavior (acutely), but the large lesion did not (chronically). Panel H shows a similar deleterious effect using optogenetic stimulation.

    Modified from Fig. 1 (Otchy et al., 2015).

    I can't stress this point enough: a human with a comparably sized lesion in primary motor cortex would not [likely] show that much spontaneous recovery of function in 5-10 days. Yes, of course there's plasticity in the central nervous system of adult humans, but I think Otchy et al. (2015) overstate the case here:
    As in our experimental animals, patients with lesions to motor-related brain areas have motor deficits that resolve in the days and weeks following the injury. Aspects of this recovery are thought to be independent of rehabilitation, suggesting spontaneous processes at work.

    2 It isn't exactly true that the lesions had no effect on song: “A fraction of the initial post-lesion vocalizations were severely degraded and did not resemble pre-lesion song.”


    Althaus J (1876). A Lecture on the Prognosis of Cerebral Haemorrhage. British medical journal, 2 (812), 101-4. PMID: 20748269

    Otchy, T., Wolff, S., Rhee, J., Pehlevan, C., Kawai, R., Kempf, A., Gobes, S., & Ölveczky, B. (2015). Acute off-target effects of neural circuit manipulations. Nature DOI: 10.1038/nature16442

    Reardon, S. (2015). Brain-manipulation studies may produce spurious links to behaviour. Nature DOI: 10.1038/nature.2015.19003

    Südhof, T. (2015). Reproducibility: Experimental mismatch in neural circuits. Nature DOI: 10.1038/nature16323


    My entire body of work has been called into question!

    And what a fine week for technical neurogaffes it is. First was the threat that many trendy and important studies of neural circuits may need to be replicated using old-fashioned lesion methods, because of “off-target” effects:
    Where do we go from here? Most acute manipulation studies that use optogenetics confirm, and so add valuable support to, existing hypotheses that were established in earlier studies. But for those studies that have proposed new circuit functions, it may be advisable to re-evaluate the conclusions using independent approaches.1

    Up next we have....

    fMRI Neuroblunders in Brief
    The most notable one of late is a new paper by Eklund et al. (2015), which demonstrated that common statistical tests used to analyze fMRI data can give wildly inflated false positive rates of up to 60%, as illustrated in the top figure.

    What they found is “shocking”!
    While voxel-wise error rates were valid, nearly all cluster-based parametric methods (except for FSL’s FLAME 1) have greatly inflated familywise Type I error rates. This inflation was worst for analyses using lower cluster-forming thresholds (e.g. p=0.01) compared to higher thresholds, but even with higher thresholds there was serious inflation. This should be a sobering wake-up call for fMRI researchers, as it suggests that the methods used in a large number of previous publications suffer from exceedingly high false positive rates (sometimes greater than 50%).

    The problems (and recommended solutions) were expertly discussed already by Russ Poldrack, who is quoted above (see Big problems for common fMRI thresholding methods), and by Neuroskeptic (False Positive fMRI Revisited). I needn't belabor the issues any further.
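    Still, the intuition behind the inflation is easy to demonstrate: smooth noise readily forms large suprathreshold clusters, and the lower the cluster-forming threshold, the larger the chance clusters get. A toy 1-D simulation (this is not Eklund et al.'s resting-state analysis; every number here is arbitrary):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d, label

    def max_cluster(z, thresh):
        """Size of the largest suprathreshold cluster in a 1-D z-map."""
        labeled, n = label(z > thresh)
        return np.bincount(labeled.ravel())[1:].max() if n else 0

    rng = np.random.default_rng(0)
    n_sims, n_voxels, extent = 1000, 500, 10
    hits = {2.33: 0, 3.09: 0}    # cluster-forming z for p = .01 and p = .001
    for _ in range(n_sims):
        z = gaussian_filter1d(rng.standard_normal(n_voxels), 3.0)
        z /= z.std()             # re-standardize after smoothing
        for thresh in hits:
            if max_cluster(z, thresh) >= extent:
                hits[thresh] += 1

    for thresh, h in hits.items():
        print(f"z > {thresh}: familywise false-positive rate ~ {h / n_sims:.2f}")
    ```

    Because any cluster surviving the strict threshold is contained in a cluster at the lenient one, the lenient (p = .01) threshold always triggers at least as many familywise false positives, echoing the pattern Eklund et al. report.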

    Next question:

    Is the ubiquitously activated dorsal anterior cingulate cortex (dACC) selective for pain (as opposed to conflict or cognitive control or salience)? That was the contention of a new paper by Lieberman and Eisenberger (2015) that made use of the Neurosynth meta-analytic framework developed by Tal Yarkoni.

    It Depends on What “Selective” Means2

    A 15,000 word debate between Yarkoni (No, the dorsal anterior cingulate is not selective for pain) and Lieberman (Comparing Pain, Cognitive, and Salience Accounts of dACC) ensued, with no end in sight.

    It Also Depends on What “Pain” Means

    Social Pain and Physical Pain Are Not Interchangeable. This may sound obvious to you, but Eisenberger and Lieberman have argued otherwise, with their neural alarm view of dACC function. Neurosynth uses text mining and machine learning to build maps based on terms that appear in published papers, along with activation coordinates. So the map above doesn't distinguish between different types of experimentally-induced physical pain (heat, cold, pressure, etc.) vs. emotional pain or social exclusion in a video game.

    This might be one of L&E's major points, but pain researchers aren't on board; many don't even think the dorsal posterior insula is a pain-specific region.
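    For the curious, the forward-inference logic that Neurosynth automates can be sketched in a few lines (a made-up toy corpus, not the Neurosynth codebase): across many studies, compare how often each voxel is reported in papers that mention a term versus papers that don't.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_studies, n_voxels = 200, 50
    uses_term = rng.random(n_studies) < 0.3           # does the paper mention "pain"?
    base = rng.random((n_studies, n_voxels)) < 0.05   # background activation reports
    # in this toy corpus, papers mentioning the term also tend to report voxel 10
    base[uses_term, 10] |= rng.random(uses_term.sum()) < 0.6

    # forward inference: P(activation | term) - P(activation | no term), per voxel
    diff = base[uses_term].mean(0) - base[~uses_term].mean(0)
    print(int(diff.argmax()))   # → voxel 10 shows the strongest term association
    ```

    Note that the terms come from text mining the papers, so “pain” lumps together heat, cold, pressure, and social-exclusion paradigms alike, which is precisely the limitation at issue.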


    1 If anyone can parse the bold red sentence that appears in the Nature commentary (immediately after the first quoted passage in the post), please let me know.
    In the future, it might be helpful always to correlate acute and chronic manipulations of specific neurons. If results from acute and chronic manipulations are discrepant, analyses of circuits that act in parallel to the manipulated circuit, or of similar neurons that are activated by different stimuli, might be more likely to provide an explanation for the discrepancy than examination of chains of hierarchically connected neurons, because off-target effects probably propagate throughout neural circuits by spilling over into adjacent, connected circuits.

    2 Sam Schwarzkopf addressed this in his post, What is selectivity?


    Does the pain of mental anguish rely on the same neural machinery as physical pain? Can we treat these dreaded ailments with the same medications? These issues have come to the fore in the field of social/cognitive/affective neuroscience.

    As many readers know, Lieberman and Eisenberger (2015) recently published a controversial paper claiming that a brain region called the dorsal anterior cingulate cortex (dACC, shown above) is “selective” for pain.1 This finding fits with their long-time narrative that rejection literally “hurts”: social pain is analogous to physical pain, and both are supported by activity in the same regions of dACC (Eisenberger et al., 2003). Their argument is based on work by Dr. Jaak Panksepp and colleagues, who study separation distress and other affective responses in animals (Panksepp & Yovell, 2014).

    Panksepp wrote The Book on Affective Neuroscience in 1998, and coined the term even earlier (Panksepp, 1992). He also wrote a Perspective piece in Science to accompany Eisenberger et al.'s 2003 paper:

    We often speak about the loss of a loved one in terms of painful feelings, but it is still not clear to what extent such metaphors reflect what is actually happening in the human brain? Enter Eisenberger and colleagues ... with a bold neuroimaging experiment that seeks to discover whether the metaphor for the psychological pain of social loss is reflected in the neural circuitry of the human brain. Using functional magnetic resonance imaging (fMRI), they show that certain human brain areas that “light up” during physical pain are also activated during emotional pain induced by social exclusion [i.e., exclusion from playing a video game].

    But as I've argued for years, Social Pain and Physical Pain Are Not Interchangeable. Whenever I read an article proclaiming that “the brain bases of social pain are similar to those of physical pain”, I am reminded of how phenomenologically DIFFERENT they are.

    And subsequent work has demonstrated that physical pain and actual social rejection (a recent romantic break-up) do not activate the same regions of dACC (Woo et al., 2014). Furthermore, multivariate activation patterns across the entire brain can discriminate pain and rejection with high accuracy.2 

    Modified from Fig. 3 (Woo et al., 2014). Differences between fMRI pattern-based classifiers for pain and rejection.

    Feelings of rejection were elicited by showing the participants pictures of their ex-partners (vs. pictures of close friends), and physical pain was elicited by applying painful heat to the forearm (vs. warm heat).
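    The pattern-classification logic behind such findings can be sketched with synthetic data. Below is a toy correlation-based nearest-centroid classifier in the spirit of classic MVPA, not Woo et al.'s actual whole-brain classifier; the trial counts, voxel counts, and effect sizes are all invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, n_voxels = 60, 20, 200
    signal = np.zeros(n_voxels)
    signal[:20] = 0.8            # 20 "voxels" carry condition information

    def trials(n, sign):
        """Simulated single-trial activation patterns for one condition."""
        return rng.standard_normal((n, n_voxels)) + sign * signal

    pain_tr, rej_tr = trials(n_train, +1), trials(n_train, -1)
    pain_te, rej_te = trials(n_test, +1), trials(n_test, -1)

    # correlation-based nearest-centroid classifier
    centroids = np.vstack([pain_tr.mean(0), rej_tr.mean(0)])
    def classify(x):
        r = [np.corrcoef(x, c)[0, 1] for c in centroids]
        return int(np.argmax(r))   # 0 = pain, 1 = rejection

    preds = [classify(x) for x in np.vstack([pain_te, rej_te])]
    truth = [0] * n_test + [1] * n_test
    acc = np.mean([p == t for p, t in zip(preds, truth)])
    print(f"test accuracy: {acc:.2f}")   # far above the 50% chance level
    ```

    The point of the multivariate approach is exactly this: even when two conditions overlap in which regions activate on average, their distributed patterns can remain highly discriminable.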

    Does this mean there is no overlap between brain systems that can dampen physical and emotional pain (e.g., endogenous opioids)? Of course not; otherwise those suffering from utter despair, unspeakable loneliness, and other forms of psychic turmoil would not self-medicate with mind-altering substances.

    Separation Distress: Of Mice and Psychoanalysis

    Although Panksepp has worked primarily with rodents and other animals throughout his career, he maintains a keen interest in neuropsychoanalysis, an attempt to merge Freudian psychoanalysis with contemporary neuroscience. Neuropsychoanalysis “seeks to understand the human mind, especially as it relates to first-person experience.” If you think that's a misguided (and impossible) quest, you might be surprised by some of the prominent neuroscientists who have signed on to this agenda (see these posts).

    Prof. Panksepp is currently collaborating with Prof. Yoram Yovell, a Psychoanalyst and Neuroscientist at the Institute for the Study of Affective Neuroscience (ISAN) in Haifa. A recent review paper addresses their approach of affective modeling in animals as a way to accelerate drug development in neuropsychiatry (Panksepp & Yovell, 2014). Their view is that current models of depression, which focus on animal behaviors instead of animal emotions, have hindered new breakthroughs in treatments for depression. It’s actually a fascinating and ambitious research program:
    We admit that our conceptual position may be only an empirical/ontological approximation, especially when contrasted to affective qualia in humans … but it is at least a workable empirical approach that remains much underutilized. Here we advance the view that such affective modeling can yield new medical treatments more rapidly than simply focusing on behavioral processes in animals. In sum, we propose that the neglect of affect in preclinical psychiatric modeling may be a major reason why no truly new psychiatric medicinal treatments have arisen from behavior-only preclinical modeling so far.

    They propose that three key primal emotional systems3 may be critical for understanding depression: SEEKING (enthusiasm-exuberance), PANIC (psychic pain), and PLAY (joyful exuberance). If these constructs sound highly anthropomorphic when applied to rats, it's because they are!! Perhaps you'd rather “reaffirm classical behaviorist dogma” (Panksepp & Yovell, 2014) and stick with more traditional notions like brain reward systems, separation distress, and 50-kHz ultrasonic vocalizations (e.g., during tickling, mating, and play) when studying rodents.

    Of interest today is the PANIC system (Panksepp & Yovell, 2014), which “mediates the psychic pain of separation distress (i.e. excessive sadness and grief), which can be counteracted by minimizing PANIC arousals (as with low-dose opioids).” Since low-dose opioids alleviate separation distress in animals (based on reductions in distress vocalizations), why not give them to suicidal humans suffering from psychic pain?

    Well... because making strong inferences about the contents of animal minds is deeply problematic (Barrett et al., 2007). I've written about some of the problems with animal models of dread and despair. One might also question whether it's wise to give opioid drugs (even in very low doses) to severely ill people.

    Low-Dose Buprenorphine for Suicidal Ideation
    Recently investigators are increasingly entertaining the possibility of using ‘safe opioids’ for the treatment of depression, as well as the chronic ‘psychological pain’ that often promotes suicidal ideation. To be a ‘safe opioid’, the analgesic effects and the lethal (respiratory depression) effects of a particular opioid ligand need to be dissociated. Buprenorphine, a partial agonist at μ-opioid receptors (i.e. stimulating opioid receptors at low doses, but blocking them at high doses), is just such a drug.

    Panksepp and Yovell's ideas led to a clinical trial (A Study of Nopan Treatment of Acute Suicidality) and a new paper in the American Journal of Psychiatry (Yovell et al., 2015). Nopan is sublingual buprenorphine hydrochloride 0.2 mg. At higher doses, buprenorphine is used as a treatment for opioid addiction, much like methadone.

    Research on suicidal behavior is an important and tragically neglected topic, and many clinicians, organizations, and industry sponsors are reluctant to engage. So it's notable that the current study was funded by the Neuropsychoanalysis Foundation (which awards grants and sponsors the journal Neuropsychoanalysis), the Hope for Depression Research Foundation (whose Board is filled with some Heavy Hitters of Neuroscience e.g., Akil, Mayberg, McEwen, Nestler, Hen), and ISAN.

    It's interesting to track some of the changes in the study protocol and description over time. The initial ClinicalTrials.gov entry (dated 2010_01_11) dropped its psychoanalytic language on 2011_05_23:
    The acutely suicidal patient presents a complex and dangerous clinical dilemma. Many suicidal patients receive antidepressant medications, but the onset of action of these medications is at least three weeks, and despite their established antidepressant effect, they have not shown a clear anti-suicidal benefit. Psychoanalysts hypothesized that depression (often leading to suicidality) shares important characteristics with the psychological sequelae of object loss and separation distress. Endogenous opioids (endorphins) have been implicated in mediating social bonding and separation distress in mammals. 

    On the same date, the Secondary Outcome Measure (Reduction in psychache as measured by the Holden Psychache Scale) was replaced by a more standard and non-psychoanalytic instrument, the Beck Depression Inventory (Reduction in depression as measured by the BDI). Dr. Beck conceptualized depression in a cognitive framework.

    On the other hand, “psychache” (coined by suicidologist Dr. Edwin Shneidman) means “unbearable psychological pain—hurt, anguish, soreness, and aching. ... Psychache stems from thwarted or distorted psychological needs . . . every suicidal act reflects some specific unfulfilled psychological need.”  Many of these views are at odds with neuropsychiatry (Shneidman, 1993):
    Depression seems to have physiological, biochemical, and probably genetic components. The use of medications in treatment is on target. [so far so good] ... Suicide, on the other hand, is a phenomenological event... It is responsive to talk therapy and to changes in the environment. Suicide is not a psychiatric disorder. Suicide is a nervous dysfunction, not a mental disease.

    But 90% of suicides are in people with clinically diagnosable psychiatric disorders; anxiety, depression, impulsivity, and alcohol abuse are major risk factors. While cases of psychache would certainly benefit from talk therapy and a change in environment, pharmacological (and/or brain stimulation) treatments seem to be essential. Which is clearly the intention of Yovell et al. (2015), or else they wouldn't have conducted a drug study.

    In short, I found it curious that the focus of their clinical trial changed so much mid-stream, and that the mental anguish of the original formulation is so completely and utterly human (given its genesis from the animal literature).

    In the next post, I'll cover the actual study and the background on why anyone would think low-dose opioids are a good idea in cases of treatment-resistant depression and suicidality.

    Further Reading

    Vicodin for Social Exclusion

    Suffering from the pain of social rejection? Feel better with TYLENOL®

    Existential Dread of Absurd Social Psychology Studies

    Does Tylenol Exert its Analgesic Effects via the Spinal Cord?

    The Mental Health of Lonely Marijuana Users

    Tylenol Doesn't Really Blunt Your Emotions

    Of Mice and Women: Animal Models of Desire, Dread, and Despair


    1 In contrast, based on years of detailed neuroanatomical and neurophysiological experiments, most neuroscientists think the dACC is a functionally heterogeneous region (e.g., Vogt et al., 1992). Shortly after the Lieberman & Eisenberger (2015) paper was published, a number of researchers expressed their vehement disagreement in blog posts: Yarkoni-1, Lieberman reply, Yarkoni-2, Shackman, Wager.

    2 In contrast to these results, an earlier study by this group claimed that social rejection shares somatosensory representations with physical pain. It's always nice to see examples where scientists update their own theories based on new evidence.

    3 In Panksepp's scheme, there are seven basic or primal emotions that are subcortically based and evolutionarily conserved: SEEKING, RAGE, FEAR, LUST, CARE, PANIC/GRIEF, and PLAY. Needless to say, this model has not gone unchallenged (Barrett et al., 2007; LeDoux, 2015). Barrett and colleagues have argued that emotions are not natural kinds, but rather emergent psychological events constructed from core affect (positive or negative states) and a human conceptual system for emotion.


    Barrett LF, Lindquist KA, Bliss-Moreau E, Duncan S, Gendron M, Mize J, Brennan L. (2007). Of Mice and Men: Natural Kinds of Emotions in the Mammalian Brain? A Response to Panksepp and Izard. Perspect Psychol Sci. 2(3):297-312.

    Eisenberger NI, Lieberman MD, Williams KD. (2003). Does rejection hurt? An FMRI study of social exclusion. Science 302:290-2.

    Panksepp, J., & Yovell, Y. (2014). Preclinical Modeling of Primal Emotional Affects (SEEKING, PANIC and PLAY): Gateways to the Development of New Treatments for Depression. Psychopathology, 47 (6), 383-393. DOI: 10.1159/000366208

    Shneidman ES. (1993). Suicide as psychache. J Nerv Ment Dis. 181(3):145-7.

    Woo CW, Koban L, Kross E, Lindquist MA, Banich MT, Ruzic L, Andrews-Hanna JR, & Wager TD (2014). Separate neural representations for physical pain and social rejection. Nature communications, 5. PMID: 25400102

    Yovell, Y., Bar, G., Mashiah, M., Baruch, Y., Briskman, I., Asherov, J., Lotan, A., Rigbi, A., & Panksepp, J. (2015). Ultra-Low-Dose Buprenorphine as a Time-Limited Treatment for Severe Suicidal Ideation: A Randomized Controlled Trial. American Journal of Psychiatry DOI: 10.1176/appi.ajp.2015.15040535


    The prescription opioid crisis of overdosing and overprescribing has reached epic proportions, according to the North American media. Just last week, we learned that 91% of patients who survive opioid overdose are prescribed more opioids! The CDC calls it an epidemic, and notes there's been “a 200% increase in the rate of overdose deaths involving opioid pain relievers and heroin.” A recent paper in the Annual Review of Public Health labels it a “public health crisis” and proposes “interventions to address the epidemic of opioid addiction” (Kolodny et al., 2015).

    In the midst of this public and professional outcry, why on earth would anyone recommend opioid drugs as a treatment for severe depression and suicidal ideation??

    Let's revisit the questions posed in my previous post:

    1.  Does the pain of mental anguish rely on the same neural machinery as physical pain?
    2.  Can we treat these dreaded ailments with the same medications?

    The opioid-for-depression proponents would answer both of those questions in the affirmative,1 with some qualifications. First off, the actual medication in question (and its dose) is different from the typically abused opiate / opioid drug. As far as I can tell, no one is clamoring for narcotic analgesics like OxyContin and Vicodin to be used as antidepressants.

    In his 2008 paper on the Psychotherapeutic Benefits of Opioid Agonist Therapy, Dr. Peter L. Tenore reviewed the history of the Opium Cure and declared, “Opioids have been used for centuries to treat a variety of psychiatric conditions with much success.” However, these drugs can be highly addictive (obviously), so he issued this caveat at the end of the paper:
    It should be noted that opioids do not have FDA approval for the treatment of psychiatric disorders. The intent of this paper was not to suggest that practitioners should prescribe opioids in a manner not approved by the FDA, but rather it was to explore the mechanisms and develop hypotheses that might explain the observation that opioid-dependent psychiatric patients in appropriately certified opioid replacement therapy programs (i.e., methadone treatment programs) stabilize on higher opioid dosages than those without psychiatric diagnoses.

    Methadone and especially low-dose buprenorphine are the drugs being tested for their antidepressant efficacy, even in those who have no opioid abuse issues. Buprenorphine is a mixed partial μ/κ agonist with complex actions, including:
    • Antagonist (blocker) of κ-opioid receptors (KORs) that bind dynorphins (endogenous opioids associated with anxiety and dysphoria)
    • Partial agonist at μ-opioid receptors (MORs), producing analgesic effects but with less euphoria and less respiratory depression than full agonists
    Basic research in rodents suggests that KORs may be a promising target for potential psychiatric treatments in humans, based on improvements shown in standard behavioral assays such as the forced swim test and the elevated plus maze test (Crowley & Kash, 2015).2 But there's still a long way to go. In addition to the difficulty of modeling mental anguish in animals, the complexity of the dynorphin/KOR system, which can exhibit paradoxical and “convoluted” effects on behavior,3 presents a barrier to clinical translation.

    In contrast, a very different approach uses affect modeling in an effort to accelerate drug development in neuropsychiatry (Panksepp & Yovell, 2014). In this view, current models of depression have hindered new breakthroughs because of their focus on animal behaviors, instead of animal emotions. Panksepp maintains that separation distress and infant versions of psychic pain, excessive sadness, and grief are mediated by the PANIC system, which is soothed by opioids. Chicks, kittens, puppies, and other infant animals emit distress vocalizations when separated from their mothers. Rat pups emit ultrasonic vocalizations and baby monkeys “coo”. These innate, reflexive, and adaptive behaviors are reduced with low doses of morphine.4

    Panksepp and colleagues have inferred that very strong and human-like emotions are associated with distress vocalizations.

    By way of example, here is my adult cat. He's very affectionate and chatty. He requires a lot of attention and doesn't like to be alone. Does he meow and miss me when I'm on vacation? I imagine he does. Do I think he feels psychic pain and grief while I'm gone? No.

    Watt and Panksepp (2009) argue that depression is an evolutionarily conserved mechanism to terminate separation distress, drawing on psychoanalytic concepts like object relations theory as well as the literature on neuropeptides and neuromodulators implicated in major depression.

    Nopan Treatment of Acute Suicidality

    The research on separation distress in animals helped motivate a clinical trial that was recently published in the American Journal of Psychiatry (Yovell et al., 2015). The initial daily dose of Nopan (0.1 or 0.2 mg sublingual buprenorphine hydrochloride) was relatively low, reaching a maximum dose of 0.8 mg daily by the end of the four-week trial (mean = 0.44 mg). By way of comparison, the maintenance dose for opioid dependence is 4 to 24 mg/day.5 Analgesic effects are obtained at 0.1–8 mg (according to Heit, 2010), although Yovell et al. said their doses were subanalgesic.

    Eighty-eight severely suicidal patients were enrolled in the double-blind, placebo-controlled trial, about 2/3 of whom had made at least one suicide attempt. Over half met criteria for borderline personality disorder (BPD), which includes symptoms like affective instability, self-harm, high rates of substance abuse, and fear of abandonment (i.e., heightened separation distress). Although 50 patients had BPD, the other 48 did not. If separation distress is a major motivating construct for the trial, it seems problematic to have a heterogeneous population on that dimension. Nevertheless...
    Almost all were clinically unstable, and their ability to cooperate with the study team was compromised, as reflected in a high dropout rate (29.5%) during the first week of treatment.

    This sort of study is very difficult to conduct, so it's not surprising that the completion rate was low (57%):  33 patients on buprenorphine, 17 on placebo (the original randomization was deliberately 2:1).


    Modified from Fig. S2 (Yovell et al., 2015). A portion of the flow diagram (starting from those who were enrolled).

    Importantly, participants with a lifetime history of opioid abuse were excluded. Buprenorphine is a Schedule III controlled substance in the US (same as ketamine), with a lower potential for abuse than heroin (Schedule I) and morphine, methadone, OxyContin, etc. (Schedule II).6 As we know, individuals with BPD have high rates of substance abuse. In one study, 44% of patients seeking buprenorphine treatment for opioid addiction were diagnosed with BPD. Therefore, the investigators had to screen the participants very closely, in partnership with their regular clinical providers. Other exclusionary criteria were schizophrenia, current psychosis, and ECT within the past month. Finally, there could be no substance or alcohol abuse or benzodiazepine dependence within the past 2 years.

    Regarding patient demographics, 70% were female, 43% had major depression, 25% were currently hospitalized, 49% had experienced major stressors in the last year (but separation during past month in only 24%), 70% on antidepressants, 49% on benzos, and about 20% on mood stabilizers and antipsychotics.
    A week’s supply of medication (<5.6 mg, usually <2.8 mg) was not considered to present a high risk for suicide by overdose. Outpatients received the study medication for the following week during their weekly visits, and took it independently at home.

    Another difficulty is that opioids have notable side effects, and this was true with the “ultra-low-dose” used here (Yovell et al., 2015) which isn't really all that low.
    One or more adverse events were reported in 77.2% of participants in the buprenorphine group and 54.8% of those in the placebo group (p=0.03). Among participants in the buprenorphine group, there were more reports of fatigue (49.1% compared with 22.6% in the placebo group), nausea (36.8% compared with 12.9%), dry mouth (29.8% compared with 9.7%), and constipation (26.3% compared with 9.7%).

    The primary outcome measure was change in suicidal ideation after four weeks of treatment. To make up for high dropout, the Last Observation Carried Forward (LOCF) was used. In this way, data from all participants who received at least one dose of drug and one suicidal ideation score were included. However, LOCF is a flawed procedure that can overestimate effect sizes.7
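    For readers unfamiliar with the method, here is a minimal sketch of LOCF imputation (the weekly scores are hypothetical, not from the trial):

```python
# Last Observation Carried Forward (LOCF): fill each missing value
# with the most recent observed one. Scores are hypothetical weekly
# suicidal ideation ratings; None marks a missed visit after dropout.
def locf(scores):
    filled, last = [], None
    for s in scores:
        if s is not None:
            last = s
        filled.append(last)
    return filled

# A patient who improved at week 1 and then dropped out is analyzed
# as if that early improvement had persisted through week 4.
print(locf([28, 20, None, None]))  # [28, 20, 20, 20]
```

    If patients who drop out differ systematically from those who complete the trial, carrying their last score forward can bias the estimated treatment effect.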

    The buprenorphine group had lower suicidal ideation than the placebo group at weeks 2 and 4, but an analysis restricted to patients who completed the study was not reported. Furthermore, the suicidal ideation scores were highly variable, and improvements in many secondary outcome measures (e.g., depression severity) didn't reach statistical significance (Fig. S3 below).

    The authors acknowledged seven critical limitations, summarized below (the most egregious in red):
    1. Outcome measures based on self-report. Clinician ratings of suicidality, depression, and overall functioning should be part of any future trial. 
    2. Participants were unstable and severely suicidal; many had BPD; high dropout rates. Are the findings applicable to more stable, less severely suicidal patients?
    3. Flexible and gradual dosing limits inferences about the optimal dosage of buprenorphine to treat suicidal ideation.
    4. Heterogeneity of the study population [e.g., why were both BPD and non-BPD included?] and its modest size limited ability to stratify results by dose, gender, and diagnosis.
    5. Study did not assess nonsuicidal self-injury, which is associated with BPD, mental pain, and abnormalities in the endogenous opioid system.
    •  I'll add that a case series on buprenorphine for NSSI was not cited.
    6. Trial did not include an extended follow-up period to allow assessment of possible long-term effects, including the possibility of developing drug craving or rebound suicidality.
    7. Despite its favorable safety profile, buprenorphine is potentially addictive and possibly lethal. 

    The authors end by stating the “results do not support the widespread, long-term, or nonexperimental use of buprenorphine for suicidality.” In other words, don't open buprenorphine clinics to treat severe depression and suicidal ideation! [ketamine infusion clinics, I'm looking at you].

    People suffering from suicidal thoughts and actions deserve the best possible care, yet many researchers and clinicians shy away from conducting clinical trials.  Ketamine seems to be the exception, where 24 studies are listed in ClinicalTrials.gov. While the present study has a long list of admitted flaws that make me wonder why it was published in AJP, the authors are in the admirable position of trying to help an extremely vulnerable population.

    Read Part 1, Social Pain Revisited: Opioids for Severe Suicidal Ideation.


    1 But not surprisingly, many others don't agree with this strategy, e.g., Opioids in Depression: Not Quite There Yet (Yin et al., 2015), Psychiatry is the missing P in chronic pain care (Howe & Sullivan, 2014), and my previous post.

    2 These tests measure “depression-like” and “anxiety-like” behaviors, respectively. We could certainly debate whether these are adequate models of depression and anxiety, and this was in fact the topic of an interesting discussion on Twitter. The problem of anthropomorphism is greatly magnified with concepts like the “psychic pain” of separation distress (the PANIC system of Panksepp and colleagues). For now, I'll refer you to my old post, Of Mice and Women: Animal Models of Desire, Dread, and Despair.

    3 The world of dynorphin and KORs has gotten even more complicated since the discovery that subpopulations of dynorphin neurons in the nucleus accumbens have opposing effects (aversion vs. reward). Crowley and Kash (2015) suggest that translation to humans may be... uh, difficult, to say the least:
     “Important studies using modern genetic approaches have highlighted the multiple ways that KORs effect behavior, and paradoxical effects have emerged when manipulating the dynorphin system.”  
    “In addition, circuit and site-specific manipulations...provide some clarity as to the convoluted effect seen with systemic administration of KOR agonists. This provides key important information as to how KOR modulation can be used to shift anxiety-related behaviors: both low doses of KOR agonists, as well as KOR antagonists, may prove to be effective.”
    “Despite of an abundance of literature showing KORs to be a promising therapeutic target for the treatment of drug addiction ... few drugs impacting the KOR system have been taken to the level of human clinical trials.”
    4 I haven't heard that morphine or buprenorphine is recommended for human babies who cry persistently and excessively.

    5  @debe: in addiction it's 2 to 32 mg (pure bup) or to 24 (bup+nalox). 0.2 mg cps are for pain.

    6 The clinical trial was conducted in Israel, where buprenorphine is used to treat opioid dependence (as in many other countries). Litigiousness might be one possible reason such a study hasn't been conducted in the US?

    7 Caution is needed when interpreting results from Last Observation Carried Forward (LOCF) analyses, which fill in missing values based on existing data, because of the problematic nature of this method. See Appendix below.


    Crowley NA, Kash TL. (2015). Kappa opioid receptor signaling in the brain: Circuitry and implications for treatment. Prog Neuropsychopharmacol Biol Psychiatry 62:51-60.

    Howe CQ, Sullivan MD. (2014). The missing 'P' in pain management: how the current opioid epidemic highlights the need for psychiatric services in chronic pain care. Gen Hosp Psychiatry 36(1):99-104.

    Kolodny A, Courtwright DT, Hwang CS, Kreiner P, Eadie JL, Clark TW, Alexander GC. (2015). The prescription opioid and heroin crisis: a public health approach to an epidemic of addiction. Annu Rev Public Health 36:559-74.

    Panksepp, J., & Yovell, Y. (2014). Preclinical Modeling of Primal Emotional Affects (SEEKING, PANIC and PLAY): Gateways to the Development of New Treatments for Depression. Psychopathology, 47 (6), 383-393. DOI: 10.1159/000366208

    Tenore, P. (2008). Psychotherapeutic Benefits of Opioid Agonist Therapy. Journal of Addictive Diseases, 27 (3), 49-65. DOI: 10.1080/10550880802122646

    Watt, D., & Panksepp, J. (2009). Depression: An Evolutionarily Conserved Mechanism to Terminate Separation Distress? A Review of Aminergic, Peptidergic, and Neural Network Perspectives Neuropsychoanalysis, 11 (1), 7-51 DOI: 10.1080/15294145.2009.10773593

    Yovell, Y., Bar, G., Mashiah, M., Baruch, Y., Briskman, I., Asherov, J., Lotan, A., Rigbi, A., & Panksepp, J. (2015). Ultra-Low-Dose Buprenorphine as a Time-Limited Treatment for Severe Suicidal Ideation: A Randomized Controlled Trial. American Journal of Psychiatry DOI: 10.1176/appi.ajp.2015.15040535

    Yin X, Guven N, Dietis N. (2015). Opioids in Depression: Not Quite There Yet. UK Journal of Pharmaceutical and Biosciences 3(1):12-7.


    Appendix: Reasons to avoid Last Observation Carried Forward (LOCF):

    Streiner 2014: “...LOCF has serious and, in some cases, fatal problems.”

    Olsen et al. 2012: “Although these methods are simple to implement, they are deeply flawed in that they may introduce bias and underestimate uncertainty, leading to erroneous conclusions.”

    www.missingdata.org.uk: “For full longitudinal data analyses this is clearly disastrous: means and covariance structure are seriously distorted. For single time point analyses the means are still likely to be distorted, measures of precision are wrong and hence inferences are wrong. Note this is true even if the mechanism that causes the data to be missing is completely random.”

    Molnar et al. 2008: “If there were a prize for the most inappropriate analytical technique in dementia research, 'last observation carried forward' would be the runaway winner.”

    Molnar et al. 2009: “The published results of some randomized controlled trials of dementia drugs may be inaccurate (i.e., drug effectiveness may be exaggerated) or invalid (i.e., there may be false-positive results) because of bias introduced through the inappropriate use of LOCF analyses.”



    Did you know that SPECT imaging can diagnose PTSD with 100% accuracy (Amen et al., 2015)? Not only that, out of a sample of 397 patients from the Amen Clinic in Newport Beach, SPECT was able to distinguish between four different groups with 100% accuracy! That's right, the scans of (1) healthy participants, and patients with (2) classic post-traumatic stress disorder (PTSD), (3) classic traumatic brain injury (TBI), and (4) both disorders... were all classified with 100% accuracy!

    TRACK-TBI investigators, your 3T structural and functional MRI outcome measures are obsolete.

    NIMH, the hard work of developing biomarkers for mental illness is done, you can shut down now. Except none of this research was funded by you...

    The finding was #19 in a list of the top 100 stories by Discover Magazine.

    How could the Amen Clinics, a for-profit commercial enterprise, accomplish what an army of investigators with billions in federal funding could not?

    The authors1 relied on a large database of scans collected from multiple sites over a 20 year period. The total sample included 20,746 individuals who visited one of nine Amen Clinics from 1995-2014 for psychiatric and/or neurological evaluation (Amen et al., 2015). The first analysis included a smaller, highly selected sample matched on a number of dimensions, including psychiatric comorbidities (Group 1).


    You'll notice the percentage of patients with ADHD was remarkably high (58%, matched across the three patient groups). Perhaps that's because...

    I did not know that.
     Featuring Johnny Cash ADD.

    SPECT uses a radioactive tracer injected 30 minutes before a scan that will assess either the “resting state” or an “on-task” condition (a continuous performance task, in this study). Clearly, SPECT is not the go-to method if you're looking for decent temporal resolution to compare two conditions of an active attention task. The authors used a region of interest (ROI) analysis to measure tracer activity (counts) in specific brain regions.

    I wondered about the circularity of the clinical diagnosis (i.e., were the SPECT scans used to aid diagnosis), particularly since “Diagnoses were made by board certified or eligible psychiatrists, using all of the data available to them, including detailed clinical history, mental status examination and DSM-IV or V criteria...” But we were assured that wasn't the case: “These quantitative ROI metrics were in no way used to aid in the clinical diagnosis of PTSD or TBI.” The rest of the methods (see Footnote 2) were opaque to me, as I know nothing about SPECT.

    A second analysis relied on visual readings (VR) of about 30 cortical and subcortical ROIs. “Raters did not have access to detailed clinical information, but did know age, gender, medications, and primary presenting symptoms (ex. depressive symptoms, apathy, etc.).”  Hmm...

    But the quantitative ROI analysis gave superior results to the clinician VR. So superior, in fact, that the sensitivity/specificity in distinguishing one group from another was 100% (indicated by red boxes below). The VR distinguished patients from controls with 100% accuracy, but was not as good for classifying the different patient groups during the resting state scan: only a measly 86% sensitivity and 81% specificity for TBI vs. PTSD, which is still much better than other studies. However, results from the massively sized Group 2 were completely unimpressive.3
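    For reference, sensitivity and specificity reduce to two ratios over a 2×2 confusion matrix. This sketch uses made-up counts (not data from the paper) to show what the 100% claims require:

```python
# Sensitivity and specificity from a 2x2 confusion matrix.
# The counts below are illustrative, not from Amen et al. (2015).
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # proportion of true cases detected
    specificity = tn / (tn + fp)  # proportion of non-cases correctly ruled out
    return sensitivity, specificity

# 100% on both measures means zero false negatives AND zero false
# positives across every patient in the sample.
sens, spec = sensitivity_specificity(tp=86, fn=14, tn=81, fp=19)
print(sens, spec)  # 0.86 0.81
```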


    Why is this so important? PTSD and TBI can show overlapping symptoms in war veterans and civilians alike, and the disorders can co-occur in the same individual. More accurate diagnosis can lead to better treatments. This active area of research is nicely reviewed in the paper, but no major breakthroughs have been reported yet. So the claims of Amen et al. are remarkable. Stunning if true. But they're not. They can't be. The accuracy of the classifier exceeds the precision of the measurements, so this can't be possible. What is the test-retest reliability of SPECT? What is the concordance across sites? Was there no change in imaging protocol, no improvements or upgrades to the equipment over 20 years? SPECT is sensitive to motion artifact, so how was that handled, especially in patients who purportedly have ADHD?

    SPECT has been noted for its poor spatial resolution compared to other functional neuroimaging techniques like PET and fMRI. A panel of 16 experts did not include SPECT among the recommended imaging modalities for the detection of TBI. Dr. Amen and his Clinics in particular have been criticized in journals (Farah, 2009; Adinoff & Devous, 2010a, 2010b; Chancellor & Chatterjee, 2011) and blogs (Science-Based Medicine, The Neurocritic, and Neurobollocks) for making unsubstantiated claims about the diagnostic accuracy and usefulness of SPECT.

    Are his latest results too good to be true? You can check for yourself! The paper was published in PLOS ONE, which has an open data policy:
    PLOS journals require authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception.

    When submitting a manuscript online, authors must provide a Data Availability Statement describing compliance with PLOS's policy. If the article is accepted for publication, the data availability statement will be published as part of the final article.

    Before you get too excited, here's the Data Availability Statement:
    Data Availability: All relevant data are within the paper.

    But this is not true. NONE of the data are available within the paper. There's no way to reproduce the authors' analyses, or to conduct your own. This is a problem, because...
    Refusal to share data and related metadata and methods in accordance with this policy will be grounds for rejection. PLOS journal editors encourage researchers to contact them if they encounter difficulties in obtaining data from articles published in PLOS journals. If restrictions on access to data come to light after publication, we reserve the right to post a correction, to contact the authors' institutions and funders, or in extreme cases to retract the publication.

    So all you “research parasites” out there4: you can request the data. I thought this modest proposal would create a brouhaha until I saw a 2014 press release announcing the World's Largest Database of Functional Brain Scans Produces New Insights to Help Better Diagnose and Treat Mental Health Issues:
    With a generous grant from the Seeds Foundation [a Christian philanthropic organization] in Hong Kong, Dr. Amen and his research team led by neuroscientist Kristen Willeumier, PhD, have turned the de-identified scans and clinical information into a searchable database that is shared with other researchers around the world.

    In the last two years, Amen and colleagues have presented 20 posters at the National Academy of Neuropsychology. The PR continues:
    The magnitude and clinical significance of the Amen Clinics database – being the world's largest SPECT imaging database having such volume and breadth of data from patients 9 months old to 101 years of age – makes it a treasure trove for researchers to help advance and revolutionize the practice of psychiatry.

    Does this mean that Dr. Amen will grant you access to the PLOS ONE dataset (or to the entire Amen Clinics database) if you ask nicely? If anyone tries to do this, please leave a comment.


    1 The other authors included Dr. Andrew “Glossolalia” Newberg and Dr. Theodore “Neuro-Luminance Synaptic Space” Henderson.

    2 Methods:
    To account for outliers, T-score derived ROI count measurements were derived using trimmed means [91] that are calculated using all scores within the 98% confidence interval (-2.58 < Z < 2.58). The ROI mean for each subject and the trimmed mean for the sample are used to calculate T with the following formula: T = 10*((subject ROI_mean - trimmed regional_avg)/trimmed regional_stdev)+50.
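    As I read the quoted formula, the computation looks something like this (a rough sketch; the variable names and interpretation are mine, not the authors'):

```python
# Rough sketch of the quoted T-score method: compute a trimmed mean/SD
# (keeping only values with |Z| < 2.58, ~98% of a normal distribution),
# then express each subject's ROI mean as a T score centered on 50.
def trimmed_stats(values):
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    kept = [v for v in values if abs((v - mean) / sd) < 2.58]
    t_mean = sum(kept) / len(kept)
    t_sd = (sum((v - t_mean) ** 2 for v in kept) / len(kept)) ** 0.5
    return t_mean, t_sd

def roi_t_score(subject_roi_mean, sample_values):
    # T = 10 * ((subject ROI mean - trimmed mean) / trimmed SD) + 50
    t_mean, t_sd = trimmed_stats(sample_values)
    return 10 * (subject_roi_mean - t_mean) / t_sd + 50
```

    On this reading, a subject at the trimmed sample mean scores T = 50, and each trimmed standard deviation shifts the score by 10 points.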
    3 Results from the less pristine Group 2 were not impressive at all, I must say. Group 2 had TBI (n=7,505), PTSD (n=1,077), or both (n=1,017) compared to n=11,147 patients without either (these were not clean controls as in Group 1). Given the massive number of subjects, the results were clinically useless, for the most part (see Table 6).

    4 A brand new editorial in NEJM by Longo and Drazen (who decry “research parasites”) is causing a twitterstorm with the hashtags #researchparasites and #IAmAResearchParasite.


    Adinoff B, Devous M. (2010a). Scientifically unfounded claims in diagnosing and treating patients. Am J Psychiatry 167(5):598.

    Adinoff B, Devous M. (2010b). Response to Amen letter. Am J Psychiatry 167(9):1125-1126.

    Amen, D., Raji, C., Willeumier, K., Taylor, D., Tarzwell, R., Newberg, A., & Henderson, T. (2015). Functional Neuroimaging Distinguishes Posttraumatic Stress Disorder from Traumatic Brain Injury in Focused and Large Community Datasets PLOS ONE, 10 (7) DOI: 10.1371/journal.pone.0129659

    Chancellor B, Chatterjee A. (2011). Brain branding: When neuroscience and commerce collide. AJOB Neuroscience 2(4): 18-27.

    Farah MJ. (2009). A picture is worth a thousand dollars. J Cogn Neurosci. 21(4):623-4.


    Today, The Neurocritic celebrates ten years as a blog. Given the ongoing use of a pseudonym, how should I commemorate the occasion?

    1. Should I finally update my blog template? (“Hey, 2004 wants their Blogger template back”).

    2. Should I throw a party? Popular London-based blogs Mind Hacks and BPS Research Digest held big public bashes in November 2014 and December 2015, respectively. My audience is only a fraction of theirs, however.  I doubt a local gathering of fans would fill more than a broom closet.

    3. How about a Happy Hour, where I privately invite social media folks who live nearby? I know where many of you live, but not vice versa.

    4. Or I could publicly announce the location of an informal gathering or night on the town with an open invitation to readers. In either of those scenarios, you'd get to meet me in person. Other pseudonymous bloggers appear in public all the time, why shouldn't I?

    5. Another idea was inspired by the cooking competition show Top Chef, which is celebrating its 10th Anniversary this season. In the most recent elimination challenge, the contestants were asked to recall what they were like 10 years ago. The goal was to prepare a dish that represents themselves, professionally and emotionally, at that stage of their lives.
    Top Chef 13: The chefs must create a dish that tells the story of who they were 10 years ago.
    This was not a pleasant experience for some of the contestants. Chef Isaac, from New Orleans, had to remember the devastation after Hurricane Katrina. He prepared duck gumbo with roasted jalapeno andouille sausage, crispy rice cake, and duck cracklings, the type of dish he made for large numbers of displaced people 10 years ago.

    Front runner Chef Kwame was quite upset by recalling his estranged relationship with his father. He made jerk broccoli with corn bread pudding and smokey blue cheese as an homage to his Jamaican father. This wasn't a wise decision, however. He ended up at the bottom.

    I thought about how I might write a post based on a similar theme: to tell the story of who I was 10 years ago and why I started to blog. I remembered some of the major things in my life at the time, and decided it would be too personal. For 10 years, I've avoided revealing anything about myself.

    “I tried my best to stay under the radar and hoped that no one would think of me as a real person,” I said two years ago.

    Why did I decide to start a neuroscience blog?

    It was out of sheer frustration. I was facing some rejection of my own work, and felt I didn't have much of a voice in the neuroscience community. I was annoyed by flawed journal articles and overblown press coverage, and decided that blogging would be a cathartic outlet for my complaints. I didn't expect that many people would actually read it, but at least writing might make me feel a bit better.

    6. I could do a retrospective of my most popular and commented-on posts, but that would be boring. Nobody cares, no one would read it.

    7. Perhaps a look back at how the science blogosphere has evolved would have broader appeal? Or not. I wrote an opinion piece in 2013 during a time of #scicomm upheaval, that was enough. Although lifestyle pieces on the rise of social media and the decline of blogs are ever-popular...

    8. Should I write a personal reflection on the greatest advances in Human Brain Imaging, Cognitive Neuroscience, and Psychopharmacology since 2006? Such a piece would be time consuming, and needs no special ties to a 10 year blogiversary celebration. Any specific requests for this type of post?

    9. Or I could mention other neuro/psych blogs that have been around for 8-10 years, like Neurophilosophy (Mo Costandi soon celebrates his 10th), BPS Research Digest, SciCurious, Neuroskeptic, Neuron Culture (now here), Providentia (now nine), Addiction Inbox, Talking Brains, NeuroDojo (established in 2002), BrainBlog, Deric's MindBlog, and of course Shrink Rap. I could also recognize some influential legacy blogs of the era, including Neurofuture, Developing Intelligence, Mixing Memory, Cognitive Daily, and Omni Brain. Finally, I could credit major influences like Bad Neuro-Journalism (which dates back to 1998) and Mind Hacks.

    10. Finally, I may announce a new occasional feature in the coming days or weeks.

    Thank you for reading!

    More Navel Gazing

    Eight Years of Neurocriticism

    The Decline of Neurocriticism

  • 01/31/16--23:56: Was I Wrong?

  • In honor of The Neurocritic's 10th anniversary, I'd like to announce a new occasional feature:

    Was I Wrong?

    In science, as in life, we learn from our mistakes. We can't move forward if we don't admit we were wrong and revise our entrenched theory (or tentative hypothesis) when faced with contradictory evidence. Likewise, it's possible that some of the critiques in this blog are no longer valid because additional evidence shows that the authors were correct. And vindicated. At least for now...

    I've been collecting possible instances of this phenomenon for months, and I'll preview two of these today.

    (1) In November 2015, I said that Obesity Is Not Like Being "Addicted to Food". Drugs of abuse are consistently associated with decreases in D2 dopamine receptors, but D2 receptor binding in obese women is not different from that in lean participants (Karlsson et al., 2015). Conversely, μ-opioid receptor (MOR) binding is reduced, which supports lowered hedonic processing. After the women had bariatric surgery, MOR returned to control values, while the unaltered D2 receptors stayed the same.

    However, a recent study in mice “points to a causal link between striatal dopamine signaling and the outcomes of bariatric interventions” (Han et al., 2016). How relevant is this new finding for clinical studies in humans?

    (2) In another post, I poo-pooed the notion that there is One Brain Network for All Mental Illness. However, a subsequent paper in Molecular Psychiatry claimed that common psychiatric disorders share the same genetic origin (Pettersson et al., 2015). If so, could this result in common brain alterations across disorders?

    In the future, I'll take a closer look at these and other examples to see if I should revise my opinions.

  • 02/21/16--22:47: The Brain at Rest

  • As you might have gathered, my brain is taking a rest from blogging after the excitement of The Neurocritic's tenth anniversary. Regular blogging will resume shortly.

    Thank you for your patience.

    Fig. 1 (Buckner, 2013). The brain's default network. The default network was discovered serendipitously when experimenters using neuroimaging began examining brain regions active in the passive control conditions of their experiments. The image shows brain regions more active in passive tasks as contrasted to a wide range of simple, active task conditions.

    Dialogues Clin Neurosci. 2013 Sep; 15(3): 351–358.

  • 03/07/16--03:20: Writing-Induced Fugue State

  • Who is this, wandering around the crowded street, afraid of everything, trusting no one?

    “There must be something wrong, somewhere.”
    But maybe I’m safer since I look disheveled.

    Who are these people? Where is this place?
    Did I write that? When did that happen? I don’t remember.

    I can’t stop writing. I can’t stop walking, either, which is a problem because it’s hard to write and walk at the same time.

    In the early 1940s, the Austrian psychiatrist Erwin Stengel wrote a pair of papers on fugue states, a type of dissociative disorder involving loss of personal identity and aimless wandering (Stengel, 1941):
    THE peculiar condition designated “fugue state,” of which the main symptom is compulsive wandering, has puzzled psychiatrists since it was first described. Nothing is known of the aetiology of this well-defined condition. Fugue states occur in epileptics, hysterics, and certain psychopaths. Bleuler has described their occurrence in schizophrenia, and they have been recorded in cases of general paralysis and of altered personality due to brain tumour.  ...  Kraepelin recognized that it was impossible to distinguish between the states of compulsive wandering associated with various mental disorders. Janet tried to distinguish between hysterical and epileptic fugues by pointing out that short fugues are more likely to be epileptic than hysterical.  

    He was disturbed by inaccurate use of the term, which was widespread (Stengel, 1943):
    ...the following conditions have been described as fugues: States of wandering, in accordance with the classical conception; states of double personality; all kinds of transitory abnormal behaviour of functional origin; hysterical loss of consciousness and of memory; twilight states; confusional states of hysterical nature; delirious states in schizophrenia. The tendency to call transient states of altered consciousness fugues, irrespective of the behaviour of the patient, is obvious. This is a most unsatisfactory state of affairs. 

    Stengel presented dozens of cases in these papers and was obsessed with finding common etiological factors, no matter what the underlying medical condition (e.g., epilepsy, “hysteria”, schizophrenia):
    The intimate similarity of fugue states associated with different mental disorders suggests that there must be aetiological factors common to all. However, no attempt has been made hitherto to ascertain such factors. I have been engaged in investigations concerning this problem for more than eight years...
    ...and (Stengel, 1943):
    Clinical studies carried out over many years have convinced me that there is no justification in differentiating between hysterical and epileptic wandering states, as the behaviour of the patients and the majority of the etiological factors are fundamentally the same in all fugues with the impulse to wander (Stengel, 1939, 1941).

    Since Stengel was trained as a psychoanalyst and considered Freud as a mentor, you might guess the common etiology:
    This was a disturbance of the environment of child life. A serious disturbance in the child-parent relationship, usually of such a nature that the relationship to one or both parents was either completely lacking or only partially developed, had occurred in nearly every case.

    Beyond the mommy/daddy issues, symptoms of severe depression (suicide attempts, failure to eat, lack of hygiene) and/or mania (elation, hypersexuality) were commonplace. Here's one especially tragic example:
    CASE 9. M. E , female, born 1906. The patient was normal until her twenty first year. At that time she suddenly became unstable and wanted to live apart from her mother, with whom she had been happy hitherto. She went to Paris, where she found employment as a secretary, but after some months she returned home again. When she was 22 she experienced for the first time an urge to wander, which reappeared subsequently two or three times every year. For no adequate reason, sometimes after an insignificant quarrel, she left home and wandered about for some days. During these states she was not fully conscious, slept little, and neglected herself. When normal consciousness returned, after three or four days, she found herself in the country far away from home. These states were followed by profound depression, lasting for several weeks, when the patient indulged in self-reproaches, ate very little, lost weight, and could not work. ... The patient was a typical daydreamer. In her daydreams a fantasy of a man disappointed in love committing suicide often appeared. (Her father had committed suicide.) ... The patient, who was of unusual intelligence, suffered very much from her abnormal states, which appeared at intervals of four to five months, and were always followed by melancholic depression. In one of these depressions she committed suicide by poisoning.

    Period Fugue

    Stengel (1941) asserted that the majority of his female patients started their wandering premenstrually, but his definition of what this meant was kind of loose (and meaningless): “usually appear before menstruation”, “usually just before menstruation”, “usually commences shortly before her menstrual period”, “at the onset of menstruation”, “about the time of menstruation”.

    He had no explanation for this, other than the implication that it's an unstable lady thing. One particularly fun case (Case 14) was a young woman with a previous bout of encephalitis lethargica. But it was determined that her menstrual period and an Oedipus Complex drove her to wander, not her illness.

    The report for Case 35 (Miss May S. M, aged 18, member of the women's military service) was accompanied by a four page excerpt from her diary, which is illuminating for what it tells us about bipolar disorder (but fugue, not so much):
    1940.  12.1: Had a drink, sang all the way home. 13.1: The matinee went off well. Feeling so horribly sad, a terribly empty feeling, felt like crying my heart out. Home is like the end of the world. 21.1: Tried to commit suicide. Instead wrote to G. telling him to give me some ideas how to get to America. Feeling just frightful, feel dead. 27.1: No feelings at all. 30.1: Have a mad desire to go really common, lipstick, scarlet nails and with as little clothes as possible.

    Modern conceptions of fugue states (including dissociative amnesia) focus on trauma, memory systems, and underlying neurobiological causes, instead of dysfunctional child-parent relationships (MacDonald & MacDonald, 2009).

    Who is this? How did I end up here?

    You mean there’s a world outside my head, beyond the computer, exceeding all page limits and formatting errors?

    Writing-Induced Fugue State

    ADDENDUM (March 7 2016): I should clarify that in DSM-5, dissociative fugue no longer has its own category. Now it's a subtype of dissociative amnesia (DSM-5 diagnostic code 300.12):

    Sub-Specifier: Dissociative Amnesia with dissociative fugue (300.13)

    This occurs when an individual travels or wanders, either in a seemingly purposeful or bewildered fashion, without knowing who they are. Dissociative fugue involves amnesia of a person’s entire identity or for other important autobiographical information.

    Another salient difference from Stengel's day is that the fugue state must not be due to a general medical condition, like temporal lobe epilepsy.


    Stengel, E. (1941). On the Aetiology of the Fugue States. The British Journal of Psychiatry, 87(369), 572-599. DOI: 10.1192/bjp.87.369.572

    Stengel, E. (1943). Further Studies on Pathological Wandering (Fugues with the Impulse to Wander). The British Journal of Psychiatry, 89(375), 224-241. DOI: 10.1192/bjp.89.375.224


    Our bodily sense of self contributes to our personal feelings of awareness as a conscious being. How we see our bodies and move through space and feel touched by loved ones are integral parts of our identity. What happens when this sense of self breaks down? One form of dissolution is Depersonalization Disorder (DPD).1 Individuals with DPD feel estranged or disconnected from themselves, as if their bodies belong to someone else, and “they” are merely a detached observer. Or the self feels absent entirely. Other symptoms of depersonalization include emotional blunting, out-of-body experiences, and autoscopy.

    Autoscopy for dummies - Antonin De Bemels (cc licence)

    Transient symptoms of depersonalization can occur due to stress, anxiety, sleep deprivation, or drugs such as ketamine (a dissociative anesthetic) and hallucinogens (e.g., LSD, psilocybin). These experiences are much more common than the official diagnosis of DPD, which occurs in only 1-2% of the population.

    Research by Olaf Blanke and colleagues (reviewed in Blanke et al., 2015) has tied bodily self-consciousness to the integration of multi-sensory signals in fronto-parietal and temporo-parietal regions of the brain.

    The fragmentation or loss of an embodied self raises philosophically profound questions. Although the idea of “mind uploading” is preposterous in my view (whether via whole brain emulation or cryonics), proponents must seriously ask whether the uploaded consciousness will in any way resemble the living person from whom it arose.2 “Minds are not disembodied logical reasoning devices” (according to Andy Clark).  And...
    Increasing evidence suggests that the basic foundations of the self lie in the brain systems that represent the body (Lenggenhager et al., 2012).

    Lenggenhager et al. asked whether the loss of sensorimotor function alters body ownership and the sense of self. Persons with spinal cord injuries scored higher on Cambridge Depersonalization Scale (CDS) items such as “I have to touch myself to make sure that I have a body or a real existence.” This suggests that disconnecting the brain from somatosensory input can change phenomenological aspects of self-consciousness.

    The Stranger in the Mirror

    Patients with depersonalization not only feel a change in perception concerning the outside world, but they also have clear-cut changes concerning their own body.  ...  The patient sees his face in the mirror changed, rigid and distorted. His own voice seems strange and unfamiliar to him.  ...  It is in this respect especially remarkable that the estrangement concerning the outside world is often an estrangement in the optic sphere (Schilder, 1935, p. 139).

    Depersonalization can involve perceptual distortions of bodily experience in different sensory modalities (e.g., vision, hearing, touch, and pain). Recent research has examined interactions between visual and somatosensory representations of self in the tactile mirroring paradigm (also called visual remapping of touch). Here, the participant views images of a person being touched (or not) while they themselves are touched. Tactile perception is enhanced by simultaneously receiving and observing the same stimulation, especially when the image is of oneself.

    Are the symptoms of depersonalization associated with reduced or absent responses in the tactile mirroring paradigm? If so, at what stage of processing (early or late) does this occur? A new study recorded EEG to look at somatosensory evoked potential (SEP) responses to tactile stimuli during mirroring (Adler et al., 2016). The participants scored high (n=14) or low (n=13) on the CDS.

    One SEP of interest was the P45, which occurs shortly (25-50 msec) after tactile stimulation. Although the spatial resolution of EEG does not allow firm conclusions about the neural generators, we know from invasive studies in epilepsy patients and animals that P45 originates in the primary somatosensory cortex (S1).
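    As an aside on method: a component like P45 is typically quantified by averaging the single-trial epochs and taking the mean amplitude in its latency window (25-50 msec here). Below is a minimal sketch with synthetic data (the sampling rate, trial counts, and effect size are invented for illustration; this is not the authors' pipeline):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1000                                       # sampling rate in Hz (assumed)
    times = np.arange(-0.1, 0.3, 1 / fs)            # epoch: -100 to 300 ms
    p45_win = (times >= 0.025) & (times <= 0.050)   # P45 window, 25-50 ms

    def make_epochs(n_trials, p45_amp):
        """Synthetic (n_trials, n_samples) epochs: noise plus an
        optional positivity added in the P45 window on every trial."""
        epochs = rng.normal(0.0, 1.0, size=(n_trials, times.size))
        epochs[:, p45_win] += p45_amp
        return epochs

    touch = make_epochs(60, p45_amp=0.8)      # trials viewing touch
    no_touch = make_epochs(60, p45_amp=0.0)   # trials viewing no-touch

    # Average across trials to get the SEP, then take the mean amplitude
    # in the component window -- the usual summary fed into group statistics.
    p45_touch = touch.mean(axis=0)[p45_win].mean()
    p45_no_touch = no_touch.mean(axis=0)[p45_win].mean()
    print(p45_touch > p45_no_touch)           # True: a "mirror touch" enhancement
    ```

    In real data, per-subject window means like these would then enter the touch × group comparisons reported in the paper.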

    When the participants viewed the other-face, P45 did not differ on touch vs. no-touch trials. But the later N80 component was enhanced for touch vs. no-touch, and the enhancement was similar for low and high depersonalization (DP) participants.

    Modified from Figs. 3 and 4 (Adler et al. 2016). SEPs in response to tactile stimuli for low DP (top) and high DP (bottom) while observing touch (thick line) or no-touch (thin line) on another person's face. SEPs are shown for components P45 and N80 at a cluster of central-parietal electrodes located over somatosensory cortex.

    Results were different when subjects viewed images of themselves. P45 was enhanced in the low DP group when viewing themselves being touched (vs. no-touch trials). However, those with high DP scores did not show this P45 enhancement.

    Modified from Figs. 3 and 4 (Adler et al. 2016). SEPs in response to tactile stimuli while observing touch (thick line) or no-touch (thin line) on the participant's own face. Red arrow indicates no self-mirror enhancement of P45.

    These results suggest a very early disturbance in sensory integration of the self in depersonalization:
    Measurable effects of mirroring for tactile events on the observer's own face may be absent over P45 because deficits in implicit self-related processing prevent the resulting visual enhancement of tactile processing from taking place in the context of self-related information. An alternative, or additional, explanation for the absence of P45 mirroring effects may be that seeing their own body causes depersonalised individuals to actively inhibit the processing of bodily stimulation via this pathway. This may cause feelings of disembodiment, and is akin to the suggestion that fronto-limbic inhibitory mechanisms acting on emotional processes cause the emotional numbing experienced in depersonalisation (Sierra and David, 2011).
    [Although I'm not so sure how much “active inhibition” can occur within 25 msec...]

    A later component (P200) did not show the expected effect in the high DP group, either. While these results are intriguing, we must keep in mind that this was a small study that requires replication.3

    Our Bodies, Our Selves

    Predictive coding models hypothesize that the anterior insular cortex (AIC) provides top-down input to somatosensory, autonomic, and visceral regions and plays a critical role in integrating exteroceptive and interoceptive signals (Seth et al., 2012; Allen et al., 2016). DPD is associated with “pathologically imprecise interoceptive predictive signals,” leading to a disruption of conscious presence (the subjective sense of reality of the world and of the self within the world). Here's the predictive coding model of conscious presence (Seth et al., 2012):
    It has been suggested that DPD is associated with a suppressive mechanism grounded in fronto-limbic brain regions, notably the AIC, which “manifests subjectively as emotional numbing, and disables the process by which perception and cognition become emotionally colored, giving rise to a subjective feeling of unreality” (Sierra and David, 2011)...

    In our model, DPD symptoms correspond to abnormal interoceptive predictive coding dynamics. ... the imprecise interoceptive prediction signals associated with DPD may result in hypoactivation of AIC since there is an excessive but undifferentiated suppression of error signals.

    In contrast, Adler et al. (2016) adopt a very different (Freudian) view:
    We speculate that the abnormalities related to depersonalisation may be based on a lack of mirroring interactions in early childhood. Several recent papers culminated in the idea that mirroring experiences in early life - the process of moving and being moved by others, both physically and affectively - give rise to our sense of bodily self... This bodily self forms the core of other forms of self-consciousness, from body ownership to the sense of agency and the ability to mentalise (e.g. Fonagy et al., 2007; Gallese & Sinigaglia, 2010; Markova and Legerstee, 2006; Stern, 1995). ...  Depersonalisation could be a potential consequence of such developmental experiences.

    I don't buy it... none of the participants in their study had a clinical diagnosis, and we know nothing of their early childhood. In the end, any model of chronic DPD still has to account for the transient phenomena of disconnection and unreality experienced by so many of us.

    Further Reading

    Feeling Mighty Unreal: Derealization in Kleine-Levin Syndrome

    Fright Week: The Stranger in the Mirror


    1 In DSM-5, the syndrome is known as Depersonalization/Derealization Disorder. I wrote about the symptoms of derealization (a subjective alteration in one's perception or experience of the outside world) in another blog post.

    2 For a discussion of the relevant issues, see The False Science of Cryonics and Silicon soul: The vain dream of electronic immortality.

    3 Given the requirements for specialized equipment and a specialized population, I don't imagine this study is on the Many Labs or Replication Project lists.


    Adler, J., Schabinger, N., Michal, M., Beutel, M., & Gillmeister, H. (2016). Is that me in the mirror? Depersonalisation modulates tactile mirroring mechanisms. Neuropsychologia. DOI: 10.1016/j.neuropsychologia.2016.03.009

    Allen M, Fardo F, Dietz MJ, Hillebrandt H, Friston KJ, Rees G, Roepstorff A. (2016). Anterior insula coordinates hierarchical processing of tactile mismatch responses. Neuroimage 127:34-43.

    Blanke O, Slater M, Serino A. (2015). Behavioral, Neural, and Computational Principles of Bodily Self-Consciousness. Neuron 88(1):145-66.

    Lenggenhager, B., Pazzaglia, M., Scivoletto, G., Molinari, M., & Aglioti, S. (2012). The Sense of the Body in Individuals with Spinal Cord Injury. PLoS ONE, 7(11). DOI: 10.1371/journal.pone.0050757

    Schilder, P. (1935). The Image and Appearance of the Human Body. London: Kagan, Paul, Trench, Trubner & Co.

    Seth AK, Suzuki K, Critchley HD. (2012). An interoceptive predictive coding model of conscious presence. Front Psychol. 2:395.

  • 03/27/16--17:39: Everybody Loves Dopamine

  • Dopamine is love. Dopamine is reward. Dopamine is addiction.

    Neuroscientists have a love/hate relationship with how this monoamine neurotransmitter is portrayed in the popular press.

    [The claim of vagus nerve-stimulating headphones is worth a post in its own right.]

    “You can fold your laundry, but you can’t fold your dopamine.”
    - James Cole Abrams, M.A. (in Contemplative Psychotherapy)

    The word dopamine has become a shorthand for positive reinforcement, whether it's from fantasy baseball or a TV show.

    But did you know that a subset of dopamine (DA) neurons originating in the ventral tegmental area (VTA) of the midbrain respond to obnoxious stimuli (like footshocks) and regulate aversive learning?

    Sometimes the press coverage of a snappy dopamine paper can be positive and (mostly) accurate, as was the case with a recent paper on risk aversion in rats (Zalocusky et al., 2016). This study showed that rats who like to “gamble” on getting a larger sucrose reward have a weaker neural response after “losing.” In this case, losing means choosing the risky lever, which dispenses a low amount of sucrose 75% of the time (but a high amount 25%), and getting a tiny reward. The gambling rats will continue to choose the risky lever after losing. Other rats are risk-averse, and will choose the “safe” lever with a constant reward after losing.
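    The logic of the task itself is easy to sketch. The sucrose volumes and the lose-shift rule below are hypothetical stand-ins (the paper's actual parameters differ); the point is that the risky lever can be matched in expected value to the safe one, so lever choice reflects risk attitude rather than average payoff:

    ```python
    import random

    random.seed(42)

    # Hypothetical sucrose volumes, chosen so the two levers have equal EV:
    SAFE = 50                          # safe lever: constant reward
    RISKY_LOW, RISKY_HIGH = 10, 170    # risky lever: low 75%, high 25%

    def pull_risky():
        """Risky lever: small reward 75% of the time, large reward 25%."""
        return RISKY_HIGH if random.random() < 0.25 else RISKY_LOW

    # Expected value of the risky lever: 0.75*10 + 0.25*170 = 50, same as SAFE.
    ev_risky = 0.75 * RISKY_LOW + 0.25 * RISKY_HIGH

    def session(n_trials, risk_averse):
        """A risk-averse rat switches to the safe lever after a 'loss'
        (a small outcome); a risk-preferring rat keeps choosing risky."""
        total, last_was_loss = 0, False
        for _ in range(n_trials):
            if risk_averse and last_was_loss:
                total += SAFE
                last_was_loss = False
            else:
                r = pull_risky()
                total += r
                last_was_loss = (r == RISKY_LOW)
        return total

    print(ev_risky)   # 50.0 -- matched to SAFE, so choice reflects risk attitude
    ```

    With matched expected values, a rat that keeps pressing the risky lever after a small outcome is expressing a risk preference, not maximizing reward.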

    This paper was a technical tour de force with 14 multi-panel figures.1 For starters, cells in the nucleus accumbens (a VTA target) expressing the D2 receptor (NAc D2R+ cells) were modified to express a calcium indicator that allowed the imaging of neural activity (via fiber photometry). Activity in NAc D2R+ cells was greater after loss, and during the decision phase of post-loss trials. And these two types of signals were dissociable.2 Then optogenetic methods were used to activate NAc D2R+ cells on post-loss trials in the risky rats. This manipulation caused them to choose the safer option.

    - click to enlarge -

    Noted science writer Ed Yong wrote an excellent piece about these findings in The Atlantic (Scientists Can Now Watch the Brain Evaluate Risk).

    Now, there's a boatload of data on the role of dopamine in reinforcement learning and computational models of reward prediction error (Schultz et al., 1997), and discussion about potential weaknesses in the DA and RPE model. So while a very impressive addition to the growing pantheon of laser-controlled rodents, the results of Zalocusky et al. (2016) aren't massively surprising.
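    The core of the Schultz et al. (1997) idea fits in a few lines: a cue's learned value V is updated by the error delta = r − V, and phasic dopamine is proposed to track delta. A minimal sketch (the learning rate and trial count are arbitrary):

    ```python
    # Rescorla-Wagner-style update for a single cue that is always
    # followed by reward: delta = r - V is the reward prediction error.
    alpha = 0.2            # learning rate (arbitrary)
    V = 0.0                # learned value of the cue
    deltas = []
    for trial in range(30):
        r = 1.0            # reward always delivered after the cue
        delta = r - V      # prediction error: large early, near zero once learned
        deltas.append(delta)
        V += alpha * delta

    print(round(deltas[0], 3), round(deltas[-1], 3))   # 1.0 0.002
    ```

    This is why a fully predicted reward evokes little phasic dopamine, and why an omitted reward (r = 0) produces a negative delta, the "dip" seen in the classic recordings.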

    More surprising are two recent papers in the highly sought-after population of humans implanted with electrodes for seizure monitoring or treatment of Parkinson's disease. I'll leave you with quotes from these papers as food for thought.

    1. Stenner et al. (2015). No unified reward prediction error in local field potentials from the human nucleus accumbens: evidence from epilepsy patients.
    Signals after outcome onset were correlated with RPE regressors in all subjects. However, further analysis revealed that these signals were better explained as outcome valence rather than RPE signals, with gamble gains and losses differing in the power of beta oscillations and in evoked response amplitudes. Taken together, our results do not support the idea that postsynaptic potentials in the Nacc represent a RPE that unifies outcome magnitude and prior value expectation.

    The next one is extremely impressive for combining deep brain stimulation with fast-scan cyclic voltammetry, a method that tracks dopamine fluctuations in the human brain!

    2. Kishida et al. (2016). Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward. 
    Dopamine fluctuations in the striatum fail to encode RPEs, as anticipated by a large body of work in model organisms. Instead, subsecond dopamine fluctuations encode an integration of RPEs with counterfactual prediction errors, the latter defined by how much better or worse the experienced outcome could have been. How dopamine fluctuations combine the actual and counterfactual is unknown. One possibility is that this process is the normal behavior of reward processing dopamine neurons, which previously had not been tested by experiments in animal models. Alternatively, this superposition of error terms may result from an additional yet-to-be-identified subclass of dopamine neurons.

    Further Reading

    As Addictive As Cupcakes – Mind Hacks (“If I read the phrase ‘as addictive as cocaine’ one more time I’m going to hit the bottle.”)

    Dopamine Neurons: Reward, Aversion, or Both? Scicurious

    Back to Basics 4: Dopamine! Scicurious (in fact, anything by Scicurious on dopamine)

    Why Dopamine Makes People More Impulsive– Sofia Deleniv at Knowing Neurons

    2-Minute Neuroscience: Reward System – video by Neuroscientifically Challenged


    1 For example:
    Because decision-period activity predicted risk-preferences and increased before safe choices, we sought to enhance the D2R+ neural signal by optogenetically activating these cells during the decision period. An unanticipated obstacle (D2SP-driven expression of channelrhodopsin-2 eYFP fusion protein (D2SP-ChR2(H134R)-eYFP) leading to protein aggregates in rat NAc neurons) was overcome by adding an endoplasmic reticulum (ER) export motif and trafficking signal29 (producing enhanced channelrhodopsin (eChR2); Methods), resulting in improved expression (Extended Data Fig. 7). In acute slice recordings, NAc cells expressing D2SP-eChR2(H134R)-eYFP tracked 20-Hz optical stimulation with action potentials (Fig. 4c).

    2 The human Reproducibility Project: Psychology brigade might be interested to see Pearson’s r2 = 0.86 in n = 6 rats.
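    As a back-of-envelope illustration of how little data that is: with n = 6, a Pearson correlation has only 4 degrees of freedom, and the t statistic behind that r² works out as follows (illustrative arithmetic only):

    ```python
    import math

    n, r2 = 6, 0.86                                # values from the footnote
    r = math.sqrt(r2)                              # Pearson's r
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r2)   # t with df = n - 2 = 4
    print(round(r, 3), round(t, 2))                # 0.927 4.96
    ```

    Nominally significant, but an estimate resting on just four degrees of freedom.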


    Kishida KT, Saez I, Lohrenz T, Witcher MR, Laxton AW, Tatter SB, White JP, Ellis TL, Phillips PE, Montague PR. (2016). Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward. Proc Natl Acad Sci 113(1):200-5.

    Schultz W, Dayan P, Montague PR. (1997). A neural substrate of prediction and reward. Science 275:1593-1599.

    Stenner MP, Rutledge RB, Zaehle T, Schmitt FC, Kopitzki K, Kowski AB, Voges J, Heinze HJ, Dolan RJ. (2015). No unified reward prediction error in local field potentials from the human nucleus accumbens: evidence from epilepsy patients. J Neurophysiol. 114(2):781-92.

    Zalocusky, K., Ramakrishnan, C., Lerner, T., Davidson, T., Knutson, B., & Deisseroth, K. (2016). Nucleus accumbens D2R cells signal prior outcomes and control risky decision-making. Nature. DOI: 10.1038/nature17400
