Channel: The Neurocritic

Depth Electrodes or Digital Biomarkers? The future of mood monitoring


Mood Monitoring via Invasive Brain Recordings or Smartphone Swipes

Which Would You Choose?


That's not really a fair question. The ultimate goal of invasive recordings is one of direct intervention, by delivering targeted brain stimulation as a treatment. But first you have to establish a firm relationship between neural activity and mood. Well, um, smartphone swipes (the way you interact with your phone) aim to establish a firm relationship between your “digital phenotype” and your mood. And then refer you to an app for a precision intervention. Or to your therapist / psychiatrist, who has to buy into use of the digital phenotyping software.

On the invasive side of the question, DARPA has invested heavily in deep brain stimulation (DBS) as a treatment for many disorders: Post-Traumatic Stress Disorder (PTSD), Major Depression, Borderline Personality Disorder, Generalized Anxiety Disorder, Traumatic Brain Injury, Substance Abuse/Addiction, Fibromyalgia/Chronic Pain, and memory loss. None of the work has led to effective treatments (yet?), but the DARPA research model has established large centers of collaborating scientists who record from the brains of epilepsy patients. And a lot of very impressive papers have emerged – some promising, others not so much.

One recent study (Kirkby et al., 2018) used machine learning to discover brain networks that encode variations in self-reported mood. The metric was coherence between amygdala and hippocampal activity in the β-frequency band (13–30 Hz). I can't do justice to their work in the context of this post, but I'll let the authors' graphical abstract speak for itself (and leave questions like, why did it only work in 13 of 21 participants? for later).
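To make the coherence metric concrete, here is a minimal sketch using synthetic data. This is not the authors' actual pipeline or parameters – the sampling rate, segment length, and simulated signals are all assumptions for illustration. Two noisy channels sharing a 20 Hz component show elevated coherence in the β band:

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical illustration (NOT Kirkby et al.'s actual analysis):
# estimate spectral coherence between two simulated intracranial
# channels and average it over the beta band (13-30 Hz).
fs = 500  # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)                  # 60 s of data
shared = np.sin(2 * np.pi * 20 * t)           # common 20 Hz rhythm
amygdala = shared + rng.standard_normal(t.size)
hippocampus = shared + rng.standard_normal(t.size)

# Welch-style coherence estimate with 2-second segments
f, Cxy = coherence(amygdala, hippocampus, fs=fs, nperseg=fs * 2)
beta = (f >= 13) & (f <= 30)
beta_coherence = Cxy[beta].mean()
print(f"mean beta-band coherence: {beta_coherence:.2f}")
```

Coherence is bounded between 0 and 1 per frequency bin; the shared oscillation drives it toward 1 at 20 Hz while the independent noise keeps other bins low.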




Mindstrong

Then along comes a startup tech company called Mindstrong, whose Co-Founder and President is none other than Dr. Thomas Insel, former director of NIMH, and one of the chief architects1 of the Research Domain Criteria (RDoC), “a research framework for new approaches to investigating mental disorders” that eschews the DSM-5 diagnostic bible. The Appendix chronicles the timeline of Dr. Insel's evolution from “mindless” RDoC champion to “brainless” wearables/smartphone tech proselytizer.2


From Wired:
. . .

At Mindstrong, one of the first tests of the [“digital phenotype”] concept will be a study of how 600 people use their mobile phones, attempting to correlate keyboard use patterns with outcomes like depression, psychosis, or mania. “The complication is developing the behavioral features that are actionable and informative,” Insel says. “Looking at speed, looking at latency or keystrokes, looking at error—all of those kinds of things could prove to be interesting.”
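The features Insel lists (speed, latency, error) reduce to simple statistics over touch timestamps. For illustration only, here is a toy sketch of that kind of content-free timing feature; the function name, feature names, and timestamps are invented, not Mindstrong's actual code:

```python
from statistics import mean, stdev

# Hypothetical sketch of content-free keystroke dynamics:
# only WHEN keys were pressed, never WHAT was typed.
def keystroke_features(timestamps_ms):
    """Summarize typing dynamics from a list of key-down times (ms)."""
    latencies = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return {
        "mean_latency_ms": mean(latencies),     # typing speed proxy
        "latency_sd_ms": stdev(latencies),      # variability / hesitation
        "taps_per_second": 1000 * len(latencies)
                           / (timestamps_ms[-1] - timestamps_ms[0]),
    }

taps = [0, 180, 350, 900, 1060, 1230]  # made-up timestamps (ms)
print(keystroke_features(taps))
```

The research program then correlates such features with clinical outcomes; the hard part, as Insel concedes, is showing that any of them are "actionable and informative."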

Curiously, in their list of digital biomarkers, they differentiate between executive function and cognitive control — although their definitions were overlapping (see my previous post, Is executive function different from cognitive control? The results of an informal poll).
Mindstrong tracks five digital biomarkers associated with brain health: Executive function, cognitive control, working memory, processing speed, and emotional valence. These biomarkers are generated from patterns in smartphone use such as swipes, taps, and other touchscreen activities, and are scientifically validated to provide measurements of cognition and mood.

Whither RDoC?

NIMH established a mandate requiring that all clinical trials should postulate a neural circuit “mechanism” that would be responsible for any efficacious response. Thus, clinical investigators were forced to make up simplistic biological explanations for their psychosocial interventions:

“I hypothesize that the circuit mechanism for my elaborate new psychotherapy protocol which eliminates fear memories (e.g., specific phobias, PTSD) is implemented by down-regulation of amygdala activity while participants view pictures of fearful faces using the Hariri task.”



[a fictitious example]


I'm including a substantial portion of the February 27, 2014 text here because it's important.
NIMH is making three important changes to how we will fund clinical trials.

First, future trials will follow an experimental medicine approach in which interventions serve not only as potential treatments, but as probes to generate information about the mechanisms underlying a disorder. Trial proposals will need to identify a target or mediator; a positive result will require not only that an intervention ameliorated a symptom, but that it had a demonstrable effect on a target, such as a neural pathway implicated in the disorder or a key cognitive operation. While experimental medicine has become an accepted approach for drug development, we believe it is equally important for the development of psychosocial treatments. It offers us a way to understand the mechanisms by which these treatments are leading to clinical change.

OK, so the target could be a key cognitive operation. But let's say your intervention is a Housing First initiative in homeless individuals with severe mental illness and co-morbid substance abuse. Your manipulation is to compare quality of life outcomes for Housing First with Assertive Community Treatment vs. Congregate Housing with on-site supports vs. treatment as usual. What is the key cognitive operation here? Fortunately, this project was funded by the Canadian government and did not need to compete for NIMH funding.

I think my ultimate issue is one of fundamental fairness. Is it OK to skate away from the wreckage and profit by making millions of dollars? From Wired:
“I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs—I think $20 billion—I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness,” Insel says. “I hold myself accountable for that.”

But how? You've admitted to spending $20 billion on cool projects and cool papers and cool scientists who do basic research. This has great value. But the big mistakes were an unrealistic promise of treatments and cures, and the charade of forcing scientists who study C. elegans to explain how they're going to cure psychiatric disorders.


Footnotes

1 Dr. Bruce Cuthbert was especially instrumental, as well as a large panel of experts. But since this post is about digital biomarkers, the former director of NIMH is the focus here.

2 The Insel archives in the late Dr. Mickey Nardo's prolific blog, 1boringoldman.com, are a must-read. I also wish the late Dr. Barney Carroll were still here to issue his trenchant remarks and trademark witticisms.


Reference

Kirkby LA, Luongo FJ, Lee MB, Nahum M, Van Vleet TM, Rao VR, Dawes HE, Chang EF, Sohal VS. (2018). An Amygdala-Hippocampus Subnetwork that Encodes Variation in Human Mood. Cell 175(6):1688-1700.e14.


Additional Reading - Digital Phenotyping

Jain SH, Powers BW, Hawkins JB, Brownstein JS. (2015). The digital phenotype. Nat Biotechnol. 33(5):462-3. [usage of the term here means data mining of content such as Twitter and Google searches, rather than physical interactions with a smartphone]

Insel TR. (2017). Digital Phenotyping: Technology for a New Science of Behavior. JAMA 318(13):1215-1216. [smartphone swipes, NOT content: “Who would have believed that patterns of typing and scrolling could reveal individual fingerprints of performance, capturing our neurocognitive function continuously in the real world?”]

Insel TR. (2017). Join the disruptors of health science. Nature 551(7678):23-26. [conversion to the SF Bay Area/Silicon Valley mindset]. Key quote:
“But what struck me most on moving from the Beltway to the Bay Area was that, unlike pharma and biotech, tech companies enter biomedical and health research with a pedigree of software research and development, and a confident, even cocky, spirit of disruption and innovation. They have grown by learning how to move quickly from concept to execution. Software development may generate a minimally viable product within weeks. That product can be refined through ‘dogfooding’ (testing it on a few hundred employees, families or friends) in a month, then released to thousands of users for rapid iterative improvement.”
[is ‘dogfooding’ a real term?? if that's how you're going to test technology designed to help people with severe mental illnesses — without the input of the consumers themselves — YOU WILL BE DOOMED TO FAILURE.]

Philip P, De-Sevin E, Micoulaud-Franchi JA. (2018). Technology as a Tool for Mental Disorders. JAMA 319(5):504.

Insel TR. (2018). Technology as a Tool for Mental Disorders-Reply. JAMA 319(5):504.

Insel TR. (2018). Digital phenotyping: a global tool for psychiatry. World Psychiatry 17(3):276-277.


Appendix - a selective history of RDoC publications























Post-NIMH Transition (articles start appearing less than a month later) 









#CNS2019



It's March, an odd-numbered year, must mean.... it's time for the Cognitive Neuroscience Society Annual Meeting to be in San Francisco!

I only started looking at the schedule yesterday and noticed the now-obligatory David Poeppel session on BIG stuff1 on Saturday (March 23, 2019):

Special Session – The Relation Between Psychology and Neuroscience, David Poeppel, Organizer, Grand Ballroom

Then I clicked on the link and saw a rare occurrence: an all-female slate of speakers!



Whether we study single cells, measure populations of neurons, characterize anatomical structure, or quantify BOLD, whether we collect reaction times or construct computational models, it is a presupposition of our field that we strive to bridge the neurosciences and the psychological/cognitive sciences. Our tools provide us with ever-greater spatial resolution and ideal temporal resolution. But do we have the right conceptual resolution? This conversation focuses on how we are doing with this challenge, whether we have examples of successful linking hypotheses between psychological and neurobiological accounts, whether we are missing important ideas or tools, and where we might go or should go, if all goes well. The conversation, in other words, examines the very core of cognitive neuroscience.

Also on the schedule tomorrow is the public lecture and keynote address by Matt Walker, Why Sleep?
Can you recall the last time you woke up without an alarm clock feeling refreshed, not needing caffeine? If the answer is “no,” you are not alone. Two-thirds of adults fail to obtain the recommended 8 hours of nightly sleep. I doubt you are surprised by the answer to this question, but you may be surprised by the consequences. This talk will describe not only the good things that happen when you get sleep, but the alarmingly bad things that happen when you don’t get enough. The presentation will focus on the brain (learning, memory, aging, Alzheimer’s disease, education), but further highlight disease-related consequences in the body (cancer, diabetes, cardiovascular disease). The take-home: sleep is the single most effective thing we can do to reset the health of our brains and bodies.

Why sleep, indeed.

Meanwhile, Foals are playing tonight at The Fox Theater in Oakland. Tickets are still available.






ADDENDUM: The sequel was finally posted on March 31: An Amicable Discussion About Psychology and Neuroscience.


Footnote

1 See these posts:

The Big Ideas in Cognitive Neuroscience, Explained #CNS2017

Big Theory, Big Data, and Big Worries in Cognitive Neuroscience #CNS2018

An Amicable Discussion About Psychology and Neuroscience


People like conflict (the interpersonal kind, not BLUE).1 Or at least, they like scientific debate at conferences. Panel discussions that are too harmonious can be divisive in their own way. Some people will say, “well, now THAT wasn't very controversial.” But as I mentioned last time, one highlight of the 2019 Cognitive Neuroscience Society Annual Meeting was a Symposium organized by Dr. David Poeppel.2

Special Session – The Relation Between Psychology and Neuroscience, David Poeppel, Organizer, Grand Ballroom
Whether we study single cells, measure populations of neurons, characterize anatomical structure, or quantify BOLD, whether we collect reaction times or construct computational models, it is a presupposition of our field that we strive to bridge the neurosciences and the psychological/cognitive sciences. Our tools provide us with ever-greater spatial resolution and ideal temporal resolution. But do we have the right conceptual resolution? This conversation focuses on how we are doing with this challenge, whether we have examples of successful linking hypotheses between psychological and neurobiological accounts, whether we are missing important ideas or tools, and where we might go or should go, if all goes well. The conversation, in other words, examines the very core of cognitive neuroscience.

Conversation. Not debate. So first, let me summarize the conversation. Then I'll get back to the merits (demerits) of debate. In brief, many of the BIG IDEAS motifs of 2017 were revisited...
  • David Marr and the importance of work at all levels of analysis 
  • What are the “laws” that bridge these levels of analysis?
  • “Emergent properties” – a unique higher-level entity (e.g., consciousness, a flock of birds) emerges from lower-level activity (e.g., patterns of neuronal firing, the flight of individual birds)... the whole is greater than the sum of its parts
  • Generative Models – formal models that make computational predictions
...with interspersed meta-commentary on replication, publishing, and Advice to Young Neuroscientists. Without further ado:

Dr. David Poeppel – Introductory Remarks that examined the very core of cognitive neuroscience (i.e., “we have to face the music”).
  • the conceptual basis of cognitive neuroscience shouldn't be correlation 
For example, fronto-parietal network connectivity (as determined by resting state fMRI) is associated with some cognitive function, but that doesn't mean it causes or explains the behavior (or internal thought). We all know this, and we all know that “we must want more!” But we haven't the vaguest idea of how to relate complex psychological constructs such as attention, volition, and emotion to ongoing biological processes involving calcium channels, dendrites, and glutamatergic synapses.
  • but what if the psychological and the biological are categorically dissimilar??
In their 2003 book, Philosophical Foundations of Neuroscience, Bennett and Hacker warned that cognitive neuroscientists make the cardinal error of “...commit[ting] the mereological fallacy, the tendency to ascribe to the brain psychological concepts that only make sense when ascribed to whole animals.”
“For the characteristic form of explanation in contemporary cognitive neuroscience consists in ascribing psychological attributes to the brain and its parts in order to explain the possession of psychological attributes and the exercise (and deficiencies in the exercise) of cognitive powers by human beings.” (p. 3)

On that optimistic note, the four panelists gave their introductory remarks.

(1) Dr. Lila Davachi asked, “what is the value of the work we do?” Uh, well, that's a difficult question. Are we improving society in some way? Adding to a collective body of knowledge that may (or may not) be the key to explaining behavior and curing disease? Although still difficult, Dr. Davachi posed an easier question, “what are your goals?” To describe behavior, predict behavior (correlation), explain behavior (causation), change behavior (manipulation)? But “what counts as an explanation?” I don't think anyone really answered that question. Instead she mentioned the recurring themes of levels of analysis (without invoking Marr by name), emergent properties (the flock of birds analogy), and bridging laws (that link levels of analysis). The correct level of analysis is/are the one(s) that advance your goals. But what to do about “level chauvinism” in contemporary neuroscience? This question was raised again and again.

(2) Dr. Jennifer Groh jumped right out of the gate with this motif. There are competing narratives in neuroscience we can call the electrode level (recording from neurons) vs. the neuroimaging level (recording large-scale brain activations or “network” interactions based on an indirect measure of neural activity). They make different assumptions about what is significant or worth studying. I found this interesting, since her lab is the only one that records from actual neurons. But there are ever more reductionist scientists who always throw stones at those above them. Neurobiologists (at the electrode level and below) are operating at ever more granular levels of detail, walking away from cognitive neuroscience entirely (who wants to be a dualist, anyway?). I knew exactly where she was going with this: the field is being driven by techniques, doing experiments merely because you can (cough — OPTOGENETICS — cough). Speaking for myself, however, the fact that neurobiologists can control mouse behavior by manipulating highly specific populations of cells raises the specter of insecurity... certain areas of research might not be considered “neuroscience” any more by the bulk of practitioners in the field (just attend the Society for Neuroscience annual meeting).

(3) Dr. Catherine Hartley continued with the recurring theme that we need both prediction and explanation to reach our ultimate goal of understanding behavior. Is a prediction system enough? No, we must know how the black box functions by studying “latent processes” such as representation and computation. But what if we're wrong about representations, I thought? The view of @PsychScientists immediately came to mind. Sorry to interrupt Dr. Hartley, but here's Golonka and Wilson in Ecological Representations:
Mainstream cognitive science and neuroscience both rely heavily on the notion of representation in order to explain the full range of our behavioral repertoire. The relevant feature of representation is its ability to designate (stand in for) spatially or temporally distant properties ... While representational theories are a potentially a powerful foundation for a good cognitive theory, problems such as grounding and system-detectable error remain unsolved. For these and other reasons, ecological explanations reject the need for representations and do not treat the nervous system as doing any mediating work. However, this has left us without a straight-forward vocabulary to engage with so-called 'representation-hungry' problems or the role of the nervous system in cognition.

They go on to invoke James J Gibson's ecological information functions. But I can already hear Dr. Poeppel's colleague @GregoryHickok and others on Twitter debating with @PsychScientists. Oh. Wait. Debate.

Returning to The Conversation that I so rudely interrupted, Dr. Hartley gave some excellent examples of theories that link psychology and neuroscience. The trichromatic theory of color vision – the finding that three independent channels convey color information – was based on psychophysics in the early-to-mid 1800s (Young–Helmholtz theory). This was over a century before the discovery of cones in the retina, which are sensitive to three different wavelengths. She also mentioned the more frequently used examples of Tolman's cognitive maps (which predated The Hippocampus as a Cognitive Map by 30 years) and error-driven reinforcement learning (Bush–Mosteller and Rescorla–Wagner, both of which predate knowledge of dopamine neurons). To generate good linking hypotheses in the present, we need to construct formal models that make quantitative predictions (generative models).
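The Rescorla–Wagner model is simple enough to state in a few lines, which is part of why it makes such a clean linking hypothesis: the prediction error it computes is the quantity dopamine neurons were later shown to signal. A minimal sketch (the learning rate and reward schedule here are arbitrary choices, not from any particular study):

```python
# Minimal Rescorla-Wagner / delta-rule update: the prediction error
# (reward minus current expectation) drives learning.
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Return the trial-by-trial expected value V for a reward sequence."""
    v = v0
    values = []
    for r in rewards:
        v += alpha * (r - v)   # V <- V + alpha * prediction error
        values.append(v)
    return values

# Reward delivered on every trial: expectation climbs toward 1,
# and the prediction error shrinks as learning proceeds.
values = rescorla_wagner([1] * 50)
print(round(values[0], 3), "->", round(values[-1], 3))
```

Generative models in the sense Dr. Hartley described are exactly this kind of thing: a formal rule that produces quantitative, testable trial-by-trial predictions.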

(4) Dr. Sharon Thompson-Schill gave a brief introduction with no slides, which is good because this post has gotten very long. For this reason, I won't cover the panel discussion and the Q&A period, which continued the same themes outlined above and expanded on “predictivism” (predictive chauvinism and data-driven neuroscience) and raised new points like the value (or not) of introspection in science. When the Cognitive Neuroscience Society updates their YouTube channel, I'll let you know. Another source is the excellent live tweeting of @VukovicNikola. But to wrap up, Dr. Thompson-Schill asked members of the audience whether they consider themselves psychologists or neuroscientists. Most identified as neuroscientists (which is a relative term, I think). Although more people will talk to you on a plane if you say you're a psychologist, “neuroscience is easy, psychology is hard,” a surprising take-home message.


Debating Debates

I've actually wanted to see more debating at the CNS meeting. For instance, the Society for the Neurobiology of Language (SNL) often features a lively debate at their conferences.3 Several examples are listed below.

2016:
Debate: The Consequences of Bilingualism for Cognitive and Neural Function
Ellen Bialystok & Manuel Carreiras

2014:
What counts as neurobiology of language – a debate
Steve Small, Angela Friederici

2013: Panel Discussions
The role of semantic information in reading aloud
Max Coltheart vs Mark Seidenberg

2012: Panel Discussions
What is the role of the insula in speech and language?
Nina F. Dronkers vs Julius Fridriksson


This one-on-one format has been very rare at CNS. Last year we saw a panel of four prominent neuroscientists address/debate...
Big Theory versus Big Data: What Will Solve the Big Problems in Cognitive Neuroscience?


Added-value entertainment was provided by Dr. Gary Marcus, which speaks to the issue of combative personalities dominating the scene.4


Gary Marcus talking over Jack Gallant. Eve Marder is out of the frame.
image by @CogNeuroNews


I'm old enough to remember the most volatile debate in CNS history, which was held (sadly) at the New York Marriott World Trade Center Hotel in 2001. Dr. Nancy Kanwisher and Dr. Isabel Gauthier debated whether face recognition (and activation of the fusiform face area) is a 'special' example of domain specificity (and perhaps an innate ability), or a manifestation of plasticity due to our exceptional expertise at recognizing faces:
A Face-Off on Brain Studies / How we recognize people and objects is a matter of debate
. . .

At the Cognitive Neuroscience Society meeting in Manhattan last week, a panel of scientists on both sides of the debate presented their arguments. On one side is Nancy Kanwisher of MIT, who first proposed that the fusiform gyrus was specifically designed to recognize faces–and faces alone–based on her findings using a magnetic resonance imaging device. Then, Isabel Gauthier, a neuroscientist at Vanderbilt, talked about her research, showing that the fusiform gyrus lights up when looking at many different kinds of objects people are skilled at recognizing.
Kudos to Newsday for keeping this article on their site after all these years.


Footnotes

1 This is the color-word Stroop task: name the font color, rather than read the word. BLUE elicits conflict between the overlearned response (reading the word “blue”) and the task requirement (saying “red”).

2 aka the now-obligatory David Poeppel session on BIG STUFF. See these posts:
3 Let me now get on my soapbox to exhort the conference organizers to keep better online archives — with stable URLs — so I don't have to hunt through archive.org to find links to past meetings.

4 Although this is really tangential, I'm reminded of the Democratic Party presidential contenders in the US. Who deserves more coverage, Beto O'Rourke or Elizabeth Warren? Bernie Sanders or Kamala Harris?

Does ketamine restore lost synapses? It may, but that doesn't explain its rapid clinical effects


Bravado SPRAVATO™ (esketamine)
© Janssen Pharmaceuticals, Inc. 2019.


Ketamine is the miracle drug that cures depression:
“Recent studies report what is arguably the most important discovery in half a century: the therapeutic agent ketamine that produces rapid (within hours) antidepressant actions in treatment-resistant depressed patients (4, 5). Notably, the rapid antidepressant actions of ketamine are associated with fast induction of synaptogenesis in rodents and reversal of the atrophy caused by chronic stress (6, 7).”

– Duman & Aghajanian (2012). Synaptic Dysfunction in Depression: Potential Therapeutic Targets. Science 338: 68-72.

Beware the risks of ketamine:
“While ketamine may be beneficial to some patients with mood disorders, it is important to consider the limitations of the available data and the potential risk associated with the drug when considering the treatment option.”

– Sanacora et al. (2017). A Consensus Statement on the Use of Ketamine in the Treatment of Mood Disorders. JAMA Psychiatry 74: 399-405.

Ketamine, dark and light:
Is ketamine a destructive club drug that damages the brain and bladder? With psychosis-like effects widely used as a model of schizophrenia? Or is ketamine an exciting new antidepressant, the “most important discovery in half a century”?

For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another...

– The Neurocritic (2015). On the Long Way Down: The Neurophenomenology of Ketamine

Confused?? You're not alone.


FDA Approval

The animal tranquilizer and club drug ketamine, now known as a “miraculous” cure for treatment-resistant depression, has been approved by the FDA in a nasal spray formulation. No more messy IV infusions at shady clinics.

Here's a key Twitter thread that marks the occasion:


How does it work?

A new paper in Science (Moda-Sava et al., 2019) touts the importance of spine formation and synaptogenesis (basically, the remodeling of synapses in microcircuits) in prefrontal cortex, a region important for the top-down control of behavior. Specifically, ketamine and its downstream actions are involved in the creation of new spines on dendrites, and in the formation of new synapses. But it turns out this is NOT linked to the rapid improvement in 'depressive' symptoms observed in a mouse model.



So I think we're still in the dark about why some humans can show immediate (albeit short-lived) relief from their unrelenting depression symptoms after ketamine infusion. Moda-Sava et al. say:
Ketamine’s acute effects on depression-related behavior and circuit function occur rapidly and precede the onset of spine formation, which in turn suggests that spine remodeling may be an activity-dependent adaptation to changes in circuit function (83, 88) and is consistent with theoretical models implicating synaptic homeostasis mechanisms in depression and the stress response (89, 90). Although not required for inducing ketamine’s effects acutely, these newly formed spines are critical for sustaining the antidepressant effect over time.

But the problem is, depressed humans require constant treatment with ketamine to maintain any semblance of an effective clinical response, because the beneficial effect is fleeting. If we accept the possibility that ketamine acts through the mTOR signalling pathway, in the long run detrimental effects on the brain (and non-brain systems) may occur (e.g., bladder damage, various cancers, psychosis, etc).

But let's stay isolated in our silos, with our heads in the sand.


Thanks to @o_ceifero for alerting me to this study.

Further Reading

Ketamine for Depression: Yay or Neigh?

Warning about Ketamine in the American Journal of Psychiatry

Chronic Ketamine for Depression: An Unethical Case Study?

still more on ketamine for depression

Update on Ketamine in Palliative Care Settings

Ketamine - Magic Antidepressant, or Expensive Illusion? - by Neuroskeptic

Fighting Depression with Special K - by Scicurious

On the Long Way Down: The Neurophenomenology of Ketamine


Reference

Moda-Sava RN, Murdock MH, Parekh PK, Fetcho RN, Huang BS, Huynh TN, Witztum J, Shaver DC, Rosenthal DL, Alway EJ, Lopez K, Meng Y, Nellissen L, Grosenick L, Milner TA, Deisseroth K, Bito H, Kasai H, Liston C. (2019). Sustained rescue of prefrontal circuit dysfunction by antidepressant-induced spine formation. Science 364(6436). pii: eaat8078.

The Paracetamol Papers


I have secretly obtained a large cache of files from Johnson & Johnson, makers of TYLENOL®, the ubiquitous pain relief medication (generic name: acetaminophen in North America, paracetamol elsewhere). The damaging information contained in these documents has been suppressed by the pharmaceutical giant, for reasons that will become obvious in a moment.1

After a massive upload of materials to Wikileaks, it can now be revealed that Tylenol not only...
...but along with the good comes the bad. Acetaminophen (paracetamol) also has ghastly negative effects that tear at the very fabric of society. These OTC tablets...

In a 2018 review of the literature, Ratner and colleagues warned:
“In many ways, the reviewed findings are alarming. Consumers assume that when they take an over-the-counter pain medication, it will relieve their physical symptoms, but they do not anticipate broader psychological effects.”

In the latest installment of this alarmist saga, we learn that acetaminophen blunts positive empathy, i.e. the capacity to appreciate and identify with the positive emotions of others (Mischkowski et al., 2019). I'll discuss those findings another time.

But now, let's evaluate the entire TYLENOL® oeuvre by taking a step back and examining the plausibility of the published claims. To summarize, one of the most common over-the-counter, non-narcotic, non-NSAID pain-relieving medications in existence supposedly alleviates the personal experience of hurt feelings and social pain and heartache (positive outcomes). At the same time, TYLENOL® blunts the phenomenological experience of positive emotion and diminishes empathy for other people's experiences, both good and bad (negative outcomes). Published articles have reported that many of these effects can be observed after ONE REGULAR DOSE of paracetamol. These findings are based on how undergraduates judge a series of hypothetical stories. One major problem (which is not specific to The Paracetamol Papers) concerns the ecological validity of laboratory tasks as measures of the cognitive and emotional constructs of interest. This issue is critical, but outside the main scope of our discussion today. More to the point, an experimental manipulation may cause a statistically significant shift in a variable of interest, but ultimately we have to decide whether a circumscribed finding in the lab has broader implications for society at large.


Why TYLENOL® ?

Another puzzling element is, why choose acetaminophen as the exclusive pain medication of interest? Its mechanisms of action for relieving fever, headache, and other pains are unclear. Thus, the authors don't have a specific, principled reason for choosing TYLENOL® over Advil (ibuprofen) or aspirin. Presumably, the effects should generalize, but that doesn't seem to be the case. For instance, ibuprofen actually Increases Social Pain in men.

The analgesic effects of acetaminophen are mediated by a complex series of cellular mechanisms (Mallet et al., 2017). One proposed mechanism involves descending serotonergic bulbospinal pathways from the brainstem to the spinal cord. This isn't exactly Prozac territory, so the analogy between Tylenol and SSRI antidepressants isn't apt. The capsaicin receptor TRPV1 and the Cav3.2 calcium channel might also be part of the action (Mallet et al., 2017). A recently recognized player is the CB1 cannabinoid receptor. AM404, a metabolite of acetaminophen, indirectly activates CB1 by inhibiting the breakdown and reuptake of anandamide, a naturally occurring cannabinoid in the brain (Mallet et al., 2017).



Speaking of cannabinoids, cannabidiol (CBD), the non-intoxicating cousin of THC, has a high profile now because of its soaring popularity for many ailments. Ironically, CBD has a very low affinity for CB1 and CB2 receptors and may act instead via serotonergic 5-HT1A receptors {PDF}, as a modulator of μ- and δ-opioid receptors, and as an antagonist and inverse agonist at several G protein-coupled receptors. Most CBD use seems to be in the non-therapeutic (placebo) range, because the effective dose for, let's say, anxiety is 10-20 times higher than the average commercial product. You'd have to eat 3-6 bags of cranberry gummies for 285-570 mg of CBD (close to the 300-600 mg recommended dose). Unfortunately, you would also ingest 15-30 mg of THC, which would be quite intoxicating.
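As a back-of-envelope check, the quoted ranges imply roughly 95 mg CBD and 5 mg THC per bag (my inference from the arithmetic above, not figures from any product label):

```python
# Rough dose arithmetic for the gummies example above.
# Per-bag contents are inferred from the quoted ranges
# (285-570 mg CBD and 15-30 mg THC over 3-6 bags), not label values.
CBD_PER_BAG_MG = 95
THC_PER_BAG_MG = 5

for bags in (3, 6):
    cbd = bags * CBD_PER_BAG_MG
    thc = bags * THC_PER_BAG_MG
    print(f"{bags} bags: {cbd} mg CBD, {thc} mg THC")
```

Running this reproduces the 285-570 mg CBD range, along with the unavoidable 15-30 mg of THC.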



Words Have Meanings

If acetaminophen were so effective in “mending broken hearts”, “easing heartaches”, and providing a “cure for a broken heart”, we would be a society of perpetually happy automatons, wiping away the suffering of breakup and divorce with a mere OTC tablet. We'd have Tylenol epidemics and Advil epidemics to rival the scourge of the present Opioid Epidemic.

Meanwhile, social and political discourse in the US has reached a new low. Ironically, the paracetamol “blissed-out” population is enraged because they can't identify with the feelings or opinions of the masses who are 'different' than they are. Somehow, I don't think it's from taking too much Tylenol. A large-scale global survey could put that thought to rest for good.




Footnotes

1 This is not true, of course, I was only kidding. All of the information presented here is publicly available in peer-reviewed journal articles and published press reports.

2 except for when it doesn’t – “In contrast, effects on perceived positivity of the described experiences or perceived pleasure in scenario protagonists were not significant” (Mischkowski et al., 2019).

3 Yes, I made this up too. It is entirely fictitious; no one has ever claimed this, to the best of my knowledge.


References

Mallet C, Eschalier A, Daulhac L. (2017). Paracetamol: update on its analgesic mechanism of action. In: Pain relief – From analgesics to alternative therapies.

Mischkowski D, Crocker J, Way BM. (2019). A Social Analgesic? Acetaminophen (Paracetamol) Reduces Positive Empathy. Front Psychol. 10:538.

The Secret Lives of Goats

Goats Galore (May 2019)


If you live in a drought-ridden, wildfire-prone area on the West Coast, you may see herds of goats chomping on dry grass and overgrown brush. This was initially surprising for many who live in urban areas, but it's become commonplace where I live. Announcements appear on local message boards, and families bring their children.


Goats Goats Goats (June 2017)


Goats are glamorous, and super popular on social media now (e.g. Instagram, more Instagram, and Twitter). Over 41 million people have watched Goats Yelling Like Humans - Super Cut Compilation on YouTube. We all know that goats have complex vocalizations, but very few of us know what they mean.





For the health and well-being of livestock, it's advantageous to understand the emotional states conveyed by vocalizations, postures, and other behaviors. A 2015 study measured the acoustic features of different goat calls, along with their associated behavioral and physiological responses. Twenty-two adult goats were put in four situations:
(1) control (neutral)
(2) anticipation of a food reward (positive)
(3) food-related frustration (negative)
(4) social isolation (negative)
Dr. Elodie Briefer and colleagues conducted the study at a goat sanctuary in Kent, UK (Buttercups Sanctuary for Goats). The caprine participants had lived at the sanctuary for at least two years and were fully habituated to humans. Heart rate and respiration were recorded as indicators of arousal, so this dimension of emotion could be considered separately from valence (positive/negative). For conditions #1-3, the goats were tested in pairs (adjacent pens) to avoid the stress of social isolation. They were habituated to the general set-up, to the Frustration and Isolation scenarios, and to the heart rate monitor before the actual experimental sessions, which were run on separate days. Additional details are presented in the first footnote.1





Audio A1. One call produced during a negative situation (food frustration), followed by a call produced during a positive situation (food reward) by the same goat (Briefer et al., 2015).


Behavioral responses during the scenarios were timed and scored; these included tail position, locomotion, rapid head movement, ear orientation, and number of calls. The investigators recorded the calls and produced spectrograms that illustrated the frequencies of the vocal signals.



The call on the left (a) was emitted during food frustration (first call in Audio A1). The call on the right (b) was produced during food reward; it has a lower fundamental frequency (F0) and smaller frequency modulations. Modified from Fig. 2 (Briefer et al., 2015).


Both negative and positive food situations resulted in greater goat arousal (measured by heart rate) than the neutral control condition and the low arousal negative condition (social isolation). Behaviorally speaking, arousal and valence had different indicators:
During high arousal situations, goats displayed more head movements, moved more, had their ears pointed forwards more often and to the side less often, and produced more calls. ... In positive situations, as opposed to negative ones, goats had their ears oriented backwards less often and spent more time with the tail up.
Happy goats have their tails up, and do not point their ears backwards. I think I would need a lot more training to identify the range of goat emotions conveyed in my amateur video. At least I know not to stare at them, but next time I should read more about their reactions to human head and body postures.


Do goats show a left or right hemisphere advantage for vocal perception?

Now that the researchers have characterized the valence and arousal communicated by goat calls, another study asked whether goats show a left hemisphere or right hemisphere “preference” for the perception of different calls (Baciadonna et al., 2019). How is this measured, you ask?

Head-Turning in Goats and Babies

The head-turn preference paradigm is widely used in studies of speech perception in infants.

Figure from Prosody cues word order in 7-month-old bilingual infants (Gervain & Werker, 2013).




However, I don't know whether this paradigm is used to assess lateralization of speech perception in babies. In the animal literature, a similar head-orienting response is a standard experimental procedure. For now, we will have to accept the underlying assumption that orienting left or right may be an indicator of a contralateral hemispheric “preference” for that specific vocalization (i.e., orienting to the left side indicates a right hemisphere dominance, and vice versa).
The experimental procedure usually applied to test functional auditory asymmetries in response to vocalizations of conspecifics and heterospecifics is based on a major assumption (Teufel et al. 2007; Siniscalchi et al. 2008). It is assumed that when a sound is perceived simultaneously in both ears, the head orientation to either the left or right side is an indicator of the side of the hemisphere that is primarily involved in the response to the stimulus presented. There is strong evidence that this is the case in humans ... The assumption is also supported by the neuroanatomic evidence of the contralateral connection of the auditory pathways in the mammalian brain (Rogers and Andrew 2002; Ocklenburg et al. 2011).

The experimental set-up to test this in goats is shown below.



A feeding bowl (filled with a tasty mixture of dry pasta and hay) was fixed at the center of the arena opposite to the entrance. The speakers were positioned at a distance of 2 meters from the right and left side of the bowl and were aligned to it. 'X' indicates the position of the Experimenter. Modified from Fig. 2 (Baciadonna et al., 2019).


Four types of vocalizations were played over the speakers: food anticipation, food frustration, isolation, and dog bark (presumably a negative stimulus). Three examples of each vocalization were played, each from a different and unfamiliar goat (or dog).

The various theories of brain lateralization of emotion predicted different results. The right hemisphere model predicts right hemisphere dominance (head turn to the left) for high-arousal emotion regardless of valence (food anticipation, food frustration, dog barks). In contrast, the valence model predicts right hemisphere dominance for processing negative emotions (food frustration, isolation, dog barks), and left hemisphere dominance for positive emotions (food anticipation). The conspecific model predicts left hemisphere dominance for all goat calls (“familiar and non-threatening”) and right hemisphere dominance for dog barks. Finally, a general emotion model predicts right hemisphere dominance for all of the vocalizations, because they're all emotion-laden.

The results sort of supported the conspecific model (according to the authors), if we now accept that dog barks are actually “familiar and non-threatening” [if I understand correctly]. The head-orienting response did not differ significantly between the four vocalizations, and there was a slight bias for head orienting to the right (p=.046 vs. chance level) when collapsed across all stimulus types.2
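For readers curious how a "p = .046 vs. chance" figure arises from simple left/right counts, here is a minimal exact binomial test (standard library only; the counts in the example are hypothetical, since the paper's raw orienting numbers aren't reproduced in this post):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed one (k successes in n)."""
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    obs = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= obs + 1e-12)

# Hypothetical example: 60 of 100 orienting responses to the right.
print(round(binom_two_sided_p(60, 100), 3))
```

A 60/100 right-side split yields p just over .05 against a 50% chance level, which gives a feel for how slight the reported rightward bias is.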

The time to resume feeding after hearing a vocalization (a measure of fear) didn't differ between goat calls and dog barks, so the authors concluded that “goats at our study site may have been habituated to dog barks and that they did not perceive dog barks as a serious threat.” However, if a Siberian Husky breaks free of its owner and runs around a fenced-in rent-a-goat herd, chaos may ensue.





Footnotes

1 Methodological details:
“(1) During the control situation, goats were left unmanipulated in a pen with hay (‘Control’). This situation did not elicit any calls, but allowed us to obtain baseline values for physiological and behavioural data. (2) The positive situation was the anticipation of an attractive food reward that the goats had been trained to receive during 3 days of habituation (‘Feeding’). (3) After goats had been tested with the Feeding situation, they were tested with a food frustration situation. This consisted of giving food to only one of the goats in the pair and not to the subject (‘Frustration’). (4) The second negative situation was brief isolation, out of sight from conspecifics behind a hedge. For this situation, goats were tested alone and not in a pair (‘Isolation’).”

2 The replication police will certainly go after such a marginal significance level, but I would like to see them organize a “Many Goats in Many Goat Sanctuaries” replication project.


References

Baciadonna L, Nawroth C, Briefer EF, McElligott AG. (2019). Perceptual lateralization of vocal stimuli in goats. Curr Zool. 65(1):67-74. [PDF]

Briefer EF, Tettamanti F, McElligott AG. (2015). Emotions in goats: mapping physiological, behavioural and vocal profiles. Animal Behaviour 99:131-43. [PDF]


'I Do Not Exist' - Pathological Loss of Self after a Buddhist Retreat


Eve is plagued by a waking nightmare.

‘I do not exist. All you see is a shell with no being inside, a mask covering nothingness. I am no one and no thing. I am the unborn, the non-existent.’


– from Pickering (2019).

Dr. Judith Pickering is a psychotherapist and Jungian Analyst in Sydney, Australia. Her patient ‘Eve’ is an “anonymous, fictionalised amalgam of patients suffering disorders of self.” Eve had a psychotic episode while attending a Tibetan Buddhist retreat.
“She felt that she was no more than an amoeba-like semblance of pre-life with no form, no substance, no past, no future, no sense of on-going being.”



Eve's fractured sense of self preceded the retreat. In fact, she was drawn to Buddhist philosophy precisely because of its negation of self. In the doctrine of non-being (anātman), “there is no unchanging, permanent self, soul, or essence in living beings.” The tenet of emptiness (śūnyatā) that “all things are empty [or void] of intrinsic existence” was problematic as well. When applied and interpreted incorrectly, śūnyatā and anātman can resemble or precipitate disorders of the self.

Dr. Pickering noted:
‘Eve’ is representative of a number of patients suffering both derealisation and depersonalisation. They doubt the existence of the outer world (derealisation) and fear that they do not exist. In place of a sense of self, they have but an empty core inside (depersonalisation).

How do you find your way back to your self after that? Will the psychotic episode respond to neuroleptics or mood stabilizers?

The current article takes a decidedly different approach from this blog's usual themes of neuroimaging, cognitive neuroscience, and psychopharmacology. Spirituality, dreams, and the unconscious play an important role in Jungian psychology. Pickering mentions the Object Relations School, Attachment Theory, Field Theory, The Relational School, the Conversational Model, Intersubjectivity Theory and Infant Research. She cites Winnicott, Bowlby, and Bion (not Blanke & Arzy 2005, Kas et al. 2014, or Seth et al. 2012).

Why did I read this paper? Sometimes it's useful to consider the value of alternate perspectives. Now we can examine the potential hazards of teaching overly Westernized conceptions of Buddhist philosophy.1 


When Westerners Attend Large Buddhist Retreats

Eve’s existential predicament exemplifies a more general area of concern found in situations involving Western practitioners of Buddhism, whether in traditional settings in Asia, or Western settings ostensibly adapted to the Western mind. Have there been problems of translation in regard to Buddhist teachings on anātman (non-self) as implying the self is completely non-existent, and interpretations of śūnyatā (emptiness) as meaning all reality is non-existent, or void?
. . .

This relates to another issue concerning situations where Westerners attend large Buddhist retreats in which personalised psycho-spiritual care may be lacking. Traditionally, a Buddhist master would know the student well and carefully select appropriate teachings and practices according to a disciple’s psychological, physical and spiritual predispositions, proficiency and maturity. For example, teaching emptiness or śūnyatā to someone who is not ready can be extremely harmful. As well as being detrimental for the student, it puts the teacher at risk of a major ethical infringement...

I found Dr. Pickering's discussion of Nameless Dread to be especially compelling.




Nameless Dread

I open the door to a white, frozen mask. I know immediately that Eve has disappeared again into what she calls ‘the void’. She sits down like an automaton, stares in stony silence at the wall as if staring into space. I do not exist for her, she is totally isolated in her own realm of non-existence.

The sense of deadly despair pervades the room. I feel myself fading into nothingness, this realm of absence, unmitigated bleakness and blankness. We sit in silence, sometimes for session after session. I wonder what on earth do I have to offer her? Nothing, it seems.




ADDENDUM (June 18 2019): A reader alerted me to a tragic story two years ago in Pennsylvania, where a young woman ultimately died by suicide after experiencing a psychotic episode during an intensive 10-day meditation retreat. The article noted:
"One of the documented but rare adverse side effects from intense meditation retreats can be depersonalization disorder. People need to have an especially strong ego, or sense of self, to be able to withstand the strictness and severity of the retreats."

Case reports of extreme adverse events are rare, but a 2017 study documented "meditation-related challenges" in Western Buddhists. The authors conducted detailed qualitative interviews in 60 people who engaged in a variety of Buddhist meditation practices (Lindahl et al., 2017). Thematic analysis revealed a taxonomy of 59 experiences across seven domains (I've appended a table at the end of the post). The authors found a wide range of responses: "The associated valence ranged from very positive to very negative, and the associated level of distress and functional impairment ranged from minimal and transient to severe and enduring." The paper is open access, and Brown University issued an excellent press release.


Footnote

1 This is especially important given the appropriation of semi-spiritual versions of yoga and mindfulness, culminating in inanities such as tech bro eating disorders.


References

Blanke O, Arzy S. (2005). The out-of-body experience: disturbed self-processing at the temporo-parietal junction. Neuroscientist 11:16-24.

Kas A, Lavault S, Habert MO, Arnulf I. (2014) Feeling unreal: a functional imaging study in patients with Kleine-Levin syndrome. Brain 137: 2077-2087.

Lindahl JR, Fisher NE, Cooper DJ, Rosen RK, Britton WB. (2017). The varieties of contemplative experience: A mixed-methods study of meditation-related challenges in Western Buddhists. PLoS One 12(5):e0176239.

Pickering J. (2019). 'I Do Not Exist': Pathologies of Self Among Western Buddhists. J Relig Health 58(3):748-769.

Seth AK, Suzuki K, Critchley HD. (2012). An interoceptive predictive coding model of conscious presence. Front Psychol. 2:395.


Further Reading

Derealization / Dying

Feeling Mighty Unreal: Derealization in Kleine-Levin Syndrome

A Detached Sense of Self Associated with Altered Neural Responses to Mirror Touch



Phenomenology coding structure (Table 4, Lindahl et al., 2017).

- click table for a larger view -

The Shock of the Unknown in Aphantasia: Learning that Visual Imagery Exists


Qualia are private. We don’t know how another person perceives the outside world: the color of the ocean, the sound of the waves, the smell of the seaside, the exact temperature of the water. Even more obscure is how someone else imagines the world in the absence of external stimuli. Most people are able to generate an internal “representation”1 of a beach — to deploy imagery — when asked, “picture yourself at a relaxing beach.” We can “see” the beach in our mind’s eye even when we’re not really there. But no one else has access to these private images, thoughts, narratives. So we must rely on subjective report.

The hidden nature of imagery (and qualia more generally)2 explains why a significant minority of humans are shocked and dismayed when they learn that other people are capable of generating visual images, and the request to “picture a beach” isn’t metaphorical. This lack of imagery often extends to other sensory modalities (and to other cognitive abilities, such as spatial navigation and autobiographical memories), which will be discussed another time. For now, the focus is on vision.

Redditors and their massive online sphere of influence were chattering the other day about this post in r/TIFU: A woman was explaining her synesthesia to her boyfriend when he discovered that he has aphantasia, the inability to generate visual images.

TIFU by explaining my synesthesia to my boyfriend

“I have grapheme-color synesthesia. Basically I see letters and numbers in colors. The letter 'E' being green for example. A couple months ago I was explaining it to my boyfriend who's a bit of a skeptic. He asked me what colour certain letters and numbers were and had me write them down.  ...

Tonight we were laying in bed and my boyfriend quized me again. I tried explaining to him I just see the colors automatically when I visualize the letters in my head. I asked him what colour are the letters in his head. He looked at me weirdly like what do you mean in "my head, that's not a thing"

My boyfriend didnt understand what I meant by visualizing the letters. He didn't believe me that I can visualize letters or even visualize anything in my head.

Turns out my boyfriend has aphantasia. When he tries to visualize stuff he just sees blackness. He can't picture anything in his mind and thought that everyone else had it the same way. He thought it was just an expression to say "picture this" or etc...

There are currently 8652 comments on this post, many from individuals who were stunned to learn that the majority of people do have imagery. Other comments were from knowledgeable folks with aphantasia who described what the world is like for them, the differences in how they navigate through life, and how they compensate for what is thought of as "a lack" by the tyranny of the phantasiacs.






There's even a subreddit for people with aphantasia:



How did I find out about this?3 It was because my 2016 post was suddenly popular again!





That piece was spurred by an eloquent essay on what it's like to discover that all your friends aren't speaking metaphorically when they say, “I see a beach with waves and sand.” Research on this condition blossomed once more and more people realized they had it. Online communities developed and grew, including resources for researchers. This trajectory is akin to the formation of chat groups for individuals with synesthesia and developmental prosopagnosia (many years ago). Persons with these neuro-variants have always existed,4 but they were much harder to locate pre-internet. Studies of these neuro-unique individuals have been going on for a while, but widespread popular dissemination of their existence alerts others – “I am one, too.”

The Vividness of Visual Imagery Questionnaire (VVIQ) “is a proven psychometric measurement often used to identify whether someone is aphantasic or not, albeit not definitive.” But it's still a subjective measure that relies on self-report. Are there more “objective” methods for determining your visual imagery abilities? I'm glad you asked. An upcoming post will discuss a couple of cool new experiments.


Footnotes

1 This is a loaded term that I won’t explain – or debate – right now.

2 Some people don’t believe that qualia exist (as such), but I won’t elaborate on that, either.

3 I don’t hang out on Reddit, and my Twitter usage has declined.

4 Or at least, they've existed for quite some time.


Further Reading

Aphantasia Index

The Eye's Mind

Bonus Episode: What It's Like to Have no Mind's Eye, a recent entry of BPS Research Digest. There's an excellent collection of links, as well as a 30 minute podcast (download here).

Imagine These Experiments in Aphantasia (my 2016 post).

Involuntary Visual Imagery (if you're curious about what has been haunting me).

In fact, while I was writing this post, intrusive imagery of the Tsawwassen Ferry Terminal in Delta BC (the ferry from Vancouver to Vancouver Island) appeared in my head. I searched Google Images and can show you the approximate view.



I was actually standing a little further back, closer to where the cars are parked. But I couldn't quite capture that view. Here is the line of cars waiting to get on the ferry.



During this trip two years ago (with my late wife), this sign had caught my eye so I ran across the street for coffee...


Is there an objective test for Aphantasia?




How well do we know our own inner lives? Self-report measures are a staple of psychiatry, neuroscience, and all branches of psychology (clinical, cognitive, perceptual, personality, social, etc.). Symptom scales, confidence ratings, performance monitoring, metacognitive efficiency (meta-d'/d'), vividness ratings, preference/likeability judgements, and affect ratings are all examples. Even monkeys have an introspective side! 1

In the last post we learned about a condition called aphantasia, the inability to generate visual images. Although the focus has been on visual imagery, many people with aphantasia cannot form “mental images” of any sensory experience. Earworms, those pesky songs that get stuck in your head, are not a nuisance for some individuals with aphantasia (but many others do get them). Touch, smell, and taste are even less studied; mental imagery of these senses is generally more muted, if it occurs at all (even in the fully phantasic).

The Vividness of Visual Imagery Questionnaire (VVIQ, Marks 1973)2 is the instrument used to identify people with poor to non-existent visual imagery (i.e., aphantasia). For each item on the VVIQ, the subject is asked to “try to form a visual image, and consider your experience carefully. For any image that you do experience, rate how vivid it is using the five-point scale described below. If you do not have a visual image, rate vividness as ‘1’. Only use ‘5’ for images that are truly as lively and vivid as real seeing.” By its very nature, it's a subjective measure that relies on introspection.

But how well do we really know the quality of our private visual imagery? Eric Schwitzgebel has argued that it's really quite poor:3
“...it is observed that although people give widely variable reports about their own experiences of visual imagery, differences in report do not systematically correlate with differences on tests of skills that [presumably] require visual imagery, such as mental rotation, visual creativity, and visual memory.”

And it turns out that many of these cognitive skills do not require visual imagery. A recent study found that participants with aphantasia were slower to perform a mental rotation task (relative to controls), but they were more accurate (Pounder et al., 2018). The test asked participants to determine whether a pair of objects is identical, or mirror images of each other. Response times generally increase as a function of the angular difference in the orientations of the two objects. The overall slowing and accuracy advantage in those with aphantasia held across all levels of difficulty, so these participants must be using a different strategy than those without aphantasia.




Another study found that people with aphantasia were surprisingly good at reproducing the details of a complex visual scene from memory (Bainbridge et al., 2019).4

What test does require visual imagery? The phenomenon of binocular rivalry involves the presentation of two different images to each eye using specialized methods or simple 3D glasses. Instead of forming a unified percept, the images presented to the left and right eye seem to alternate. Thus, binocular rivalry involves perceptual switching. The figure below was taken from the informative video of Carmel and colleagues (2010) in JoVE. I highly recommend the video, which I've embedded at the end of this post.


A recent study examined binocular rivalry in aphantasia using the setup shown in Fig 1 (Keogh & Pearson, 2018). The key trick is that participants were cued to imagine one of two images for 6 seconds. Then they performed a vividness rating, followed by a brief presentation of the binocular rivalry display. Finally, the subjects had to report which color they saw.

- click for larger view -



The study population included 15 self-identified aphantasics recruited via Facebook, direct contact with the investigators, or referral from Professor Adam Zeman, and 209 control participants recruited from the general population. The VVIQ verified poor or non-existent visual imagery in the aphantasia group.

For the binocular rivalry test, the general population showed a priming effect from the imagined stimulus: they were more likely to report that the subsequent test display matched the color of the imagined stimulus (green or red) at a greater than chance level (better than guessing). As a group, the individuals with aphantasia did not show priming that was greater than chance. However, as can be seen in Fig. 2E, results from this test were not completely diagnostic. Some with aphantasia showed better-than-chance priming, while a significant percentage of the controls did not show the binocular rivalry priming effect.


Fig. 2E (Keogh & Pearson, 2018). Frequency histogram for imagery priming scores for aphantasic participants (yellow bars and orange line) and general population (grey bars and black dashed line). The green dashed line shows chance performance (50% priming).


Furthermore, scores on the VVIQ in the participants with aphantasia did not correlate with their priming scores (although n=15 would make this hard to detect). Earlier work by these investigators suggested that the VVIQ does correlate with overall priming scores in controls, and binocular rivalry priming on an individual trial is related to self-reported vividness on that trial. Correlations for the n=209 controls in the present paper were not reported, however. This would be quite informative, since the earlier study had a much lower number of participants (n=20).

What does this mean? I would say that binocular rivalry priming can be a useful “objective” measure of aphantasia, but it's not necessarily diagnostic at an individual level.


Related Posts

The Shock of the Unknown in Aphantasia: Learning that Visual Imagery Exists

Imagine These Experiments in Aphantasia


Footnotes

1 see Mnemonic introspection in macaques is dependent on superior dorsolateral prefrontal cortex but not orbitofrontal cortex.

2 The VVIQ is not without its detractors...

3 Thanks to Rolf Degan for bringing this paper to my attention.

4 Also see this reddit thread on Sketching from memory.


References

Bainbridge WA, Pounder Z, Eardley A, Baker CI (2019). Characterizing aphantasia through memory drawings of real-world images. Cognitive Neuroscience Society Annual Meeting.

Keogh R, Pearson J. (2018). The blind mind: No sensory visual imagery in aphantasia. Cortex 105:53-60.

Marks DF. (1973). Visual imagery differences in the recall of pictures. British Journal of Psychology 64(1): 17-24.

Pounder Z, Jacob J, Jacobs C, Loveday C, Towell T, Silvanto J. (2018). Mental rotation performance in aphantasia. Vision Sciences Society Annual Meeting.

Schwitzgebel E. (2002). How well do we know our own conscious experience? The case of visual imagery. Journal of Consciousness Studies 9(5-6):35-53.  {PDF}

Shepard RN, Metzler J. (1971). Mental rotation of three-dimensional objects. Science 171(3972): 701-3.



Brain Awareness Video Contest 2019



What Color is Monday? This video on synesthesia is one of the Top Ten videos in the Society for Neuroscience Brain Awareness Video Contest.  

Voting for the 2019 People's Choice Award closes 12 p.m. Eastern time on August 30, 2019.

However, it wasn't immediately apparent to me how you're supposed to cast your vote...

The entire playlist is on YouTube.  


1. Multitasking
2. How Ketamine Treats Depression
3. Procrastination: I'll Think of a Title Later
4. Seeing Culture in Our Brain
5. Theory of Mind
6. How Neuroscience Informs Behavioural Economics
7. What Color is Monday
8. Why do adolescents go to sleep late?
9. An Inside Look: Alzheimer's Disease
10. Technology Makes Us Bigger

Manipulating Visual Cortex to Induce Hallucinations




What is a hallucination? The question seems simple enough. “A hallucination is a perception in the absence of external stimulus that has qualities of real perception. Hallucinations are vivid, substantial, and are perceived to be located in external objective space.” When we think of visual hallucinations, we often think of trippy colorful images induced by psychedelic drugs (hallucinogens).

Are dreams hallucinations? How about visual imagery? Optical illusions of motion from viewing a non-moving pattern? No, no, and no (according to this narrow definition). Hallucinations are subjective and inaccessible to others, much as my recent posts discussed the presence or absence of visual imagery in individual humans. However, people can tell us what they're seeing (unlike animals).

Visual hallucinations can occur in psychotic disorders such as schizophrenia and schizoaffective disorder, although auditory hallucinations are more common in those conditions. Visual hallucinations are more often associated with neurodegenerative disorders. Among patients with Parkinson's Disease, 33% to 75% experience visual hallucinations, usually related to dopaminergic or anticholinergic drug therapy.

In contrast, hallucinations in dementia with Lewy Bodies (DLB) are diagnostic of the disease, and not related to pharmacological treatment. “Recurrent complex visual hallucinations ... are typically well-formed, often consisting of figures, such as people or animals.” The cause may be related to pathology in subcortical visual structures such as the superior colliculus and the pulvinar, rather than the visual cortex itself. A more specific hypothesis is that loss of α7 nicotinic receptors in the thalamic reticular nucleus could lead to hallucinations in DLB.


Charles Bonnet Syndrome (CBS)

Visual hallucinations are also caused by certain types of visual impairment, e.g. age-related macular degeneration, which leads to the loss of central vision. Damage to the macular portion of the retina can cause people to “see” simple patterns of colors or shapes that aren't there, or even images of people, animals, flowers, planets, and scary figures. Individuals with CBS know that the hallucinations aren't real, but they're distressing nonetheless.


image from the Macular Society 1


“Why are you discussing DLB and CBS here?” you might ask. “These conditions don't involve abnormal stimulation of the visual cortex.” I brought them up because visual hallucinations in humans can occur for any number of reasons, not just from manipulation of highly specific cell types in primary visual cortex (which only occurs in optogenetic experiments with animals).



Electrical Stimulation Studies in Humans

A typical starting point here would be Wilder Penfield and the history of surgical epileptology, but I'll skip ahead to the modern day. Patients with intractable epilepsy present teams of neurosurgeons, neurologists, neurophysiologists, and neuroscience researchers with a unique opportunity to probe the inner workings of the human brain. Stimulating and recording from regions thought to be the seizure focus (or origin) guide neurosurgeons to the precise tissue to remove, and data acquired from neighboring brain bits is used to make inferences about neural function and electrophysiological mechanisms.




An exciting study by Dr. Joseph Parvizi and colleagues (2012) stimulated regions of the fusiform face area (FFA) in the inferior temporal cortex while a patient was undergoing surgical monitoring. Two FFA subregions were identified using both fMRI and electrocorticography (ECoG).



The location of the face-selective regions converged across ECoG and fMRI studies that presented various stimuli and recorded brain responses in the FFA and nearby regions (1 = posterior fusiform; 2 = medial fusiform). Then the investigators stimulated these two focal points while the patient viewed faces, objects, and photos of famous faces and places. Electrical brain stimulation (EBS) of the FFA produced visual distortions while the patient viewed real faces. Sham stimulation, and EBS of nearby regions, did not produce these perceptual distortions. The article included a video of the experiment, which is worth watching.




Another patient viewed pictures of faces during FFA stimulation and reported the persistence of facial images once they were gone, and the mixing of facial features, but no distortions (this is known as palinopsia). A third study induced the scary phenomenon of seeing yourself (self-face hallucination, or autoscopic hallucination), upon EBS of a non-FFA region (right medial occipitoparietal cortex). A video of this experiment is on YouTube.

“But wait,” you say, “you've been describing complex visual hallucinations and distortions of the face because the EBS was in higher-order visual areas that are specialized for faces. What happens when you stimulate primary visual cortex?” The answer is less exciting (but not unexpected): phosphenes, those non-specific images of light that appear when you close your eyes and press on your eyeballs (Winawer & Parvizi, 2016). These can be mapped retinotopically according to their location in the visual field. {also see this 1930 article by Foerster & Penfield: 2
"Stimulation of the occipital pole in area 17 produces an attack which is ushered in by an optic aura such as light, flames, stars, usually in the opposite visual field."}

But EBS of primary visual cortex is a coarse instrument. Here's where the latest refinements in optogenetics finally enter the picture (Marshel et al., 2019).



I won't attempt to cover the complex and novel techniques in Panel 1 and Panel 2 above. So I'll quote others who rave about what a breakthrough they are (and they are): “amazing work,” “incredible breakthrough,” “Key advances in current paper include multiSLM to stimulate neurons based on function, and a red-shifted opsin allowing simultaneous 2p.” And one day (hypothetically speaking), I'd like to present more than direct quotes and my cartoonish version of the optogenetic ensemble and behavioral training methods. But today isn't that day.
Using ChRmine [a fancy new opsin] together with custom holographic devices to create arbitrarily specified light patterns [horizontally or vertically drifting gratings], we were able to measure naturally occurring large-scale 3D ensemble activity patterns during visual experience and then replay these natural patterns at the level of many individually specified cells. We found that driving specific ensembles of cells on the basis of natural stimulus-selectivity resulted in recruitment of a broad network with dynamical patterns corresponding to those elicited by real visual stimuli and also gave rise to the correctly selective behaviors even in the absence of visual input.

Briefly, the investigators captured patterns of activity in V1 layer 2/3 neurons and layer 5 neurons that responded to horizontal or vertical gratings, and then played back the same patterns to those neurons in the absence of a visual stimulus. There goes the coarseness of EBS-induced phosphenes in humans... But obviously, the one great advantage of human studies is that your subjects can tell you what they see. Nonetheless, everyone wants to say that laser-activated nerve cells cause the mice to hallucinate vertical bars.
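The core logic of "capture, then play back" can be sketched in a few lines. This toy simulation (my numbers, not the paper's actual pipeline) shows the first step: screen each neuron's responses to vertical vs. horizontal gratings, then pick the most selective cells as the "tuned ensemble" to target optogenetically. All response values and the d-prime selection rule here are illustrative assumptions.

```python
# Toy sketch of identifying an orientation-tuned ensemble from recorded
# responses. The first 30 simulated cells are truly vertical-preferring.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 200, 40

resp_vert = rng.normal(1.0, 0.5, (n_neurons, n_trials))   # responses to vertical gratings
resp_horiz = rng.normal(1.0, 0.5, (n_neurons, n_trials))  # responses to horizontal gratings
resp_vert[:30] += 2.0  # vertical-preferring cells fire more to vertical

# Selectivity index: d-prime between the two stimulus conditions.
mu_v, mu_h = resp_vert.mean(1), resp_horiz.mean(1)
sd = np.sqrt(0.5 * (resp_vert.var(1, ddof=1) + resp_horiz.var(1, ddof=1)))
d_prime = (mu_v - mu_h) / sd

ensemble = np.argsort(d_prime)[-20:]  # the 20 most vertical-selective cells
print(sorted(ensemble))               # mostly indices < 30, the truly tuned cells
```

In the real experiment the selected cells were then individually targeted with holographic light patterns; here the point is only that "stimulate neurons based on function" starts with a functional screen like this.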

What really happened is that mice were trained to discriminate between horizontal and vertical gratings. The task required them to respond to the vertical, but not the horizontal. After training, visual stimulation with gratings was compared to optogenetic stimulation of classifier-identified neural ensembles in the absence of gratings. How well did the mice perform with optogenetic-only stimulation?

Modified from Fig. 5 (Marshel et al., 2019). (A) Discrimination performance during visual-only stimulation (black) and tuned-ensemble stimulation (red) over several weeks. (B) Discrimination performance for tuned-ensemble stimulation versus visual trials (P > 0.1, paired t test, two-tailed, n = 112 sessions across five mice).


Eventually the mice did just about as well on the discrimination task with optogenetic stimulation of the horizontally or vertically-tuned neurons, compared to when the horizontal or vertical stimuli were actually presented. Were these mice “hallucinating” vertical gratings?  Or did they merely learn to respond when a specific neural ensemble was activated? Isn't this somewhat like neurofeedback? During training, the mice were rewarded or punished based on their correct or incorrect response to the “vertical” ensemble stimulation. They can't tell us what, if anything, they saw under those conditions.
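The statistical claim behind "just about as well" is a two-tailed paired t-test across sessions (Fig. 5B). A back-of-the-envelope version, with all performance values invented for illustration, looks like this:

```python
# Hypothetical per-session performance on visual vs. tuned-ensemble trials,
# compared with a paired t-test computed by hand (invented numbers).
import math
import numpy as np

rng = np.random.default_rng(0)
n = 112  # sessions across five mice, as in the paper

visual = rng.normal(0.85, 0.05, n)               # fraction correct, visual trials
optogenetic = visual + rng.normal(0.0, 0.03, n)  # matched optogenetic sessions

diff = optogenetic - visual
t_stat = diff.mean() / (diff.std(ddof=1) / math.sqrt(n))
print(f"t({n - 1}) = {t_stat:.2f}")  # |t| small -> no detectable difference
```

A nonsignificant paired t-test (P > 0.1) is how the authors support the claim that ensemble stimulation and real gratings yielded comparable discrimination.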

And the authors themselves noted the following limitation, that “mice initially required some training involving paired optogenetic and visual stimuli before optogenetic activation alone sufficed to drive behavioral discrimination.” Marshel et al. correctly invoked the “it takes a village” explanation that many other cortical and subcortical regions are required to generate a full natural visual percept.

My frustration with the press coverage stems from inaccurate language and overblown interpretations.3  [So what else is new?]  From the New York Times:

Why Are These Mice Hallucinating? Scientists Are in Their Heads
In a laboratory at the Stanford University School of Medicine, the mice are seeing things. And it’s not because they’ve been given drugs.

With new laser technology, scientists have triggered specific hallucinations in mice by switching on a few neurons with beams of light. The researchers reported the results on Thursday in the journal Science.

The technique promises to provide clues to how the billions of neurons in the brain make sense of the environment. Eventually the research also may lead to new treatments for psychological disorders, including uncontrollable hallucinations.

The Stanford press release doesn't use “hallucination” in the title, but a few are sprinkled throughout the text for dramatic effect: “Hallucinations are spooky” and “Hallucinating mice.”

Should we classify the following as a spooky hallucination: optical stimulation of 20 “vertical bar” neurons in behaviorally trained mice, who then perform the task as if drifting vertical gratings were present in their visual field? I would say no. To be fair, in the Science paper the authors used the word “hallucinations” only once, and it wasn't to describe mouse percepts.
Studying specific sensory experiences with ensemble stimulation under different conditions may help advance development of therapeutic strategies . . . for neuropsychiatric symptoms such as hallucinations or delusions. More broadly, the ability to track and control large cellular-resolution ensembles over time during learning, and to selectively link cells and ensembles together into behaviorally relevant circuitry, may have important implications for studying and leveraging plasticity underlying learning and memory in health and disease.

I'm focusing on only one small aspect of the study, albeit the one that grabs media attention. The results were highly informative in many other ways, and I do not want to detract from the monumental technical achievements of the research team.


Footnotes

1 This is a terrific resource, with loads of information, additional artistic renderings, an eBook, and a must-see video.

2 There's no escaping Penfield...

3 See Appendix for expert opinion, since I am not an expert...


References

Foerster O, Penfield W. (1930). The structural basis of traumatic epilepsy and results of radical operation. Brain 53:99-119.

Marshel JH, Kim YS, Machado TA, Quirin S, Benson B, Kadmon J, Raja C, Chibukhchyan A, Ramakrishnan C, Inoue M, Shane JC, McKnight DJ, Yoshizawa S, Kato HE, Ganguli S, Deisseroth K. (2019). Cortical layer-specific critical dynamics triggering perception. Science Jul 18.

Parvizi J, Jacques C, Foster BL, Witthoft N, Rangarajan V, Weiner KS, Grill-Spector K. (2012). Electrical stimulation of human fusiform face-selective regions distorts face perception. J Neurosci. 32(43):14915-20.

Winawer J, Parvizi J. (2016). Linking Electrical Stimulation of Human Primary Visual Cortex, Size of Affected Cortical Area, Neuronal Responses, and Subjective Experience. Neuron 92(6): 1213-1219.


Appendix

Before lodging this critique, I consulted select experts on Twitter...






Ivanka Trump to Head New Agency of Precrime


A Precog capable of predicting future crimes in the film version of Minority Report.


In a strange twist suitable for the dystopian reality show broadcast from the West Wing dining room, a charity formed to fight pancreatic cancer has morphed into project SAFE HOME— “Stopping Aberrant Fatal Events by Helping Overcome Mental Extremes”.



After three highly publicized mass shootings killed 34 people in the US, a variation on the “guns don't kill people...” trope was issued by President Trump: “mental illness and hatred pulls [sic] the trigger, not the gun.” He was right about hatred: two of the shooters espoused white supremacist views, the other was a misogynist. But rather than anger the NRA with tiny incremental changes to control access to firearms, a better approach is to develop a national plan to stigmatize people with mental illnesses, who are more likely to be the victims of violent crime than the perpetrators:
White House considers new project seeking links between mental health and violent behavior

Bob Wright, the former NBC chair and a Trump friend, is one of the proposal’s supporters.

The White House has been briefed on a proposal to develop a way to identify early signs of changes in people with mental illness that could lead to violent behavior.

Supporters see the plan as a way President Trump could move the ball forward on gun control following recent mass shootings as efforts seem to be flagging to impose harsher restrictions such as background checks on gun purchases.

The proposal is part of a larger initiative to establish a new agency called the Health Advanced Research Projects Agency or HARPA, which would sit inside the Health and Human Services Department. Its director would be appointed by the president, and the agency would have a separate budget, according to three people with knowledge of conversations around the plan.

The Suzanne Wright Foundation, started by Bob Wright to fight pancreatic cancer after his wife died from the disease, has advocated for the formation of a DARPA-like federal agency called HARPA. The original vision for HARPA was to “leverage federal research assets and private sector tools to develop capabilities for diseases, like pancreatic cancer, that have not benefited from the current system.”



91% of pancreatic cancer patients die within 5 years– often because the cancer is too advanced to treat by the time of diagnosis. An early detection test for pancreatic cancer would be the most effective weapon to save lives from this disease. ... CodePurple advocates for HARPA ... as the most promising vehicle to develop a pancreatic cancer detection test.



According to the Washington Post:
The HARPA proposal was initially pitched as a project to improve the mortality rate of pancreatic cancer through innovative research to better detect and cure diseases. Despite internal support over the past two years, the model ran into what was described as “institutional barriers to progress,” according to a person familiar with the conversations. 

So why not flip your game by seizing a tragic moment in time to transform yourself into legacy-making material?
“[Trump is] very achievement oriented and I think all presidents have difficulties with science,” Wright said in an interview. “I think their political advisers say, ‘No that’s not a game for you,’ so they sort of back off a bit.”

He added: “But the president has a real opportunity here to leave a legacy in health care.”

The newly-realized HARPA would use artificial intelligence, machine learning, commercial surveillance technology (e.g., Apple Watches, Fitbits, Amazon Echo, Google Home), and “powerful tools [NOT] collected by health-care providers like fMRIs, tractography and image analysis.”
HARPA would develop “breakthrough technologies with high specificity and sensitivity for early diagnosis of neuropsychiatric violence,” says a copy of the proposal. “A multi-modality solution, along with real-time data analytics, is needed to achieve such an accurate diagnosis.”

And because of her vast experience in these technologies and her theoretical contributions to the neuroethics of predicting violent behavior, Ivanka Trump is the best person to lead such an effort:
“It would be perfect for her to do it — we need someone with some horsepower — someone like her driving it. ... It could get done,” said one official familiar with the conversations.

Further Reading

Oh Good, White House Reportedly Considering Dystopian Plan to Try to Detect the Next Mass Shooter

The Minority Report, by Philip K. Dick


Further Watching

Person of Interest, created by Jonathan Nolan (Memento)

   ( How Person of Interest Became Essential Science Fiction Television )

Are there evil people or only evil acts?


“I can guarantee that someone in the world thinks you are evil. Do you eat meat? Do you work in banking? Do you have a child out of wedlock? You will find that things that seem normal to you don't seem normal to others, and might even be utterly reprehensible. Perhaps we are all evil. Or, perhaps none of us are.”

– Julia Shaw, Evil: The Science Behind Humanity's Dark Side

Earlier this month, Science magazine and Fondation Ipsen co-sponsored a webinar on Impulses, intent, and the science of evil. “Can research into humankind’s most destructive inclinations help us become better people?”

It's freely available on demand. Let the controversy commence...


Are There Evil People or Only Evil Acts?

Moderator (Sean Sanders, Ph.D. Science/AAAS):  “... How do we define evil? ... Are there evil people or only evil acts?”

In brief, Dr. Abigail Marsh said no, there are absolutely not evil people; Dr. Gary Brucato mostly agreed with that; and Dr. Michael Stone gave an elaborate example using an offensive term ("gay pedophile"– as if anyone would refer to a male pedophile who targets little girls as a "straight pedophile").



Dr. Marsh was not amused...

More detail below.


Michael Stone, M.D. Columbia University:  [I'm skipping his first response on etymology and religion.]

Abigail Marsh, Ph.D. Georgetown University:  “... I don't think it's ever appropriate to refer to a person as evil. Actions are certainly evil and some people are highly predisposed to keep committing evil actions, but evil does have this very supernatural connotation.


Um, and like so many supernatural ideas, I think the concept of evil is pulled in whenever we have trouble understanding why someone would do such a thing, right, we talk about evil spirits or forces because it's so difficult to understand, um, for most people why anybody would be driven to do something to cause people pain and suffering for no reason. Um... but there is an explanation, we may not know what it is yet, but there is an explanation for these behaviors, and so uh... but the use of the word 'evil' doesn't get us any closer to understanding that. It leaves us in this supernatural rut rather than thinking of these behaviors as things that do have unfortunately human motivations .. but that are not the totality of the person. Evil is a very essentialist term as well. It assumes this sort of homogeneity within the person which is not usually true.” [I'm biased in this direction.]

Gary Brucato, Ph.D. Columbia University:  [after the moderator has implied that Stone & Brucato's book suggests that although rare, there are truly evil people.]  “...... What we have to clarify is that rarely, even in the most egregious repeat offenders, do you see somebody that from dusk to dawn is committing acts that are considered evil.  ... [I'll note here that Dr. Marsh is subtly nodding her head.]

Dr. Stone:  “Therefore there are a very very small number of people ... who do evil things as it were from the minute they wake up in the morning until they go to sleep at night. The one who comes closest to mind is the one I interviewed for the Discovery channel program some years ago and that was uh David Paul Brown his real name, who then changed his name when he was in prison the first time to Benjamin Nathaniel Bar-Jonah [actually, it was Nathaniel Benjamin Levi Bar-Jonah] who was a gay pedophile [sic] who would seduce boys coming out of a theater and then try to capture them if he could and kill them and so on. Some of them escaped and managed to identify him.1 [He was imprisoned and then released] ... OK. So. Out in Montana, he dressed as a policeman with a fake badge... and would seduce little boys ... coming out of a school ... he would ... kill them, eat part of the boy ... [more details about cannibalism] ... He had thousands of pictures of boys and on the walls making up very bad comments and puns as if uh uh some young kid as if that were a Chinese menu item, on a menu, some young kid.” [other sources say girls were among the victims]. He could be counted on, one of the few people I know of, who was evil day in and day out. That's very rare...”



Dr. Stone seems amused...



Dr. Marsh looks dejected


Labels Don't Get Us Anywhere

Moderator:  “... I feel this disgust and you know repulsion uh thinking about this. And so I'm assuming that this is what drives people to label someone as evil. Um and I wonder if that label is useful. You know if we look at maybe the children that you Abby are doing your research with um if you see these inclinations is it helpful to put labels on them and where does that get us you know scientifically and and in terms of treatment?

Dr. Marsh:  “I don't think it gets us anywhere, it's one of the many reasons I wouldn't ever refer to that term uh to call a human being evil. Um... the children I work with didn't make a choice to have the personalities they do or to have the life experiences that have led them to the place that they are and instead we know that psychopathy— again this condition of having very low levels of remorse and caring and compassion for other people has all the hallmarks of a mental illness — has a strong heritability component, having negative life experiences causes the prognosis to get worse, there are very clear characteristic brain and cognitive changes. It looks like any other psychological disorder in these key ways and so calling people who are affected by this condition evil is not helping us to develop treatments to try to improve their prognosis and to try to improve the odds that they won't go on to do things that affect the rest of us negatively. Um because what I what it does is calling someone evil robs us of the ability to view someone compassionately.”


It's Nearly Impossible to Predict...

Moderator: Do we all have the propensity to do evil deeds?

Dr. Marsh:  “...[regarding] 'horrible and unpredicted' acts, shooting up dozens of innocent people ... I think that when acts like that are so unpredictable, it often leads us to draw the incorrect conclusion, I guess anybody is capable of an act of evil so serious because if we can't predict who it can be, I guess it can be anybody. Um it is true that it is very hard to predict accurately who will engage in acts of significant violence like that especially when dealing with young men in whom various psychological disorders may be emerging for the first time that contribute to those actions. But it's absolutely not the case that everybody is capable of actions like that...”


Prevention, Not Prediction

This brings us to my previous post on a proposal to predict mass shootings via Apple Watches, Fitbits, Amazon Echo, Google Home and AI, and how this effort would be futile (not to mention horribly intrusive and stigmatizing). But we wouldn't want to anger the NRA, now would we?

An FBI study on pre-attack behaviors of 63 active shooters in the US found that only 25% had ever been diagnosed with a mental illness (only three of whom were diagnosed with a psychotic disorder).

A Department of Defense report on Predicting Violent Behavior says:
There is no panacea for stopping all targeted violence. Attempting to balance risks, benefits, and costs, the Task Force found that prevention as opposed to prediction should be the Department's goal. Good options exist in the near term for mitigating violence by intervening in the progression of violent ideation to violent behavior.
It should seem obvious that...

Dr. Stone: “...it's much more easy to get rid of the weaponry that allows these things to happen than it is to do psychotherapy, particularly on people with psychopathic tendencies who are not very amenable to psychotherapy anyway...”

Most Americans favor stricter gun control, and many of us think that our lax gun control laws are the greatest insanity, as are the politicians who refuse to do anything about it.


Further Reading

Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools Are Using to Monitor Students

Trump's claims and what experts say about mental illness and mass shootings

Ivanka Trump to Head New Agency of Precrime


Predicting Mass Shootings via Intrusive Surveillance and Scapegoating of the Mentally Ill

No news from the hypothetical HARPA organization (Health Advanced Research Projects Agency) or the Suzanne Wright Foundation since the initial Washington Post report on their joint proposal for project SAFE HOME— “Stopping Aberrant Fatal Events by Helping Overcome Mental Extremes.”


Footnote

1 I had initially included more of the gory details, then decided a truncated version was better.

Is Mourning Rewarding? (revisited)



Can we reduce the persistent, unbearable pain of losing a loved one to 15-20 voxels of brain activity in the nucleus accumbens (O'Connor et al., 2008)? No? Then what if I told you that unrelenting grief — and associated feelings of sheer panic, fear, terminal aloneness, and existential crisis — isn't “suffering”. It's actually rewarding!

Well I'm here to tell you that it isn't.

Looking back on a post from 2011, you never realize it's going to be you.1


The top figure shows that activity in the nucleus accumbens was greater in response to grief-related words vs. neutral words in a group of 11 women with “Complicated” Grief (who lost a mother or sister to breast cancer in the last 5 years), compared to a group of 10 women with garden-variety Non-complicated Grief (O'Connor et al., 2008). Since the paper was published in 2008, and the standards for conducting fMRI studies have changed (larger sample sizes are necessary, no more “voodoo correlations”), I won't go on about that here.


When Grief Gets Complicated?

Grief is never simple, it's always complicated. The death of a cherished loved one can create a situation that seems totally intolerable. Almost everyone agrees that navigating such loss doesn't rely on one acceptable road map. Yet here it is. Normal people are supposed to move through a one year mourning period of “sorrow, numbness, and even guilt and anger. Gradually these feelings ease, and it's possible to accept loss and move forward.” If you don't, well then it's Complicated. This is a stigmatizing and limiting view of what it means to grieve the loss of a loved one.2

But is there really such a thing as Complicated Grief? Simply put, it's “a chronic impairing form of grief brought about by interference with the healing process.” There are “maladaptive thoughts and dysfunctional behaviors” according to The Center for Complicated Grief. However, it's not named as an actual disorder in either of the major psychiatric manuals. In ICD-11, preoccupation with and longing for the deceased, accompanied by significant emotional distress and functional impairment beyond six months, is called Prolonged Grief Disorder. In DSM-5, Complicated Grief has morphed into Persistent Complex Bereavement Disorder, a not-exactly-reified condition subject to further study.


Dopamine Reward

Dopamine and its putative reward circuitry are way more complex than a simple one-to-one mapping. Studies in rodents have demonstrated that the nucleus accumbens (NA) can code for negative states, as well as positive ones, as shown by the existence of “hedonic coldspots” that generate aversive reactions, in addition to the usual hotspots (Berridge & Kringelbach, 2015). These studies involved microinjections of opioids into tiny regions of the NA.




If a chronically anguished state is portrayed as rewarding, it's time to recalibrate these terms. As I said in 2011:

If tremendous psychological suffering and loss are associated with activity in brain regions such as the ventral tegmental area and nucleus accumbens, isn't it time to abandon the simplistic notion of dopamine as the feel-good neurotransmitter? To quote the authors of Mesolimbic Dopamine in Desire and Dread (Faure et al., 2008):
It is important to understand how mesocorticolimbic mechanisms generate positive versus negative motivations. Dopamine (DA) in the nucleus accumbens is well known as a mechanism of appetitive motivation for reward. However, aversive motivations such as pain, stress, and fear also may involve dopamine in nucleus accumbens (at least tonic dopamine signals).

Grief-Related Words Are Rewarding

So what happens when you take a disputed diagnostic label and combine it with reverse inference in a neuroimaging study? (when you operate under the assumption that activity in a particular brain region must mean that a specific cognitive process or psychological state was present).

The NA activity was observed while the participants viewed grief words vs. neutral words that were superimposed over a photograph: a photo of the participant's deceased mother or a photo of someone else's mother. And it didn't matter whose mother was pictured; the difference was due to the words, not the images.3



Sample stimulus provides an [unintentional?] example of the emotional Stroop effect.


That's pretty hard to explain by saying that “the pangs of grief would continue to occur with NA activity, with reward activity in response to the cues motivating reunion with the deceased” if the effect is not specific to an image of the deceased.


Yearning and the Subgenual Cingulate

Why beat a dead horse, you ask? Because a recent study (McConnell et al., 2018) did not heed the advice above (sample size should be increased, beware reverse inference). The participants were 9 women with Complicated Grief (CG), 7 women with Non-complicated Grief (NG), and 9 Non-Bereaved (NB). The NA finding did not replicate, nor were there any differences between CG and NG and NB (over the entire brain). A post-hoc analysis then extracted a single question from a 19-item inventory and found that yearning for the dead spouse in all 16 Bereaved participants was correlated with activity in the subgenual cingulate (“depression-land” or perhaps “rumination-land”), for the comparison of an anticipation period vs. presentation of spouse photo. There were 5 spouse photos and 5 photos of strangers (note that it was not possible to predict which would be presented). The authors recognized the limitations of the study, yet pathologized yearning in Complicated and Non-complicated Grief alike.
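The post-hoc finding is, at bottom, a Pearson correlation across 16 bereaved participants, and correlations at that sample size are notoriously unstable. A quick illustration with invented data (pure noise correlated with pure noise) shows how often a "notable" r arises by chance alone:

```python
# How unstable is Pearson's r at n = 16? Correlate noise with noise many
# times and look at the spread of the resulting coefficients.
import numpy as np

rng = np.random.default_rng(42)
n, reps = 16, 10_000

rs = np.empty(reps)
for i in range(reps):
    yearning = rng.standard_normal(n)     # hypothetical questionnaire scores
    roi_signal = rng.standard_normal(n)   # hypothetical subgenual activity
    rs[i] = np.corrcoef(yearning, roi_signal)[0, 1]

# Even with zero true correlation, |r| > 0.5 happens about 1 time in 20.
print(f"P(|r| > 0.5) under the null with n = 16: {(np.abs(rs) > 0.5).mean():.3f}")
```

None of this proves the subgenual result is spurious, but it shows why a post-hoc correlation in 16 people needs independent replication before it pathologizes anything.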

I realize that the general motivation behind these experiments might be admirable, but you really can't come to any conclusions about how grief — a highly complex emotional response unique to each individual — might be represented in the brain.


Footnotes

1 See There Is a Giant Hole Where My Heart Used To Be from October 2, 2018.

The posts on illness and death that I never wrote:
(yes, I was really serious about these)

2 I was skeptical when someone sent me this book, It's OK That You're Not OK: Meeting Grief and Loss in a Culture That Doesn't Understand (by Megan Devine). I thought it was going to be overly 'self-helpy'. But it's actually been immensely helpful.

3 The idea of creating a self-relevant stimulus set was utterly horrifying to me.


References

Berridge KC, Kringelbach ML. (2015). Pleasure systems in the brain. Neuron 86(3):646-64.

Faure A, Reynolds SM, Richard JM, Berridge KC. (2008). Mesolimbic dopamine in desire and dread: enabling motivation to be generated by localized glutamate disruptions in nucleus accumbens. J Neurosci. 28:7184-92.

McConnell MH, Killgore WD, O'Connor MF. (2018). Yearning predicts subgenual anterior cingulate activity in bereaved individuals. Heliyon 4(10):e00852.

O'Connor MF, Wellisch DK, Stanton AL, Eisenberger NI, Irwin MR, Lieberman MD. (2008). Craving love? Enduring grief activates brain's reward center. Neuroimage 42:969-72.


The Neural Correlates of Channeling the Dead



November 2nd is the Day of the Dead, a Mexican holiday to honor the memory of lost loved ones. If you subscribe to certain paranormal belief systems, the ability to communicate with the dearly departed is possible via séance, which is conducted by a Medium who channels the spirit of the dead.

Since I do not subscribe to a paranormal belief system, I do not think it's possible to communicate with my dead wife. Nor am I especially knowledgeable about the differences between mediumship vs. channeling:
Mediumship is mostly about receiving and interpreting messages from other worlds.

Mediums often deliver messages from loved ones and spirit guides during readings.
. . .

...channeling is often about receiving messages from other types of entities, such as nature spirits, spirit guides, or even angels.

In short, Channels can communicate with a broader class of non-corporeal entities, for instance Mahatma Gandhi or Cleopatra (not only the dead relatives of paying clients).

What seems to be uncontroversial, however, is that Channels who enter into a trance state to convey the wisdom of Gandhi may experience an altered or “expanded” state of consciousness (regardless of the veracity of their communications). This permuted state of arousal should be manifest in the electroencephalogram (EEG) as an alteration in spectral power across the range of frequency bands (e.g., theta, alpha, beta etc.) that have been associated with different states of consciousness.

A group of researchers at the Institute of Noetic Sciences adopted this view in a study of persons who claimed the ability to channel (Wahbeh et al., 2019). The participants (n=13; 11 ♀, 2 ♂) were, on average, 57-year-old white women of upper middle class socioeconomic status, representative of the study site in Marin County, California. The authors screened 155 individuals to arrive at their final sample size.1 Among the stringent inclusion criteria was the designation of being a Channel who directly and actively conveys the communications of a discarnate entity or spirit (rather than being a passive relay).2 The participants were free of major psychiatric disorders, including psychosis and dissociation (according to self-report). Oh, and they had the ability to remain still during the channeling episodes, which was advantageous for the physiological measurements.

The participants alternated between channeling and no-channeling in 5 minute blocks while EEG and peripheral physiological signals (skin conductance, heart rate, respiration, temperature) were recorded. At the end of each counterbalanced session (run on separate days), voice recordings were obtained while the participants read stories.




Contrary to the authors' predictions, they found no significant differences between the channeling and no-channeling conditions for any of the physiological measures, nor for the EEG analyzed in standard frequency bands (theta 3–7 Hz; alpha 8–12 Hz; beta 13–20 Hz and low gamma 21–40 Hz) across 64 electrodes. I'll note here that the data acquisition and analysis methods were top-notch. The senior author (Arnaud Delorme) developed the widely used EEGLAB toolbox for data analysis, which was described in one of the most highly cited articles in neuroscience.3
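To illustrate the kind of analysis summarized above, band power can be estimated from a simple periodogram by averaging spectral density within each frequency range. This is a minimal numpy sketch on synthetic data — not the authors' EEGLAB pipeline — and the sampling rate and signal are invented for the example:

```python
import numpy as np

def band_power(signal, fs, bands):
    """Mean power spectral density within each named frequency band,
    estimated from a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    return {name: psd[(freqs >= lo) & (freqs <= hi)].mean()
            for name, (lo, hi) in bands.items()}

# The bands reported in the study (Hz)
bands = {"theta": (3, 7), "alpha": (8, 12),
         "beta": (13, 20), "low_gamma": (21, 40)}

# Synthetic 4-second "recording" at 256 Hz, dominated by a 10 Hz
# (alpha-range) oscillation plus a little noise
fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

powers = band_power(eeg, fs, bands)
print(max(powers, key=powers.get))  # alpha dominates in this toy signal
```

Comparing such per-electrode band powers between conditions (channeling vs. no-channeling) is, in essence, the analysis that produced the null result.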

Modest differences in voice parameters were observed: the channeled readings were softer in volume and slower in pace. The authors acknowledged that the participants could have impersonated an alternate voice during the channeling segments, whether consciously or unconsciously.

So does this mean that channeling is a sham? The authors don't think so. Instead, they recommended further investigation: “future studies should include other measures such as EEG connectivity analyses, fMRI and biomarkers.”


Footnotes

1 This is a rather esoteric population, so I won't fault the researchers for having a small sample size.

2 “The channeler goes into a trance state at will (the depth of the trance may vary) and the disincarnate entity/spirit uses the channeler’s body with permission to communicate directly through the channeler's voice, body movements, etc. (rather than the channeler receiving information mentally or otherwise and then relaying what is being received).”

3 I was rather critical of a previous study by this research group, which was ultimately retracted from Frontiers in Neuroscience. See Scientific Study Shows Mediums Are Wrong 46.2% of the Time.


Reference

Wahbeh H, Cannard C, Okonsky J, Delorme A. (2019). A physiological examination of perceived incorporation during trance. F1000Research 8:67.



Bev Tull, the fake medium on Bad Girls.


Olfactory Attraction and Smell Dating


Smell Dating, an interactive exhibit by Tega Brain and Sam Lavigne


A conceptual art installation, an extended olfactory performance piece, an elaborate participatory project, or an actual smell-based dating service? Smell Dating is all of these and more!




How it works
  1. We send you a t-shirt.
  2. You wear the shirt for three days and three nights without deodorant.
  3. You return the shirt to us in a prepaid envelope.
  4. We send you swatches of t-shirts worn by a selection of other individuals.
  5. You smell the samples and tell us who you like.
  6. If someone whose smell you like likes the smell of you too, we'll facilitate an exchange of contact information.
  7. The rest is up to you.

My initial view of the project was based on a recent showing of the interactive exhibit, where the participants could sniff small swatches of cloth, rate the unknown wearer's attractiveness (UNATTRACTIVE — NEUTRAL — ATTRACTIVE), learn how others voted, and see basic background information about the wearer (e.g., 30 year old female bisexual pescatarian). The first two I sniffed were odorless, but then there was #8...

The art installation is part of Useless Press, “a publishing collective that creates eclectic Internet things.” I assumed it was an elaborate joke, not an actual matchmaking service, but the artists must have had a grant to implement the idea in real life.





In Shanghai, people signed up over a two week period and paid ¥100 to become a “member.”
Smell Dating @ Shanghai [culminated] in the Sweat Lab, a participatory installation event... Visitors are invited to volunteer in the Smell Dating Sweat Lab and intimately experience the smells of strangers. During this event we will prepare the smell samples from our members' t-shirts. Shirts will be meticulously cut up and batched to be sent back to Smell Dating members.

Smell Dating premiered in New York in March 2016 and received extensive press coverage, most of which took it seriously. Young female writers at The Guardian, Business Insider, Time, Racked, and a gay man at HuffPo tried out the service. The Buzzfeed reporter realized, “Yes, this is mostly a stunt-y gag” but also touched on the science behind smell and attraction. The health reporter at Time wrote about the underlying science in detail (e.g., major histocompatibility complex) and interviewed smell scientists, including Dr. Noam Sobel (founder of SmellSpace.com), Dr. Richard Doty (author of The Great Pheromone Myth), and Dr. Gary Beauchamp (Emeritus Director of the Monell Chemical Senses Center).

The creators of Smell Dating (Tega Brain and Sam Lavigne) consulted with olfactory scientists and provided an extensive reading list on the web site.

Most everyone agrees that odors evoke emotion, and the sense of smell has a unique relationship to autobiographical memory. But, as Richard Doty asks, do human pheromones exist?
While it is apparent that, like music and lighting, odors and fragrances can alter mood states and physiological arousal, is there evidence that unique agents exist, namely pheromones, which specifically alter such states?

It turns out that scientific opinion on this matter is decidedly mixed, even polarizing, as I'll discuss in the next post.


Reference

Doty RL. (2014). Human Pheromones: Do They Exist? In: Mucignat-Caretta C, editor. Neurobiology of Chemical Communication. Boca Raton (FL): CRC Press/Taylor & Francis; Chapter 19.




Smell Dating from Tega Brain.

Pheromone Friday



Pheromones, emitted chemicals that elicit a social response in members of the same species, have been most widely studied in insects as a mode of communication. In the insect world, pheromones can signal alarm, mark trails, control worker bee behavior, and elicit sexual behavior.

Sex pheromones are the chemicals that come to mind in popular lore. Do human beings secrete substances that are likely to attract potential mates? Unscrupulous players in the fragrance industry would like you to believe that's the case. Unable to attract women (or men)? There's a difference between marketing an intoxicating and sensual fragrance that's pleasing to the nose and snake oil such as:




Amazon even cautions prospective customers about SexyLife.





{BTW, humans lack a functional vomeronasal organ, the part of the accessory olfactory system that detects pheromones / chemosignals / non-volatile molecules (Petrulis, 2013).}


Don't we already know that human pheromones are a crock?

It depends on how you define pheromone, some would say.1 “In mammals [rodents], few definitive cases have been identified in which single pheromone compounds evoke robust sexual behaviours, which might reflect an important contribution of signature mixtures in sexual communication” (Gomez-Diaz & Benton 2013, The joy of sex pheromones). In rodents, reproductive responses to “odor blends” or chemosignals are heavily modulated by experience, as opposed to the instinctive and fixed behaviors elicited by pheromones in insects. The evidence supporting the existence of mammalian pheromones is so weak that Richard Doty has called it The Great Pheromone Myth.

If rats don't have “pheromones” per se, why look for them in humans? Tristram Wyatt, who believes that human pheromones probably exist, wrote a paper called The search for human pheromones: the lost decades. He criticized the literature on four androgen-related steroids (androstenone, androstenol, androstadienone and estratetraenol), saying it suffers from publication bias, small sample sizes, lack of replication, and commercial conflicts of interest. There is no bioassay-based evidence that these molecules are human pheromones, yet “the attraction of studies on androstadienone (AND) and/or estratetraenol (EST) seems unstoppable” (Wyatt, 2015).

{Curiously, the SexyLife ad accurately lists the putative male pheromones, although their depicted functions are pure fantasy.}

Unstoppable it is. Supporters of human pheromones have recently published positive results on male sexual cognition, male dominance perception, cross-cultural chemosignaling of emotions, and sex differences in the main olfactory system.2


Olfactory Attraction

On the other hand, a null finding from 2017 drew a lot of attention from popular media outlets and Science magazine, where the senior author stated: “I’ve convinced myself that AND and EST are not worth pursuing.” In that study, AND & EST had no effect on the participants' attractiveness ratings for photographs of opposite-sex faces (Hare et al., 2017).

The evolutionary basis of Smell Dating was given a cold shower by studies showing that the fresh (and odorless) armpit sweat of men and women, when incubated in vitro with bacteria that produce body odor, was rated identically on pleasantness and intensity (reviewed in Doty, 2014). Meanwhile, the day-old smelly armpit sweat of men was rated as equally unpleasant by men and women.3 Likewise, pleasantness and intensity ratings for female armpit sweat did not differ between men and women. This doesn't bode well for heterosexual dating...

Odors and fragrances are an important part of attraction, of course, but don't call them pheromones.


Footnotes

1 There is an accepted definition for "pheromone".

2 Since humans don't have an accessory olfactory system with its fun vomeronasal organ, the main olfactory system would have to do the pheromone-detecting work.

3 This could be due to larger apocrine glands, hairy armpits, and more carnivorous diets in men (Doty, 2014).


Further Reading

Scientific post in favor of human pheromones:
“Whether one chooses to believe in the existence of human pheromones or not, steroids clearly serve an essential olfactory signaling function that impacts broadly ranging aspects of the human condition from gender perception to social behavior to dietary choices.”

PET studies on AND, EST, and sexual orientation:

References

Doty RL. (2014). Human Pheromones: Do They Exist? In: Mucignat-Caretta C, editor. Neurobiology of Chemical Communication. Boca Raton (FL): CRC Press/Taylor & Francis; Chapter 19.

Gomez-Diaz C, Benton R. (2013). The joy of sex pheromones. EMBO Rep. 14(10): 874-83.

Hare RM, Schlatter S, Rhodes G, Simmons LW. (2017). Putative sex-specific human pheromones do not affect gender perception, attractiveness ratings or unfaithfulness judgements of opposite sex faces. R Soc Open Sci. 4(3):160831.

Petrulis A. (2013). Chemosignals, hormones and mammalian reproduction. Horm Behav. 63(5): 723-41.

Wyatt TD. (2015). The search for human pheromones: the lost decades and the necessity of returning to first principles. Proc Biol Sci. 282(1804):20142994.


Computational Psychiatry, Self-Care, and The Mind-Body Problem

Schematic example of how the “mind” (cerebral cortex) is connected to the “body” (adrenal gland) - modified from Fig. 1 (Dum et al., 2016):
“Modern medicine has generally viewed the concept of psychosomatic disease with suspicion. This view arose partly because no neural networks were known for the mind, conceptually associated with the cerebral cortex, to influence autonomic and endocrine systems that control internal organs.”

Psychosomatic illnesses are typically seen in pejorative terms — it's all in your head so it must not be real! Would a known biological mechanism lessen the stigma? For over 40 years, Dr. Peter Strick and his colleagues have conducted careful neuroanatomical tracing studies of motor and subcortical systems in the primate brain. A crucial piece of this puzzle requires detailed maps of the anatomical connections, both direct and indirect. How do the frontal lobes, which direct our thoughts, emotions, and movements, influence the function of peripheral organs?

In their new paper, Dum, Levinthal, and Strick (2019) revisited their 2016 work. The adrenal medulla (within the adrenal gland) secretes the stress hormones adrenaline and noradrenaline. To trace the terminal projections back to their origins in the spinal cord and up to the brain, the rabies virus was injected in the target tissue. The virus is taken up at the injection site and travels backward (in the retrograde direction) to identify neurons that connect to the adrenal medulla with one synapse: sympathetic preganglionic neurons in the spinal cord. Longer survival times allow the virus to cross second-, third-, and fourth-order synapses. The experiments revealed that cortical influences on the adrenal originate from networks involved in movement, cognition, and affect.

Modified from Fig. 5 (Dum et al., 2016). Pathways for top-down cortical influence over the adrenal medulla. Motor areas are filled yellow, and medial prefrontal areas are filled blue. (A) lateral surface. (B) medial wall.

The mind–body problem: Circuits that link the cerebral cortex to the adrenal medulla

“The largest influence originates from a motor network that includes all seven motor areas in the frontal lobe. ... The motor areas provide a link between body movement and the modulation of stress. The cognitive and affective networks are located in regions of cingulate cortex. They provide a link between how we think and feel and the function of the adrenal medulla.”
Based on these anatomical results, the authors concluded with a series of speculative links to alternative medicine practices, including yoga and Pilates; smiling to make yourself feel better; and back massage for stress reduction.
Because of this arrangement, we speculate that there is a link between the cortical control of 'core' muscles and the regulation of sympathetic output. This association could provide a neural explanation for the use of core exercises, such as yoga and Pilates, to ameliorate stress.
  • The orofacial representation of M1 provides a small focus of output to the adrenal medulla.
This output may provide a link between the activation of facial muscles, as in a 'standard' or 'genuine' smile, and a reduction in the response to stress.
  • Another large motor output region is in postcentral cortex, corresponding to the sensory representation of the trunk and viscera in primary somatosensory cortex.
This output may provide a neural substrate for the reduction of anxiety and stress that follows passive stimulation of back muscles during a massage.
I was a bit surprised to see these suggestions in a high-impact journal. Which leads us to the next topic.




Self-Care and Its Discontents

What can be bad about trying to reduce daily stress and improve your own health?

A recent paper by Jonathan Kaplan (Self-Care as Self-Blame Redux: Stress as Personal and Political)1 is critical of the way the self-care movement shifts the burden of alleviating stress-related maladies from society to the individual. Economic disadvantage is disproportionately associated with poor health outcomes, to state the obvious. Kaplan argues that focusing on individual self-care blames the victim for their response to a chronically stressful environment, rather than focusing on ways to effect structural changes to improve living conditions. In his efforts to highlight social inequities as a cause of stress-related illnesses, Kaplan goes too far (in my view) by discounting all self-help practices that aim to preserve health.

It can be empowering for patients to be active participants in their health care, whether at the doctor's office, in the hospital, or at home. One great example is CREST.BD: A Collaborative Research and Knowledge Exchange Network at the University of British Columbia. They've established the Bipolar Wellness Centre (online resource to support evidence-based bipolar disorder self-management) and developed a Quality of Life Tool (free web-based tool to help people with bipolar disorder and healthcare providers use CREST.BD’s bipolar-specific quality of life scale).2

Then we have the wellness industry. Depending on what pop health source you read, there are 5, 45, 25, 12, 10, 10, 20 (etc.) essential self-care practices that you can incorporate into your daily routine (if you have the time and money). Wellness lifestyle insta-brands of the rich and famous hold up an impossible standard for upper-middle class white women [mostly]3 to attain. Perhaps our friendly neuroanatomists want to work on their core strength — they can follow @sianmarshallpilates for Pilates inspiration!


Back to Kaplan's point about blame...




It's easy to urge your followers to “stay happy!” and “move on!” if you have a net worth of $250 million, and if you don't have a psychiatric diagnosis. These 'Six Things' occupy a place in the pantheon of victim-blaming. People with mental illnesses are not effortlessly able to “stay happy!” or “move on!” or stop repetitive hand-washing (OCD) or avoid reckless spending (manic episode). And this is NOT their fault. And it doesn't make them mentally weak.

Most psychiatric disorders, in essence, involve thoughts, emotions, and/or behaviors that spin out of control. Here, I'm using control in a colloquial (but not absolute) sense, meaning: it's frequently difficult to stop a downward spiral once it gets started. Although overly simplistic...
  • Major depression involves thoughts (ruminations) and feelings of worthlessness and utter bleakness that spin out of control.
  • Generalized anxiety disorder involves thoughts (worry) about an imagined awful future that spin out of control.
  • Panic disorder involves a thwarted escape or safety response to perceived danger that has spun out of control.
  • Mania involves elevated mood and intense motivation for reward that spin out of control.
  • Obsessive-compulsive disorder involves maladaptive repetitive behaviors (that spin out of control) meant to quell maladaptive worrisome thoughts that have spun out of control.
  • Borderline personality disorder involves overly intense negative emotions that spin out of control and lead to self-destructive behaviors.
If people were able to control all this (without external intervention), the condition wouldn't reach the level of “disorder” — causing functional impairment and (usually) significant distress (but not always; e.g., people in the midst of a full-blown manic episode lack insight). I know this cartoonish level of description can raise the specter of free will and responsibility, especially in the context of criminal behavior. Are people with antisocial personality disorder not accountable for their horrible deeds? This timeless debate is beyond the scope of this post.


Computational Psychiatry

Or you can get mathematically fancy and formalize every single mental illness as a result of “faulty Bayesian priors”. Meaning, the brain's own “prediction machine” has incorporated inaccurate assumptions about the self or others or how the world works. A disordered Bayesian brain also ignores empirical evidence that contradicts these assumptions. The process of active inference — the brain's way of minimizing “surprise” when reconciling a top-down internal model and bottom-up external input — has gone awry (Prosser et al., 2018; Linson & Friston, 2019). Although a sense of agency (or control) is a critical part of the active inference framework, I don't think an impairment in active inference is a choice. Or that one has control over this impairment. In fact, there's a Bayesian formulation of behavioral control (or lack thereof) that considers depression in terms of pessimistic, overly generalized priors, i.e. the depressed person assumes a lack of control over their circumstances.

Learned Helplessness (Huys & Dayan, 2009).


Using this mathematical model, you can confound the “stay happy!” crowd when you use all 24 equations to explain the concept of learned helplessness and its relevance to human depression.
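The flavor of such models can be conveyed with a toy beta-Bernoulli sketch (my own illustration, not the Huys & Dayan model, which really does need all 24 equations): an agent updates its believed probability that outcomes are controllable, and a strongly pessimistic, overly general prior barely budges even when most actions succeed.

```python
def believed_control(prior_controllable, prior_uncontrollable,
                     successes, failures):
    """Posterior mean of P(I can control outcomes) under a
    Beta-Bernoulli model; the priors are pseudo-counts of past
    controllable / uncontrollable experiences."""
    a = prior_controllable + successes
    b = prior_uncontrollable + failures
    return a / (a + b)

# Same evidence in both cases: 8 of 10 recent actions worked.
flat = believed_control(1, 1, successes=8, failures=2)          # 0.75
pessimistic = believed_control(1, 99, successes=8, failures=2)  # ~0.08

print(round(flat, 2), round(pessimistic, 2))
```

The pessimistic agent still believes almost nothing it does matters — a cartoon of the helplessness-style priors described above.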

Maybe one day, Bayesians will have a stable of Instagram influencers. Get to work on your branding ideas!


Footnotes

1 Thanks to Neuroskeptic for tweeting about this paper, along with the quote that individuals may "end up being seen (and seeing themselves) as responsible for their own failures to adequately ameliorate the stresses that they suffer."

2 Full Disclosure: my late wife was a Peer Researcher with CREST.BD.

3 While searching for health and wellness Instagram influencers, I was pleasantly surprised to find @hellolaurenash (a Chicago-based blogger, editor, and yoga and meditation teacher who founded a holistic wellness platform for marginalized communities) and @mynameisjessamyn (a body-positive yoga expert who wants to change the largely white and thin face of yoga and make the practice more accessible to all). I know absolutely nothing about the prevalence of diversity among health and wellness Instagram influencers, just like I know absolutely nothing about Computational Psychiatry.


References

Dum RP, Levinthal DJ, Strick PL. (2016). Motor, cognitive, and affective areas of the cerebral cortex influence the adrenal medulla. Proceedings of the National Academy of Sciences 113(35): 9922-9927.

Dum RP, Levinthal DJ, Strick PL. (2019). The mind–body problem: Circuits that link the cerebral cortex to the adrenal medulla. Proceedings of the National Academy of Sciences 116(52): 26321-26328.

Friston K, Schwartenbeck P, FitzGerald T, Moutoussis M, Behrens T, Dolan RJ. (2013). The anatomy of choice: active inference and agency. Frontiers in Human Neuroscience 7:598.

Huys QJ, Dayan P. (2009). A Bayesian formulation of behavioral control. Cognition 113(3):314-328.

Kaplan J. (2019). Self-Care as Self-Blame Redux: Stress as Personal and Political. Kennedy Inst Ethics J. 29(2):97-123.  PDF.

Linson A, Friston K. (2019). Reframing PTSD for computational psychiatry with the active inference framework. Cognitive Neuropsychiatry 24(5):347-368.

Prosser A, Friston KJ, Bakker N, Parr T. (2018). A Bayesian Account of Psychopathy: A Model of Lacks Remorse and Self-Aggrandizing. Computational Psychiatry 2:92-114.

Smash the wellness industry

... Wellness is a largely white, privileged enterprise catering to largely white, privileged, already thin and able-bodied women, promoting exercise only they have the time to do and Tuscan kale only they have the resources to buy.

Finally, wellness also contributes to the insulting cultural subtext that women cannot be trusted to make decisions when it comes to our own bodies, even when it comes to nourishing them. We must adhere to some sort of “program” or we will go off the rails.

People Neurology: Bennet versus Ann feud captured live!



In a People Neurology exclusive, contentious footage of Dr. Ann McKee and Dr. Bennet Omalu was captured at the 5th Annual Chronic Traumatic Encephalopathy Conference. Dr. Omalu was not invited due to their long-standing animosity, but he crashed the party anyway during Dr. McKee's highly anticipated Keynote. While she was presenting quantitative proteomic analysis of the postmortem brain tissue of Aaron Hernandez, Dr. Omalu stood up and admonished the entire audience: “Remember, I discovered CTE! [NOTE: this is false.1] You will all answer for this on judgment day.”

The crowd gasped...
 
“Don't believe the blonde white woman who claimed she discovered CTE!”

“Ha. I never claimed I discovered CTE,” Dr. McKee snorted.
 
“His criteria don’t make sense to me! I don’t know what he’s doing.”

“The final decision is still with the doctor who is examining. Not every CTE case will have all those [NINDS] guidelines,” Dr. Omalu retorted.

“His criteria for diagnosing CTE are all over the map,” McKee said.

“This is the problem. People lump me with him, and they lump my work with him, and my work is nothing like this.”




The acrimonious exchange, the conference, and the ridiculous magazine cover are all fictitious, but the quotes are faithful renditions reported by the Washington Post in a scathing critique:
From scientist to salesman
How Bennet Omalu, doctor of ‘Concussion’ fame, built a career on distorted science

. . .
Nearly 15 years [after his first paper], Omalu has withdrawn from the CTE research community and remade himself as an evangelist, traveling the world selling his frightening version of what scientists know about CTE and contact sports. In paid speaking engagements, expert witness testimony and in several books he has authored, Omalu portrays CTE as an epidemic and himself as a crusader, fighting against not just the NFL but also the medical science community, which he claims is too corrupted to acknowledge clear-cut evidence that contact sports destroy lives.

. . .
But across the brain science community, there is wide consensus on one thing: Omalu, the man considered by many the public face of CTE research, routinely exaggerates his accomplishments and dramatically overstates the known risks of CTE and contact sports, fueling misconceptions about the disease, according to interviews with more than 50 experts in neurodegenerative disease and brain injuries, and a review of more than 100 papers from peer-reviewed medical journals.

Much of the reporting isn't new: it was widely known four years ago that Omalu exaggerated his contributions to the field (including the “discovery” of CTE), and that he blasted his critics:

“There is a good deal of jealousy and envy in my field. For me to come out and discover the paradigm shift, it upset some people. I am well aware of that.”

What was new is that respected experts publicly questioned Omalu's past work and his widely disseminated claims.

The biggest revelation was that the histology images in one influential paper did not show CTE, and did not appear to be from the brain of the subject in question.
McKee and other experts confirmed, in interviews, something that long has been an open secret in the CTE research community: Omalu’s paper on Mike Webster — the former Pittsburgh Steelers great who was the first NFL player discovered to have CTE — does not depict or describe the disease as the medical science community defines it.

On the more technical side, the WaPo article provided a basic overview of the CTE pathology and what it does to the brain, along with helpful graphics.

Our sister station, Netflix Neurology, will review Killer Inside: The Mind of Aaron Hernandez (the former NFL player and convicted murderer who died by suicide while incarcerated).



Ann McKee with the brain of Aaron Hernandez,
which showed extensive CTE findings


Footnote

1 In 1928, Harrison S. Martland published PUNCH DRUNK, a paper about boxers with brain damage. And the CTE syndrome was first named by Macdonald Critchley in 1949: Punch-drunk syndromes: The chronic traumatic encephalopathy of boxers.

Netflix Neurology: Inside the Brain of Aaron Hernandez (for a few seconds)


from Dr. Ann McKee / Boston University


A recent addition to the Netflix “making a murderer” franchise is Killer Inside: The Mind of Aaron Hernandez. At the end of any such story, there is no single answer as to what “made” the murderer.

The story of Aaron Hernandez is still in the public eye because of his fame as a professional football player for the New England Patriots (2010-2012). He was so successful that he signed a 5 year, $40 million contract with the team in August 2012. His alleged involvement in a July 2012 double homicide came to light in 2014, after he had been charged with the June 2013 murder of his friend, Odin Lloyd. For the latter crime, he was found guilty and sentenced to life without parole. He was acquitted of the double homicide, but two days later he hanged himself with a bed sheet in his jail cell.

His brain was donated to the Boston University CTE Center. From extensive coverage in the New York Times and elsewhere, we already knew that the autopsy revealed extensive chronic traumatic encephalopathy (CTE).

If you hope to gain insight into repetitive head injury, brain pathology, and violent behavior from watching this documentary, you'll be disappointed. The 3-part series spent 5 minutes on CTE and 3 hours 15 minutes on everything else: his childhood, violent father, hurtful mother, immense athletic talent, football career, ex-con friends, girlfriend and daughter, heavy drug use, street life, weapons collection, paranoia, alleged shootings, alleged same-sex relationships, arrests, murder trials, conviction, appeal, recorded jailhouse telephone conversations, outwardly professed homophobia, death by suicide, and numerous interviews with friends and former players.

Much of this material was prurient and unnecessary, especially the speculations about his hidden sexual orientation and how this might have fueled his anger.


Prosecution Considered a “Fear of Outing” Motive

This argument was preposterous and a rarity in the history of violence involving the LGBTQ community: Hernandez supposedly feared that his friend would reveal his secret life as a bisexual man, so he killed Lloyd to preserve his image as a hyper-masculine heterosexual man. This baffling obsession with sexuality is distracting and dangerous, as aptly explained by D. Watkins:
There's no evidence proving that Hernandez's sexuality made him a killer. So why is the newly resurfaced Hernandez conversation centered around his sex life? Probably because sex is juicy, forbidden and learning that Hernandez may have been gay provides the consumers with content for endless hours of gossip about what public figures do in their personal lives.
Fortunately, this argument was not allowed at trial.


The Potential Role of CTE Was an Afterthought

A Rolling Stone interview with director Geno McDermott revealed the project began as a 90-minute documentary initially presented at DOC NYC in 2018. Netflix was interested in expanding the doc into a multi-part series. The gay angle emerged when high school friend/lover Dennis SanSoucie agreed to an on-camera interview. Other additions included newly available recordings of prison phone calls, and a coda about CTE, the neurodegenerative disease that may be associated with repeated concussions in high-impact sports (in concert with other poorly delineated factors).

At the very end of Killer Inside, self-serving celebrity defense attorney Jose Baez spoke about the family's decision to donate Aaron's brain to the CTE Center at Boston University.



Dr. Ann McKee with the brain of Aaron Hernandez


Dr. McKee said Hernandez had very advanced disease for a 27-year-old:
...and not only was it advanced microscopically, especially in the frontal lobes which are very important for decision-making, judgment and cognition, this would be the first case we've ever seen of that kind of damage in such a young individual.

I can say this is substantial damage that undoubtedly took years to develop. This is not something that is developed acutely or just in the last several years. I imagine these changes had been evolving over maybe even as long as a decade.



Then we see interviews with non-experts, who make causal connections between Aaron's CTE and his erratic, violent, tragic behavior. Worst of all is sleazy lawyer Jose Baez, who drummed up business for other players to sue the NFL under false pretenses (there is currently no way to accurately diagnose CTE in living persons).

Why didn't Aaron's brother, who grew up with the same abusive father and played football for many years, become a murderer? I'll let former NFL player Jermaine Wiggins have the last word:
My thoughts to people who think that CTE was somehow involved, I think that's an absolute cop out. There are thousands of former NFL players out there that might have dealt with concussions, I've dealt with them. So to use that as a cop out? I'm not... no, no. C'mon, we're smarter than that, people.

Further Reading

Is CTE Detectable in Living NFL Players?
this 2013 post is still true today

Brief Guide to the CTE Brains in the News. Part 1: Aaron Hernandez