Deconstructing the most sensationalistic recent findings in Human Brain Imaging, Cognitive Neuroscience, and Psychopharmacology



    Everyone forgets. As we grow older, suffer a brain injury or a stroke, or develop a neurodegenerative disease, we forget much more often. Is there a technological intervention that can help us remember? That is the $50 million question funded by DARPA's Restoring Active Memory (RAM) Program, which has focused on intracranial electrodes implanted in epilepsy patients to monitor seizure activity.

    Led by Michael Kahana's group at the University of Pennsylvania and including nine other universities, agencies, and companies, this Big Science project is trying to establish a “closed-loop” system that records brain activity and stimulates appropriate regions when a state indicative of poor memory function is detected (Ezzyat et al., 2018).

    Initial “open-loop” efforts targeting medial temporal lobe memory structures (entorhinal cortex, hippocampus) were unsuccessful (Jacobs et al., 2016). In fact, direct electrical stimulation of these regions during encoding of spatial and verbal information actually impaired memory performance, unlike an initial smaller study (Suthana et al., 2012).1

    {See Bad news for DARPA's RAM program: Electrical Stimulation of Entorhinal Region Impairs Memory}


    However, during the recent CNS symposium on Memory Modulation via Direct Brain Stimulation in Humans, Dr. Suthana suggested that “Stimulation of entorhinal white matter and not nearby gray matter was effective in improving hippocampal-dependent memory...” 2

    {see this ScienceNews story}


    Enter the Lateral Temporal Cortex

    Meanwhile, the Penn group and their collaborators moved to a different target region, which was also discussed in the CNS 2018 symposium: “Closed-loop stimulation of temporal cortex rescues functional networks and improves memory” (based on Ezzyat et al., 2018).


    Fig. 4 (modified from Ezzyat et al., 2018). Horizontal section. Stimulation targets showing numerical increase/decrease in free recall performance are shown in red/blue. Memory-enhancing sites clustered in the middle portion of the left middle temporal gyrus.


    Twenty-five patients performed a memory task in which they were shown a list of 12 nouns, followed by a distractor task, and finally a free recall phase, where they were asked to remember as many of the words as they could. The participants went through a total of 25 rounds of this study-test procedure.


    Meanwhile, the first three rounds were “record-only” sessions, which the investigators used to train a classifier: a pattern of brain activity that predicted whether or not the patient would later recall each word at better than chance (AUC = 0.61, where chance = 0.50).3 The classifier relied on activity across all electrodes that were placed in an individual patient.
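For concreteness, an AUC can be read as the probability that a randomly chosen recalled word receives a higher classifier score than a randomly chosen forgotten word. Here is a minimal sketch with made-up numbers (the actual classifier used multivariate spectral features from all implanted electrodes; nothing below is from the paper):

```python
# Illustrative AUC computation for a recall classifier (made-up data;
# the real classifier used intracranial spectral features).

def auc(labels, scores):
    """Probability that a randomly chosen recalled word (label 1) gets a
    higher score than a randomly chosen forgotten word (label 0);
    ties count as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]   # recalled
    neg = [s for l, s in zip(labels, scores) if l == 0]   # forgotten
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted recall probabilities for 8 studied words:
labels = [1, 0, 1, 1, 0, 0, 1, 0]                  # 1 = later recalled
scores = [0.7, 0.4, 0.6, 0.8, 0.5, 0.3, 0.5, 0.6]

print(round(auc(labels, scores), 3))  # chance performance would be 0.5
```

A classifier at AUC = 0.61 is only modestly better than this chance baseline of 0.5, which is why footnote 3 calls it a proof of concept.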


    Memory blocks #4-25 alternated between Stimulation (Stim) and No Stimulation (NoStim) lists. In Stim blocks, 0.5-2.25 mA stimulation was delivered for 500 ms whenever the classifier predicted a recall probability below 0.5 during word presentation. In NoStim lists, stimulation was not delivered on analogous trials, and the comparison between those two conditions comprised the main contrast shown below.
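The closed-loop trigger rule can be sketched roughly as follows. This is a hedged Python sketch with invented names and a placeholder amplitude; the real system decoded spectral features from intracranial EEG in real time, and only the 0.5 threshold, 500 ms duration, and 0.5-2.25 mA range come from the description above:

```python
# Sketch of the closed-loop rule (all names are illustrative, not the
# authors' code). Stimulation fires only when predicted recall is poor
# AND the current list is a Stim list.
STIM_THRESHOLD = 0.5      # trigger when predicted recall prob falls below this
STIM_DURATION_MS = 500
STIM_AMPLITUDE_MA = 1.0   # placeholder within the 0.5-2.25 mA range used

def present_word(word, classifier_prob, stim_list, log):
    """During word presentation, stimulate if the classifier predicts poor
    encoding and this is a Stim list; on NoStim lists, just mark the
    matched trials where stimulation *would* have been delivered."""
    would_stim = classifier_prob < STIM_THRESHOLD
    if would_stim and stim_list:
        log.append((word, "stim", STIM_AMPLITUDE_MA, STIM_DURATION_MS))
    elif would_stim:
        log.append((word, "matched-nostim", None, None))  # comparison trial
    else:
        log.append((word, "no-trigger", None, None))

log = []
present_word("APPLE", 0.32, stim_list=True, log=log)   # poor encoding -> stim
present_word("CHAIR", 0.32, stim_list=False, log=log)  # matched NoStim trial
present_word("RIVER", 0.71, stim_list=True, log=log)   # good encoding -> no trigger
print(log)
```

The "matched-nostim" trials are what make the main contrast possible: stimulated words are compared against words that triggered the same poor-encoding prediction but went unstimulated.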


    Fig. 3a (modified from Ezzyat et al., 2018). Stimulation delivered to lateral temporal cortex targets increased the probability of recall compared to matched unstimulated words in the same subject (P < 0.05) and stimulation delivered to Non-lateral temporal targets in an independent group (P < 0.01).


    The authors found that lateral temporal cortex stimulation increased the relative probability of item recall by 15% (using a log-binomial model to estimate the relative change in recall probability). {But if you want to see all of the data, peruse the Appendix below. Overall recall isn't that great...}
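To see what a 15% relative increase amounts to in absolute terms, here is a back-of-envelope calculation with hypothetical recall rates (the authors' estimate came from a log-binomial regression across subjects, not from this arithmetic):

```python
# Relative vs. absolute change in recall probability (hypothetical numbers;
# the paper reports only the 15% relative figure, from a log-binomial model).
nostim_recall = 0.20          # suppose 20% of words recalled without stimulation
relative_increase = 0.15      # the reported 15% relative improvement

stim_recall = round(nostim_recall * (1 + relative_increase), 4)
print(stim_recall)                          # 0.23
print(round(stim_recall - nostim_recall, 4))  # absolute gain: 0.03 (3 points)
```

In other words, a 15% relative boost on a low baseline is an absolute gain of only a few percentage points of recall, which is why the gains are later described as quite small.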

    Lateral temporal cortex (n=18) meant MTG, STG, and IFG (mostly on the left). Non-lateral temporal cortex (n=11) meant elsewhere (see Appendix below). The improvements were greatest with stimulation in the middle portion of the left middle temporal gyrus. There are many reasons for poor encoding; one could be that subjects were not paying enough attention, but the authors didn't have the electrode coverage to test that explicitly. The MTG is thought to be critical for semantic representations and language comprehension in general (Turken & Dronkers, 2011), which leads me to believe that electrical stimulation was enhancing the semantic encoding of the words.

    Thus, my interpretation of the results is that stimulation may have boosted semantic encoding of the words, given the nature of the stimuli (words, obviously), the left lateralization with a focus in MTG, and the lack of an encoding task. The verbal memory literature clearly demonstrates that when subjects have a deep semantic encoding task (e.g., living/non-living decision), compared to shallow orthographic (are there letters that extend above/below?) or phonological tasks, recall and recognition are improved. Which led me to ask some questions, and one of the authors kindly replied (Dan Rizzuto, personal communication). 4

    1. Did you ever have conditions that contrasted different encoding tasks? Here I meant to ask about semantic vs. orthographic encoding (because the instructions were always to “remember the words” with no specific encoding task).

    • We studied three verbal learning tasks (uncategorized free recall, categorized free recall, paired associates learning) and one spatial navigation task during the DARPA RAM project. We were able to successfully decode recalled / non-recalled words using the same classifier across the three different verbal memory tasks, but we never got sufficient paired associates data to determine whether we could reliably increase memory performance on this task.

    2. Did you ever test nonverbal stimuli (not nameable pictures, which have a verbal code, but visual-spatial stimuli)? Here I was trying to assess the lexical-semantic nature of the effect.

    • With regard to the spatial navigation task, we did observe a few individual patients with LTC stimulation-related enhancement, but we haven't yet replicated the effect across the population.

    Although this method may have therapeutic implications in the future, at present it is too impractical, and the gains were quite small. Nonetheless, it is an accomplished piece of work to demonstrate closed-loop memory enhancement in humans.


    Footnotes

    1 Since that time, however, the UCLA group (Titiz et al., 2017) has reported that theta-burst microstimulation of the right entorhinal area during learning significantly improved subsequent memory specificity for novel portraits; participants were able both to recognize previously-viewed photos and reject similar lures. These results suggest that microstimulation with physiologic level currents—a radical departure from commonly used deep brain stimulation protocols—is sufficient to modulate human behavior and provides an avenue for refined interrogation of the circuits involved in human memory.

    2 Unfortunately, I was running between two sessions and missed that particular talk.

    3 This level of prediction is more like a proof of concept and would not be clinically acceptable at this point.

    4 Thanks also to Youssef Ezzyat and Cory Inman, whom I met at the symposium.


    References

    Ezzyat Y, Wanda PA, Levy DF, Kadel A, Aka A, Pedisich I, Sperling MR, Sharan AD, Lega BC, Burks A, Gross RE, Inman CS, Jobst BC, Gorenstein MA, Davis KA, Worrell GA, Kucewicz MT, Stein JM, Gorniak R, Das SR, Rizzuto DS, Kahana MJ. (2018). Closed-loop stimulation of temporal cortex rescues functional networks and improves memory. Nat Commun. 9(1): 365.

    Jacobs, J., Miller, J., Lee, S., Coffey, T., Watrous, A., Sperling, M., Sharan, A., Worrell, G., Berry, B., Lega, B., Jobst, B., Davis, K., Gross, R., Sheth, S., Ezzyat, Y., Das, S., Stein, J., Gorniak, R., Kahana, M., & Rizzuto, D. (2016). Direct Electrical Stimulation of the Human Entorhinal Region and Hippocampus Impairs Memory. Neuron 92(5): 983-990.

    Suthana, N., Haneef, Z., Stern, J., Mukamel, R., Behnke, E., Knowlton, B., & Fried, I. (2012). Memory Enhancement and Deep-Brain Stimulation of the Entorhinal Area. New England Journal of Medicine 366(6): 502-510.

    Titiz AS, Hill MRH, Mankin EA, M Aghajan Z, Eliashiv D, Tchemodanov N, Maoz U, Stern J, Tran ME, Schuette P, Behnke E, Suthana NA, Fried I. (2017). Theta-burst microstimulation in the human entorhinal area improves memory specificity. Elife Oct 24;6.

    Turken AU, Dronkers NF. (2011). The neural architecture of the language comprehension network: converging evidence from lesion and connectivity analyses. Front Syst Neurosci. Feb 10;5:1.


    Appendix (modified from Supplementary Table 1) 




    In the table above, Stim and NoStim recall percentages are for ALL words in the blocks. But only half of the words in each Stim list were actually stimulated, so this comparison is conservative. The numbers improve slightly if you compare just the stimulated words with the matched non-stimulated words. Not all subjects exhibited a significant within-subject effect, but the effect is reliable across the population (Fig. 3a).




    Eve Marder, Alona Fyshe, Jack Gallant, David Poeppel, Gary Marcus
    image by @jonasobleser


    What Will Solve the Big Problems in Cognitive Neuroscience?

    That was the question posed in the Special Symposium moderated by David Poeppel at the Boston Sheraton (co-sponsored by the Cognitive Neuroscience Society and the Max-Planck-Society). The format was four talks by prominent experts in (1) the complexity of neural circuits and neuromodulation in invertebrates; (2) computational linguistics and machine learning; (3) human neuroimaging/the next wave in cognitive and computational neuroscience; and (4) language learning/AI contrarianism. These were followed by a lively panel discussion and a Q&A session with the audience. What a great format!


    We already knew the general answer before anyone started speaking.


    But I believe that Dr. Eve Marder, the first speaker, posed the greatest challenges to the field of cognitive neuroscience, objections that went mostly unaddressed by the other speakers. Her talk was a treasure trove of quotable witticisms (paraphrased):
    • How much ambiguity can you live with in your attempt to understand the brain? For me, I get uncomfortable with anything more than 100 neurons
    • If you're looking for optimization (in [biological] neural networks), YOU ARE DELUSIONAL!
    • Degenerate mechanisms produce the same changes in behavior, even in a 5-neuron network...
    • ...so Cognitive Neuroscientists should be VERY WORRIED


    Dr. Marder started her talk by expressing puzzlement about why she would be asked to speak on such a panel, but she gamely agreed. She initially expressed some ideas that almost everyone endorses:
    • Good connectivity data is essential
    • Simultaneous recordings from many neurons are a good idea [but how many is enough?]
    But then she turned to the nightmare of trying to understand large-scale brain networks, as is the fashion these days in human fMRI and connectivity studies.
    • It's not clear what changes when circuits get big
    • Assuming a “return to baseline” is always hiding a change that can be cryptic
    • On the optimization issue... nervous systems can't optimize for one situation if it makes them unable to deal with other [unexpected] situations.
    • How does degeneracy relieve the tyranny?
    No one knows...

    Dr. Marder was also a speaker at the Canonical Computation in Brains and Machines meeting in mid-March (h/t @neuroecology), and her talk from that conference is available online.

    I believe the talks from the present symposium will be on the CNS YouTube channel as well, and I'll update the post if/when that happens.

    Speaking of canonical computation, now I know why Gary Marcus was apoplectic at the thought of “one canonical cortical circuit to rule them all.” More on that in a moment...


    The next speaker was Dr. Alona Fyshe, who spoke about computational vision. MLE, MAP, ImageNet, CNNs. I'm afraid I can't enlighten you here. Like everyone else, she thought theory vs. data is a false dichotomy. Her memorable tag line was “Kill Your Darlings.” At first I thought this meant delete your best line [of code? of your paper?], but in reality “our theories need to be flexible enough to adapt to data” (always follow @vukovicnikola #cns2018 for the best real-time conference coverage).


    Next up was Dr. Gary Marcus, who started out endorsing the famous Jonas and Kording (2017) paper Could a Neuroscientist Understand a Microprocessor? which suggested that current data analysis methods in neuroscience are inadequate for producing a true understanding of the brain. Later, during the discussion, Dr. Jack Gallant quipped that the title of that paper should have been “Neuroscience is Hard” (on Twitter, @KordingLab thought this was unfair). For that matter, Gallant told Marcus, “I think you just don't like the brain.” [Gallant is big on data, but not mindlessly]



    image via @vukovicnikola


    This sparked a lively debate during the panel discussion and the Q&A.


    Anyway, back to Marcus. “Parsimony is a false god,” he said. I've long agreed with this sentiment: especially when it comes to the brain, the simplest explanation isn't always true. Marcus is pessimistic that deep learning will lead to great advances in explaining neural systems (or AI). It's that pesky canonical computation again. The cerebral cortex (and the computations it performs) isn't uniform across regions (Marcus et al., 2014).

    This is not a new idea. In my ancient dissertation, I cited Swindale (1990) and said:
    Swindale (1990) argues that the idea of mini-columns and macro-columns was drawn on insufficient data. Instead, the diversity of cell types in different cortical areas may result in more varied and complex organization schemes which would adequately reflect the different types of information stored there [updated version would be “types of computations performed there”].1

    Finally, Dr. Jack Gallant came out of the gate saying the entire debate is silly, and that we need both theory and data. But he also thinks it's silly to believe we'll get there with theory alone. We need to build better measurement tools, stop faulty analysis practices, and develop improved experimental paradigms. He clearly favors the collection of more data, but in a refined way: for the moment, collect large, rich, naturalistic data sets using existing technology.

    And remember, kids, “the brain is a horror show of maps.”



     image via @vukovicnikola



    Big Data AND Big Theory: Everyone Agrees (sorta)

    Eve Marder– The Importance of the Small for Understanding the Big

    Alona Fyshe– Data Driven Everything

    Gary Marcus– Neuroscience, Deep Learning, and the Urgent Need for an Enriched Set of Computational Primitives

    Jack Gallant– Which Presents the Biggest Obstacle to Advances in Cognitive Neuroscience Today: Lack of Theory or Lack of Data?



    Gary Marcus talking over Jack Gallant. Eve Marder is out of the frame.
    image by @CogNeuroNews


    Footnote

    1 Another quote from the young Neurocritic:
    As finer analyses are applied to both local circuitry and network properties, our theoretical understanding of neocortical operation may require further revision, if not total replacement with other metaphors. At our current state of knowledge, a number of different conceptual frameworks can be overlaid on the existing data to derive an order that may not be there. Or conversely, the data can be made to fit into one's larger theoretical view.






    How is semantic knowledge represented and stored in the brain? A classic way of addressing this question is via single-case studies of patients with brain lesions that lead to a unique pattern of deficits. Agnosia is the inability to recognize some class (or classes) of entities such as objects or persons. Agnosia in the visual modality is most widely studied, but agnosias in the auditory and olfactory modalities have been reported as well. A key element is that basic sensory processing is intact, but higher-order recognition of complex entities is impaired.

    Agnosias that are specific to items in a particular category (e.g., animals, fruits/vegetables, tools, etc.) are sometimes observed. An ongoing debate concerns whether some category-specific dissociations fall out along sensory/functional lines (the Warrington view) or along domain-specific lines (the Caramazza view).1 The former suggests that knowledge of living things is more reliant on vision (you don't pick up and use an alligator), while knowledge of tools is more reliant on how you use them. The latter hypothesis suggests that evolutionary pressures led to distinct neural systems for processing different categories of objects.2

    Much less work has examined how nonverbal auditory knowledge is represented in the brain. A new paper reports on a novel category-specific deficit in an expert bird-watcher who developed semantic dementia (Muhammed et al., 2018). Patient BA lost the ability to identify birds by their songs, but not by their appearance. As explained by the authors:

    BA is a dedicated amateur birder with some 30 years’ experience, including around 10 weeks each spring spent in birdwatching expeditions and over the years had also regularly attended courses in bird call recognition, visual identification and bird behaviour. He had extensive exposure to a range of bird species representing all major regions and habitats of the British Isles. He had noted waning of his ability to name birds or identify them from their calls over a similar timeframe to his evolving difficulty with general vocabulary. At the time of assessment, he was also becoming less competent at identifying birds visually but he continued to enjoy recognising and feeding the birds that visited his garden. There had been no suggestion of any difficulty recognising familiar faces or household items nor any difficulty recognising the voices of telephone callers or everyday noises. There had been no evident change in BA's appreciation of music.

    BA's brain showed a pattern of degeneration characteristic of semantic dementia, with asymmetric atrophy affecting the anterior, medial, and inferior temporal lobes, to a greater extent in the left hemisphere.



    Fig. 1 (modified from Muhammed et al., 2018). Note that the L side of the brain is shown on the R side of the scan. Coronal sections of BA's T1-weighted volumetric brain MRI through (A) temporal poles; (B) mid-anterior temporal lobes; and (C) temporo-parietal junctional zones. There is more severe involvement of the left temporal lobe.



    The authors developed a specialized test of bird knowledge in the auditory, visual, and verbal modalities. The performance of BA was compared to that of three birders similar in age and experience.


    Results indicated that “BA performed below the control range for bird knowledge derived from calls and names but within the control range for knowledge derived from appearance.” There was a complicated pattern of results for his knowledge of specific semantic characteristics in the different modalities, but the basic finding suggested an agnosia for bird calls. Interestingly, he performed as well as controls on tests of famous voices and famous face pictures.

    Thus, the findings suggest separate auditory and visual routes to avian conceptual knowledge, at least in this expert birder. Also fascinating was the preservation of famous person identification via voice and image. The authors conclude with a ringing endorsement of single case studies in neuropsychology:
    This analysis transcends the effects of acquired expertise and illustrates how single case experiments that address apparently idiosyncratic phenomena can illuminate neuropsychological processes of more general relevance.

    link via @utafrith


    References

    Caramazza A, Mahon BZ. (2003). The organization of conceptual knowledge: the evidence from category-specific semantic deficits. Trends Cogn Sci. 7(8):354-361.

    Muhammed L, Hardy CJD, Russell LL, Marshall CR, Clark CN, Bond RL, Warrington EK, Warren JD. (2018). Agnosia for bird calls. Neuropsychologia 113:61-67.

    Warrington EK, McCarthy RA. (1994). Multiple meaning systems in the brain: a case for visual semantics. Neuropsychologia 32(12):1465-73.

    Warrington EK, Shallice T. (1984). Category specific semantic impairments. Brain 107(Pt 3):829-54.


    Footnotes

    1 I'm using this nomenclature as a shorthand, obviously, as many more researchers have been involved in these studies. And this is an oversimplification based on the origins of the debate.

    2 In fact, the always-argumentative Prof. Caramazza gave a lecture on The Representation of Objects in the Brain: Nature or Nurture for winning the Fred Kavli Distinguished Career Contributions in Cognitive Neuroscience Award (#CNS2018). Expert live-tweeter @vukovicnikola captured the following series of slides, which summarizes the debate as resolved in Caramazza's favor (to no one's surprise).











    Deep brain stimulation (DBS) of the subthalamic nucleus in Parkinson's disease (PD) has been highly successful in controlling the motor symptoms of this disorder, which include tremor, slowed movement (akinesia), and muscle stiffness or rigidity. The figure above shows the electrode implantation procedure for PD, where a stimulating electrode is placed in either the subthalamic nucleus (STN), a tiny collection of neurons within the basal ganglia circuit, or in the internal segment of the globus pallidus, another structure in the basal ganglia (Okun, 2012). DBS of the STN is more common, and more often a source of disturbing non-motor side effects.

    In brief, DBS of the STN alters neural activity patterns in complex cortico-basal-ganglia-thalamo-cortical networks (McIntyre & Hahn, 2010).

    DBS surgery may be recommended for some patients in whom dopamine (DA) replacement therapy has become ineffective, usually after a few years. DA medications include the classic DA precursor L-DOPA, followed by DA agonists such as pramipexole, ropinirole, and bromocriptine. But unfortunately, impulse control disorders (ICDs, e.g., compulsive shopping, excessive gambling, binge eating, and compulsive sexual behavior) occur in about 17% of PD patients on DA agonists (Voon et al., 2017).

    There are many first-person accounts from PD patients who describe uncharacteristic and embarrassing behavior after taking DA agonists, like this grandpa who started seeing prostitutes for the first time in his life:
    'I have become an embarrassment'

    For most of his life John Smithers was a respected family man who ran a successful business. Then he started paying for sex. Now, in his 70s, he explains how his behaviour has left him broke, alone and tormented

    I am 70 years old and used to be respectable. I was a magistrate for 25 years, and worked hard to feed my children and build up the family business. I was not the most faithful of husbands, but I tried to be discreet about my affairs.1 Now I seem to be a liability. Over the last two decades I have spent a fortune on prostitutes and lost two wives. I have made irrational business decisions that took me to the point of bankruptcy. I have become an embarrassment to my nearest and dearest.

    Also reports like: Drug 'led patients to gamble'.


    New-onset ICDs can also occur in patients receiving STN DBS, but the effects are mixed across the population: ICD symptoms can also improve or remain unchanged. Why this is the case is a vexing problem that involves premorbid personality, genetics, family history, past and present addictions, and demographic factors (Weintraub & Claassen, 2017).





    Neuroethicists are weighing in on the potential side effects of DBS that may alter a patient's perception of identity and self. A recent paper included a first-person account of altered personality and a sense of self-estrangement in a 46 year old woman undergoing STN DBS for PD (Gilbert & Viaña, 2018):
    The patient reported a persistent state of self-perceived changes following implantation. More than one year after surgery, her narratives explicitly refer to a persistent perception of strangeness and alteration of her concept of self. For instance, she reported:
    "can't be the real me anymore—I can't pretend . . . I think that I felt that the person that I have been [since the intervention] was somehow observing somebody else, but it wasn't me. . . . I feel like I am who I am now. But it's not the me that went into the surgery that time. . . . My family say they grieve for the old [me]. . . ."

    Many of her quotes are striking in their similarity to behaviors that occur in the manic phase of bipolar disorder {loss of control, grandiosity}:
    The patient also reported developing severe postoperative impulsivity: "I cannot control the impulse to go off if I'm angry." In parallel, while describing a sense of loss of control over some impulsions, she has also recognized that DBS gave her increased feelings of strength: "I never had felt this lack of power or this giving of power—until I had deep brain stimulation."

    {also uncharacteristic sexual urges and hypersexuality; excessively energetic; compulsive shopping}:
    ...she experienced radically enhanced capacities, in the form of increased uncontrollable sexual urges:
    "I know this is a bit embarrassing. But I had 35 staples in my head, and we made love in the hospital bathroom and that wasn't just me. It was just I had felt more sexual with the surgery than without."
    And greater physical energy:
    "I remember about a week after the surgery, I still had the 35 staples in my head and I was just starting to enter the cooler months of winter but my kids had got me winter clothes so I had nothing to wear to the follow up appointment and when I went back there of the morning, I thought "I can walk into the doctor's" even though it was 5 kilometers into town. It's like the psychologist said: "For a woman who had a very invasive brain surgery 9 days ago and you've just almost walked 10 kilometers." And on the way, I stopped and bought a very uncharacteristic dress, backless—completely different to what I usually do."

    Examining the DSM-5 criteria for bipolar mania, it seems clear (to me, at least) that the patient is indeed having a prolonged manic episode induced by STN DBS.
    In order for a manic episode to be diagnosed, three (3) or more of the following symptoms must be present:
    • Inflated self-esteem or grandiosity
    • Decreased need for sleep (e.g., one feels rested after only 3 hours of sleep)
    • More talkative than usual or pressure to keep talking
    • Flight of ideas or subjective experience that thoughts are racing
    • Attention is easily drawn to unimportant or irrelevant items
    • Increase in goal-directed activity (either socially, at work or school; or sexually) or psychomotor agitation
    • Excessive involvement in pleasurable activities that have a high potential for painful consequences (e.g., engaging in unrestrained buying sprees, sexual indiscretions, or foolish business investments)

    It's also notable that she divorced her husband, moved to another state, ruptured the therapeutic relationship with her neurologist and surgical team, and made a suicide attempt. She also took up painting and perceived the world in a more vibrant, colorful way {which resembles narratives of persons experiencing manic episodes}:
    "I don't know, all the senses came alive. I wanted to listen to Paul Kelly and all of my favorite music really loud in the toilet. And you know, also everything was colourful. . . . Well, since brain surgery I can. I didn't bother before. I can see the light . . . the light that is underlying every masterpiece in photography. . . . I've seen it like I've never seen it before . . . I am a totally different person. I like it that I love photography and music and colourful clothes, but where is the old me now?"

    However, she appears to display more insight into her altered behavior than {most} people in the midst of bipolar mania. Perhaps her reality monitoring abilities are more intact? Or it's because her symptoms wax and wane.2 But like many manic individuals, she did not want this feeling to stop:
    "I went to the psychiatrist, and he said, 'Right, well, this is bordering on mania [NOTE: that is an understatement], you need to turn the settings right down to manage it.' I said to him, 'Please don't, this is not over the top—this is just joy.'"

    I think this line of research (studying individuals with Parkinson's who have impulse control disorders due to DA replacement or DBS) can provide insight into bipolar mania. Certainly, drugs that act as antagonists at multiple DA receptor subtypes (typical and atypical antipsychotics) are used in the management of bipolar disorder.

    Patient narratives are also informative in this regard, and provide critical information for individuals considering various types of therapies for PD. In this paper, the patient was not informed by the medical team that there could be undesirable psychiatric side effects. She has taken legal action against the lead neurosurgeon, and the proceedings were ongoing when the article was written.


    ADDENDUM (May 14 2018): The study was conducted in accordance with Human Research Ethics Committee regulations. The patient provided consent to have her narratives included in publications on neuropsychiatric side effects of DBS for PD.


    Footnotes

    1 One might wonder whether Mr. Smithers' premorbid propensity for affairs made him more vulnerable to compulsive sexual activity after DA agonists. And that is one consideration displayed in the box-and-circle diagram above.

    2 She did experience bouts of depression as well as mania, perhaps related to the stimulation parameters and precise location. And bipolar individuals also gain insight once the manic episode subsides.


    References

    Gilbert F, Viaña JN. (2018). A Personal Narrative on Living and Dealing with Psychiatric Symptoms after DBS Surgery. Narrat Inq Bioeth. 8(1):67-77.

    McIntyre CC, Hahn PJ. (2010). Network perspectives on the mechanisms of deep brain stimulation. Neurobiol Dis. 38(3):329-37.

    Voon V, Napier TC, Frank MJ, Sgambato-Faure V, Grace AA, Rodriguez-Oroz M, Obeso J, Bezard E, Fernagut PO. (2017). Impulse control disorders and levodopa-induced dyskinesias in Parkinson's disease: an update. Lancet Neurol. 16(3):238-250.

    Weintraub D, Claassen DO. (2017). Impulse Control and Related Disorders in Parkinson's Disease. Int Rev Neurobiol. 133:679-717.




    Do Plants Have “Memory”?


    A new paper by Bédécarrats et al. (2018) is the latest entry into the iconoclastic hullabaloo claiming a non-synaptic basis for learning and memory. In short, “RNA extracted from the central nervous system of Aplysia given long-term sensitization training induced sensitization when injected into untrained animals...” The results support the minority view that long-term memory is not encoded by synaptic strength, according to the authors, but instead by molecules inside cells (à la Randy Gallistel).

    Adam Calhoun has a nice summary of the paper at Neuroecology:
    ...there is a particular reflex1 (memory) that changes when they [Aplysia] have experienced a lot of shocks. How memory is encoded is a bit debated but one strongly-supported mechanism (especially in these snails) is that there are changes in the amount of particular proteins that are expressed in some neurons. These proteins might make more of one channel or receptor that makes it more or less likely to respond to signals from other neurons. So for instance, when a snail receives its first shock a neuron responds and it withdraws its gills. Over time, each shock builds up more proteins that make the neuron respond more and more. These proteins are built up by the amount of RNA (the “blueprint” for the proteins, if you will) that are located in the vicinity of the neuron that can receive this information.  ...

    This new paper shows that in these snails, you can just dump the RNA on these neurons from someone else and the RNA has already encoded something about the type of protein it will produce.
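As a caricature of that protein-buildup account, here is a toy model (all parameters invented, not from the paper) in which each shock adds channel protein that boosts the withdrawal response, while the protein decays between shocks:

```python
def sensitize(n_shocks, gain=0.5, decay=0.9, baseline=1.0):
    """Withdrawal-response magnitude after each of n_shocks tail shocks.

    Each shock adds 'gain' worth of channel protein; existing protein
    decays by 'decay' between shocks. All numbers are illustrative.
    """
    protein = 0.0
    responses = []
    for _ in range(n_shocks):
        protein = protein * decay + gain   # buildup outpaces decay early on
        responses.append(baseline + protein)
    return responses

# The response ramps up with repeated shocks -- non-associative sensitization.
resp = sensitize(5)
```

In this toy version the response saturates near baseline + gain / (1 - decay), which is why sensitization plateaus rather than growing without bound.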

    Neuroskeptic has a more contentious take on the study, casting doubt on the notion that sensitization of a simple reflex to any noxious stimulus (a form of non-associative “learning”) produces “memories” as we typically think of them. But senior author Dr. David Glanzman tolerated none of this, and expressed strong disagreement in the comments:
    “I’m afraid you have a fundamental misconception of what memory is. We claim that our experiments demonstrate transfer of the memory—or essential components of the memory—for sensitization. Now, although sensitization may not comport with the common notion of memory—it’s not like the memory of my Midwestern grandmother’s superb blueberry pies, for example—it nevertheless has unambiguous status as memory.  ...  [didactic lesson continues] ...  We do not claim in our paper that declarative memories—such as my memory of my grandmother’s blueberry pies—or even simpler forms of associative memories like those induced during classical conditioning—can be transferred by RNA. That remains to be seen.”

    OK, so Glanzman gets to define what memory is. But later on he's caught in a trap and has to admit:
    “Of course, there are many phenomena that can be loosely regarded as memory—the crease in folded paper, for example, can be said to represent the memory of a physical action.”

    That was in response to a commenter who said:
    “So a transfer of RNA that activates a cellular mechanism associated with touch isn't memory, but rather just exogenously turning on a cellular pathway. By that logic, gene therapy to treat sickle cell anemia changes blood "memory".” 2

    However, my favorite comment was from Smut Clyde:
    “Kandel set the precedent that reflexes in Aplysia are "memories", and now we're stuck with it.”

    This reminded me of Dr. Kandel's bold [outlandish?] attempt to link psychoanalysis, Aplysia withdrawal reflexes, and human anxiety (Kandel, 1983). I was a bit flabbergasted that gill withdrawal in a sea slug was considered “mentation” (thought) and could support Freudian views.3
    In the past, ascribing a particular behavioral feature to an unobservable mental process essentially excluded the problem from direct biological study because the complexity of the brain posed a barrier to any complementary biological analysis. But the nervous systems of invertebrates are quite accessible to a cellular analysis of behavior, including certain internal representations of environmental experiences that can now be explored in detail. This encourages the belief that elements of cognitive mentation relevant to humans and related to psychoanalytic theory can be explored directly [in Aplysia] and need no longer be merely inferred.

    - click on image for a larger view -



    So anticipatory anxiety in humans is isomorphic to invertebrate responses in a classical aversive conditioning paradigm, and chronic anxiety is recreated by long-term sensitization paradigms. Perhaps I missed the translational advances here, and any application to Psychoanalytic and Neuropsychoanalytic practice that has been fully realized.

    If we want to accept a flexible definition of learning and memory in animals, why not consider associative learning experiments in pea plants, where a neutral cue predicting the location of a light source had a greater effect on the direction of plant growth than innate phototropism (Gagliano et al., 2016)? Or review the literature on associative and non-associative learning in Mimosa? (Abramson & Chicas-Mosier, 2016). Or evaluate the field of ‘plant neurobiology’ and even the ‘Philosophy of Plant Neurobiology’ (Calvo, 2016). Or are the possibilities of chloroplast-based consciousness and “mentation” without neurons too threatening (or too fringe)?

    But in the end, we know we've reached peak plant cognition when a predictive coding model appears — Predicting green: really radical (plant) predictive processing (Calvo & Friston, 2017).


    Further Reading

    The Big Ideas in Cognitive Neuroscience, Explained (especially the sections on Gallistel and Ryan)

    What are the Big Ideas in Cognitive Neuroscience? (you can watch the videos of their 2017 CNS talks)


    Footnotes

    1 Edited to indicate my emphasis on reflex; more specifically, the gill withdrawal reflex in Aplysia, which can only go so far as a model of other forms of memory, in my view.

    2 Another skeptic (but for different reasons) is Dr. Tomás Ryan, who was paraphrased in Scientific American:
    But [Ryan] doesn’t think the behavior of the snails, or the cells, proves that RNA is transferring memories. He said he doesn’t understand how RNA, which works on a time scale of minutes to hours, could be causing memory recall that is almost instantaneous, or how RNA could connect numerous parts of the brain, like the auditory and visual systems, that are involved in more complex memories.

    3 But I haven't won the Nobel Prize, so what do I know?


    References

    Abramson CI, Chicas-Mosier AM. (2016). Learning in plants: lessons from Mimosa pudica. Frontiers in Psychology 7:417.

    Bédécarrats A, Chen S, Pearce K, Cai D, Glanzman DL. (2018). RNA from Trained Aplysia Can Induce an Epigenetic Engram for Long-Term Sensitization in Untrained Aplysia. eNeuro. May 14:ENEURO-0038.

    Calvo P. (2016). The philosophy of plant neurobiology: a manifesto. Synthese 193(5):1323-43.

    Calvo P, Friston K. (2017). Predicting green: really radical (plant) predictive processing. Journal of the Royal Society Interface 14(131):20170096.

    Gagliano M, Vyazovskiy VV, Borbély AA, Grimonprez M, Depczynski M. (2016). Learning by association in plants. Scientific Reports Dec 2;6:38427.

    Kandel ER. (1983). From metapsychology to molecular biology: explorations into the nature of anxiety. Am J Psychiatry 140(10):1277-93.



    from Balloon Analog Risk Task (BART) – Joggle Research for iPad


    Risk taking and risk preference1 are complex constructs measured by self-report questionnaires (“propensity”), laboratory tasks, and the frequency of real-life behaviors (smoking, alcohol use, etc).  A recent mega-study of 1507 healthy adults by Frey et al. (2017) measured risk preference using six questionnaires (and their subscales), eight behavioral tasks, and six frequency measures of real-life behavior.


    Table 1 (Frey et al., 2017). Risk-taking measures used in the Basel-Berlin Risk Study.

    -- click on image for a larger view --


    The authors were interested in whether they could extract a general factor of risk preference (R), analogous to the general factor of intelligence (g). They used a bifactor model to account for the general factor as well as specific, orthogonal factors (seven in this case). The differing measures above are often used interchangeably and called “risk”, but the general factor R only...
    ...explained substantial variance across propensity measures and frequency measures of risky activities but did not generalize to behavioral measures. Moreover, there was only one specific factor that captured common variance across behavioral measures, specifically, choices among different types of risky lotteries (F7). Beyond the variance accounted for by R, the remaining six factors captured specific variance associated with health risk taking (F1), financial risk taking (F2), recreational risk taking (F3), impulsivity (F4), traffic risk taking (F5), and risk taking at work (F6).

    In other words, the behavioral tasks didn't explain R at all, and most of them didn't even explain common variance across the tasks themselves (F7 below).
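To see how a measure can fail to load on a general factor, here is a minimal simulation of the bifactor pattern (the loadings and noise levels are invented; only the sample size matches the study): two measures sharing the general factor R correlate with each other, while a behavioral measure driven only by its specific factor does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1507  # same sample size as the Basel-Berlin study; loadings are made up

R  = rng.normal(size=n)   # general risk-preference factor
F7 = rng.normal(size=n)   # specific factor for behavioral lottery tasks

# Propensity and frequency measures load on R; the behavioral task loads
# only on its own specific factor -- the qualitative pattern in Fig. 2.
propensity = 0.7 * R + rng.normal(scale=0.7, size=n)
frequency  = 0.6 * R + rng.normal(scale=0.8, size=n)
behavioral = 0.6 * F7 + rng.normal(scale=0.8, size=n)

r_pf = np.corrcoef(propensity, frequency)[0, 1]   # shared R variance: sizable
r_pb = np.corrcoef(propensity, behavioral)[0, 1]  # no shared factor: near zero
```

The design choice to give `behavioral` no loading on R is exactly what "did not generalize to behavioral measures" means psychometrically: the task can still be reliable, yet share essentially no variance with the propensity and frequency measures.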



    Fig. 2 (Frey et al., 2017). Bifactor model with all risk-taking measures, grouped by measurement tradition. BART is outlined in red.


    Here's where we come to the recent study on “risk” and taste. The headlines were either misleading (A Sour Taste in Your Mouth Means You’re More Likely to Take Risks), downright false, since no lemons were used (When Life Gives You Lemons, You Take More Risks), or this doozy (The Fruit That Helps You Take Risks – May Help Depressed And Anxious).

    To assess risk-taking, Vi and Obrist (2018) administered the Balloon Analog Risk Task (BART) to 70 participants in the UK and 71 in Vietnam. They were randomly assigned to one of five taste groups [yes, n=14 each] of Bitter (caffeine), Salty (sodium chloride), Sour (citric acid), Umami (MSG), and Sweet (sugar, presumably). They were given two rounds of BART and consumed 20 ml of flavored drink or plain water before each (in counterbalanced order).

    [Remember that BART didn't load on a general factor of risk-taking, nor did it capture common variance across behavioral tasks.]

    As in the animation above (and a video made by the authors)2, the participant “inflates” a virtual balloon via mouse click until they either stop and win a monetary reward, or else they pop the balloon and lose money. The number of clicks (pumps) indicates risk-taking behavior. Overall, the Vietnamese students (all recruited from the School of Biotechnology and Food Technology at Hanoi University) appeared to be riskier than the UK students (but I don't know if this was tested directly). The main finding was that both groups clicked more after drinking citric acid than the other solutions.
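For concreteness, here is a toy simulation of BART and its standard “adjusted pumps” score (the pop-hazard scheme is the classic uniform one; the stopping rule and all numbers are invented, not the authors' implementation):

```python
import random

rng = random.Random(42)

def bart_trial(stop_at, max_pumps=128):
    """One balloon: pump until the simulated participant banks or it pops.

    Classic hazard scheme: on pump k the balloon pops with probability
    1 / (max_pumps - k + 1), so the risk rises with every pump.
    Returns (pumps, popped).
    """
    for pump in range(1, max_pumps + 1):
        if rng.random() < 1.0 / (max_pumps - pump + 1):
            return pump, True            # popped: money for this balloon lost
        if pump >= stop_at:
            return pump, False           # banked: reward proportional to pumps
    return max_pumps, True

# 30 balloons from a simulated participant with a variable stopping rule.
trials = [bart_trial(stop_at=rng.randint(10, 60)) for _ in range(30)]

# Conventional BART score: mean pumps on unexploded balloons ("adjusted pumps").
adjusted = [p for p, popped in trials if not popped]
score = sum(adjusted) / len(adjusted)
```

A higher `score` is read as more risk-taking, which is exactly the dependent measure the taste study compared across groups.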



    Why would balloon pumping be more vigorous after tasting a sour solution? We could also ask, why were the Vietnamese subjects more risk-averse after drinking salt water, and riskier (relative to UK subjects) after drinking sugar water?3 We simply don't know the answer to any of these questions, but the authors weren't shy about extrapolating to clinical populations:
    For example, people who are risk-averse (e.g., people with anxiety disorders or depression) may benefit from a sour additive in their diet.

    Smelling lemon oil is relaxing, but tasting citric acid promotes risk:
    Prior work has, for instance, shown that in cases of psychiatric disorders such as depression, anxiety, or stress-related disorders the use of lemon oils proved efficient and was further demonstrated to reduce stress. While lemon and sour are not the same, they share common properties that can be further investigated with respect to risk-taking.

    We're really not sure how any of this works. The authors offered many more analyses in the Supplementary Materials, but they didn't help explain the results. Although the sour finding was interesting and observed cross culturally, would it replicate using groups larger than n=14?


    Footnotes

    1 From Frey et al. (2017):
    The term “risk” refers to properties of the world, yet without a clear agreement on its definition, which has ranged from probability, chance, outcome variance, expected values, undesirable events, danger, losses, to uncertainties. People’s responses to those properties, on the other hand, are typically described as their “risk preference.”

    2 The video conveniently starts by illustrating risk as skydiving, which bears no relation to being an adventurous eater.

    3 The group difference in umami had a cultural explanation.


    References

    Frey R, Pedroni A, Mata R, Rieskamp J, Hertwig R. (2017). Risk preference shares the psychometric structure of major psychological traits. Science Advances 3(10):e1701381.

    Vi CT, Obrist M. (2018). Sour promotes risk-taking: an investigation into the effect of taste on risk-taking behaviour in humans. Scientific Reports 8(1):7987.








    This post will be my own personalized rant about the false promises of personalized medicine. It will not be about neurological or psychiatric diseases, the typical topics for this blog. It will be about oncology, for very personal reasons: misery, frustration, and grief. After seven months of research on immunotherapy clinical trials, I couldn't find a single [acceptable] one1 in either Canada or the US that would enroll my partner with stage 4 cancer. For arbitrary reasons, for financial reasons, because it's not the “right” kind of cancer, because the tumor's too rare, because it's too common, because of unlisted exclusionary criteria, because one trial will not accept the genomic testing done for another trial.2 Because of endless waiting and bureaucracy.

    But first, I'll let NIH explain a few terms. Is precision medicine the same as personalized medicine? Yes and no. Seems to me it's a bit of a branding issue.
    What is the difference between precision medicine and personalized medicine?

    There is a lot of overlap between the terms "precision medicine" and "personalized medicine." According to the National Research Council, "personalized medicine" is an older term with a meaning similar to "precision medicine."

    Here's a startling paper from 1971, Can Personalized Medicine Survive? (by W.M. GIBSON, MB, ChB in Canadian Family Physician).




    [it's a defense of the old-fashioned family doctor (solo practitioner) by Gibson]:
    ...will the solo practitioner's demise be welcomed, his replacement being a battery of experts in the fields of medicine, surgery, psychiatry and all the new allied health sciences, infinitely better trained than their singlehanded predecessor?

    We wouldn't want any confusion between a $320 million initiative and the ancient art of medicine. NIH again:
    However, there was concern that the word "personalized" could be misinterpreted to imply that treatments and preventions are being developed uniquely for each individual; in precision medicine, the focus is on identifying which approaches will be effective for which patients based on genetic, environmental, and lifestyle factors.

    The Council therefore preferred the term "precision medicine" to "personalized medicine." However, some people still use the two terms interchangeably.

    So “precision medicine” is considered a more contemporary and cutting-edge term.


    Archived from The White House Blog (Obama edition), January 30, 2015.


    What about pharmacogenomics? 
    Pharmacogenomics is a part of precision medicine. Pharmacogenomics is the study of how genes affect a person’s response to particular drugs. This relatively new field combines pharmacology (the science of drugs) and genomics (the study of genes and their functions) to develop effective, safe medications and doses3 that are tailored to variations in a person’s genes.

    At present, precision pharmacogenomics is just a “tumor grab” with no promise of treatment in most cases. There are some serious and admirable efforts, but accessibility and costs are major barriers.


    But we've been promised such a utopia for quite a while.
    Personalized medicine in oncology: the future is now (Schilsky, 2010):

    Cancer chemotherapy is in evolution from non-specific cytotoxic drugs that damage both tumour and normal cells to more specific agents and immunotherapy approaches. Targeted agents are directed at unique molecular features of cancer cells, and immunotherapeutics modulate the tumour immune response; both approaches aim to produce greater effectiveness with less toxicity. The development and use of such agents in biomarker-defined populations enables a more personalized approach to cancer treatment than previously possible and has the potential to reduce the cost of cancer care.



    IT'S 2018, WHERE IS THAT FUTURE YOU PROMISED US?

    But wait, let's go back further, to 1999:
    New Era of Personalized Medicine 
    Targeting Drugs For Each Unique Genetic Profile

    Certainly, there are success stories for specific types of cancer (e.g., Herceptin). A more recent example is the PD-1 inhibitor pembrolizumab (Keytruda®), which has shown remarkable results in patients with melanoma, including Jimmy Carter. The problem is, direct-to-consumer marketing creates false hope about the probability that a patient with another form of cancer will respond to this treatment, or one of the many other immunotherapies with PR machines. But if there's a 25% chance or even a 10% chance it'll extend the life of your loved one, you'll go to great lengths to try to acquire it, one way or another. Speaking from personal experience.



    But exaggerated claims and the use of superlatives in describing massively expensive cancer drugs (e.g., “breakthrough,” “game changer,” “miracle,” “cure,” “home run,” “revolutionary,” “transformative,” “life saver,” “groundbreaking,” and “marvel”) are highly questionable (Abola & Prasad, 2016) and even harmful.

    It's a truly horrible feeling when you realize there are no options available, and all your hope is gone.


    References

    Abola MV, Prasad V. (2016). The use of superlatives in cancer research. JAMA oncology. 2(1):139-41.

    Gibson WM. (1971). Can personalized medicine survive? Can Fam Physician. 17(8):29-88.

    Langreth R, Waldholz M. (1999). New era of personalized medicine: targeting drugs for each unique genetic profile. Oncologist 4(5):426-7.

    Schilsky RL. (2010). Personalized medicine in oncology: the future is now. Nat Rev Drug Discov. 9(5):363-6.  {PDF}


    Footnotes

    1 

    2 But hey, we'll do yet another biopsy of your tumor, and let you know the results in 2-3 months, when you're too ill to be enrolled in any trial. Here's a highly relevant article, The fuzzy world of precision medicine: deliberations of a precision medicine tumor board, but I'm afraid to read it.

    3 OMFG, you have got to be kidding me. Here is a subset of the possible side effects from one toxic monoclonal antibody duo:

    Very likely (21% or more, or more than 20 people in 100):
    • fatigue/tiredness
    • decrease or loss of appetite, which may result in weight loss
    • cough
    • inflammation of the small intestine and/or large bowel causing abdominal pain and diarrhea which may be severe and life threatening

    Less likely (5 – 20% or between 5 and 20 people in 100):
    • pain and/or inflammation in various areas including: muscles, joints, belly, back, chest, headache
    • flu-like symptoms such as body aches, fever, chills, tiredness, loss of appetite, cough
    • constipation
    • dizziness
    • shortness of breath
    • infection which may rarely be serious and become life threatening
    • nausea and vomiting
    • dehydration
    • skin inflammation causing hives or rash which may rarely be severe and become life threatening
    • anemia which may cause tiredness, or may require blood transfusion
    • itching
    • abnormal liver function seen by blood tests. This may rarely lead to jaundice (yellowing of the skin and whites of eyes) and be severe or life threatening
    • abnormal function of your thyroid gland which cause changes in hormonal levels. A decrease in thyroid function as seen on blood tests may cause you to feel tired, cold or gain weight while an increase in thyroid function may cause you to feel shaky, have a fast pulse or lose weight.
    • Swelling of arms and/or legs (fluid retention)
    • Changes in the level of body salts as seen on blood tests. You may not have symptoms.
    • Inflammation of the pancreas that results in increased levels of digestive enzymes (lipase, amylase) seen in blood tests and may cause abdominal pain
    • Inflammation of the lungs (including fluid in the lungs) which could cause shortness of breath, chest pain, new or worse cough. It could be serious and/or life threatening. May occur more frequently if you are receiving radiation treatment to your chest or if you are Japanese.
    • Serious bleeding events leading to death may occur in patients with head and neck tumors. Please talk to your doctor immediately if you are experiencing bleeding.
    • Decrease of a protein in your blood called albumin that may cause fluid retention and results in swelling of your legs or arms

    You get the idea. I'll skip:

    Rarely (1 – 4% or less than 5 in 100 people)

    Very Rare (less than 1% or less than 1 in 100 people)



    A great deal of neuroscience has become “circuit cracking.”
    — Alex Gomez-Marin


    A miniaturized holy grail of neuroscience is discovering that activation or inhibition of a specific population of neurons (e.g., prefrontal parvalbumin interneurons) or neural circuit (e.g., basolateral amygdala → nucleus accumbens) is “necessary and sufficient” (N&S) to produce a given behavior.



    from: Optogenetics, Sex, and Violence in the Brain: Implications for Psychiatry1 


    In the last year or so, it has become acceptable to question the dominant systems/circuit paradigm of “manipulate and measure” as THE method to gain insight into how the brain produces behavior (Krakauer et al., 2017; Gomez-Marin, 2017). Detailed analysis of an organism's natural behavior is indispensable for progress in understanding brain-behavior relationships. Claims that optogenetic and other manipulations of a neuronal population can demonstrate that it is “N&S” for a complex behavior have also been challenged. Gomez-Marin (2017) pulled no punches and stated:
    I argue that to upgrade intervention to explanation is prone to logical fallacies, interpretational leaps and carries a weak explanatory force, thus settling and maintaining low standards for intelligibility in neuroscience. To claim that behavior is explained by a “necessary and sufficient” neural circuit is, at best, misleading.

    The latest entry into this fault-fest goes further, indicating that most N&S claims in biology violate the principles of formal logic and should be called ‘misapplied-N&S’ (Yoshihara & Yoshihara, 2018). They say the use of “necessary and sufficient” terminology should be banned and replaced with “indispensable and inducing” (except for a handful of instances). 2



    modified from Fig. 1A (Yoshihara & Yoshihara, 2018). The relationship between squares and rectangles as a typical example of a true necessary condition (being a rectangle; pale green) and a true sufficient condition (being a square; magenta) in formal logic.
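The figure's square/rectangle example can be restated as two predicates (a trivial sketch): rectangle-hood is necessary for square-hood, but not sufficient.

```python
def is_rectangle(shape):
    """Four right angles (the pale green set in the figure)."""
    return shape["angles"] == [90, 90, 90, 90]

def is_square(shape):
    """A rectangle whose sides are all equal (the magenta subset)."""
    return is_rectangle(shape) and len(set(shape["sides"])) == 1

square = {"angles": [90, 90, 90, 90], "sides": [2, 2, 2, 2]}
oblong = {"angles": [90, 90, 90, 90], "sides": [2, 3, 2, 3]}

# NECESSARY: every square passes the rectangle test...
assert is_square(square) and is_rectangle(square)
# ...but not SUFFICIENT: the oblong is a rectangle yet not a square.
assert is_rectangle(oblong) and not is_square(oblong)
```

This is the formal-logic standard against which Yoshihara & Yoshihara measure the field's looser usage.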


    N&S claims are very popular in optogenetics, which has become a crucial technique in neuroscience. But demonstrating true N&S is nearly impossible, because the terminology disregards: activity in the rest of the brain, whether all the activated neurons are “necessary” (instead of only a subset), what actually happens under natural conditions (rather than artificially induced), the requirement of equivalence, etc. Yoshihara & Yoshihara (2018) are especially disturbed by the incorrect use of “sufficient”, which leads to results being overstated and misinterpreted:
    The main problem comes from the word ‘sufficient,’ which is often used to emphasize that artificial expression of only a single gene or activation of only a single neuron can cause a substantial and presumably relevant effect on the whole process of interest. Although it may be sufficient as an experimental manipulation for triggering the effect, it is not actually sufficient for executing the whole effect itself.

    And for optogenetics:
    Rather, the importance of ‘sufficiency’ experiments lies in demonstrating a causal link through optogenetic activation of neurons... Thus, words such as triggers, promotes, induces, switches, or initiates may better reflect or express the desired nuance without creating such confusion.

    Y & Y (2018) aren't shy about naming names in their Commentary, and even say that misapplied-N&S has generated unproductive and misleading studies that offer no scientific insight whatsoever. Although one could say that N&S has a different meaning in biology, or is merely a figure of speech, such strong statements have consequences for the future directions of a field.

    Thanks to BoOrg Lab for the link to Gomez-Marin.


    Footnotes

    1 “...neurons necessary and sufficient for inter-male aggression are located within the ventrolateral subdivision of the ventromedial hypothalamic nucleus (VMHvl)...”

    2 One of the instances uses the old discredited “command neuron” concept of Ikeda & Wiersma (1964). They call it a ‘Witch Hunt’ of Command Neurons and note that only three command neurons meet the true N&S criteria (one each in lobster, Aplysia, and Drosophila).


    References

    Gomez-Marin A. (2017). Causal circuit explanations of behavior: Are necessity and sufficiency necessary and sufficient? In: Decoding Neural Circuit Structure and Function (pp. 283-306). Springer, Cham.  {PDF}

    Krakauer JW, Ghazanfar AA, Gomez-Marin A, MacIver MA, Poeppel D. (2017). Neuroscience Needs Behavior: Correcting a Reductionist Bias. Neuron. 93(3):480-490.

    Yoshihara M, Yoshihara M. (2018). 'Necessary and sufficient' in biology is not necessarily necessary - confusions and erroneous conclusions resulting from misapplied logic in the field of biology, especially neuroscience. J Neurogenet. 32(2):53-64.




    adapted from Figure 3 (Koroshetz et al., 2018). Magnetic resonance angiography highlighting the vasculature in the human brain in high resolution, without the use of any contrast agent, on a 7T MRI scanner. Courtesy of Polimeni & Wald (MGH). [ed. note: here's a great summary on If, how, and when fMRI goes clinical, by Dr. Peter Bandettini.]


    The Journal of Neuroscience recently published a paywalled article on The State of the NIH BRAIN Initiative. This paper reviewed the research and technology development funded by the “moonshot between our ears” [a newly coined phrase]. The program has yielded a raft of publications (461 to date) since its start in 2014. Although the early emphasis has not been on Human Neuroscience, NIH is ramping up its funding for human imaging and neuromodulation.



    They've developed a Neuroethics Division, because...
    ...neuroscience research in general and the BRAIN Initiative specifically, with its focus on unraveling the mysteries of the human brain, generate many important ethical questions about how these new tools could be responsibly incorporated into medical research and clinical practice.

    I don't think most of the current grant recipients are focused on “unraveling the mysteries of the human brain”, however. They're interested in cell types, circuit diagrams, and monitoring and manipulating neural activity in model organisms such as Drosophila, zebrafish, and mice. There are aspirations for a Human Cell Atlas, but many of the other tools are very far away (or impossible) for use in humans.

    - click on image to enlarge -



    Some aspects of the terminology used by Koroshetz et al. (2018) are vague to the savvy but non-expert eye. What is a neural circuit? The authors never actually define the term. You'll get different answers depending on who you ask. We know that “individual neuroscientists have chosen to work at specific spatial scales, ranging from ... ion channels ... to systems level” and we know there is a range of temporal scales, “from the millisecond of synaptic firing to the entire lifespan” (Koroshetz et al., 2018):
    Within this diverse set of scales, the circuit is a key point of focus for two primary reasons: (1) neural circuits perform the calculations necessary to produce behavior; and (2) dysfunction at the level of the circuit is the basis of disability in many neurological and psychiatric disorders.

    So maybe key point #1 is a generic working definition of a neural circuit, and is the focus of many NIH BRAIN-funded neuroscientists. But there's a huge leap from the impressive work on e.g. mapping, manipulating, and controlling stress-related feeding behaviors in rodents, and key point #2: isolating circuit dysfunction and ultimately treating eating disorders in humans. There is a lot of “promise” and many “aspirational goals”, but the concluding sentence is just too aspirational and promises too much:
    With diverse scientists jointly working in novel team structures, often in partnership with industry, and sharing unprecedented types and quantities of data, the BRAIN Initiative offers a unique opportunity to open the door to a golden age in brain science and improved brain health for all.

    The research that gets closest to bridging this gap is electrocorticography (ECoG) and deep brain stimulation (DBS) in human patients.1 The exemplar cited in the NIH paper is by Swann et al. (2018), and involved testing a closed-loop DBS system in two Parkinson's patients. The Activa PC + S system (Medtronic) is able to both stimulate the brain target region (subthalamic nucleus, STN) and record neural activity at the same time. The local field potential (LFP) activity is then fed back to the stimulator, which adjusts its parameters based on a complex control algorithm derived from the neural data.

    Fig. 4 (Swann et al., 2018). Adaptive DBS.


    The unique aspect here is that the authors recorded gamma oscillations (60–90 Hz in this case) from a subdural lead over motor cortex to adjust stimulation. In earlier work, they showed this gamma power was indicative of dyskinesia (abnormal, uncontrolled, involuntary movement), so STN stimulation was adjusted when gamma was above a certain threshold. The study demonstrated feasibility, and its greatest benefit at this early point was energy savings that preserved the battery.
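A heavily simplified sketch of that closed-loop logic (the band-power estimator, threshold, and voltages below are invented for illustration; the actual control algorithm is far more complex):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Rough spectral power in [f_lo, f_hi] Hz via a plain DFT (toy version)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            total += (re ** 2 + im ** 2) / n ** 2
    return total

def next_voltage(gamma, threshold, v_reduced=1.5, v_full=2.5):
    # Motor-cortex gamma above threshold marks dyskinesia, so the controller
    # steps STN stimulation down; otherwise it runs at the full setting.
    # Voltages and threshold are invented, not the clinical parameters.
    return v_reduced if gamma > threshold else v_full

# Synthetic 1-second cortical LFP: a 75 Hz "dyskinesia" rhythm over a slow wave.
fs = 250
lfp = [math.sin(2 * math.pi * 75 * t / fs) + 0.5 * math.sin(2 * math.pi * 5 * t / fs)
       for t in range(fs)]
gamma = band_power(lfp, fs, 60, 90)
voltage = next_voltage(gamma, threshold=0.1)
```

Stepping the voltage down when gamma is detected is also where the reported energy savings come from: the stimulator spends part of its time at the lower setting.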

    It's cool work that has been promoted by NIH, but unfortunately the first author was not mentioned in the press release, not featured in the accompanying video, and her name isn't even visible on a shot of the poster that appears in the video.2  [the last author gets all the credit.]

    Future NIH BRAIN studies will address essential tremor, epilepsy, obsessive-compulsive disorder, major depressive disorder, traumatic brain injury, stroke, tetraplegia, and blindness (apparently).


    Returning to key point #1, some have criticized the distinct lack of emphasis on behavior, which echoes recent papers (see An epidemic of "Necessary and Sufficient" neurons).


    The next tweet is critical too, and an interesting discussion ensued.


    And given all the technology development funded by BRAIN, it's a great time to be a neuroengineer, but not a neuropsychologist, ethologist, or behavioral specialist.
    Indeed, the BRAIN Initiative funded an equal number of investigators trained in engineering relative to those trained in neuroscience in 2016 (Koroshetz et al., 2018).

    Footnotes

    1 DARPA is the biggest investor here.

    2 We interrupt the NIH press coverage of this paper to acknowledge the first author, Dr. Nicki Swann. Dr. Swann and many of her female colleagues have described the difficulties of traveling and attending conferences while being a new mother, and offered some possible solutions. If the BRAIN Initiative is serious about addressing Neuroethics (for animals and futuristic sci-fi applications to human patients), they should also be actively involved in issues affecting women and minority researchers. And I imagine they are, it just wasn't apparent here.


    References

    Koroshetz W, Gordon J, Adams A, Beckel-Mitchener A, Churchill J, Farber G, Freund M, Gnadt J, Hsu N, Langhals N, Lisanby S. (2018). The State of the NIH BRAIN Initiative. Journal of Neuroscience Jun 19:3174-17.  NOTE: this should really be open access...

    Swann NC, de Hemptinne C, Thompson MC, Miocinovic S, Miller AM, Gilron R, Ostrem JL, Chizeck HJ, Starr PA. (2018). Adaptive deep brain stimulation for Parkinson's disease using motor cortex sensing. J Neural Eng. 15(4):046006.




    TAKE HOME MESSAGE: All suicide attempts and parasuicidal gestures should be taken very seriously in patients with dementia.
    “Previous parasuicide is a predictor of suicide. The increased risk of subsequent suicide persists without decline for at least two decades.”

    A new case report on a 53-year-old man1 with semantic dementia (SD) presented his prior parasuicidal gestures as “stereotypic behaviour” [ed. NOTE: repeated attempts to hang himself with a cord is “stereotyped behavior”], with tragic consequences:
    The patient showed abnormal behaviours such as following around his wife and frequently visiting a drug store to purchase sleeping pills, which necessitated hospitalization. Despite having no depressive symptoms including suicidal ideation, he repeatedly attempted to hang himself with a cord during a temporary stay at home. At the time of the interview, he stated, ‘I found a cord suspended from the ceiling, and so just played with it by hanging myself. It was just a play’, indicating an absence of suicidal ideation and lack of seriousness for the event. In March 2012, he died by hanging himself with a towel inside his hospital room.

    ...Despite the fact that the man had been severely depressed for two years before his SD diagnosis, had a well-documented history of suicidal ideation, and had made several suicide attempts (Kobayashi et al., 2018):
    In April 2009, the patient started to express suicidal ideation such as ‘I would like to hang myself’. From May to June, he was admitted to a psychiatric hospital because of a deliberate overdose. After being discharged, the patient started to show lack of ability to understand what others were saying, kept insisting on his own way, and became excessively fixated on certain things. In July 2010, he was dismissed from his job because of poor performance. In September 2010, the patient was hospitalized after multiple attempts to hang himself with a cord. During this hospitalization, he was found to have difficulty in naming familiar objects.

    His difficulty in naming familiar objects could be an early sign of neurodegeneration (especially in a 53-year-old man), but by itself is not diagnostic. But he also had difficulty understanding what other people were saying, i.e. a problem in language comprehension. These symptoms are characteristic of semantic dementia, a type of frontotemporal lobar degeneration associated with a profound loss of meaning: words and objects no longer make sense. He did very poorly on subsequent neuropsychological testing. Neuroimaging results revealed atrophy in bilateral (but L > R) anterior and inferior temporal cortices that is characteristic of SD.



    Now, it's easy for me to sit back and be all critical. BUT: I am not a clinician, I was not involved in this case, and hindsight is often 20/20. Still, it always pays to err on the side of caution when suicidal actions are expressed, even in a person who denies being suicidal, and especially in one who may no longer understand exactly what he's doing.


    If you are contemplating suicide or know someone who is, please consult:

    Online Suicide Help directory


    Footnote

    1They say he's 50 in the Abstract, but the Case Presentation starts out by saying he's 53.


    Reference

    Kobayashi R, Hayashi H, Tokairin T, Kawakatsu S, Otani K. (2018). Suicide as a result of stereotypic behaviour in a case with semantic dementia. Psychogeriatrics Jul 30. [Epub ahead of print]





    “Learn while you sleep” has been the claim of snake oil salesmen since the 1950s. The old pseudoscience methods involved listening to tapes and records. From a 1958 article by Lester David:
    Max Sherover, president of the Linguaphone Institute of New York ... coined the word “dormiphonics,” defining it as a “new scientific method that makes quick relaxed learning possible, awake or asleep.” Dormiphonics, declares Mr. Sherover, works by “repeated concentrated impact of selected material on the conscious and subconscious mind.”

    An “experiment” was conducted at the Tulare County Prison, where 100 convicts “volunteered to act as guinea pigs” (considered completely unethical by today's standards). During sleep, they were subjected to low-volume recordings that exhorted them to be better human beings: “Love shall rule your life. You shall love God, your family and others. You shall do unto others as you want others to do unto you. . .” The low voice also warned them away from the evils of alcohol.


    Knight Education Recordings (1960s)
    a commercially available product of the era


    Even earlier, the Psycho-Phone (Salinger, 1927) played wax cylinders with different self-help messages, e.g., “Prosperity” and “Life Extension” on a phonograph while the unwitting customer slept. The Cummings Center Blog has a great post on this odd contraption. Salinger sold the machines for the whopping price of $235 (the equivalent of $3,250 in 2017). He didn't need Kickstarter or Indiegogo.




    In the modern era, DIY brain stimulation enthusiasts promote self-experimentation with battery-driven devices. These transcranial direct current stimulation (tDCS) kits are available online, with a primary goal of enhancing cognitive performance. Using state-of-the-art professionally manufactured devices, scientists have published thousands of peer-reviewed papers, with mixed results as to the efficacy of different tDCS protocols.

    A newer method is transcranial alternating current stimulation (tACS), which delivers stimulation within precise frequency bands with the aim of synchronizing oscillations within that band (e.g., ~10 Hz for alpha, ~1-4 Hz for delta, etc.). The goal is to modulate ongoing oscillatory brain rhythms to affect behavior.1

    Today, the importance of sleep for the consolidation of previously learned material has been well-documented. Conceptually, this is quite different from the discredited “subliminal sleep learning” from days of yore. New research aims to improve retention of information learned during the day by delivering precisely timed and calibrated tACS during slow wave sleep (Ketz et al., 2018).



    Fig. 1 (modified from Ketz et al., 2018). (A) Target detection task. (B) Memory was tested on two image types: Repeated (identical to Original) and Generalized (same as Original but from different viewpoint). (C) tDCS montage used during training (left), and tACS montage used to augment slow waves during sleep (right).


    During the day, participants were trained on a difficult military task that required them to detect hidden targets (explosive devices, snipers, suicide bombers) that were concealed or disguised (Clark et al., 2012). As in their earlier study, tDCS or sham stimulation was delivered during the training phase (over right frontal or right parietal cortices). Previous findings indicated that significant improvements in learning and performance were observed after 30 min tDCS (anodal 2.0 mA) vs. “sham” (0.1 mA). However, this tDCS finding did not replicate in the current study (see Fig. 3A, left below). Why? The authors speculated that possible differences in current generation between their previous iontophoresis system (2.0 mA) and the present use of StarStim (1.0 mA) could explain the failure to replicate.

    After training, tACS was delivered during sleep. The authors' cool closed-loop approach recorded the dominant slow wave (SW) frequency, and then delivered stimulation to match the phase and frequency of this dominant oscillation (range of 0.5 to 1.2 Hz). Fig. 3A (right) doesn't look terribly impressive, however. tACS did not improve performance for Repeated images, and had highly variable effects for Generalized images. Nonetheless, the two- and three-way interactions were significant, as was the pairwise comparison between active tACS vs. sham for Generalized images (all p's ≈ .015 for n=16).
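The closed-loop idea (estimate the dominant slow-wave frequency and its current phase from the sleep EEG, then generate a stimulation waveform matched to both) can be sketched as below. This is a minimal illustration under assumed parameters (sampling rate, filter order, waveform amplitude), not the authors' actual pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 500  # EEG sampling rate in Hz (assumed)

def dominant_slow_wave(eeg, fs=FS, band=(0.5, 1.2)):
    """Estimate dominant slow-wave frequency (Hz) and current phase (rad).

    Band-pass the EEG to the slow-wave range, then read instantaneous
    frequency and phase from the analytic (Hilbert) signal.
    """
    sos = butter(2, band, btype='band', fs=fs, output='sos')
    sw = sosfiltfilt(sos, eeg)
    phase = np.unwrap(np.angle(hilbert(sw)))
    mid = slice(len(phase) // 4, 3 * len(phase) // 4)  # avoid filter edges
    freq = np.mean(np.diff(phase[mid])) * fs / (2 * np.pi)
    return freq, phase[-1] % (2 * np.pi)

def matched_tacs_waveform(freq, phase, duration=5.0, fs=FS, amp=1.0):
    """Generate a tACS waveform matched to the detected slow wave."""
    t = np.arange(0, duration, 1 / fs)
    return amp * np.sin(2 * np.pi * freq * t + phase)
```

In practice the detection step would run on a sliding window so that each stimulation burst locks to the ongoing oscillation rather than to a stale estimate.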



    Fig. 3 (modified from Ketz et al., 2018). (A) Waking tDCS effects (left) and SW tACS effects during sleep (right). (B) SW events broken down per sleep stage (left) and total SW events for each stimulation condition (right). Note that active stimulation had fewer total SW events compared with sham.


    Why was there no change in performance for Repeated images but a small improvement for Generalized images? The authors recognize this conundrum and say:
    ...it is unclear why there was no improvement in Repeated images induced by SW tACS, as might be expected based on previous studies.

    Then they speculate that consolidation of essential ‘gist’ — rather than recognition of specific items — was impacted by tACS.

    Media coverage of this modest finding was predictably overblown, and originated with the Society for Neuroscience press release, Overnight Brain Stimulation Improves Memory: Non-invasive technique enhances memory storage without disturbing sleep. If it enhances memory storage, then why were Repeated images unaffected? Anyway, most commenters at Hacker News were pretty skeptical, which was a pleasant surprise.


    Footnote

    1However, there is some question whether tACS delivered at typical stimulation intensities can really entrain endogenous rhythms (Lafon et al., 2017).


    References

    Clark VP, Coffman BA, Mayer AR, Weisend MP, Lane TD, Calhoun VD, Raybourn EM, Garcia CM, Wassermann EM. (2012). TDCS guided using fMRI significantly accelerates learning to identify concealed objects. Neuroimage 59(1):117-28.

    Ketz N, Jones AP, Bryant NB, Clark VP, Pilly PK (2018). Closed-Loop Slow-Wave tACS Improves Sleep-Dependent Long-Term Memory Generalization by Modulating Endogenous Oscillations. J Neurosci. 38(33):7314-7326.

    Lafon B, Henin S, Huang Y, Friedman D, Melloni L, Thesen T, Doyle W, Buzsáki G, Devinsky O, Parra LC, A Liu A. (2017). Low frequency transcranial electrical stimulation does not entrain sleep rhythms measured by human intracranial recordings. Nat Commun. 8(1):1199.





    Hugo Gernsback
    December 1921




    Dr. Bernard Carroll (Nov 21, 1940 – Sep 10, 2018)




    I was friends with Dr. Carroll (“Barney”) on Twitter, and always enjoyed his wit.



    Before that he was an early commenter and supporter of my blog, The Neurocritic. Which pleased me to no end, given this brief biography from his blogger site.


    My blogs Health Care Renewal
    Occupation Psychopharmacology
    Introduction Past chairman FDA Psychopharmacologic Drugs Advisory Committee.
    Past chairman, department of psychiatry Duke University Medical Center.
    Interests Professional ethics, medicine


    He didn't know who I was and didn't care. He assessed me by the quality of my writing, and allowed me entrée into a world I would have no access to otherwise.1

    As I'm facing the most catastrophic loss of my life, I will miss him too. He was a brilliant, principled, and compassionate man.


    Remembrance from Health Care Renewal: Remembering Dr Bernard Carroll


    Obituary in BMJ by Dr. Allen Frances (and Dr. Barney Carroll):

    Barney Carroll: the conscience of psychiatry
    A pioneer in biological psychiatry, more recently Bernard Carroll (‘‘Barney’’) became a withering critic of its compromised ethics and corruption by industry. Shortly before his death, he helped prepare this obituary—his last chance to help correct the perverse incentives that too often influence the conduct and reporting of scientific research.
    . . .

    Barney rejected grand biological theories that offered neat, simple-but-wrong explanations of psychopathology. Ever aware of the complexity of the human brain, he was an early rejecter of blind optimism that any simple imbalance of monoamine transmitters could account for the wide variety of mental disorders. More recently, he deplored the ubiquitous hype that suggested that genetics or neuroimaging or big data mining could provide simple answers to deeply complex questions. He predicted—presciently—that these powerful new tools would have great difficulty in producing solid, replicable findings that could be translated to clinical practice.





    Footnote

    1 i.e., Very senior male psychiatrists. When I wrote my blog post about being female, and my wife's diagnosis of stage 4 cancer...

    So yeah, think of this as my “coming out”. Sorry if I've offended anyone with my ability to blend into male-dominated settings.

    Thank you for reading, and for your continued support during this difficult time.

    ...Barney was the first to comment, with his usual wit and grace: “I am pretty sure we can handle that. Bless you both.”


    With profound grief, I announce that Sandra’s journey has come to an end.



    Gardens at Government House, Victoria BC (June 2017)


    Sandra Dawson was taken from this earth by the indiscriminate brutality of metastatic cancer. She died on October 2, 2018 at the age of 51. This horrific experience was not a “fight.” She did NOT lose a battle against the unchecked proliferation of malignant cells. Instead, Sandra saw the final phase of her life as a journey. She was incredibly brave while facing the ravages of this terrible disease, and she was ultimately accepting of her fate. She was gracious and generous in sharing the final stages of her journey with friends and family, and also with nearly 25,000 followers of her @unsuicide Twitter account.1 There was an outpouring of love and support and visitors and flowers, which buoyed her spirits and made her feel loved.

    She really loved flowers.




    Sandra was many things – a writer, a blogger, a jewelry designer, a crochet artist, a mental health advocate, a board member of the Mental Health Commission of Canada, and the 2016 winner of a Sovereign's Medal for Volunteers from The Governor General of Canada, for over a decade of work in suicide prevention.



    Government House, Victoria BC (June 2017)



    September 10 was World Suicide Prevention Day, and Dr. Erin Michalak of CREST.BD wrote a touching tribute to Sandra’s work.

    Sandra Dawson’s Legacy

    . . .

    “Most significantly, Sandra created the Unsuicide directory of online and mobile crisis supports, as well as a popular corresponding Twitter feed (@Unsuicide) with close to 25,000 followers. Her Unsuicide online supports are authentically grounded in her lived experience of bipolar disorder, but also unfailingly focused on helping people, regardless of their geography, to access credible and safe online and mobile support tools. In 2016, she was awarded the Sovereign’s Medal for Volunteers from the Governor General of Canada in acknowledgement of the impact of her work as an advocate for people facing mental health challenges and in suicide prevention.”

    But mostly I think of her as a writer.



    Radar Queer Reading Series, SF Public Library (October 2016)


    She was also my partner and wife of nearly 12 years.


    December 2017


    We met in 2006 through our respective blogs, The Neurocritic and Neurofuture. The neuroblogging community was quite small then. Neurofuture started in January 2006 as a blog about Brain Science and Neurofuturism that was ahead of its time (so to speak):
    The future is now, in many ways. Neuroscience and psychiatry are fields that have experienced tremendous growth, especially in the last few decades, and these advances already have practical applications. … At the same time, much is still unknown…
    . . .

    Neuroscience, psychiatry, neuroethics and transhumanism are the four areas of focus for this blog. They have applications in a broad range of fields, and I'll be aggregating diverse information. Expect a lot of interesting links. I invite your comments.

    In June 2006, she started a video blog, Channel N, that shared interesting content related to neuroscience, psychology, and mental health. Channel N eventually moved to Psych Central, a trusted mega-site for mental health, depression, bipolar, ADHD & psychology information. Sandra also wrote posts for World of Psychology, the main PsychCentral blog, including many Top 10 lists, which were always popular.

    Along with Steve Higgins, she blogged for Omni Brain (from December 2006 – January 2008), which was “an exploration of the serious, fun, ridiculous / past, present, future of the brain and the science that loves it” – as part of the long-defunct Science Blogs network.


    But Sandra’s real love was writing fiction (mostly under the pseudonym S. Kay). She wrote an unpublished novel (or two), flash fiction, and a novella that was published by Maudlin House (ironically titled Joy).





    The advent of Twitter really changed her writing. She started writing microfiction, ultra-short stories in the form of Tweets (140 characters or less). Sometimes they were standalone zaps that told an entire tiny tale.





    Other times, she crafted a number of tweets together to tell a longer story. These were published in various venues and included pieces such as Neurotech Light and Dark, Cloud Glitches, Facebook Algorithm of Death,2 and her final piece, Goth Robots (robots were always a favorite theme; see the interview Weird words with S. Kay). Her blueberrio tumblr has a comprehensive list of her published work.




    Her masterwork was Reliant, “an apocalypse in tweets” published in 2015 by the late tNY Press (but still available for purchase at Amazon):
    “Selfies, sexbots, and drones collide in these interwoven nanofictions about a society before, during, and after its collapse. With dazzling humor and insight, debut author S. Kay reveals a future that looks disconcertingly like the present. Beautifully illustrated by Thoka Maer, Reliant is a bold examination of society's unrequited love for technology.”
    There was a nice review in Entropy by Christopher Iacono.




    But my proudest literary-moment-by-proxy was when Sandra read at Writers With Drinks, a long-standing, monthly series of readings by spectacular writers, held in a bar and hosted by the talented and amusing Charlie Jane Anders. It was a fun evening and the ideal crowd for reading Reliant.



    Writers With Drinks (Nov. 14, 2015)


    Sandra's next book, Lost in the Land of Bears (designed and published by Reality Hands), had a truly unique limited edition faux fur cover, but it's still available as an e-book.


    James Knight wrote a great review at Sabotage Reviews.


    Sandra was an early adopter of all forms of online communication. She was an avid blogger, social media user, and before that an online diarist. She was prescient about the future of social media:
    I have no optimism that social media will bring the world together with mutual empathy improving society. Sheep are still sheep and their bleatings still need shepherds to make them a coherent flock. An important lesson for the next decade. The media is still the media and if anything, is more segregated than ever.

    Sandra Dawson, January 4, 2007


    I could go on and on about her other wildly creative projects, like her Spambot Psychosis origami text cube, her beachpunk jewelry, her minibook necklaces (sample here), her upcycled cashmere brooches, her Postcards from the Post-Apocalypse, and her exhibit of crocheted art hats (and bonus EEG cap) at Femina Potens (the Cultivating Cozy exhibition).



    January 18th, 2008



    But what I can't express in words right now is how much I'll miss her.






    Footnotes

    1 Like me, she had many Twitter accounts and blogs and pseudonyms; the latter included Sandra K, Sandra Kiume, and S. Kay.

    2Sadly, this was based on a true story that had an even more tragic ending.




    I love you.
    RIP.

    10/31/18--01:59: Survey Skeleton


    Karger Medical and Scientific Publishers has a lovely Survey Skeleton peeking out enticingly on some of their journal websites now.




    It's to lure you to take their survey, where you can win attractive prizes....




    ...such as the unique Vesalius: The Fabric of the Human Body (value CHF 1,500).





    Just thought you should know.





    Frontispiece from: Blicke in die Traum- und Geisterwelt (A look into the dream and spirit world), by Friedrich Voigt (1854).


    What are you most afraid of? Not finding a permanent job? Getting a divorce and losing your family? Losing your funding? Not making this month's rent? Not having a roof over your head? Natural disasters? Nuclear war? Cancer? Having a loved one die of cancer?

    FAILURE?

    There are many types of specific phobias (snakes, spiders, heights, enclosed spaces, clowns, mirrors, etc.), but that's not what I'm talking about here.

    What are you really afraid of? Death? Pain? A painful death?

    Devils, demons, ghosts, witches, and other supernatural apparitions? This latter category (haunting, demon possession) is common among many cultures with religious or spiritual practices, and can evoke primal fear. As a former Catholic, I am still frightened by movies or TV shows that involve demonic possession, like American Horror Story: Asylum.




    I used this show as an exemplar in a post about Possession Trance Disorder in DSM-5.

    A fantastic long-form article by Mike Mariani has just appeared in The Atlantic. The author intermixes the individual case study of Louisa Muskovits with the history of exorcism and facts about its modern-day resurgence.

    American Exorcism
    Priests are fielding more requests than ever for help with demonic possession, and a centuries-old practice is finding new footing in the modern world.
     . . .
    • The official exorcist for Indianapolis has received 1,700 requests so far in 2018.
    • Father Thomas said that as many as 80 percent of the people who come to him seeking an exorcism are sexual-abuse survivors.
    • Some abused children are subjected to such agonizing experiences that they adopt a coping mechanism in which they force themselves into a kind of out-of-body experience. As they mature, this extreme psychological measure develops into a disorder that may manifest unpredictably. “There is a high prevalence of childhood abuse of different kinds with dissociative disorders,” Roberto Lewis-Fernández, a Columbia University psychiatry professor who studies dissociation, told me.

    This brings me to another topic I've been meaning to write about for weeks. Sleep paralysis is the terrifying condition of being half awake but unable to move (or speak or scream). It can feel like you're frozen in bed, aware of your surroundings yet completely paralyzed. This happens because the complete muscle atonia typically experienced during REM sleep has persisted into the transition to wakefulness. Scary dream imagery can intrude while in this state, making it even worse.

    A fascinating new paper covers interpretations of this frightening phenomenon across different cultures (Olunu et al., 2018). A common theme is being attacked, visited, or sat upon by supernatural beings, such as demons, witches, ghosts, and spirits.


    -- click on image for a larger view --


    The eerie presences are called Jinn in Egypt, Kabus in Iran, Phi Um in Thailand, Old Hag in lots of places, and the especially horrifying Kokma in Saint Lucia, which are “attacks by dead spirits or unbaptized babies that jump into a body and squeeze the throat”. In Nigeria, believers in supernatural explanations exist alongside others who hold rational explanations:
    Nigerians describe it as “visitation of an evil spirit, witches, or some form of spiritual attack.” Others have beliefs that it may be due to anxiety or emotions associated with family problems.
    The Wikipedia page on the folklore of the night hag also has a pretty good listing.


    Interestingly, sleep paralysis was considered as a partial explanation for “demonic possession” in the case of Louisa Muskovits (Atlantic):
    Louisa seemed to vacillate between this unhinged state and her normal self. One minute she would snarl and bare her teeth, and the next she would beg for help. “It definitely had this appearance where she was fighting within herself,” Harp [her former therapist] told me.

    . . .
    [Another time] Louisa ... woke up abruptly, only to find her body locked in place—but with the added shock of what seemed to be visual hallucinations, including one of a giant spider crawling into her bedroom. Louisa was so jolted that she barely ate or slept for three days. “I didn’t feel safe,” she said. “I felt violated.”

    . . .
    Sleep paralysis seemed like a promising explanation. A phenomenon in which sufferers move too quickly in and out of rem sleep for the body to keep up, sleep paralysis causes a person’s mind to wake up before the body can shake off the effects of sleep. Hovering near full consciousness, the person can experience paralysis and hallucinations.

    But Louisa didn’t think this could account for the hand on her collarbone, which she swore she’d felt while she was completely awake [oh of course it can account for this phenomenology!].


    What are your experiences of sleep paralysis?


    Further Reading

    When Waking Up Becomes the Nightmare: Hypnopompic Hallucinatory Pain

    The Phenomenology of Pain During REM Sleep

    The Neurophysiology of Pain During REM Sleep

    Possession Trance Disorder in DSM-5

    Spirit Possession as a Trauma-Related Disorder in Uganda

    "The spirit came for me when I went to fetch firewood" - Personal Narrative of Spirit Possession in Uganda

    Possession Trance Disorder Caused by Door-to-Door Sales

    Fatal Hypernatraemia from Excessive Salt Ingestion During Exorcism

    Diagnostic Criteria for Demonic Possession

    The Devilish Side of Psychiatry


    Reference

    Olunu E, Kimo R, Onigbinde EO, Akpanobong MU, Enang IE, Osanakpo M, Monday IT, Otohinoyi DA, John Fakoya AO. (2018). Sleep Paralysis, a Medical Condition with a Diverse Cultural Interpretation. Int J Appl Basic Med Res. 8(3):137-142.



    Scene from The Wailing. Although it's certainly not for everybody, it is an amazing film.






    Nothing says home for the holidays like a series of murders committed by family members with a shared delusion. So sit back, sip your hot apple cider or spiked egg nog, and revel in family dysfunction worse than your own.

    {Well! There is an actual TV show called Homicide for the Holidays, which I did not know. Kind of makes my title seem derivative... but it was coincidental.}


    “Folie à deux”, or Shared Psychotic Disorder, was a diagnosis in DSM-IV-TR:

    (A) A delusion develops in an individual in the context of a close relationship with another person(s), who has an already-established delusion. 

    (B) The delusion is similar in content to that of the person who already has the established delusion. 

    (C) The disturbance is not better accounted for by another Psychotic Disorder (e.g., Schizophrenia) or a Mood Disorder With Psychotic Features and is not due to the direct physiological effects of a substance (e.g., a drug of abuse, a medication) or a general medical condition.

    Folie à deux occurs in the secondary partner, who shares a delusion with the primary partner (diagnosed with schizophrenia, delusional disorder, or psychotic depression). In DSM-5, folie à deux no longer exists as a specific disorder. Instead, the secondary partner is given a diagnosis of “other specified schizophrenia spectrum and other psychotic disorder” with a specifier: “delusional symptoms in the partner of individual with delusional disorder” (APA, 2013).

    The first cases were reported in the 19th century by the French psychiatrists Baillarger (1860) and Lasègue & Falret (1877). The latter authors note that insanity isn't contagious, but under special circumstances...
    a) In the “folie à deux” one individual is the active element; being more intelligent than the other he creates the delusion and gradually imposes it upon the second one who is the passive element. At the beginning the latter resists but later, little by little, he suffers the pressure of his associate, although at the some degree he also reacts and influences the former to correct, modify, and coordinate the delusion that then becomes their common delusion, to be repeated to all in an almost identical fashion and with the same words.
    The two individuals are in a close relationship and typically live in an isolated environment.

    A recent paper by Guivarch et al. (2018) covered the history of the disorder, and performed a literature review on folie à deux and homicide. They found 17 articles:
    In the cases examined, homicides were committed with great violence, usually against a victim in the family circle, and were sometimes followed by suicide. The main risk factor for homicide was the combination of mystical and persecutory delusions. The homicides occurred in response to destabilization of the delusional dyads.

    Body mutilation is not uncommon: “These features appear in the reported case of a mother who was delusional and killed her young son by hitting him on the head 3 times with a hatchet.”

    The authors presented a detailed history of induced psychosis involving Mr. A (the secondary) and Mrs. A (the primary, who had a family history of delusion). Shortly after getting married, they had a child who was removed by social services due to inadequate parenting.
    Subsequently, the couple engaged in several years of delusional wandering in France and Italy, traveling from village to village to accomplish “a divine mission”, during which time they were hosted in monasteries or abbeys. They expressed delusional feelings but never visited a psychiatrist and were never confronted by the police. The couple's relationship transformed; the partners stopped having sexual relations and quickly established a delusional hierarchical relation, with Mrs. A being called “Your Majesty” and Mr. A considering himself “King of Australia, Secretary of Her Majesty”.

    After about 20 years of this, in a fit of overkill, Mr. A murdered a random 11-year-old child by inflicting 44 stab wounds. Earlier, he had felt humiliated and persecuted at a police checkpoint, which provoked an “incident.” The murder of the child was part of their delusional divine mission, to make a necessary sacrifice that would restore balance.


    Paranoia of the exalted type in a setting of folie à deux

    The famous case of Pauline P (“a dark, rather sulky looking but not unattractive girl of stocky build”) and Juliet H (“a tall, willowy, frail, attractive blonde with large blue eyes”) was also mentioned (Medlicott, 1955). The two girls established a very close bond, constructed an elaborate make-believe world of fictional characters, withdrew from all others, became sexually involved, and developed a superiority complex. They killed Pauline's mother “because one of the girls was going to move with her parents, which would have led to the separation of the delusional dyad (Medlicott, 1955).” This formed the basis of the fantastic 1994 film, Heavenly Creatures, featuring Melanie Lynskey and Kate Winslet.




    Granted, their indissoluble bond was pathological, but laughable 1955 views of same-sex relationships were on display in this analysis:
    There is of course no doubt that the relationship between these two girls was basically homosexual in nature. Pauline made attempts in 1953 of establishing heterosexual relationships, but in spite of intercourse on one occasion there was no evidence of real erotic involvement. All her escapades were fully discussed with Juliet which is a common feature amongst people basically homosexual in orientation.

    Yes, we can generalize and say that all teenage girls in the 1950s commonly bragged about their heterosexual exploits with their lesbian lovers.

    From Pauline's 1953 diary:
    “To-day Juliet and I found the key to the 4th World.  ... We have an extra part of our brain which can appreciate the 4th World. Only about 10 people have it. When we die we will go to the 4th World, but meanwhile on two days every year we may use the key and look in to that beautiful world which we have been lucky enough to be allowed to know of, on this Day of Finding the Key to the Way through the Clouds.”

    Your family gatherings may not always be harmonious, but presumably your delusional children are not plotting to kill you. Happy Holidays.





    References

    Baillarger J. (1860). Quelques exemples de folie communiquée. Gazette Des Hôpitaux Civils et Militaires 38: 149-151.

    Guivarch J, Piercecchi-Marti MD, Poinso F. (2018). Folie à deux and homicide: Literature review and study of a complex clinical case. International Journal of Law and Psychiatry 61:30-39.

    Lasègue C, Falret J. (1877). La folie à deux (ou folie communiquée). Annales Médico Psychologiques 18: 321-355. English translation (Dialogues in Philosophy, Mental and Neuro Sciences, December 2016).

    Medlicott R. (1955). Paranoia of the exalted type in a setting of folie à deux; a study of two adolescent homicides. The British journal of medical psychology 28:205-223.



    Our memory for the details of real-life events is poor, according to a recent study.

    Seven MIT students took a one hour walk through Cambridge, MA. A day later, they were presented with one-second video clips they may or may not have seen during their walk (the “foils” were taken from another person's recording). Mean recognition accuracy was 55.7%, barely better than guessing.1


    Minimal recognition memory for detailed events. Dashed line is chance performance. Adapted from Fig. 2 of Misra et al. (2018).


    How did the researchers capture the details of what was seen during each person's stroll about town (2.1 miles / 3.5 km)? They were fitted with eye tracking glasses to follow their eye movements (because you can't remember what you don't see), and a GoPro camera was mounted on a helmet.


    from Fig. 1 (Misra et al., 2018).


    One problem with this setup, however, was that the eye tracking data had to be excluded. The overwhelmingly bright summer sun prevented the eye tracker from obtaining accurate images of the pupil. Thus, Experiment 2 was performed inside the Boston Museum of Fine Arts with a separate group of 10 students.


    from Fig. 1 (Misra et al., 2018).


    Recognition performance was better in Experiment 2. Mean accuracy was 63.2%, well above chance (p=.0005), but still not great. Participants correctly identified clips they had seen 59% of the time, and correctly rejected clips they hadn't seen 67% of the time. One participant (#4) was really good, and you'll notice the individual differences below.

    Dashed line is chance performance. Adapted from Fig. 2 of Misra et al. (2018).
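    The hit and correct-rejection rates from Exp. 2 can be folded into a single sensitivity index from signal detection theory. This is my own illustrative sketch (the paper reports percentages, not d′), using only the rates quoted above:

```python
from statistics import NormalDist

def dprime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

hit_rate = 0.59        # old clips correctly identified (Exp. 2)
cr_rate = 0.67         # foil clips correctly rejected (Exp. 2)
fa_rate = 1 - cr_rate  # foils incorrectly endorsed as "seen"

accuracy = (hit_rate + cr_rate) / 2  # ~0.63, matching the reported 63.2%
d = dprime(hit_rate, fa_rate)        # ~0.67, a fairly weak memory signal
```

    For scale, d′ = 0 is chance performance, and lab recognition studies with words or pictures routinely report values well above 1, so even the museum walk left only a modest memory trace.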


    In Exp. 2, the investigators were able to look at the influence of eye fixations on memory performance. Not surprisingly, people were better at remembering what they looked at (fixated on), but this only held for certain categories of items: talking people, objects rated as “distinctive” (but not distinctive faces), and paintings (but not sculptures).




    How do the authors interpret this finding? We don't necessarily pay attention to everything we look at.
    “What subjects fixated on also correlated with performance (Fig. 4), but it is clear that subjects did not remember everything that they laid eyes on. There is extensive literature showing that subjects may not pay attention or be conscious of what they are fixating on. Therefore, it is quite likely that, in several instances, subjects may have fixated on an object without necessarily paying attention to that object. Additionally, attention is correlated with the encoding of events into memory. Thus, the current results are consistent with the notion that eye fixations correlate with episodic memory but they are neither necessary nor sufficient for successful episodic memory formation.”

    For me personally, 2018 was a year to forget.2 Yet, certain tragic images are etched into my mind, cropping up at inopportune times to traumatize me all over again. That's a very different topic for another time and place.


    May your 2019 brighten the sky.


    The number 2019 is written in the air with a sparkler near a tourist camp outside Krasnoyarsk, Russia, on January 1, 2019. (The Atlantic)


    Footnotes

    1 However:
    “Two subjects from Experiment I were excluded from the analyses. One of these subjects had a score of 96%, which was well above the performance of any of the other subjects (Figure 2). The weather conditions on the day of the walk for this subject were substantially different, and this subject could thus easily recognize his own video clips purely from assessing the weather conditions. Another subject was excluded because he responded 'yes' on >90% of the trials.”

    2 See:

    I should have done this by now...

    The Lie of Precision Medicine

    Derealization / Dying

    There Is a Giant Hole Where My Heart Used To Be

    How to Reconstruct Your Life After a Major Loss


    Reference

    Misra P, Marconi A, Peterson M, Kreiman G. (2018). Minimal memory for details in real life events. Sci Rep. 8(1):16701.







    Before answering that question, I'll tell you about an incredibly impressive ethnographic study and field survey. For a one year period, the investigators (Pretus, Hamid et al., 2018) conducted field work within the community of young Moroccan men in Barcelona, Spain. As the authors explain, the Moroccan diaspora is an immigrant community susceptible to jihadist forms of radicalization:
    Spain hosts Europe’s second largest Moroccan diaspora community (after France) and its least integrated, whereas Catalonia hosts the largest and least integrated Moroccan community in Spain. Barcelona ... was most recently the site of a mass killing ... by a group of young Moroccan men pledging support for the Islamic State. According to Europol’s latest annual report on terrorism trends, Spain had the second highest number of jihadist terrorism-related arrests in Europe (second only to France) in 2016...

    After months of observation in selected neighborhoods, the researchers approached prospective participants about completing a survey, with the assurance of absolute anonymity. No names were exchanged, and informed consent procedures were performed orally, to prevent any written record of participation. The very large sample included 535 respondents (average age 23.47 years, range 18–42), who were all Sunni Muslim Moroccan men.

    The goal of the study was to look at sacred values in these participants, and whether these values might affect their willingness to engage in violent extremism. “Sacred values are immune or resistant to material tradeoffs and are associated with deontic (duty-bound) reasoning...” (Pretus, Hamid et al., 2018). The term sacred values doesn't necessarily refer to religious beliefs. One of the most common is the basic human value, “it is wrong to kill another human being.” But theoretically speaking, we could include statements such as, “it is wrong to kill endangered species for sport (or for any other reason).”

    In this study, Sacred Values included:
    • Palestinian right of return
    • Western military forces being expelled from all Muslim lands
    • Strict sharia as the rule of law in all Muslim countries
    • Armed jihad being waged against enemies of Muslims
    • Forbidding of caricatures of Prophet Mohammed
    • Veiling of women in public

    What were the Nonsacred Values? We don't know. I couldn't find examples anywhere in the paper. It's crucial that we know what these were, to help understand the “sacralization” of nonsacred values, which was observed in an fMRI experiment (described later). So I turned to the Supplemental Material of Berns et al. (2012), inferring that the statements below are good examples of nonsacred values in a population of adults in Atlanta.
    • You are a dog person.
    • You are a cat person.
    • You are a Pepsi drinker.
    • You are a Coke drinker.
    • You believe that Target is superior to Walmart.
    • You believe that Walmart is superior to Target.

    But what if the nonsacred values in the present study of violent extremism were a little more contentious and meaningful?
    • You are a fan of FC Barcelona.
    • You are a fan of AC Milan.

    Anyway, to choose participants for the fMRI experiment, the investigators first divided the entire group into those who were more (n=267) or less (n=268) vulnerable to recruitment into violent extremism (see Appendix for details). An important comparison would have been to directly contrast brain activity in these two groups, but that wasn't done here. Out of the 267 men more vulnerable to violent extremism, 38 agreed to participate in the fMRI study. These 38 were more likely to Endorse Militant Jihadism (score 4.24 out of 7) than the general fMRI pool (3.35) and the non-fMRI pool (2.43).1 

    A battery of six sacred and six nonsacred values was constructed individually for each person and presented in the scanner, along with a number of grammatical variants, for a list of 50 different items per condition. The 38 participants were randomly assigned to one of two manipulations in a between-subjects design: exclusion (n=19) and inclusion (n=19) in the ever-popular ball-tossing video game of Cyberball.2



    Unfortunately, this reduced the study's statistical power. Nonetheless, a major goal of the experiment was to examine how social exclusion affects the processing of sacred values. I don't know if Cyberball studies are ever conducted in a within-subjects design (perhaps with an intervening task), or if exposure to one of the two conditions is too “contaminating”. At any rate, in real life, discrimination against Muslim immigrants is isolating and causes exclusion from social and economic benefits. Feelings of marginalization can result in greater radicalization and support for (and participation in) extremist groups. At this point in time, I don't think neuroimaging can add to the extensive knowledge gained from years of field work.

    Nevertheless, the investigators wanted to extend the findings of Berns et al. (2012) to a very different population. The earlier study wanted to determine whether sacred values are processed in a deontological way (based on strict rules of right and wrong) or in a utilitarian fashion (based on cost/benefit analysis of outcome). As interpreted by those authors, processing sacred values was associated with increased activation of left temporoparietal junction (semantic storage) and left ventrolateral prefrontal cortex (semantic retrieval). Berns et al. suggested that “sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.” Based on those results, the obvious prediction in the present study is that sacred values should activate left temporoparietal junction (L TPJ) and left ventrolateral prefrontal cortex (L VLPFC).


    Fig. 3A (Pretus, Hamid et al., 2018).


    Fig. 3A shows that only the latter half of that prediction was observed, and there was no explanation for the lack of activation in L TPJ. Instead, there was a finding in R TPJ in the excluded group which I won't discuss further.

    Of note, the excluded participants rated themselves as being more likely to fight and die for nonsacred values, compared to the included participants. This was termed “sacralization” and now you can see why it's so important to know the nonsacred values. Are we talking about fighting and dying for Pepsi vs. Coke? For FC Barcelona vs. AC Milan? Not to be glib, but this would help us understand why social exclusion (in an artificial experimental setting) would radicalize these participants (in an artificial experimental setting).



    Fig. 3B (Pretus, Hamid et al., 2018). Nonsacred values activate Left Inferior Frontal Gyrus (IFG, aka VLPFC) in the excluded group, but not in the included group. This was interpreted as a neural correlate of “sacralization”.


    Another interpretation of Fig. 3B is that the exclusion manipulation was distracting, making it more difficult for these participants to process stimuli expressing nonsacred values (due to increased encoding demands, syntactic processing, etc.). Exclusion increased emotional intensity ratings, and decreased feelings of belongingness and being in control. This distraction could have carried over to the task of rating one's willingness to fight and die in defense of values.

    Even if we say the brain imaging results weren't especially informative, the extensive ethnographic study and field surveys were a highly valuable source of data on a marginalized group of young Muslim men at risk of recruitment by violent extremist groups. It's a vicious cycle: terrorist attacks result in greater discrimination and persecution of innocent Muslim men, which has the unintended effect of further radicalization in some of the most vulnerable individuals. To conclude, I acknowledge that my comments may be out of turn because I have no authority or expertise, and because I'm from a country with an appalling record of discriminating against Muslims.


    Footnotes

    1 I was a bit confused by some of these scores, because they changed from one paragraph to the next, and differed from what was in Table 1. Perhaps one was a composite score, and the other from an individual questionnaire.

    2 I've written extensively about whether Cyberball is a valid proxy for social exclusion, but I won't get into that here.


    References

    Berns GS, Bell E, Capra CM, Prietula MJ, Moore S, Anderson B, Ginges J, Atran S. (2012). The price of your soul: neural evidence for the non-utilitarian representation of sacred values. Philos Trans R Soc Lond B Biol Sci. 367(1589):754-62.

    Pretus C, Hamid N, Sheikh H, Ginges J, Tobeña A, Davis R, Vilarroya O, Atran S. (2018). Neural and Behavioral Correlates of Sacred Values and Vulnerability to Violent Extremism. Front Psychol. 9:2462.


    Appendix


    Modified from Table 1 (Pretus, Hamid et al., 2018).

    [The] measures included (1) a modified inventory on general radicalization (support for violence as a political tactic) based on a prior longitudinal study on violent extremist attitudes among Swiss adolescents (Nivette et al., 2017); (2) a scale on personal grievances, previously used on imprisoned Islamist militants in the Philippines, and Tamil Tigers in Sri Lanka (Webber et al., 2018); (3) a scale on collective narcissism which has been shown to shape in-group authoritarian identity and support for military aggression against outgroups (de Zavala et al., 2009); (4) a self-report delinquency inventory adapted from Elliott et al. (1985), based on the disproportionate number of Muslim European delinquents who join jihadist terrorist groups (Basra and Neumann, 2016); and (5) a series of items assessing endorsement of militant jihadism (“The fighting of the Taliban, Al Qaida, ISIS is justified,” “The means of jihadist groups are justified,” “Attacks against Western nations by jihadist groups are justified,” “Attacks against Muslim nations by jihadist groups are justified,” “Attacks against civilians by jihadist groups are justified,” “Spreading Islam using force in every part of the world is an act of justifiable jihad,” and “A Caliphate must be resurrected even by force”) that we combined into a reliable composite score, “Endorsement of Militant Jihadism”...

    01/27/19--22:46: Unlucky Thirteen


    Today is the 13th anniversary of this blog. I wanted to write a sharp and subversive post.1 Or at least compose a series of self-deprecating witticisms about persisting this long. Alas, it has been an extremely difficult year.

    Instead, I drew inspiration from Twitter (@neuroecology) and a blogger who's been at it even longer than I (@DoctorZen). Very warily I might add, because I knew the results would not be flattering or pretty.

    Behold my scores on the “Big Five” personality traits (and weep). Some of the extremes are partly situational, and that's why I'm presenting these traits separately. Sure, negative emotionality is a relatively fixed part of my personality, but the 100% scores on depression and anxiety are influenced by grief (due to the loss of my spouse of 12 years). Personality psychologists would turn this around and say that someone high in trait negative emotionality (formerly known by the more disparaging term “neuroticism”) would be predisposed to depression and anxiety.




    Another fun trait score is shown below. This one might be even sadder. Yeah, I'm introverted, but people in my situation often tend to withdraw from friends, family, and society.2 Again, reverse the causality if you wish, but social isolation is not an uncommon response.





    But hey, I am pretty conscientious, as you can see from my overall test results on the Big Five. You too can take the test HERE.




    I'll have something more interesting for you next time.



    Footnotes

    1 Why? To prove to myself that I can still do it? To impress the dwindling number of readers? To show that the blog has not exceeded its expiry date and still has relevance in its own modest and quirky way.

    2 Hey, I actually had two social engagements this weekend! My lack of assertiveness is disturbing, however. But I absolutely do not want to take the lead on anything right now.






    It ended in a tie!




    Granted, this is a small and biased sample, and I don't have a large number of followers. The answers might have been different had @russpoldrack (Yes in a landslide) or @Neuro_Skeptic (n=12,458 plus 598 wacky write-in votes) posed the question.

    Before the poll I facetiously asked:
    Other hypothetical questions (that you don't need to answer) might include:
    • Are you a clinical neuropsychologist? 
    • Do you use computational modeling in your work?1
    • What is your age?
    Here, I was thinking:
    • Clinical neuropsychologists would say No
    • Computational researchers would say Yes
    • On average, older people would be more likely to say No than younger people

    After the poll I asked, “So what ARE the differences between executive function and cognitive control? Or are the terms arbitrary, and their usage a matter of context / subfield?”

    No one wanted to expound on the differences between the terms.2
    I answered No, because I think the terms are arbitrary, and their usage a matter of context and subfield. Not that Wikipedia is the ultimate authority, but I was amused to see this:

    Executive functions

    From Wikipedia, the free encyclopedia
      (Redirected from Cognitive control)
    Executive functions (collectively referred to as executive function and cognitive control) are a set of cognitive processes that are necessary for the cognitive control of behavior: selecting and successfully monitoring behaviors that facilitate the attainment of chosen goals. Executive functions include basic cognitive processes such as attentional control, cognitive inhibition, inhibitory control, working memory, and cognitive flexibility.

    Nature said this:

    Cognitive control

    Cognitive control is the process by which goals or plans influence behaviour. Also called executive control, this process can inhibit automatic responses and influence working memory. Cognitive control supports flexible, adaptive responses and complex goal-directed thought. Some disorders, such as schizophrenia and ADHD, are associated with impairments of executive function.

    They're using the terms interchangeably! The terms cognitive control, executive control, executive function, and executive control functions are not well-differentiated, except in specific contexts. For instance, the Carter Lab definition below sounds specific at first, but then branches out to encompass many “executive functions” not named as such.

    Cognitive Control

    "Cognitive control" is a construct from contemporary cognitive neuroscience that refers to processes that allow information processing and behavior to vary adaptively from moment to moment depending on current goals, rather than remaining rigid and inflexible. Cognitive control processes include a broad class of mental operations including goal or context representation and maintenance, and strategic processes such as attention allocation and stimulus-response mapping. Cognitive control is associated with a wide range of processes and is not restricted to a particular cognitive domain. For example, the presence of impairments in cognitive control functions may be associated with specific deficits in attention, memory, language comprehension and emotional processing. ...

    Actually, the term Cognitive Control dates back to the 1920s, if not further. Two quick examples.

    (1) When talking about Charles Spearman and his theory of intelligence and his three qualitative principles, Charles S. Slocombe (1928) said:
    “To these he adds five quantitative principles, cognitive control (attention), fatigue, retentivity, constancy of output, and primordial potency...”
    Simple! Cognitive Control = Attention.

    (2) Frederick Anderson (1942), in The Relational Theory of Mind:
    “Meanings, then, are mental processes which, although not themselves objects for consciousness, actively modify and characterize that of which we are for the moment conscious. They differ from other subconscious processes in this respect, that we have cognitive control over them and can at any moment bring them to light if we choose.”
    Cognitive Control = having the capacity of “bringing things into consciousness” — is this different from attention, or “paying attention” to something by making it the focus of awareness?


    Moving into the 21st century, two of the quintessential contemporary cognitive control papers that [mostly] banish executives from their midst are:

    Miller and Cohen (2001):
    “The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals.”

    Botvinick et al. (2001):
    “A remarkable feature of the human cognitive system is its ability to configure itself for the performance of specific tasks through appropriate adjustments in perceptual selection, response biasing, and the on-line maintenance of contextual information. The processes behind such adaptability, referred to collectively as cognitive control, have been the focus of a growing research program within cognitive psychology.”

    I originally approached this topic during research for a future post on Mindstrong and their “digital phenotyping” technology. Two of their five biomarkers are Executive Function and Cognitive Control. How do they differ? There's an awful lot of overlap, as we'll see in a future post.


    Footnotes

    1 Another fun (and related) determinant might be: “Does your work focus on the dorsal anterior cingulate cortex?” In which case, the respondent would answer Yes.

    2 Except for one deliberately obfuscatory response.


    References

    Anderson F. (1942). The Relational Theory of Mind. The Journal of Philosophy 39(10):253-60.

    Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD. (2001). Conflict monitoring and cognitive control. Psychol Rev. 108(3):624-52.

    Miller EK, Cohen JD. (2001). An integrative theory of prefrontal cortex function. Annu Rev Neurosci. 24:167-202.

    Slocombe CS. (1928). Of mental testing—a pragmatic theory. Journal of Educational Psychology 19(1):1-24.


    Appendix

    Many, many articles use the terms interchangeably. I won't single out anyone in particular. Instead, here is a valiant attempt by Nigg (2017) to make a slight differentiation between them in a review paper entitled:
    On the relations among self-regulation, self-control, executive functioning, effortful control, cognitive control, impulsivity, risk-taking, and inhibition for developmental psychopathology.
    But in the end he concludes, “Executive functioning, effortful control, and cognitive control are closely related.”

