
This Neuroimaging Method Has 100% Diagnostic Accuracy (or your money back)

Image: Fig 3 from Amen et al. (2015), doi:10.1371/journal.pone.0129659.g003

Did you know that SPECT imaging can diagnose PTSD with 100% accuracy (Amen et al., 2015)? Not only that, out of a sample of 397 patients from the Amen Clinic in Newport Beach, SPECT was able to distinguish between four different groups with 100% accuracy! That's right, the scans of (1) healthy participants, and patients with (2) classic post-traumatic stress disorder (PTSD), (3) classic traumatic brain injury (TBI), and (4) both disorders..... were all classified with 100% accuracy!

TRACK-TBI investigators, your 3T structural and functional MRI outcome measures are obsolete.

NIMH, the hard work of developing biomarkers for mental illness is done, you can shut down now. Except none of this research was funded by you...

The finding was #19 in a list of the top 100 stories by Discover Magazine.


How could the Amen Clinics, a for-profit commercial enterprise, accomplish what an army of investigators with billions in federal funding could not?

The authors1 relied on a large database of scans collected from multiple sites over a 20 year period. The total sample included 20,746 individuals who visited one of nine Amen Clinics from 1995-2014 for psychiatric and/or neurological evaluation (Amen et al., 2015). The first analysis included a smaller, highly selected sample matched on a number of dimensions, including psychiatric comorbidities (Group 1).

- click on image for larger view -


You'll notice the percentage of patients with ADHD was remarkably high (58%, matched across the three patient groups). Perhaps that's because...


I did not know that.
 Featuring Johnny Cash ADD.


SPECT uses a radioactive tracer injected 30 minutes before a scan that will assess either the “resting state” or an “on-task” condition (a continuous performance task, in this study). Clearly, SPECT is not the go-to method if you're looking for decent temporal resolution to compare two conditions of an active attention task. The authors used a region of interest (ROI) analysis to measure tracer activity (counts) in specific brain regions.
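For the uninitiated, a quantitative ROI analysis of this kind boils down to averaging the reconstructed tracer counts within each anatomically defined region. Here is a minimal sketch, assuming you already have a SPECT count volume and a co-registered label atlas; the array names, labels, and toy data are invented purely for illustration:

import numpy as np

def roi_mean_counts(counts, atlas, roi_labels):
    # counts: 3D array of reconstructed SPECT counts (hypothetical input)
    # atlas:  3D integer array of the same shape, one ROI label per voxel
    # roi_labels: mapping of ROI name -> integer label used in the atlas
    return {name: float(counts[atlas == label].mean())
            for name, label in roi_labels.items()}

# Toy example with made-up data: a tiny 4x4x4 "volume" and two fake ROIs.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=100, size=(4, 4, 4)).astype(float)
atlas = np.zeros((4, 4, 4), dtype=int)
atlas[:2] = 1   # pretend the front half is one region
atlas[2:] = 2   # and the back half is another
print(roi_mean_counts(counts, atlas, {"roi_A": 1, "roi_B": 2}))

Everything downstream, including the 100% classification claims, rests on regional averages like these.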

I wondered about the circularity of the clinical diagnosis (i.e., were the SPECT scans used to aid diagnosis), particularly since “Diagnoses were made by board certified or eligible psychiatrists, using all of the data available to them, including detailed clinical history, mental status examination and DSM-IV or V criteria...” But we were assured that wasn't the case: “These quantitative ROI metrics were in no way used to aid in the clinical diagnosis of PTSD or TBI.” The rest of the methods (see Footnote 2) were opaque to me, as I know nothing about SPECT.

A second analysis relied on visual readings (VR) of about 30 cortical and subcortical ROIs. “Raters did not have access to detailed clinical information, but did know age, gender, medications, and primary presenting symptoms (ex. depressive symptoms, apathy, etc.).”  Hmm...

But the quantitative ROI analysis gave results superior to the clinicians' VR. So superior, in fact, that the sensitivity/specificity for distinguishing one group from another was 100% (indicated by red boxes below). The VR distinguished patients from controls with 100% accuracy, but was not as good at classifying the different patient groups during the resting-state scan: only a measly 86% sensitivity and 81% specificity for TBI vs. PTSD, which is still much better than in other studies. Results from the massively sized Group 2, however, were completely unimpressive.3


- click on image for larger view, you'll want to see this -



Why is this so important? PTSD and TBI can show overlapping symptoms in war veterans and civilians alike, and the two disorders can co-occur in the same individual. More accurate diagnosis can lead to better treatments. This active area of research is nicely reviewed in the paper, but no major breakthroughs have been reported yet. So the claims of Amen et al. are remarkable. Stunning, if true. But they're not. They can't be: the reported accuracy of the classifier exceeds what the precision of the measurements could possibly support. What is the test-retest reliability of SPECT? What is the concordance across sites? Was there really no change in imaging protocol, no improvements or upgrades to the equipment, over 20 years? SPECT is sensitive to motion artifact, so how was that handled, especially in patients who purportedly have ADHD?
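These aren't rhetorical questions. A back-of-the-envelope simulation (not the authors' method; the effect size and reliability values below are invented purely for illustration) shows how imperfect test-retest reliability caps the sensitivity and specificity that any single measure can deliver:

import numpy as np

rng = np.random.default_rng(42)
n = 1000            # subjects per group (arbitrary)
true_effect = 1.0   # assumed true PTSD-vs-TBI difference in some ROI metric, in SD units

for reliability in (1.0, 0.9, 0.7, 0.5):            # hypothetical test-retest reliabilities
    noise_sd = np.sqrt(1.0 / reliability - 1.0)     # measurement noise implied by that reliability
    ptsd = true_effect + rng.normal(0, 1, n) + rng.normal(0, noise_sd, n)
    tbi = rng.normal(0, 1, n) + rng.normal(0, noise_sd, n)
    threshold = (ptsd.mean() + tbi.mean()) / 2      # best single cut-point between the group means
    sensitivity = (ptsd > threshold).mean()
    specificity = (tbi <= threshold).mean()
    print(f"reliability={reliability:.1f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")

Even a full standard deviation of true separation, measured with zero noise, yields only about 69% sensitivity and specificity from a single threshold, and lower reliability drags that down further. Getting to 100% across four partially overlapping groups would require effect sizes and measurement stability far beyond anything reported for SPECT.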

SPECT has been noted for its poor spatial resolution compared to other functional neuroimaging techniques like PET and fMRI. A panel of 16 experts did not include SPECT among the recommended imaging modalities for the detection of TBI. Dr. Amen and his Clinics in particular have been criticized in journals (Farah, 2009; Adinoff & Devous, 2010a, 2010b; Chancellor & Chatterjee, 2011) and blogs (Science-Based Medicine, The Neurocritic, and Neurobollocks) for making unsubstantiated claims about the diagnostic accuracy and usefulness of SPECT.

Are his latest results too good to be true? You can check for yourself! The paper was published in PLOS ONE, which has an open data policy:
PLOS journals require authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception.

When submitting a manuscript online, authors must provide a Data Availability Statement describing compliance with PLOS's policy. If the article is accepted for publication, the data availability statement will be published as part of the final article.

Before you get too excited, here's the Data Availability Statement:
Data Availability: All relevant data are within the paper.

But this is not true. NONE of the data are available within the paper. There's no way to reproduce the authors' analyses, or to conduct your own. This is a problem, because...
Refusal to share data and related metadata and methods in accordance with this policy will be grounds for rejection. PLOS journal editors encourage researchers to contact them if they encounter difficulties in obtaining data from articles published in PLOS journals. If restrictions on access to data come to light after publication, we reserve the right to post a correction, to contact the authors' institutions and funders, or in extreme cases to retract the publication.

So, all you “research parasites” out there,4 you can request the data. I thought this modest proposal would create a brouhaha until I saw a 2014 press release announcing the World's Largest Database of Functional Brain Scans Produces New Insights to Help Better Diagnose and Treat Mental Health Issues:
With a generous grant from the Seeds Foundation [a Christian philanthropic organization] in Hong Kong, Dr. Amen and his research team led by neuroscientist Kristen Willeumier, PhD, have turned the de-identified scans and clinical information into a searchable database that is shared with other researchers around the world.

In the last two years, Amen and colleagues have presented 20 posters at the National Academy of Neuropsychology. The PR continues:
The magnitude and clinical significance of the Amen Clinics database – being the world's largest SPECT imaging database having such volume and breadth of data from patients 9 months old to 101 years of age – makes it a treasure trove for researchers to help advance and revolutionize the practice of psychiatry.

Does this mean that Dr. Amen will grant you access to the PLOS ONE dataset (or to the entire Amen Clinics database) if you ask nicely? If anyone tries to do this, please leave a comment.


Footnotes

1 The other authors included Dr. Andrew “Glossolalia” Newberg and Dr. Theodore “Neuro-Luminance Synaptic Space” Henderson.

2 Methods:
To account for outliers, T-score derived ROI count measurements were derived using trimmed means [91] that are calculated using all scores within the 98% confidence interval (-2.58 < Z < 2.58). The ROI mean for each subject and the trimmed mean for the sample are used to calculate T with the following formula: T = 10*((subject ROI_mean - trimmed regional_avg)/trimmed regional_stdev)+50.
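For the curious, here is one way the quoted computation could be implemented; the function name, toy data, and single trimming pass are my assumptions, since the methods don't spell out the details:

import numpy as np

def trimmed_t_scores(roi_means, z_cut=2.58):
    # roi_means: one mean ROI count per subject (hypothetical input array).
    # A single trimming pass is assumed: the trimmed mean/SD use only scores
    # with |Z| < z_cut, then every subject (outliers included) gets a T-score.
    roi_means = np.asarray(roi_means, dtype=float)
    z = (roi_means - roi_means.mean()) / roi_means.std(ddof=1)
    kept = np.abs(z) < z_cut
    trimmed_avg = roi_means[kept].mean()
    trimmed_std = roi_means[kept].std(ddof=1)
    return 10.0 * (roi_means - trimmed_avg) / trimmed_std + 50.0

# Example: 1000 simulated subjects plus two extreme outliers at the end.
rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(100, 15, 1000), [400.0, 5.0]])
t = trimmed_t_scores(sample)
print(round(t[:1000].mean(), 1), round(t[:1000].std(), 1))  # the bulk lands near mean 50, SD 10
print(t[-2:].round(1))                                      # the outliers fall far outside that range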
3 Results from the less pristine Group 2 were not impressive at all, I must say. Group 2 included patients with TBI (n=7,505), PTSD (n=1,077), or both (n=1,017), compared to n=11,147 patients without either diagnosis (these were not clean controls, as in Group 1). With such a massive number of subjects even trivial group differences reach statistical significance, and the results were clinically useless for the most part (see Table 6).

4 A brand new editorial in NEJM by Longo and Drazen (who decry “research parasites”) is causing a twitterstorm with the hashtags #researchparasites and #IAmAResearchParasite.


References

Adinoff B, Devous M. (2010a). Scientifically unfounded claims in diagnosing and treating patients. Am J Psychiatry 167(5):598.

Adinoff B, Devous M. (2010b). Response to Amen letter. Am J Psychiatry 167(9):1125-1126.

Amen D, Raji C, Willeumier K, Taylor D, Tarzwell R, Newberg A, Henderson T. (2015). Functional neuroimaging distinguishes posttraumatic stress disorder from traumatic brain injury in focused and large community datasets. PLoS ONE 10(7): e0129659. doi:10.1371/journal.pone.0129659

Chancellor B, Chatterjee A. (2011). Brain branding: When neuroscience and commerce collide. AJOB Neuroscience 2(4):18-27.

Farah MJ. (2009). A picture is worth a thousand dollars. J Cogn Neurosci. 21(4):623-4.
