From Virology blog, 30 October 2015.
Professors Peter White, Trudie Chalder and Michael Sharpe (co-principal investigators of the PACE trial) respond to the three blog posts by David Tuller, published on Virology blog on 21st, 22nd and 23rd October 2015, about the PACE trial.
Overview
The PACE trial was a randomized controlled trial of four non-pharmacological treatments for 641 patients with chronic fatigue syndrome (CFS) attending secondary care clinics in the United Kingdom (UK) (www.wolfson.qmul.ac.uk/current-projects/pace-trial). The trial found that individually delivered cognitive behaviour therapy (CBT) and graded exercise therapy (GET) were more effective than both adaptive pacing therapy (APT), when added to specialist medical care (SMC), and SMC alone. The trial also found that CBT and GET were cost-effective, safe, and about three times more likely to result in a patient recovering than the other two treatments.
There are a number of published systematic reviews and meta-analyses, from both before and after the publication of the PACE trial results, that support these findings (Whiting et al, 2001, Edmonds et al, 2004, Chambers et al, 2006, Malouff et al, 2008, Price et al, 2008, Castell et al, 2011, Larun et al, 2015, Marques et al, 2015, Smith et al, 2015). We have published all the therapist and patient manuals used in the trial, which can be downloaded from the trial website (www.wolfson.qmul.ac.uk/current-projects/pace-trial).
We will only address David Tuller’s main criticisms. Most are often-repeated criticisms to which we have responded before, and we will argue that they are unjustified.
Main criticisms:
13% of patients had already “recovered” on entry into the trial
Some 13% of patients entering the trial did have scores within normal range (i.e. within one standard deviation of the population means) for either one or both of the primary outcomes of fatigue and physical function – but this is clearly not the same as being recovered; we published a correction after an editorial, written by others, implied that it was (White et al, 2011). In order to be considered recovered, patients also had to:
* Not meet case criteria for CFS
* Not meet eligibility criteria for either of the primary outcome measures for entry into the trial
* Rate their overall health (not just CFS) as “much” or “very much” better.
It would therefore be impossible to be recovered and eligible for trial entry (White et al, 2013).
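To make the arithmetic of the “normal range” definition above concrete, here is a minimal sketch of a within-one-standard-deviation cut-off; the population means and SDs in it are illustrative placeholders, not values taken from the PACE papers.

```python
# Minimal sketch of the "normal range" arithmetic described above: a score
# counts as within normal range if it lies within one standard deviation
# of a population mean. All figures are illustrative, not PACE data.

def normal_range_cutoff(pop_mean, pop_sd, higher_is_better=True):
    """Return the score one standard deviation from the population mean."""
    return pop_mean - pop_sd if higher_is_better else pop_mean + pop_sd

# Hypothetical population statistics for two commonly used scale shapes:
physical_cutoff = normal_range_cutoff(pop_mean=85.0, pop_sd=25.0)  # higher = better
fatigue_cutoff = normal_range_cutoff(pop_mean=14.0, pop_sd=4.5,
                                     higher_is_better=False)       # lower = better

print(f"Physical function: scores >= {physical_cutoff:.0f} count as within normal range")
print(f"Fatigue: scores <= {fatigue_cutoff:.1f} count as within normal range")
```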
Bias was caused by a newsletter for patients giving quotes from patients and mentioning UK government guidance on management. A key investigator was on the guideline committee
It is considered good practice to publish newsletters for participants in trials, so that they are kept fully informed both about the trial’s progress and topical news about their illness. We published four such newsletters during the trial, which can all be found at www.wolfson.qmul.ac.uk/current-projects/pace-trial. The newsletter referred to is the one found at this link: www.wolfson.qmul.ac.uk/images/pdfs/participantsnewsletter3.pdf.
As can be seen, no specific treatment or therapy is named in this newsletter, and we were careful to print feedback from participants from all four treatment arms. All newsletters were approved by the independent research ethics committee before publication. It seems very unlikely that this newsletter could have biased participants, as any influence on their ratings would affect all treatment arms equally.
The same newsletter also mentioned the release of the UK National Institute for Health and Care Excellence guideline for the management of this illness (this institute is independent of the UK government). This came out in 2007 and received much media interest, so most patients would already have been aware of it. Apart from describing its content in summary form, we also said: “The guidelines emphasize the importance of joint decision making and informed choice and recommended therapies include Cognitive Behavioural Therapy, Graded Exercise Therapy and Activity Management.” These three (not two as David Tuller states) therapies were the ones being tested in the trial, so it is hard to see how this might lead to bias in the direction of one or other of these therapies.
The “key investigator” on the guidelines committee, who was mentioned by David Tuller, helped to write the GET manuals and provided training and supervision for one of the therapies; however, they had left the trial team two years before the newsletter’s publication.
Bias was caused by changing the two primary outcomes and how they were analyzed
These criticisms were first made four years ago, and have been repeatedly addressed and explained by us (White et al, 2013a, White et al, 2015), including explicit descriptions and justification within the main paper itself (White et al, 2011a), the statistical analysis plan (Walwyn et al, 2013), and the frequently asked questions section of the trial website, published in 2011 (www.wolfson.qmul.ac.uk/images/pdfs/pace/faq2.pdf).
The two primary outcomes for the trial were the SF-36 physical function subscale and the Chalder fatigue questionnaire, as in the published trial protocol, so there was no change in the outcomes themselves. The only change to the primary outcomes from the original protocol was the use of the Likert scoring method (0, 1, 2, 3) for the fatigue questionnaire, in preference to the binary method of scoring (0, 0, 1, 1), in order to improve the variance of the measure (and thus provide better evidence of any change).
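For readers unfamiliar with the two scoring methods, the sketch below shows the difference on an 11-item questionnaire answered on a 4-point scale; the responses are invented for illustration and are not trial data.

```python
# The same 11 invented answers scored two ways. Each item is answered 0-3.
responses = [2, 3, 1, 2, 0, 3, 2, 1, 2, 2, 1]

# Likert scoring keeps all four response levels (0, 1, 2, 3), giving a
# 0-33 total with finer gradations (more variance between participants).
likert_total = sum(responses)

# Binary scoring collapses the levels to 0 or 1 (0, 0, 1, 1), giving a
# coarser 0-11 total.
binary_total = sum(1 for r in responses if r >= 2)

print(f"Likert total: {likert_total} out of 33")   # 19 for these answers
print(f"Binary total: {binary_total} out of 11")   # 7 for these answers
```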
The other change was to drop the originally chosen composite measures (the number of patients who either exceeded a threshold score or who changed by more than 50 per cent). After careful consideration, we decided this composite method would be hard to interpret clinically, and would not answer our main question of comparing effectiveness between treatment arms. We therefore chose to compare mean scores of each outcome measure between treatment arms instead.
All these changes were made before any outcome data were analyzed (i.e. they were pre-specified), and were all approved by the independent Trial Steering Committee and Data Monitoring and Ethics committee.
Our interpretation was misleading after changing the criteria for determining recovery
We addressed this criticism two years ago in correspondence that followed the paper (White et al, 2013b), and the changes were fully described and explained in the paper itself (White et al, 2013). We changed the thresholds for recovery from the original protocol, in our secondary analysis paper on recovery, for three, not four, of the variables, since we believed that the revised thresholds better reflected recovery. For instance, we included those who felt “much” (and “very much”) better in their overall health as one of the five criteria that defined recovery. This was done before the analysis occurred (i.e. it was pre-specified). In the discussion section of the paper we discussed the limitations and difficulties of measuring recovery, and stated that other ways of defining recovery could produce different results. We also provided the results of different criteria for defining recovery in the paper. The bottom line was that, however we defined recovery, significantly more patients had recovered after receiving CBT and GET than after the other treatments (White et al, 2013).
Requests for data under the Freedom of Information Act were rejected as vexatious
We have received numerous Freedom of Information Act requests over the course of many years. These even included a request to know how many Freedom of Information requests we had received. We have provided these data when we were able to (e.g. the 13% figure mentioned above came from our releasing these data). However, the safeguarding of personal medical data was an undertaking enshrined in the consent procedure and therefore is ethically binding; so we cannot publicly release these data. It is important to remember that simple methods of anonymization do not always protect the identity of a person, as they may be recognized from personal and medical information. We have only considered two of these many Freedom of Information requests as vexatious, although an Information Tribunal judge considered an earlier request was also vexatious (General Regulation Chamber, 2013).
Subjective and objective outcomes
These issues were first raised seven years ago and have all been addressed before (White et al, 2008, White et al, 2011, White et al, 2013a, White et al, 2013b, Chalder et al, 2015a). We chose (subjective) self-ratings as the primary outcomes, since we considered that the patients themselves were the best people to determine their own state of health. We have also reported the results of a number of objective outcomes, including a walking test, a stepping test, employment status and financial benefits (White et al, 2011a, McCrone et al, 2012, Chalder et al, 2015). The distance participants could walk in six minutes was significantly improved following GET, compared to other treatments. There were no significant differences in fitness, employment or benefits between treatments. We interpreted these data in the light of their context and validity. For instance, we did not use employment status as a measure of recovery or improvement, because patients may not have been in employment before falling ill, or they may have lost their job as a consequence of being ill (White et al, 2013b). Getting better and getting a job are not the same things, and being in employment depends on the prevailing state of the local economy as much as being fit for work.
There was a bias caused by many investigators’ involvement with insurance companies and a failure to declare links with insurance companies in information regarding consent
No insurance company was involved in any aspect of the trial. There were some 19 investigators, three of whom have done consultancy work at various times for insurance companies. This was not related to the research and was listed as a potential conflict of interest in the relevant papers. The patient information sheet informed all potential participants as to which organizations had funded the research, which is consistent with ethical guidelines.
References
Castell BD et al, 2011. Cognitive Behavioral Therapy and Graded Exercise for Chronic Fatigue Syndrome: A Meta-Analysis. Clin Psychol Sci Pract 18: 311-324. doi: http://dx.doi.org/10.1111/j.1468-2850.2011.01262.x
Chalder T et al, 2015. Rehabilitative therapies for chronic fatigue syndrome: a secondary mediation analysis of the PACE trial. Lancet Psychiatry 2: 141-152. doi: http://dx.doi.org/10.1016/S2215-0366(14)00069-8
Chalder T et al, 2015a. Methods and outcome reporting in the PACE trial – Authors’ reply. Lancet Psychiatry 2: e10-e11. doi: http://dx.doi.org/10.1016/S2215-0366(15)00114-5
Chambers D et al, 2006. Interventions for the treatment, management and rehabilitation of patients with chronic fatigue syndrome/myalgic encephalomyelitis: an updated systematic review. J R Soc Med 99: 506-520.
Edmonds M et al, 2004. Exercise therapy for chronic fatigue syndrome. Cochrane Database Syst Rev 3: CD003200. doi: http://dx.doi.org/10.1002/14651858.CD003200.pub2
General Regulation Chamber (Information Rights) First Tier Tribunal. Mitchell versus Information commissioner. EA 2013/0019.
www.informationtribunal.gov.uk/DBFiles/Decision/i1069/20130822%20Decision%20EA20130019.pdf
Larun L et al, 2015. Exercise therapy for chronic fatigue syndrome. Cochrane Database Syst Rev 2: CD003200. doi: http://dx.doi.org/10.1002/14651858.CD003200.pub3
Malouff JM et al, 2008. Efficacy of cognitive behavioral therapy for chronic fatigue syndrome: a meta-analysis. Clin Psychol Rev 28: 736–45.
doi: http://dx.doi.org/10.1016/j.cpr.2007.10.004
Marques MM et al, 2015. Differential effects of behavioral interventions with a graded physical activity component in patients suffering from Chronic Fatigue (Syndrome): An updated systematic review and meta-analysis. Clin Psychol Rev 40: 123-137. doi: http://dx.doi.org/10.1016/j.cpr.2015.05.009
McCrone P et al, 2012. Adaptive pacing, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome: a cost effectiveness analysis. PLoS ONE 7: e40808. doi: http://dx.doi.org/10.1371/journal.pone.0040808
Price JR et al, 2008. Cognitive behaviour therapy for chronic fatigue syndrome in adults. Cochrane Database Syst Rev 3: CD001027.
doi: http://dx.doi.org/10.1002/14651858.CD001027.pub2
Smith MB et al, 2015. Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: A Systematic Review for a National Institutes of Health Pathways to Prevention Workshop. Ann Intern Med 162: 841-850. doi: http://dx.doi.org/10.7326/M15-0114
Walwyn R et al, 2013. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan. Trials 14: 386. http://www.trialsjournal.com/content/14/1/386
White PD et al, 2007. Protocol for the PACE trial: a randomised controlled trial of adaptive pacing, cognitive behaviour therapy, and graded exercise, as supplements to standardised specialist medical care versus standardised specialist medical care alone for patients with the chronic fatigue syndrome/myalgic encephalomyelitis or encephalopathy. BMC Neurol 7:6. doi: http://dx.doi.org/10.1186/1471-2377-7-6
White PD et al, 2008. Response to comments on “Protocol for the PACE trial”. http://www.biomedcentral.com/1471-2377/7/6/COMMENTS/prepub#306608
White PD et al, 2011. The PACE trial in chronic fatigue syndrome – Authors’ reply. Lancet 377: 1834-35. doi: http://dx.doi.org/10.1016/S0140-6736(11)60651-X
White PD et al, 2011a. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet 377:823-36. doi: http://dx.doi.org/10.1016/S0140-6736(11)60096-2
White PD et al, 2013. Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychol Med 43: 227-35. doi: http://dx.doi.org/10.1017/S0033291713000020
White PD et al, 2013a. Chronic fatigue treatment trial: PACE trial authors’ reply to letter by Kindlon. BMJ 347:f5963. doi: http://dx.doi.org/10.1136/bmj.f5963
White PD et al, 2013b. Response to correspondence concerning ‘Recovery from chronic fatigue syndrome after treatments in the PACE trial’. Psychol Med 43: 1791-2. doi: http://dx.doi.org/10.1017/S0033291713001311
White PD et al, 2015. The planning, implementation and publication of a complex intervention trial for chronic fatigue syndrome: the PACE trial. Psychiatric Bulletin 39: 24-27. doi: http://dx.doi.org/10.1192/pb.bp.113.045005
Whiting P et al, 2001. Interventions for the Treatment and Management of Chronic Fatigue Syndrome: A Systematic Review. JAMA 286: 1360-68. doi: http://dx.doi.org/10.1001/jama.286.11.1360
David Tuller’s response
From Virology blog, 30 October 2015.
David Tuller’s three-installment investigation of the PACE trial for chronic fatigue syndrome, “Trial By Error,” has received enormous attention. Although the PACE investigators declined David’s efforts to interview them, they have now requested the right to reply. Today, virology blog posts their response to David’s story, and below, his response to their response.
According to the communications department of Queen Mary University, the PACE investigators have been receiving abuse on social media as a result of David Tuller’s posts. When I published Mr. Tuller’s articles, my intent was to provide a forum for discussion of the controversial PACE results. Abuse of any kind should not have been, and must not be, part of that discourse.
Last December, I offered to fly to London to meet with the main PACE investigators to discuss my many concerns. They declined the offer. Dr. White cited my previous coverage of the issue as the reason and noted that “we think our work speaks for itself.” Efforts to reach out to them for interviews two weeks ago also proved unsuccessful.
After my story ran on virology blog last week, a public relations manager for medicine and dentistry in the marketing and communications department of Queen Mary University e-mailed Dr. Racaniello. He requested, on behalf of the PACE authors, the right to respond. (Queen Mary University is Dr. White’s home base.)
That response arrived Wednesday. My first reaction, when I read it, was that I had already rebutted most of their criticisms in my 14,000-word piece, so engaging in further extended debate seemed like a waste of time.
Later in the day, however, the public relations manager for medicine and dentistry from the marketing and communications department of Queen Mary University e-mailed Dr. Racaniello again, with an urgent request to publish the response as soon as possible. The PACE investigators, he said, were receiving “a lot of abuse” on social media as a result of my posts, so they wanted to correct the “misinformation” as soon as possible.
Because I needed a day or two to prepare a careful response to the PACE team’s rebuttal, Dr. Racaniello agreed to post them together on Friday morning.
On Thursday, Dr. Racaniello received yet another appeal from the public relations manager for medicine and dentistry from the marketing and communications department of Queen Mary University. Dissatisfied with the Friday publishing timeline, he again urged expedited publication because “David’s blog posts contain a number of inaccuracies, may cause a considerable amount of reputational damage, and he did not seek comment from any of the study authors before the virology blog was published.”
The charge that I did not seek comment from the authors was at odds with the facts, as Dr. Racaniello knew. (It is always possible to argue about accuracy and reputational damage.) Given that much of the argument for expedited posting rested on the public relations manager’s obviously “dysfunctional cognition” that I had unfairly neglected to provide the PACE authors with an opportunity to respond, Dr. Racaniello decided to stick with his pre-planned posting schedule.
Before addressing the PACE investigators’ specific criticisms, I want to apologize sincerely to Dr. White, Dr. Chalder, Dr. Sharpe and their colleagues on behalf of anyone who might have interpreted my account of what went wrong with the PACE trial as license to target the investigators for “abuse.” That was obviously not my intention in examining their work, and I urge anyone engaging in such behavior to stop immediately. No one should have to suffer abuse, whether online or in the analog world, and all victims of abuse deserve enormous sympathy and compassion.
However, in this case, it seems I myself am being accused of having incited a campaign of social media “abuse” and potentially causing “reputational damage” through purportedly inaccurate and misinformed reporting. Because of the seriousness of these accusations, and because such accusations have a way of surfacing in news reports, I feel it is prudent to rebut the PACE authors’ criticisms in far more detail than I otherwise would. (I apologize in advance to the obsessives and others who feel they need to slog through this rebuttal; I urge you to take care not to over-exert yourself!)
In their effort to correct the “misinformation” and “inaccuracies” in my story about the PACE trial, the authors make claims and offer accounts similar to those they have previously presented in published comments and papers. In the past, astonishingly, journal editors, peer reviewers, reporters, public health officials, and the British medical and academic establishments have accepted these sorts of non-responsive responses as adequate explanations for some of the study’s fundamental flaws. I do not.
None of what they have written in their response actually addresses or resolves the core issues that I wrote about last week. They have ignored many of the questions raised in the article. In their response, they have also not mentioned the devastating criticisms of the trial from top researchers at Columbia, Stanford, University College London, and elsewhere. They have not addressed why major reports this year from the Institute of Medicine and the National Institutes of Health have presented portraits of the disease starkly at odds with the PACE framework and approach.
I will ignore their overview of the findings and will focus on the specific criticisms of my work. (I will, however, mention here that my piece discussed why their claims of cost-effectiveness for cognitive behavior therapy and graded exercise therapy are based on inaccurate statements in a paper published in PLoS One in 2012).
13% of patients had already “recovered” on entry into the trial
I did not write that 13% of the participants were “recovered” at baseline, as the PACE authors state. I wrote that they were “recovered” or already at the “recovery” thresholds for two specific indicators, physical function and fatigue, at baseline—a different statement, and an accurate one.
The authors acknowledge, in any event, that 13% of the sample was “within normal range” at baseline. For the 2013 paper in Psychological Medicine, these “normal range” thresholds were re-purposed as two of the four required “recovery” criteria.
And that raises the question: Why, at baseline, was 13% of the sample “within normal range” or “recovered” on any indicator in the first place? Why did entry criteria for disability overlap with outcome scores for being “within the normal range” or “recovered”? The PACE authors have never provided an explanation of this anomaly.
In their response, the authors state that they outlined other criteria that needed to be met for someone to be called “recovered.” This is true; as I wrote last week, participants needed to meet “recovery” criteria on four different indicators to be considered “recovered.” The PACE authors did not provide data for two of the indicators in the 2011 Lancet paper, so in that paper they could not report results for “recovery.”
However, at the press conference presenting the 2011 Lancet paper, Trudie Chalder referred to people who met the overlapping disability/”normal range” thresholds as having gotten “back to normal”—an explicit “recovery” claim. In a Lancet comment published along with the PACE study itself, colleagues of the PACE team referred to these bizarre “normal range” thresholds for physical function and fatigue as a “strict criterion for recovery.” As I documented, the Lancet comment was discussed with the PACE authors before publication; the phrase “strict criterion for recovery” obviously survived that discussion.
Much of the coverage of the 2011 paper reported that patients got “back to normal” or “recovered,” based on Dr. Chalder’s statement and the Lancet comment. The PACE authors made no public attempt to correct the record in the months of apparently inaccurate news coverage that followed; the correction came only later, in a letter to the Lancet. In the response to Virology Blog, they say that they were discussing “normal ranges” in the Lancet paper, and not “recovery.” Yet they have not explained why Dr. Chalder spoke about participants getting “back to normal” or why their colleagues wrote that the nonsensical “normal range” thresholds represented a “strict criterion for recovery.”
Moreover, they still have not responded to the essential questions: How does this analysis make sense? What are the implications for the findings if 13% are already “within normal range” or “recovered” on one of the two primary outcome measures? How can they be “disabled” enough on the two primary measures to qualify for the study if they’re already “within normal range” or “recovered”? And why did the PACE team use the wrong statistical methods for calculating their “normal ranges” when they knew that method was wrong for the data sources they had?
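As a general statistical illustration of the kind of problem being alleged here, and using invented numbers rather than anything from the PACE papers, the sketch below shows how a mean-minus-one-SD rule can misbehave when scores pile up at the top of a scale.

```python
# Invented example: a 0-100 scale on which most of a reference population
# scores at the ceiling. Under a normal distribution, mean - 1 SD sits at
# roughly the 16th percentile; on ceiling-heavy data the two can diverge.
population = [100] * 95 + [20] * 5  # 95% score 100; a small minority score 20

n = len(population)
mean = sum(population) / n
sd = (sum((x - mean) ** 2 for x in population) / n) ** 0.5

sd_threshold = mean - sd                   # the "mean - 1 SD" cut-off
pct16 = sorted(population)[int(0.16 * n)]  # the percentile it approximates

print(f"mean = {mean:.1f}, SD = {sd:.1f}, mean - 1 SD = {sd_threshold:.1f}")
print(f"16th percentile = {pct16}")
# Here the SD rule calls a score of about 79 "normal" even though 95% of
# this invented population scores 100.
```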
Bias was caused by a newsletter for patients giving quotes from patients and mentioning UK government guidance on management. A key investigator was on the guideline committee
The PACE authors apparently believe it is appropriate to disseminate positive testimonials during a trial as long as the therapies or interventions are not mentioned. (James Coyne dissected this unusual position yesterday.)
This is their argument: “It seems very unlikely that this newsletter could have biased participants as any influence on their ratings would affect all treatment arms equally.” Apparently, the PACE investigators believe that if you bias all the arms of your study in a positive direction, you are not introducing bias into your study. It is hard to know what to say about this argument.
Furthermore, the PACE authors argue that the U.K. government’s new treatment guidelines had been widely reported. Therefore, they contend, it didn’t matter that–in the middle of a trial to test the efficacy of cognitive behavior therapy and graded exercise therapy–they had informed participants that the government had already approved cognitive behavior therapy and graded exercise therapy “based on the best available evidence.”
They are wrong. They introduced an uncontrolled, unpredictable co-intervention into their study, and they have no idea what the impact might have been on any of the four arms.
In their response, the PACE authors note that the participants’ newsletter article mentioned, in addition to cognitive behavior therapy and graded exercise therapy, a third intervention, Activity Management. As they correctly note, I did not mention this third intervention in my Virology Blog story. The PACE authors now write: “These three (not two as David Tuller states) therapies were the ones being tested in the trial, so it is hard to see how this might lead to bias in the direction of one or other of these therapies.”
This statement is nonsense. Their third intervention was called “Adaptive Pacing Therapy,” and they developed it specifically for testing in the PACE trial. It is unclear why they now state that their third intervention was Activity Management, or why they think participants would know that Activity Management was synonymous with Adaptive Pacing Therapy. After all, cognitive behavior therapy and graded exercise therapy also involve some form of “activity management.” Precision in language matters in science.
Finally, the investigators say that Jessica Bavington, a co-author of the 2011 paper, had already left the PACE team before she served on the government committee that endorsed the PACE therapies. That might be, but it is irrelevant to the question that I raised in my piece: whether her dual role presented a conflict of interest that should have been disclosed to participants in the newsletter article about the U.K. treatment guidelines. The PACE newsletter article presented the U.K. guideline committee’s work as if it were independent of the PACE trial itself, when it was not.
Bias was caused by changing the two primary outcomes and how they were analyzed
The PACE authors seem to think it is acceptable to change methods of assessing primary outcome measures during a trial as long as they get committee approval, announce it in the paper, and provide some sort of reasonable-sounding explanation as to why they made the change. They are wrong.
They also need to justify the changes with references or citations that support their new interpretations of their indicators, and they need to conduct sensitivity analyses to assess the impact of the changes on their findings. Then they need to explain why their preferred findings are more robust than the initial, per-protocol findings. They did not take these steps for any of the many changes they made from their protocol.
The PACE authors mention the change from bimodal to Likert-style scoring on the Chalder Fatigue Scale. They repeat their previous explanation of why they made this change. But they have ignored what I wrote in my story—that the year before PACE was published, its “sister” study, called the FINE trial, had no significant findings on the physical function and fatigue scales at the end of the trial and only found modest benefits in a post-hoc analysis after making the same change in scoring that PACE later made. The FINE study was not mentioned in PACE. The PACE authors have not explained why they left out this significant information about their “sister” study.
Regarding the abandonment of the original method of assessing the physical function scores, this is what they say in their response: “We decided this composite method [their protocol method] would be hard to interpret clinically, and would not answer our main question of comparing effectiveness between treatment arms. We therefore chose to compare mean scores of each outcome measure between treatment arms instead.” They mention that they received committee approval, and that the changes were made before examining the outcome data.
The authors have presented these arguments previously. However, they have not responded to the questions I raised in my story. Why did they not report any sensitivity analyses for the changes in methods of assessing the primary outcome measures? (Sensitivity analyses can assess how changes in assumptions or variables affect outcomes.) What prompted them to reconsider their assessment methods in the middle of the trial? Were they concerned that a mean-based measure, unlike their original protocol measure, did not provide any information about the proportions of participants who improved or got worse? Any information about proportions of participants who got better or worse came from post-hoc analyses, one of which was the perplexing “normal range” analysis.
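To show in miniature what such a sensitivity analysis might look like, here is a hedged sketch with invented scores; the arm labels, scores and thresholds are placeholders, not trial data.

```python
# Miniature sensitivity analysis: recompute an outcome under the original
# threshold and a revised one, then compare. All scores are invented.
arm_scores = {
    "Therapy arm": [45, 55, 60, 70, 75, 80, 85, 90],
    "Control arm": [40, 50, 55, 60, 65, 70, 75, 85],
}

for threshold in (85, 60):  # e.g. a protocol threshold vs a revised one
    print(f"With a threshold of >= {threshold}:")
    for arm, scores in arm_scores.items():
        met = sum(score >= threshold for score in scores)
        print(f"  {arm}: {met} of {len(scores)} participants meet the criterion")
```

Whether the between-arm difference survives both definitions is exactly what a reported sensitivity analysis would show.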
Moreover, this was an unblinded trial, and researchers generally have an idea of outcome trends before examining outcome data. When the PACE authors made the changes, did they already have an idea of outcome trends? They have not answered that question.
Our interpretation was misleading after changing the criteria for determining recovery
The PACE authors relaxed all four of their criteria for “recovery” in their 2013 paper and cited no committee approval for this overall redefinition of a critical concept. Three of these relaxations involved expanded thresholds; the fourth involved splitting one category into two sub-categories, one less restrictive and one more restrictive. The authors gave the full results for the less restrictive category of “recovery.”
The PACE authors now say that they changed the “recovery” thresholds on three of the variables “since we believed that the revised thresholds better reflected recovery.” Again, they apparently think that simply stating their belief that the revisions were better justifies making the changes.
Let’s review for a second. The physical function threshold for “recovery” fell from 85 out of 100 in the protocol, to a score of 60 in the 2013 paper. And that “recovery” score of 60 was lower than the entry score of 65 to qualify for the study. The PACE authors have not explained how the lower score of 60 “better reflected recovery”—especially since the entry score of 65 already represented serious disability. Similar problems afflicted the fatigue scale “recovery” threshold.
The PACE authors also report that “we included those who felt “much” (and “very much”) better in their overall health” as one of the criteria for “recovery.” This is true. They are referring to the Clinical Global Impression scale. In the protocol, participants needed to score a 1 (“very much better”) on this scale to be considered “recovered” on that indicator. In the 2013 paper, participants could score a 1 (“very much better”) or a 2 (“much better”). The PACE authors provided no citations to support this expanded interpretation of the scale. They simply explained in the paper that they now thought “much better” reflected the process of recovery and so those who gave a score of 2 should also be considered to have achieved the scale’s “recovery” threshold.
With the fourth criterion—not meeting any of the three case definitions used to define the illness in the study—the PACE authors gave themselves another option. Those who did not meet the study’s main case definition but still met one or both of the other two were now eligible for a new category called “trial recovery.” They did not explain why or when they made this change.
The PACE authors provided no sensitivity analyses to measure the impact of the significant changes in the four separate criteria for “recovery,” as well as in the overall redefinition. And remember, participants at baseline could already have achieved the “recovery” requirements for one or two of the four criteria (the physical function and fatigue scales). And 13% of them already had.
Requests for data under the Freedom of Information Act were rejected as vexatious
The PACE authors have rejected requests for the results per the protocol and many other requests for documents and data as well—at least two for being “vexatious,” as they now report. In my story, I incorrectly stated that requests for per-protocol data were rejected as “vexatious.” In fact, earlier requests for per-protocol data were rejected for other reasons.
One recent request rejected as “vexatious” involved the PACE investigators’ 2015 paper in The Lancet Psychiatry. In this paper, they published their last “objective” outcome measure (except for wages, which they still have not published)—a measure of fitness called a “step-test.” But they only published a tiny graph on a page with many other tiny graphs, not the actual numbers from which the graph was drawn.
The graph was too small to extract any data, but it appeared that the cognitive behavior therapy and graded exercise therapy groups did worse than the other two. A request for the step-test data from which they created the graph was rejected as “vexatious.”
However, I apologize to the PACE authors that I made it appear they were using the term “vexatious” more extensively in rejecting requests for information than they actually have been. I also apologize for stating incorrectly that requests for per protocol data specifically had been rejected as “vexatious.”
This is probably a good time to address the PACE authors’ repeated refrain that concerns about patient confidentiality prevent them from releasing raw data and other information from the trial. They state: “The safeguarding of personal medical data was an undertaking enshrined in the consent procedure and therefore is ethically binding; so we cannot publicly release these data. It is important to remember that simple methods of anonymization do not always protect the identity of a person, as they may be recognized from personal and medical information.”
This argument against the release of data doesn’t really hold up, given that researchers share data all the time without compromising confidentiality. Really, it’s not that difficult to do!
(It also bears noting that the PACE authors’ dedication to participant protection did not extend to fulfilling their protocol promise to inform participants of their “possible conflicts of interest”—see below.)
Subjective and objective outcomes
The PACE authors included multiple objective measures in their protocol. All of them failed to demonstrate real treatment success or “recovery.” The extremely modest improvements in the walking test in the exercise therapy arm still left participants more severely disabled than people with pacemakers, cystic fibrosis patients, and relatively healthy women in their 70s.
The authors now write: “We interpreted these data in the light of their context and validity.”
What the PACE team actually did was to dismiss their own objective data as irrelevant, or as not actually objective after all. In doing so, they cited various reasons they should have considered before including these measures in the study as “objective” outcomes. They provide one example in their response: they selected employment data as an objective measure of function, and then, as they explain in their response and have explained previously, decided afterwards that it wasn’t an objective measure of function after all.
The PACE authors consider this to be interpreting data “in the light of their context and validity.” To me, it looks like tossing out data they don’t like.
What they should do, but have not done, is ask whether the failure of all their objective measures might mean they should start questioning the meaning, reliability and validity of their reported subjective results.
There was a bias caused by many investigators’ involvement with insurance companies and a failure to declare links with insurance companies in information regarding consent
The PACE authors here seriously misstate the concerns I raised in my piece. I did not assert that bias was caused by their involvement with insurance companies. I asserted that they violated an international research ethics document and broke a commitment they made in their protocol to inform participants of “any possible conflicts of interest.” Whether bias actually occurred is not the point.
In their approved protocol, the authors promised to adhere to the Declaration of Helsinki, a foundational human rights document that is explicit about what constitutes legitimate informed consent: prospective participants must be “adequately informed” of “any possible conflicts of interest.” The PACE authors now suggest this disclosure was unnecessary because: 1) the conflicts weren’t really conflicts after all; 2) they disclosed these “non-conflicts” as potential conflicts of interest in the Lancet and other publications; 3) they had many investigators but only three had links with insurers; and 4) they informed participants about who funded the research.
These responses are not serious. They do nothing to explain why the PACE authors broke their own commitment to inform participants about “any possible conflicts of interest.” It is not acceptable to promise to follow a human rights declaration, receive approvals for a study, and then ignore inconvenient provisions. No one is much concerned about PACE investigator #19; people are concerned because the three main PACE investigators have advised disability insurers that cognitive behavior therapy and graded exercise therapy can get claimants off benefits and back to work.
That the PACE authors made the appropriate disclosures to journal editors is irrelevant; it is unclear why they are raising this as a defense. The Declaration of Helsinki is about protecting human research subjects, not about protecting journal editors and journal readers. And providing information to participants about funding sources, however ethical that might be, is not the same as disclosing information about “any possible conflicts of interest.” The PACE authors know this.
Moreover, the PACE authors appear to define “conflict of interest” quite narrowly. Just because the insurers were not involved in the study itself does not mean there is no conflict of interest and does not alleviate the PACE authors of the promise they made to inform trial participants of these affiliations. No one required them to cite the Declaration of Helsinki in their protocol as part of the process of gaining approvals for their trial.
As it stands, the PACE study appears to have no legitimate informed consent for any of the 641 participants, per the commitments the investigators themselves made in their protocol. This is a serious ethical breach.
I raised other concerns in my story that the authors have not addressed. I will save everyone much grief and not go over them again here.
I want to acknowledge two additional minor errors. In the last section of the piece, I referred to the drug rituximab as an “anti-inflammatory.” While it does have anti-inflammatory effects, rituximab should more properly be referred to as an “immunomodulatory” drug.
Also, in the first section of the story, I wrote that Dr. Chalder and Dr. Sharpe did not return e-mails I sent them last December, seeking interviews. However, during a recent review of e-mails from last December, I found a return e-mail from Dr. Sharpe that I had forgotten about. In the e-mail, Dr. Sharpe declined my request for an interview.
I apologize to Dr. Sharpe for suggesting he hadn’t responded to my e-mail last December.
Comments

“You can trust us; just see how many references we have provided.” – to our own papers!!
‘..patients themselves were the best people to determine their own state of health.’ That’s rich!
Weak muscles? ‘All in your head.’
Foggy brain? ‘You’re imagining it.’
Permanently exhausted? ‘Clearly you need some exercise to make you forget your exhaustion.’
Etc.
David Tuller’s response is excellent, as are the following (yes, I know, total brain-fade if one tries to read more than a tiny bit at a time, but highly informative and worth keeping a note of):
http://blogs.plos.org/mindthebrain/2015/10/29/uninterpretable-fatal-flaws-in-pace-chronic-fatigue-syndrome-follow-up-study/#.VjJenRjUV3Y.twitter
http://www.meaction.net/2015/11/01/prof-jonathan-edwards-pace-trial-is-valueless/
However, it is, I think, inevitable that the only bits of all this awful PACE hoo-ha that will enter the public’s, and mainstream medical, minds will be the stuff about “online abuse” and “vexatious” requests – compare the vague allegations of “death threats” that Simon Wessely used to wheel out at convenient times. People with ME will continue to be portrayed as unstable, aggressive, and delusional about their lack of capacity for exercise. It’s all extremely frustrating and depressing, and I don’t really see it changing any time soon. The narrative about ME, in this country at least, was long ago hijacked by a small group of highly influential people, who seem to be given carte blanche to repeatedly trot out the same old garbage to great publicity. It suits the government, hell-bent on austerity measures and savage cuts to disability benefits, and very hard-pressed medics such as the average GP simply don’t have the time to go into all the real details and inform themselves about how catastrophically flawed and downright dangerous the current NICE-backed GET recommendation is.
Sorry to end on such a downer, but those of us who have had this wretched bloody illness for 30 plus years, are sick of hearing the same psychobabble given the time of day again and again.
I am deeply grateful to all those who try so hard to counteract it, like the MEA and the medics quoted in the links, and can only hope that one day, their voices will triumph.
Well said, Soloman and Findlow. I’m now friendless through trying to campaign. Please can you tell me who can help get through to the general public?? That would do for a start!!