Author Topic: Acupuncture for low back pain  (Read 1830 times)


Offline HanEyeAm

  • Not Enough Spare Time
  • **
  • Posts: 234
Re: Acupuncture for low back pain
« Reply #30 on: February 20, 2017, 03:16:11 PM »

I don't think it is appreciated that "low quality evidence" is still evidence. The Reports article outlines the definitions for evidence if you are interested (cf. PRISMA for more info). It doesn't mean no evidence. And it suggests to me, at least, that more research is warranted, improving the study quality at each stage.

Evidence-based medicine means integrating the evidence base, clinical judgement, and patient choice. Sometimes "low quality evidence" of a moderate effect is the best option. At least, the ACP thinks so. Do you think they would be so easily fooled by woo or political mischief?

  Low quality evidence is all there is.  The proponents of acupuncture always trot out the argument that there is not enough research, because in spite of massive volume, none of it verifies their wondrous claims for acupuncture.  The fact is the research is overwhelming, and more would just be beating the proverbial dead horse.  Like a lot of things people believe, acupuncture is a contrivance that sounds plausible to a few gullible victims.  The origins of acupuncture are ambiguous, but with any common sense it appears to be just some made-up shit.  First you have to assume that chi is a thing and that acupoints have meaning.  That's a big stretch.  I want strong evidence, not some testimonial like proponents seem to prefer.

I see what HanEyeAm is trying to say.  Imagine if in the 1900s someone came up with the treatment of washing hands and sterilizing instruments before surgery, but they said they were removing evil spirits from their hands and tools.  Imagine further that their technique involved a lot of verbal spells and hand gestures in addition to the actual washing and sterilization.  Then, to study it, you compared the spell-casting, hand-waving version to just washing your hands and sterilizing your instruments.  You would conclude that there's no difference between the real magic ceremony and the control procedure.

To apply that to acupuncture, maybe toothpicks in random spots on your back really do make you feel better by some mechanism.  Saying that traditional acupuncture doesn't work by comparing it to sham acupuncture doesn't say that there's not some kind of mechanism, just that the woo parts of it are complete bunk.  I think HanEyeAm agrees that sham acupuncture is just as good as the woo kind, but that both may be having some kind of effect that needs more study.

I would further assume (because I'm too lazy to look) that there's already a ton of studies that show that woo and sham acupuncture are about equal to directed relaxation or massage or any other thing that involves someone chilling out while someone else pays attention to them.

So, the bottom line is that the method of acupuncture is a sham, but that it may result in decreased pain in some general way.

It would be interesting to see studies that compare all different kinds of guided relaxation to see if we can narrow down what has the best effect on what symptoms, and see if we can work out the common mechanism.
Thank you, Billzbub, spot on, great example, and well said!
« Last Edit: February 20, 2017, 03:18:24 PM by HanEyeAm »

Offline jt512

  • Well Established
  • *****
  • Posts: 1355
    • jt512
Re: Acupuncture for low back pain
« Reply #31 on: February 20, 2017, 06:13:03 PM »
I don't blame you at all for not wanting to read them. If you change your mind, the Nature one is actually pretty good, IMO. It really gets into the details, discusses how the findings relate to relatively recent standards for acupuncture research, etc.

I took a quick look at the paper, and this jumped out at me. 

[image: the paper's contour-enhanced funnel plot of the osteoarthritis studies (Figure 11)]

Those are their results for osteoarthritis, presented in a funnel plot, used to visually assess for the possible presence of publication bias.  In the absence of publication bias, a funnel plot of the studies should be symmetrical and funnel shaped.  Publication bias usually results in small non-significant studies being suppressed, in which case the funnel plot will be asymmetrical with studies missing in the lower corner where small non-significant studies should be. 
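
That suppression mechanism is easy to see in a quick simulation. The numbers below are entirely hypothetical (true effect fixed at zero, an arbitrary 5% publication rate for non-significant results, direction of "benefit" chosen arbitrarily); it is a sketch of the selection process, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 2000 studies of varying size under a true effect of exactly zero.
n_studies = 2000
ses = rng.uniform(0.05, 0.5, n_studies)        # small SE = large study
effects = rng.normal(0.0, ses)                 # observed effects under the null

# Suppression rule (hypothetical): significant results in the "right"
# direction are always published; everything else appears 5% of the time.
significant = effects / ses > 1.96
published = significant | (rng.random(n_studies) < 0.05)

print("mean effect, all studies:    %.3f" % effects.mean())
print("mean effect, published only: %.3f" % effects[published].mean())
```

Under this rule the full set of studies averages essentially zero, while the published subset averages well above zero, and the surviving small studies (large SE) carry the biggest effects: the asymmetric funnel with a missing lower corner.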

The caption to the above figure laughably reads, "Visual inspection of the funnel plot suggested symmetry...indicating no evidence of publication bias."  They do a little better in the body of the text: "The contour-enhanced funnel plot suggested an asymmetry and Egger’s test indicated publication bias (coefficient = −3.71; P = 0.02). However, metatrim analysis found that no study was missing or should be added."
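
For readers who haven't met it, Egger's test is essentially a regression of the standardized effect (effect/SE) on precision (1/SE); under symmetry the intercept is near zero. A bare-bones, unweighted sketch with toy numbers (not Stata's exact implementation, and without the p-value machinery):

```python
import numpy as np

def egger_intercept(effects, ses):
    """Regress standardized effect (effect/SE) on precision (1/SE).
    Under funnel-plot symmetry the intercept is ~0; a clearly nonzero
    intercept is the asymmetry signal Egger's test formalizes."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    z = effects / ses                       # standardized effects
    prec = 1.0 / ses                        # precision
    X = np.column_stack([np.ones_like(prec), prec])
    (b0, b1), *_ = np.linalg.lstsq(X, z, rcond=None)
    return b0, b1                           # intercept, slope

# Toy data built to be perfectly asymmetric: each observed effect is
# inflated by half its SE (effect = 0.1 + 0.5 * SE).
ses = [0.1, 0.2, 0.3, 0.4, 0.5]
effects = [0.1 + 0.5 * s for s in ses]
b0, b1 = egger_intercept(effects, ses)      # intercept recovers 0.5 exactly
```

Because the toy data are exactly linear in SE, the regression recovers the built-in asymmetry (intercept 0.5) with no noise.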

Metatrim is the trim-and-fill module in Stata, the statistical package that the authors used.  Trim-and-fill is a statistical method that estimates the number of missing studies, along with their effect sizes and standard errors (precision).  I don't know the details of Stata's implementation of trim and fill.  There are variations.  The inventor of trim-and-fill recommends that a fixed-effect model be used for estimating missing studies even when a random-effects model is used for the main meta-analysis.  Using a fixed-effect trim-and-fill, I find five missing studies using the metafor package in R.  The trim-and-fill-adjusted funnel plot, with the hypothesized missing studies (white circles) added, looks like this:

[image: trim-and-fill-adjusted funnel plot; hypothesized missing studies shown as white circles]

Redoing the random-effects meta-analysis with the hypothesized missing studies included reduces the pooled effect estimate from the authors' –0.77 to –0.22, and the adjusted estimate is not statistically significant (p=.28).
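
For anyone wanting to replicate this kind of re-analysis without Stata or metafor, the random-effects pooling itself is short. This is the standard DerSimonian-Laird estimator (metafor defaults to REML, so numbers can differ slightly; the data below are illustrative, not the paper's):

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects pooled estimate.
    y: study effect estimates; v: their sampling variances (SE^2)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)            # fixed-effect mean
    Q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)        # between-study variance
    wr = 1.0 / (v + tau2)                          # random-effects weights
    pooled = np.sum(wr * y) / np.sum(wr)
    se = np.sqrt(1.0 / np.sum(wr))
    return pooled, se, tau2

# Illustrative numbers only: small imprecise studies report big effects.
pooled, se, tau2 = dersimonian_laird(
    [-0.9, -0.6, -0.3, -0.2], [0.09, 0.06, 0.02, 0.01])
```

Note how the random-effects weights flatten toward equality as tau2 grows, which is exactly why heterogeneous small studies pull the pooled estimate toward their inflated effects.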

Given the glaring asymmetry of the original funnel plot and the significant Egger's test, the authors' analysis that found that no missing studies should be added seems strange, and leads me to question the quality of this meta-analysis.
« Last Edit: February 20, 2017, 11:19:41 PM by jt512 »

Offline HanEyeAm

  • Not Enough Spare Time
  • **
  • Posts: 234
Re: Acupuncture for low back pain
« Reply #32 on: February 20, 2017, 11:42:31 PM »
I don't blame you at all for not wanting to read them. If you change your mind, the Nature one is actually pretty good, IMO. It really gets into the details, discusses how the findings relate to relatively recent standards for acupuncture research, etc.

I took a quick look at the paper, and this jumped out at me. 

[image: the paper's contour-enhanced funnel plot of the osteoarthritis studies (Figure 11)]

Those are their results for osteoarthritis, presented in a funnel plot, used to visually assess for the possible presence of publication bias.  In the absence of publication bias, a funnel plot of the studies should be symmetrical and funnel shaped.  Publication bias usually results in small non-significant studies being suppressed, in which case the funnel plot will be asymmetrical with studies missing in the lower corner where small non-significant studies should be. 

The caption to the above figure laughably reads, "Visual inspection of the funnel plot suggested symmetry...indicating no evidence of publication bias."  They do a little better in the body of the text: "The contour-enhanced funnel plot suggested an asymmetry and Egger’s test indicated publication bias (coefficient = −3.71; P = 0.02). However, metatrim analysis found that no study was missing or should be added."

Metatrim is the trim-and-fill module in Stata, the statistical package that the authors used.  Trim-and-fill is a statistical method that estimates the number of missing studies, along with their effect sizes and standard errors (precision).  I don't know the details of Stata's implementation of trim and fill.  There are variations.  The inventor of trim-and-fill recommends that a fixed-effect model be used for estimating missing studies even when a random-effects model is used for the main meta-analysis.  Using a fixed-effect trim-and-fill, I find five missing studies using the metafor package in R.  The trim-and-fill-adjusted funnel plot, with the hypothesized missing studies (white circles) added, looks like this:

[image: trim-and-fill-adjusted funnel plot; hypothesized missing studies shown as white circles]

Redoing the random-effects meta-analysis with the hypothesized missing studies included reduces the pooled effect estimate from the authors' –0.77 to –0.22, and the adjusted estimate is not statistically significant (p=.28).

Given the glaring asymmetry of the original funnel plot and the significant Egger's test, the authors' analysis that found that no missing studies should be added seems strange, and leads me to question the quality of this meta-analysis.

Thanks for the extra work on this, JT.

Just to be clear, you weren't interpreting the contour-enhanced funnel plot (Figure 11) as a typical funnel plot, were you?

I see why you would want to re-run the meta-analyses with the hypothesized missing studies as you have done. What of the work that warns against doing so, especially when there is asymmetry among lower precision studies (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1570006/)? I am not in the thick of this stuff, so if you have an opinion about this, I would benefit from hearing it.

BTW, there is more info on the Stata method for contour-enhanced funnel plots here: http://www.stata-journal.com/article.html?article=gr0033

Also, it should be noted that the authors expressed a concern for publication bias regarding osteoarthritis quite clearly in Table 7, and mentioned in the text that this was a reason for rating the level of evidence as low.

Offline jt512

  • Well Established
  • *****
  • Posts: 1355
    • jt512
Re: Acupuncture for low back pain
« Reply #33 on: February 21, 2017, 02:33:00 AM »
Thanks for the extra work on this, JT.

Just to be clear, you weren't interpreting the contour-enhanced funnel plot (Figure 11) as a typical funnel plot, were you?

Well, they're both just scatter plots of effect size vs. study size, so, um, yeah.

Quote
I see why you would want to re-run the meta-analyses with the hypothesized missing studies as you have done. What of the work that warns against doing so, especially when there is asymmetry among lower precision studies (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1570006/)? I am not in the thick of this stuff, so if you have an opinion about this, I would benefit from hearing it.


That paper does not warn against what you say it warns against.  Of course, one would be naive to assume that the pooled estimate including hypothesized missing studies was "true."  What calculating this pooled effect size does is provide an indication of the potential magnitude of the bias in the raw pooled effect size due to publication bias.

Quote
Also, it should be noted that the authors expressed a concern for publication bias regarding osteoarthritis quite clearly in Table 7, and mentioned in the text that this was a reason for rating the level of evidence as low.

They make three mutually inconsistent claims about publication bias for the OA studies.  In the caption of Figure 11 they say there is no publication bias; in Table 7 they say there is publication bias and it matters; and in the text they say that there is publication bias, but it doesn't matter.  They seem to want to have their cake and/or eat it, too.

Notice that every funnel plot in the paper shows the same thing: the larger the study, the smaller the effect size.  This is a red flag, suggesting that small studies are likely to be suppressed when their results are negative, or worse, studies are monitored for statistical significance and stopped when they attain it, a common "questionable research practice" that exploits random fluctuations in effect size to find false statistical significance.  Of course, there could be legitimate reasons why small studies find bigger effects than large studies, but if there are, those reasons should be pretty obvious and should be easy to explain.
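
The optional-stopping problem is easy to demonstrate: simulate experiments with a true effect of zero, test at several interim looks, and stop at the first p < .05. The look schedule and counts below are made up; the inflation of the false-positive rate is the point:

```python
import numpy as np

rng = np.random.default_rng(0)
LOOKS = (20, 40, 60, 80, 100)    # hypothetical interim sample sizes

def run_null_experiment():
    """True effect is zero, so any 'significance' is a false positive.
    Peek at each interim look and stop as soon as |z| > 1.96."""
    data = rng.standard_normal(max(LOOKS))   # unit-variance null data
    for n in LOOKS:
        z = data[:n].mean() * np.sqrt(n)     # z-statistic at this look
        if abs(z) > 1.96:
            return True                      # stop early, declare success
    return False

false_positive_rate = np.mean([run_null_experiment() for _ in range(4000)])
print(f"false-positive rate with peeking: {false_positive_rate:.3f}")
```

With five looks the realized false-positive rate lands well above the nominal 5%, because each peek gives random fluctuation another chance to cross the threshold.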
« Last Edit: February 21, 2017, 02:56:33 AM by jt512 »

Offline arthwollipot

  • Stopped Going Outside
  • *******
  • Posts: 5057
  • Observer of Phenomena
Re: Acupuncture for low back pain
« Reply #34 on: February 21, 2017, 02:41:58 AM »
The difference between levels of evidence is absolutely hair-splitting. To a degree that is normal in science. Very specific (operational) definitions are essential in good quality research and in understanding our world.

As mentioned above, "low quality evidence" is operationally defined to a meticulous degree in the articles and various standards like PRISMA. Although studies (and standards) don't all use the same exact definitions, they should be well described to allow replication that is reliable. This is how good quality systematic reviews (and RCTs) are conducted and compared (and judged).

AFAIK, there is no standard operational definition for "poor evidence." It is at best a blunt, possibly misleading description and opinion. As in, "evidence for climate change is poor." See? Completely subjective and thus unassailable.

Unfortunately, words sometimes mean things. Dismissing us because we used a common English synonym is not only hair-splitting, it is unnecessarily pedantic hair-splitting.

Offline Fast Eddie B

  • Frequent Poster
  • ******
  • Posts: 2447
Re: Acupuncture for low back pain
« Reply #35 on: February 21, 2017, 08:00:59 AM »
I think an analogy was floated upthread about hand washing.

Coincidentally, this just came up on my Twitter feed:

[image: chart shared on Twitter about the hand-washing evidence]

An example of what "good" or "strong" evidence looks like, without quibbling pedantically about definitions.
"And what it all boils down to is that no one's really got it figured out just yet" - Alanis Morisette
• • •
"I doubt that!" - James Randi

Offline HanEyeAm

  • Not Enough Spare Time
  • **
  • Posts: 234
Re: Acupuncture for low back pain
« Reply #36 on: February 21, 2017, 10:02:28 AM »
Thanks for the extra work on this, JT.

Just to be clear, you weren't interpreting the contour-enhanced funnel plot (Figure 11) as a typical funnel plot, were you?

Well, they're both just scatter plots of effect size vs. study size, so, um, yeah.

Quote
I see why you would want to re-run the meta-analyses with the hypothesized missing studies as you have done. What of the work that warns against doing so, especially when there is asymmetry among lower precision studies (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1570006/)? I am not in the thick of this stuff, so if you have an opinion about this, I would benefit from hearing it.


That paper does not warn against what you say it warns against.  Of course, one would be naive to assume that the pooled estimate including hypothesized missing studies was "true."  What calculating this pooled effect size does is provide an indication of the potential magnitude of the bias in the raw pooled effect size due to publication bias.

Quote
Also, it should be noted that the authors expressed a concern for publication bias regarding osteoarthritis quite clearly in Table 7, and mentioned in the text that this was a reason for rating the level of evidence as low.

They make three mutually inconsistent claims about publication bias for the OA studies.  In the caption of Figure 11 they say there is no publication bias; in Table 7 they say there is publication bias and it matters; and in the text they say that there is publication bias, but it doesn't matter.  They seem to want to have their cake and/or eat it, too.

Notice that every funnel plot in the paper shows the same thing: the larger the study, the smaller the effect size.  This is a red flag, suggesting that small studies are likely to be suppressed when their results are negative, or worse, studies are monitored for statistical significance and stopped when they attain it, a common "questionable research practice" that exploits random fluctuations in effect size to find false statistical significance.  Of course, there could be legitimate reasons why small studies find bigger effects than large studies, but if there are, those reasons should be pretty obvious and should be easy to explain.

Thank you for this contribution, JT! I don't have as negative of an impression from the contour-enhanced funnel plots: although the larger N studies have smaller effect sizes, a smaller effect size is needed for statistical significance, and it is clear that many studies still surpass the holy P<.05. Further, as mentioned by the authors, I think that the number of larger N studies that are not significant suggests against p-hacking (although I didn't see a formal evaluation of this in the article and don't have the time or skills to look more deeply, unfortunately).

I appreciate that you have taken the time to run the additional analysis, explain the rationale, and comment on problems with the analyses/reporting/interpretation of findings in this meta-analysis. Good stuff!



Offline HanEyeAm

  • Not Enough Spare Time
  • **
  • Posts: 234
Re: Acupuncture for low back pain
« Reply #37 on: February 21, 2017, 10:30:22 AM »
I think an analogy was floated upthread about hand washing.

Coincidentally, this just came up on my Twitter feed:

[image: chart shared on Twitter about the hand-washing evidence]

An example of what "good" or "strong" evidence looks like, without quibbling pedantically about definitions.

Neat illustration! The story of how Dr. Semmelweis linked hand washing and deaths from infection is fascinating: the mortality rate was higher among mothers treated by male physicians than among those treated by female midwives. He first hypothesized that the infections were related to body position during birth (midwives used the side position, physicians the back position) or to priests ringing bells. After systematically ruling those out, he turned to hand washing, given observations suggesting a link between physicians' work with cadavers and infection. He never understood in his lifetime why hand washing with a chlorine solution worked, and the link wasn't accepted until years later, when germ theory was developed. NPR has a great, short story on it here:
http://www.npr.org/sections/health-shots/2015/01/12/375663920/the-doctor-who-championed-hand-washing-and-saved-women-s-lives

And if you want "pedantic," take a closer look at the practice of epidemiology! Without strict definitions, it is hard to do the work, disseminate findings, and replicate studies.

Offline HanEyeAm

  • Not Enough Spare Time
  • **
  • Posts: 234
Re: Acupuncture for low back pain
« Reply #38 on: February 21, 2017, 10:53:57 AM »
The difference between levels of evidence is absolutely hair-splitting. To a degree that is normal in science. Very specific (operational) definitions are essential in good quality research and in understanding our world.

As mentioned above, "low quality evidence" is operationally defined to a meticulous degree in the articles and various standards like PRISMA. Although studies (and standards) don't all use the same exact definitions, they should be well described to allow replication that is reliable. This is how good quality systematic reviews (and RCTs) are conducted and compared (and judged).

AFAIK, there is no standard operational definition for "poor evidence." It is at best a blunt, possibly misleading description and opinion. As in, "evidence for climate change is poor." See? Completely subjective and thus unassailable.

Unfortunately, words sometimes mean things. Dismissing us because we used a common English synonym is not only hair-splitting, it is unnecessarily pedantic hair-splitting.

I'm sorry if my approach contributed to you feeling dismissed. You certainly have the right to your opinion on what is considered good or poor evidence, and to come up with your own definition of what that means to you. Please feel free to share your definition if you like. If I were truly dismissing the opinion of those who claim that there is "poor evidence" for acupuncture, I wouldn't keep posting information about the scientific pursuit of understanding acupuncture's effects.

I did my best to explain the importance of standardized definitions that scientists use when conducting science and contributing to the evidence base. I hope that it is of interest to some readers, but either I am not doing a good job of conveying the information or it is simply being dismissed by some.

Honestly, if I ever had an undergrad/graduate student, postdoc, or research assistant who said a research protocol or IRB consent form was too pedantic or hair-splitting, s/he would be out of my lab in a heartbeat. That person would likely think it OK to take liberties with the research protocol and would be a significant liability to me, the institution, the study's scientific integrity, and ultimately to the public.

« Last Edit: February 21, 2017, 10:56:33 AM by HanEyeAm »

Offline SQ the ΣΛ/IGMд

  • Atheist extraordinaire
  • Poster of Extraordinary Magnitude
  • **********
  • Posts: 12043
  • Pondering the cosmos since 1969
Re: Acupuncture for low back pain
« Reply #39 on: February 21, 2017, 11:03:25 AM »
The hate on acupuncture is way strong on SGU. More than deserved I think.

As I mentioned on the infant acupuncture thread, there are recent meta-analyses published in reputable journals (or at least by reputable publishers) that support acupuncture:

http://aim.bmj.com/content/early/2016/07/08/acupmed-2015-010989.abstract
http://www.nature.com/articles/srep30675
https://www.ncbi.nlm.nih.gov/pubmed/28115321
https://www.ncbi.nlm.nih.gov/pubmed/27852100
https://www.ncbi.nlm.nih.gov/pubmed/27764035

Chronic pain is a funny thing... often with no obvious etiology and always subjective. Something works, for many, and whether it is placebo, touch, kind word from the acupuncturist, an NOS body response to needles, etc., there is something worth exploring.

But, yeah, meridians are bupkis.

Curious what your thoughts are on bee venom acupuncture.
"That's ridiculous, spooks. That's silly!" ~ The Tin Woodsman - The Wizard of Oz ~

"Like it or not, we are stuck with science.  We had better make the best of it." ~ Carl Sagan, The Demon-Haunted World ~

Offline HanEyeAm

  • Not Enough Spare Time
  • **
  • Posts: 234
Re: Acupuncture for low back pain
« Reply #40 on: February 21, 2017, 01:00:56 PM »
The hate on acupuncture is way strong on SGU. More than deserved I think.

As I mentioned on the infant acupuncture thread, there are recent meta-analyses published in reputable journals (or at least by reputable publishers) that support acupuncture:

http://aim.bmj.com/content/early/2016/07/08/acupmed-2015-010989.abstract
http://www.nature.com/articles/srep30675
https://www.ncbi.nlm.nih.gov/pubmed/28115321
https://www.ncbi.nlm.nih.gov/pubmed/27852100
https://www.ncbi.nlm.nih.gov/pubmed/27764035

Chronic pain is a funny thing... often with no obvious etiology and always subjective. Something works, for many, and whether it is placebo, touch, kind word from the acupuncturist, an NOS body response to needles, etc., there is something worth exploring.

But, yeah, meridians are bupkis.

Curious what your thoughts are on bee venom acupuncture.
New one to me!

BTW, I don't administer, promote, research, or make $ from acupuncture. Some patients of mine have used it, and colleagues administer it or are fans. I may help an MD colleague who practices it (a very small part of his work) write a grant proposal for examining acupuncture for neuropathic pain. Neither of us believes in the meridian stuff, but we note that patients who have significant neuropathic pain can respond well to it. Trust me, if we do it, we will have biostatistical support, use multiple control conditions, and make it as rigorous as possible. Seeing patients suffer chronic pain: absolutely horrible.

As Billzbub mentioned earlier, I want to know, at the core, why acupuncture and sham often have positive effects. I suspect that eventually the needles and meridian stuff will be shed, sent to the medical history books, and whatever common therapeutic element remains will become front-and-center in future treatments.
« Last Edit: February 21, 2017, 01:04:53 PM by HanEyeAm »

Offline SQ the ΣΛ/IGMд

  • Atheist extraordinaire
  • Poster of Extraordinary Magnitude
  • **********
  • Posts: 12043
  • Pondering the cosmos since 1969
Re: Acupuncture for low back pain
« Reply #41 on: February 21, 2017, 01:11:34 PM »
That's an admirable quest. I hope you achieve results that can help people in some way.
"That's ridiculous, spooks. That's silly!" ~ The Tin Woodsman - The Wizard of Oz ~

"Like it or not, we are stuck with science.  We had better make the best of it." ~ Carl Sagan, The Demon-Haunted World ~

Offline jt512

  • Well Established
  • *****
  • Posts: 1355
    • jt512
Re: Acupuncture for low back pain
« Reply #42 on: February 21, 2017, 02:15:20 PM »
Thank you for this contribution, JT! I don't have as negative of an impression from the contour-enhanced funnel plots: although the larger N studies have smaller effect sizes, a smaller effect size is needed for statistical significance...

You just described publication bias!  Studies are more likely to be published if they attain statistical significance than if they do not.  The smaller the study, the larger the observed effect size must be to attain significance.  Therefore, the smaller the study, the more exaggerated the effect size will be when significant.  The result will be a negative relationship between study size and observed effect size, which in turn will positively bias the calculated pooled effect size.  Look at the funnel plots in the paper.  Except for maybe one of them, there is practically a linear relationship between study size and effect size.  This is very fishy.
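
The arithmetic behind "smaller study needs a larger effect" is worth making concrete. For a standardized mean difference, SE is roughly sqrt(2/n) per arm in the large-sample normal approximation, so the smallest effect that can reach two-sided p < .05 scales as 1/sqrt(n). A back-of-envelope sketch (the sample sizes are arbitrary examples):

```python
import math

def min_significant_smd(n_per_arm):
    """Smallest standardized mean difference reaching two-sided p < .05
    in a two-arm trial, using the approximation SE(SMD) ~ sqrt(2/n)."""
    return 1.96 * math.sqrt(2.0 / n_per_arm)

for n in (20, 50, 200):
    print(f"n = {n:3d} per arm -> need |d| > {min_significant_smd(n):.2f}")
```

A 20-per-arm trial can only "succeed" with |d| of roughly 0.62 or more, while a 200-per-arm trial needs only about 0.20; select on success and the small studies will inevitably report the larger effects.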

Quote
...and it is clear that many studies still surpass the holy P<.05. Further, as mentioned by the authors, I think that the number of larger N studies that are not significant suggests against p-hacking (although I didn't see a formal evaluation of this in the article and don't have the time or skills to look more deeply, unfortunately).

It only argues that some studies were not p-hacked, which is what we would expect.  If some studies were p-hacked or suppressed due to lack of significance, then the pooled effect size will be exaggerated.

Offline HanEyeAm

  • Not Enough Spare Time
  • **
  • Posts: 234
Re: Acupuncture for low back pain
« Reply #43 on: February 21, 2017, 09:18:49 PM »
Thank you for this contribution, JT! I don't have as negative of an impression from the contour-enhanced funnel plots: although the larger N studies have smaller effect sizes, a smaller effect size is needed for statistical significance...

You just described publication bias!  Studies are more likely to be published if they attain statistical significance than if they do not.  The smaller the study, the larger the observed effect size must be to attain significance.  Therefore, the smaller the study, the more exaggerated the effect size will be when significant.  The result will be a negative relationship between study size and observed effect size, which in turn will positively bias the calculated pooled effect size.  Look at the funnel plots in the paper.  Except for maybe one of them, there is practically a linear relationship between study size and effect size.  This is very fishy.

Quote
...and it is clear that many studies still surpass the holy P<.05. Further, as mentioned by the authors, I think that the number of larger N studies that are not significant suggests against p-hacking (although I didn't see a formal evaluation of this in the article and don't have the time or skills to look more deeply, unfortunately).

It only argues that some studies were not p-hacked, which is what we would expect.  If some studies were p-hacked or suppressed due to lack of significance, then the pooled effect size will be exaggerated.

I largely agree with you, JT. An exception is that I didn't quite describe publication bias: I described a contour-enhanced funnel plot (reflecting power, I suppose), but I didn't mention the asymmetry, seen more often among the smaller-N studies, that would suggest publication bias. I fully agree there is evidence of publication bias in most of the graphs, and that is certainly problematic to some degree; I may just not think it is as significant or fishy as you do.

Offline jt512

  • Well Established
  • *****
  • Posts: 1355
    • jt512
Re: Acupuncture for low back pain
« Reply #44 on: February 21, 2017, 09:27:57 PM »
but I didn't mention the asymmetry more often seen in the smaller N studies that would suggest publication bias.

You didn't use the word "asymmetry," but you described it: you noted that the larger studies have smaller effect sizes.

You then went on to state the reason for it: smaller studies need larger effect sizes to attain significance.  If all studies were published (and studying the same effect), then there would be no systematic relationship between study size and reported effect size.  But we observe such a relationship.  The most likely reason for this is publication bias (or optional stopping).
« Last Edit: February 21, 2017, 09:37:51 PM by jt512 »
