I don't blame you at all for not wanting to read them. If you change your mind, the Nature one is actually pretty good, IMO. It really gets into the details, discusses how the findings relate to relatively recent standards for acupuncture research, etc.
I took a quick look at the paper, and this jumped out at me.
Those are their results for osteoarthritis, presented as a funnel plot: each study's effect estimate is plotted against a measure of its precision (typically the standard error), which allows a visual check for publication bias. In the absence of publication bias, the funnel plot should be symmetrical and funnel shaped. Publication bias usually results in small non-significant studies being suppressed, in which case the funnel plot will be asymmetrical, with studies missing from the lower corner where the small non-significant studies should be.
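To see how suppressing small non-significant studies produces that asymmetry, here is a toy simulation. The numbers are entirely made up (not the paper's data), and I'm using a positive sign convention for benefit:

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.2
n_studies = 200
se = rng.uniform(0.05, 0.5, n_studies)   # small SE = big study, large SE = small study
y = rng.normal(true_effect, se)          # each study's observed effect

# illustrative publication rule: significant results are always published,
# non-significant ones only 20% of the time
significant = np.abs(y / se) > 1.96
published = significant | (rng.random(n_studies) < 0.2)

print(y.mean(), y[published].mean())     # compare overall vs. published means
```

The surviving small studies are disproportionately the ones that happened to land far from zero, so the published literature overstates the effect and the funnel loses its lower corner.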
The caption to the above figure laughably reads, "Visual inspection of the funnel plot suggested symmetry...indicating no evidence of publication bias." They do a little better in the body of the text: "The contour-enhanced funnel plot suggested an asymmetry and Egger’s test indicated publication bias (coefficient = −3.71; P = 0.02). However, metatrim analysis found that no study was missing or should be added."
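For context, Egger's test is essentially a regression of each study's standardized effect (effect/SE) on its precision (1/SE); an intercept far from zero signals funnel-plot asymmetry. A minimal sketch with invented numbers, not the paper's data:

```python
import numpy as np

def eggers_intercept(effects, ses):
    """Egger's regression: standardized effect (effect/SE) on precision (1/SE).
    An intercept far from zero suggests funnel-plot asymmetry."""
    y = np.asarray(effects, float) / np.asarray(ses, float)
    x = 1.0 / np.asarray(ses, float)
    X = np.column_stack([np.ones_like(x), x])   # intercept + slope design matrix
    (intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
    return intercept

# made-up data: the small (high-SE) studies show exaggerated negative effects
effects = [-0.2, -0.5, -0.9, -1.2]
ses = [0.10, 0.20, 0.30, 0.40]
print(eggers_intercept(effects, ses))
```

A full implementation would also report the standard error and p-value of the intercept (that is where the authors' P = 0.02 comes from), but the intercept alone shows the direction of the asymmetry.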
Metatrim is the trim-and-fill module in Stata, the statistical package the authors used. Trim-and-fill is a statistical method that estimates the number of missing studies, along with their effect sizes and standard errors (precision). There are variations of the method, and I don't know the details of Stata's implementation. The inventor of trim-and-fill recommends using a fixed-effect model to estimate the missing studies even when a random-effects model is used for the main meta-analysis. Using a fixed-effect trim-and-fill in the metafor package in R, I find five missing studies. The trim-and-fill-adjusted funnel plot, with the hypothesized missing studies (white circles) added, looks like this:
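In case anyone wants to poke at the mechanics, the core trim-and-fill logic can be sketched as follows. This is a simplified, illustrative version of Duval and Tweedie's L0 estimator, not metafor's or Stata's actual implementation, and the data are invented:

```python
import numpy as np

def fixed_effect(y, v):
    """Inverse-variance-weighted (fixed-effect) pooled estimate."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    return float(np.sum(w * y) / np.sum(w))

def trim_and_fill(y, v, max_iter=25):
    """Simplified trim-and-fill (Duval and Tweedie's L0 estimator), assuming
    studies are missing on the LEFT of the funnel, i.e. there is an excess
    of extreme right-side effects."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    n = len(y)
    k0 = 0
    for _ in range(max_iter):
        order = np.argsort(y)
        keep = order[: n - k0]                 # trim the k0 rightmost studies
        mu = fixed_effect(y[keep], v[keep])    # re-estimate the center
        centered = y - mu
        ranks = np.argsort(np.argsort(np.abs(centered))) + 1
        Tn = int(ranks[centered > 0].sum())    # rank sum of right-side studies
        L0 = (4.0 * Tn - n * (n + 1)) / (2.0 * n - 1.0)
        new_k0 = max(0, int(round(float(L0))))
        if new_k0 == k0:
            break
        k0 = new_k0
    # "fill": mirror the k0 most extreme right-side studies about mu
    order = np.argsort(y)
    fill = order[n - k0:]
    y_adj = np.concatenate([y, 2.0 * mu - y[fill]])
    v_adj = np.concatenate([v, v[fill]])
    return k0, fixed_effect(y_adj, v_adj)

# invented example: the small (high-variance) studies have the largest effects
k0, adjusted = trim_and_fill([0.1, 0.3, 0.5, 0.7, 0.9],
                             [0.01, 0.04, 0.09, 0.16, 0.25])
print(k0, adjusted)
```

The key idea is that "filled" studies are just mirror images of the most extreme observed ones, reflected across the re-estimated center, which is why the adjusted estimate gets pulled back toward that center.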
Redoing the random-effects meta-analysis with the hypothesized missing studies included shrinks the pooled effect estimate toward zero, from the authors' –0.77 to –0.22, and the adjusted estimate is no longer statistically significant (p = 0.28).
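For reference, the random-effects pooling step can be sketched with the common DerSimonian-Laird estimator (I don't know which estimator the authors used; metafor's default is different). The numbers below are invented, not the review's data:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooled estimate using the DerSimonian-Laird
    estimate of the between-study variance tau^2."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect estimate
    Q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q heterogeneity statistic
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / C)      # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se

# invented numbers, NOT the review's data
mu, se = dersimonian_laird([-1.0, -0.8, -0.2, 0.1], [0.04, 0.05, 0.02, 0.03])
print(mu, se, mu / se)   # pooled estimate, its SE, and the z statistic
```

The z statistic (estimate divided by its standard error) is what the significance test is based on, so adding near-null filled studies both pulls the estimate toward zero and widens its confidence interval.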
Given the glaring asymmetry of the original funnel plot and the significant Egger's test, the authors' finding that no missing studies should be added seems strange, and it leads me to question the quality of this meta-analysis.