I don't have relevant experience to properly analyse it, but my initial impression is that it is of higher quality than many acupuncture studies I've read, though it still has some strange elements.
One thing that stands out to me in the Depression study section is the "Treatment Concordance" data (i.e., which treatment wing the participant wanted to be put in versus which wing they were actually randomised to).
58.8% of the acupuncture wing wanted to be in the acupuncture wing, while 59.9% of the counseling wing didn't want to be in the counseling wing. We aren't given data on which wing those participants preferred instead, but the non-acupuncture, non-counseling wing had only 1 person out of 151 who actually wanted to be there, so it seems reasonable to assume the large majority of participants wanted either acupuncture or counseling. If we simply ignore the "neither" wing to get a rough estimate, we have 57 + 75 = 132 (21.9%) who wanted counseling versus 177 + 178 = 355 (58.8%) who wanted acupuncture.
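The rough arithmetic above can be sanity-checked directly. The counts are the ones quoted from the paper's preference table; the grouping into two camps (and ignoring the "neither" wing) is my own simplification:

```python
# Participants preferring each treatment, summed across the two wings
# (counts quoted from the study's treatment-preference table).
prefer_counseling = 57 + 75      # 132 preferred counseling
prefer_acupuncture = 177 + 178   # 355 preferred acupuncture

ratio = prefer_acupuncture / prefer_counseling
print(prefer_counseling, prefer_acupuncture, round(ratio, 2))
# 132 355 2.69 -> roughly 2.7x as many preferred acupuncture
```

So among those with a stated preference, acupuncture was favored by a factor of about 2.7, which is where the "nearly three times" figure below comes from.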
In other words, it seems likely that a large majority of participants came into the study already wanting acupuncture. If your participants arrive nearly three times as likely to prefer acupuncture over counseling (especially when no such trend is known to exist in the general public), that points to a significant selection bias in the study's recruitment method.
I suspect, though I'm not familiar enough with statistics and the study design to back this up, that the earlier meta-analysis section may have similar problems: it focuses fairly strongly on blinding in its quality analysis and study-selection criteria, but seems to put less emphasis on other problems such as selection bias and p-hacking.