As you previously explained, you based your conclusion on the fact that the 12-month difference in weight loss between the Atkins and Zone diets was only 0.4 kg greater than the 2.7 kg diet effect that, per the paper, the study had 80% power to detect. But both Meg and I have tried to explain to you that this comparison is irrelevant. The 2.7 kg figure was used only for planning purposes (to determine the sample size) and plays no role in the analysis.
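To make concrete why a figure like 2.7 kg matters only at the planning stage, here is a rough sketch of the standard sample-size calculation it would feed into. The 5 kg standard deviation below is hypothetical, not taken from the paper; the point is only that the detectable-effect figure enters this calculation and nothing else.

```python
import math

def n_per_group(delta, sigma, z_alpha=1.959964, z_beta=0.841621):
    """Approximate per-group sample size for a two-sample comparison of
    means (normal approximation, two-sided alpha = .05, 80% power)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Hypothetical planning numbers: detect a 2.7 kg diet difference,
# assuming a 5 kg SD of 12-month weight change (the SD is made up here).
n = n_per_group(delta=2.7, sigma=5.0)
print(n)  # 54 subjects per group under these assumptions
```

Once the subjects are enrolled and the data collected, the analysis compares the groups to each other; the planning effect size never reappears.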
So there is another reason for them to declare that there is NOT a significant difference between Ornish and Atkins, a difference of about 2.6. Go ahead and point me to the correct reason for that. They sure seem to be taking that into account.
First, the difference in 12-mo weight loss between Atkins and Ornish was 2.1 kg, not 2.6. Second, they did not "declare" that there was no statistically significant difference between these diets. Reporting a non-statistically significant result is not a declaration; it is the absence of one. When two research findings are not statistically significantly different, you cannot, as a rule, make any claim about what the true difference is. In particular, a non-statistically significant difference does not imply that there is no difference in fact. That is, failure to reject the null hypothesis is not, as a rule, support for the null hypothesis.
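The point that a non-significant result does not establish "no difference" can be made concrete with a confidence interval. A minimal sketch with made-up numbers (the 1.2 kg standard error is hypothetical, not from the paper):

```python
# Illustration: a non-significant difference is compatible with a wide
# range of true effects, including zero AND sizable differences.
diff = 2.1   # observed 12-mo weight-loss difference, kg
se = 1.2     # hypothetical standard error of that difference, kg
z = 1.96     # two-sided 95% critical value (normal approximation)

lo, hi = diff - z * se, diff + z * se
print(f"95% CI for the true difference: ({lo:.2f}, {hi:.2f}) kg")
# The interval spans zero, so the difference is not significant at .05,
# yet it also includes true differences of several kilograms.
```

The data cannot distinguish "no effect" from "a sizable effect"; that is exactly why non-significance licenses no claim about the true difference.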
Third, the authors' statement that they found no significant 12-mo weight loss difference between Atkins and Ornish is not based on a comparison of the observed difference with 2.7 kg. The authors explain how they made determinations of statistical significance in the Statistical Analysis section of the paper: "Differences among diets for 12-month changes from baseline were tested by ANOVA. For statistically significant ANOVAs, all pairwise comparisons among the 4 diets were tested using the Tukey studentized range adjustment. . . . All statistical tests were 2-tailed using a significance level of .05." Simply put, they compared the diet results not with 2.7 kg but with each other, calculated p-values for each pair of diets, and declared a pair of diets significantly different when the calculated p-value was less than .05. I hope that this is finally clear.
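The procedure the authors describe, an overall ANOVA followed by pairwise comparisons, can be sketched in miniature. The toy data below are invented; in practice one would feed the real per-subject changes to a statistics package that also supplies the F-distribution p-value and the Tukey adjustment for the pairwise step.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical 12-month weight changes (kg) for four diet groups.
atkins, zone, ornish, learn = [-6, -4], [-3, -1], [-3, -1], [-4, -2]
f = one_way_anova_f([atkins, zone, ornish, learn])
print(f"F = {f:.2f}")  # compared against an F(3, 4) critical value
```

Note what is absent: the planning figure of 2.7 kg appears nowhere. Significance is decided by comparing the groups to each other.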
Although adherence to the 4 sets of dietary guidelines varied within each treatment group and waned over time . . . we believe that the adherence levels obtained are a fair representation of studying the diets and variations in macronutrient intake under realistic conditions and, therefore, increase the external validity of the findings.
So, it is neither true that the study's findings were barely significant nor that the study was compromised by problems with compliance or its measurement.
The big zinger, from which you pretend to have spared me, is to quote the authors ACKNOWLEDGING the possibility of the very thing I said could be the case? But I am "wrong" because the authors say they are not especially concerned with the difference between biological and compliance causes?
First of all, I did more than quote from the paper. I explained why the study design and the authors' comments imply that they deliberately left compliance up to subjects, and therefore that your criticism about compliance was mistaken. The aim of the study was to determine whether these diets lead to differences in weight loss as the diets are actually implemented by dieters, under real-world rather than laboratory conditions. The fact that you ignored my explanation does not mean that my entire argument (the so-called "zinger") consisted of one quote from the paper. What I said you were wrong about is that actual compliance, or its measurement, was a weakness of this study.
So let's pretend that I believe your claim to being condescending. This whole time, you are pretending to argue that the study design and results are not fairly assessed as being open to a possibility that the weight loss was caused by compliance, even though you are sitting on the useless point that the authors actually think that is true but don't care.
I would like to pretend that I can make sense out of that paragraph, but I'm not that good an actor.
And you seem to have forgotten that this study was offered up as supporting a specific claim. Saying that the people behind the study are not concerned with exactly which effect was measured does NOT change the fact that the study is weak evidence for the subject addressed here.

Looking back, the study was offered in support of two claims: (1) that "counting calories is ineffective for weight loss" and (2) that "weight loss in the group with the most weight lost was directly proportional to their carbohydrate intake."
With regard to the first claim, although I don't agree with the claim, I do agree that the study supports it. Subjects on the Atkins diet, which did not count calories, lost more weight than subjects on any of the other diets, including the Zone and LEARN diets, which prescribed caloric restriction.
The second claim doesn't make sense as worded, but it is true that the group with the greatest weight loss, Atkins, also had the lowest carbohydrate intake. Carbohydrate intake in the Atkins group was significantly lower than in every other diet group at every post-baseline assessment. The method of assessing dietary intake, which you mischaracterized as "shaky," was in fact adequate to assess carbohydrate intake at the group level.
So, no, I do not agree that the study is weak evidence for these claims.