Just to further clarify Jay's point.
The 80% figure is the power of the study against the alternative of a difference of 2.7 kg. That is, if the true difference were 2.7 kg, the probability of failing to reject the null would be 20%. That may seem high to you (Carl), but since they did reject the null, it is frankly irrelevant. As a reminder, the significance level is the probability of the opposite mistake: the probability that you would reject the null when in fact it was true.
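To make the power/significance distinction concrete, here is a minimal normal-approximation sketch of a two-sample power calculation. The per-group standard deviation (6 kg) and group size (77) are hypothetical numbers chosen only so that power against a 2.7 kg difference comes out near 80%; they are not taken from the paper.

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF, computed from the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(delta, sd, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test to detect a
    true mean difference `delta`, given per-group SD and group size."""
    se = sd * sqrt(2.0 / n_per_group)  # SE of the difference in means
    # P(reject H0 | true difference = delta); the far tail is negligible.
    return norm_cdf(delta / se - z_crit)

# Hypothetical inputs: sd = 6 kg, 77 subjects per group.
power = two_sample_power(delta=2.7, sd=6.0, n_per_group=77)
beta = 1.0 - power  # Type II error rate: about 20%
```

Power (about 0.80 here) and the significance level (the 0.05 built into `z_crit`) answer different questions: the first is about missing a real 2.7 kg effect, the second about crying wolf when there is no effect at all.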
You are doing the same thing he was doing. You are hung up on your strict pass/fail view of significance, and can't seem to understand that passing a significance test means only that random variation was unlikely to be the cause of the results.
What we have is a study where the results barely rose above the noise level.
You continue to reiterate the erroneous claim that "the results barely rose above the noise level." As I previously explained, that is false. The p-value for the main result, a difference in 12-month weight loss between diets, was .01. While I wouldn't call a p-value of .01 definitive, it is fairly strong evidence against the null hypothesis, and is certainly not "barely above the noise level."
As you previously explained, you have based your conclusion on the fact that the 12-month difference in weight loss between the Atkins and Zone diets was only 0.4 kg greater than the diet effect quoted in the paper that the study had 80% power to detect—2.7 kg. But both Meg and I have tried to explain to you that this comparison is irrelevant. The 2.7 kg figure was merely used for planning purposes, and has no role in the analysis.
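The point that the planning figure plays no role in the analysis can be seen directly in how a p-value is computed: it depends only on the observed difference and its standard error. A minimal normal-approximation sketch, using the roughly 3.1 kg observed difference (2.7 + 0.4, as discussed above) and a hypothetical standard error of 1.2 kg chosen so the p-value lands near the reported .01:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF, computed from the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sided_p(diff, se):
    """Two-sided p-value for an observed difference in means, normal
    approximation. Note that the 2.7 kg planning effect size appears
    nowhere in this calculation."""
    z = abs(diff) / se
    return 2.0 * (1.0 - norm_cdf(z))

# Observed difference ~3.1 kg; the 1.2 kg SE is hypothetical.
p = two_sided_p(diff=3.1, se=1.2)
```

Whether 3.1 kg is larger or smaller than 2.7 kg never enters; the 2.7 kg figure only determined, before the study, how many subjects were needed.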
When that happens, it is absolutely silly to think that one small study has proven much in an area of research which is filled with difficulties in compliance.
Your point about compliance is also wrong. I was going to let it go, because I didn't want you to feel picked on; but you've repeated it now for the third time, and you seem impervious to criticism, anyway.
It is clear from both the study design and the authors' comments that they intended differences in compliance to be part of the hypothesis about why the diets might differ in effectiveness. It is clear from the design because the study was conducted among free-living subjects who prepared all their own meals and were not given ongoing encouragement to comply with the diets. It was an intentional part of the study design to let compliance fall where it may. This is also clear from the following comment by the authors:
Although adherence to the 4 sets of dietary guidelines varied within each treatment group and waned over time . . . we believe that the adherence levels obtained are a fair representation of studying the diets and variations in macronutrient intake under realistic conditions and, therefore, increase the external validity of the findings.
So, it is neither true that the study's findings were barely significant nor that the study was compromised by problems with compliance or its measurement.