Observational studies are a mainstay of epidemiology. In observational studies, investigators gather data passively rather than manipulating variables. For example, if you want to know whether people who wear tight shoes develop bunions, you would find a group of people who wear tight shoes and another group that doesn't. You would try your best to make sure the groups are the same in every way besides shoe tightness: age, gender, weight, etc. Then you would follow them for 10 years to see how many people in each group develop bunions. You would then know whether or not wearing tight shoes is associated with bunions.
Observational data can never tell us that one thing caused another, only that the two are associated. The tight shoes may not have caused the bunions; they may simply have been associated with a third factor that was the true cause. For example, maybe people who wear tight shoes also tend to eat corn flakes, and corn flakes are the real cause of bunions. Or perhaps bunions actually cause people to wear tight shoes, rather than the reverse. Observational data can't resolve these questions definitively.
To establish causality, you have to do a controlled trial. In the case of our example, we would select 2,000 people and randomly assign them to two groups of 1,000. One group would wear tight shoes while the other would wear roomy shoes. After 10 years, we would see how many people developed bunions in each group. If the tight shoe group had more bunions, we could rightly say that tight shoes cause bunions. This works because randomization (ideally) eliminates all differences between the groups except for the one you're trying to study. You should have roughly the same number of corn flake eaters in each group if the randomization worked correctly.
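To see why randomization balances unmeasured factors, here's a minimal sketch in Python. Everything in it is invented for illustration: the cohort size, the 30% corn-flake-eating rate, and the variable names are all assumptions, not data from any real study.

```python
import random

random.seed(42)

# Hypothetical cohort: 2,000 people, roughly 30% of whom eat corn flakes.
# The randomizer is never told who the corn flake eaters are.
people = [{"id": i, "eats_corn_flakes": random.random() < 0.30}
          for i in range(2000)]

# Randomly assign everyone to two groups of 1,000.
random.shuffle(people)
tight_shoes, roomy_shoes = people[:1000], people[1000:]

def flake_rate(group):
    """Fraction of a group that eats corn flakes."""
    return sum(p["eats_corn_flakes"] for p in group) / len(group)

# The two rates come out nearly equal even though we never
# measured or matched on corn flake intake.
print(f"tight-shoe group: {flake_rate(tight_shoes):.1%} corn flake eaters")
print(f"roomy-shoe group: {flake_rate(roomy_shoes):.1%} corn flake eaters")
```

The same logic holds for every other hidden factor at once, which is what makes randomization so much stronger than hand-matching groups on the handful of variables you happened to think of.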
A less convincing but still worthwhile alternative would be to put tight and loose shoes on mice to see if they develop bunions. That's what researchers did in the case of the tobacco-lung cancer link. Controlled studies in animals reinforced the strong suggestion from epidemiological studies that smoking increases the risk of lung cancer.
Finally, another factor in determining the likelihood of an association representing causation is plausibility. In other words, can you imagine a way in which one factor might cause another, or is the idea ridiculous? For example, did you know that shaving infrequently is associated with a 30% increase in cardiovascular mortality and a 68% increase in stroke incidence in British men? That's a stronger association than you get with some blood lipid markers and most dietary factors! It turns out:
The one fifth (n = 521, 21.4%) of men who shaved less frequently than daily were shorter, were less likely to be married, had a lower frequency of orgasm, and were more likely to smoke, to have angina, and to work in manual occupations than other men.

So what actually caused the increase in disease incidence? That's where plausibility comes in. I think we can rule out a direct effect of shaving on heart attacks and stroke. The authors agree:
The association between infrequent shaving and all-cause and cardiovascular disease mortality is probably due to confounding by smoking and social factors, but a small hormonal effect may exist. The relation with stroke events remains unexplained by smoking or social factors.

In other words, they don't believe shaving influences heart attack and stroke directly, but none of the factors they measured explain the association. This implies that there are other factors they didn't measure that are the real cause of the increase. This is a critical point! You can't determine the impact of factors you didn't measure! And you can't measure everything. You just measure the factors you think are most likely to be important and hope the data make sense.
This leads us to another important point. Investigators can use math to estimate the relative contribution of different factors to an association. For example, imagine the real cause of the increased stroke incidence in the example above was donut intake, and it just so happens that donut lovers also tend to shave less often. Now imagine the investigators measured donut intake. They can then mathematically adjust the association between shaving and stroke to subtract out the contribution of donuts. If no association remains, then this suggests (but does not prove) that the association between shaving and stroke was entirely due to shaving's association with donuts. But the more math you apply, the further you get from the original data. This type of mathematical manipulation requires certain assumptions, and in my opinion generally renders the data progressively less meaningful.
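The adjustment described above can be sketched with the simplest version of the technique: stratification. The numbers below are entirely invented to match the thought experiment; in this synthetic cohort, donuts are the true cause of stroke, and donut lovers merely happen to shave less often.

```python
import random

random.seed(0)

# Synthetic cohort (all rates invented for illustration):
# donut eaters shave infrequently more often, and only donuts
# actually raise stroke risk. Shaving has no effect by construction.
cohort = []
for _ in range(10000):
    donuts = random.random() < 0.30
    shaves_infrequently = random.random() < (0.60 if donuts else 0.20)
    stroke = random.random() < (0.10 if donuts else 0.02)
    cohort.append((donuts, shaves_infrequently, stroke))

def stroke_rate(rows):
    rows = list(rows)
    return sum(r[2] for r in rows) / len(rows)

# Crude (unadjusted) comparison: infrequent shavers look worse.
crude_infrequent = stroke_rate(r for r in cohort if r[1])
crude_daily = stroke_rate(r for r in cohort if not r[1])
print(f"crude: infrequent {crude_infrequent:.3f} vs daily {crude_daily:.3f}")

# Adjusted comparison: within each donut stratum, the shaving
# "effect" disappears, because donuts were driving it all along.
for donuts in (True, False):
    stratum = [r for r in cohort if r[0] == donuts]
    infrequent = stroke_rate(r for r in stratum if r[1])
    daily = stroke_rate(r for r in stratum if not r[1])
    print(f"donuts={donuts}: infrequent {infrequent:.3f} vs daily {daily:.3f}")
```

Real studies use regression models rather than simple stratification, but the idea is the same: compare like with like on the measured confounder. And the caveat is the same too: this only works for confounders you measured, and every layer of modeling adds assumptions.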
Of course, you can't adjust for things you didn't measure, as the study I cited above demonstrates. If factors you didn't measure are influencing your association, you may be left thinking you're looking at a causal relationship when in fact your association is just a proxy for something else. This is a major pitfall when you're doing studies in the diet-health field, because so many lifestyle factors travel together. For example, shaving less travels with being unmarried and smoking more. Judging by the pattern, it also probably associates with lower income, a poorer diet, less frequent doctor visits, and many other potentially negative things.
If the investigators had been dense, they might have concluded that shaving frequently actually prevents stroke, simply because none of the other factors they measured could account for the association. Then they would be puzzled when controlled trials showed that shaving doesn't actually influence the risk of stroke, and shaving mice doesn't either. They would have to admit at that point that they had been tricked by a spurious association. Or stubbornly cling to their theory and defend it with tortuous logic and by selectively citing the evidence. I think this happens a lot.
These are the pitfalls we have to keep in mind when interpreting epidemiology, especially as it pertains to something as complex as the relationship between diet and health. In the next post, I'll get to the meat of my argument: that modern diet-health epidemiology is a self-fulfilling prophecy and a rather unreliable way to detect causal relationships.