How to Analyze Health Claims

August 8, 2018

 

Not a week goes by where I don’t see two completely contradictory health claims. I’ll read an article titled “Is red wine causing cancer?” only to immediately find another explaining how the antioxidants in red wine help prevent cancer. 

 

We live in a world with endless access to information and not a clue what to do with it all. It’s overwhelming to the average person, but to the chronically ill, it can be panic-inducing.

 

The ability to critically analyze health claims is an essential skill when you're chronically ill and desperate enough to fall into the traps of pricey "fixes" and "cures." Click-bait links on Pinterest, Facebook, and popular online magazines leave our acquaintances with the false notion that they know how to cure us. If an acquaintance has asked whether you've tried x, y, or z, more often than not it's been fueled by something they learned from a pop magazine or by anecdotal evidence (unscientific personal experience, for example: "my grandmother smoked every day of her life and she's still alive and healthy!").

 

This semester I’m in a Kinesiology course that focuses on analyzing current health issues. Below I’ll share some simple tips I’ve learned over my university career, so you can familiarize yourself with the red flags that often show up in pseudoscientific claims. 

 

Familiarizing yourself with a few quick tricks for assessing the legitimacy of health claims can save you time, money, and energy. 

 

Here are some easy ways to analyze a health claim you find online:

 

1. Look at the source: are you reading "Men's Health" or "NCBI"? The first has more reason to publish click-bait articles that draw readers in with an exaggerated, biased perspective on health claims. Thankfully, most popular magazines will provide a link on their page to the original source (the journal in which the research was originally published). If you click this link, you can read the article's abstract for a quick summary of the research, and read further if you're interested. 

 

"Popular magazines tend to have a lot of colourful photographs and lots of advertisements (medical journals also have lots of advertisements, most commonly for pharmaceutical products, also known as "drugs"). By contrast, scientific journals have a more serious appearance and tone, few photographs, and many figures (graphs and other things) and tables of numbers.

 

Virtually all scientific journals now publish digital editions in addition to, or instead of, print issues, and some researchers post summaries of their work as short videos on sites such as YouTube" (Brown, 2014, p. 29).

 

If you're still unsure how reliable a source is, "look at the 'About Us' and 'Contact Us' links to learn more about who is behind the website and what their motivation is" (Brown, 2014, p. 33).

 

2. Look for specific words: in a previous course I took (and loved) titled "Critical Analysis of Issues in Psychology", we were taught that rarely, if ever, will the word "proved" find its way into a reputable scientific article. If the source you're reading says "scientists have proved that..." you can immediately see a red flag. Usually, scientific journals, or at least reputable sources, will say "evidence suggests..." or "evidence finds...". 

 

3. Look at the date of the article: are the findings outdated? If the research was done 20 years ago, it's a good idea to do a quick Google search to see if anything has been published on the topic since then. Just because an article is old doesn't mean the evidence is outdated, but sometimes that's the case.

 

4. Look for bias: researchers are usually required to disclose who sponsored their research. You can typically find this information at the very top or bottom of a journal article (often just before the reference section). If the article's findings claim that "Dairy builds strong bones and teeth" but the sponsor is "The Dairy Farmers of Canada," then you can certainly argue the findings are biased. Additionally, if you do a quick Google search of the researchers, you can often find their qualifications and learn more about them; this is another place where bias can show up.

 

Additionally, if a pop magazine makes an exaggerated health claim in favour of a certain product, and (not so) coincidentally also advertises the product on the sides of their website, this is a red flag.

 

5. Correlation does not equal causation: if you've set foot in a university-level research methods class, this point will be redundant. However, it's also the most important one to drive home, because it's where most false assumptions come from.

 

Simply put, correlation is the scientific term for a relationship between two variables. Causation is when researchers can determine, through valid and reliable execution of the scientific method, that one variable causes another. Most fear-inducing, click-bait health claims that say “this MIGHT cause cancer” are actually referring to a correlation (and irresponsibly so).

 

Again, language is important here. Scientists would say that variable A and variable B are related, linked, or correlated, not that “*gasp* it probably, maybe, kinda causes cancer.”

 

An example of a correlation: researchers find a link between marijuana use and relationship trouble (example provided by Psychology Today). A pop magazine could easily take these findings and run them under the headline "Is Marijuana Use the Reason You're Single?" And before you know it, your aunt is sending you the article, convinced that your medical marijuana use is the reason you're single, when all the researchers actually found is a correlation. One possible explanation is that people who use marijuana are more likely to be anxious, and that anxiety impacts their ability to handle relationship conflict. Or perhaps those who have relationship troubles are more likely to use marijuana because it calms their stress. Who knows (not us, and not the researchers, at this point)!
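If you like seeing the numbers, here is a minimal, purely hypothetical Python sketch (made-up values, not the actual study data) of how a hidden third variable, labelled "stress" here, can make two things look correlated even though neither one causes the other.

# Toy simulation, NOT real data: a hidden "stress" variable drives both
# hypothetical marijuana use and relationship trouble, so the two end up
# correlated even though neither one causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000  # number of simulated people

stress = rng.normal(size=n)                          # the hidden confounder
marijuana_use = stress + rng.normal(size=n)          # partly driven by stress
relationship_trouble = stress + rng.normal(size=n)   # also partly driven by stress

r = np.corrcoef(marijuana_use, relationship_trouble)[0, 1]
print(f"correlation: {r:.2f}")  # roughly 0.5, with no causal link between the two

Notice that the two outcome variables never touch each other in the code; the correlation comes entirely from the shared "stress" term.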

 

Drawing causation from correlation is dangerous. Researchers have emphasized this point by publishing spurious correlations they stumbled upon while trying to study something else entirely.

 

A correlation could be that those with POTS are more likely to be brunettes; but don't let a pop magazine convince you that going blonde will cure your POTS!

 

Correlational research is also often the only option for ethical reasons, as the excerpt below explains:


"Causation - When an article says that causation was found, this means that the researchers found that changes in one variable they measured directly caused changes in the other. An example would be research showing that jumping of a cliff directly causes great physical damage. In order to do this, researchers would need to assign people to jump off a cliff (versus lets say jumping off of a 12 inch ledge) and measure the amount of physical damage caused. When they find that jumping off the cliff causes more damage, they can assert causality. Good luck recruiting for that study!

 

Most of the research you read about indicates a correlation between variables, not causation. You can find the key words by carefully reading. If the article says something like "men were found to have," or "women were more likely to," they're talking about associations, not causation.

 

Why the difference?

The reason is that in order to actually be able to claim causation, the researchers have to split the participants into different groups, and assign them the behavior they want to study (like taking a new drug), while the rest don't. This is in fact what happens in clinical trials of medication because the FDA requires proof that the medication actually makes people better (more so than a placebo). It's this random assignment to conditions that makes experiments suitable for the discovery of causality. Unlike in association studies, random assignment assures (if everything is designed correctly) that it's the behavior being studied, and not some other random effect, that is causing the outcome.

 

Obviously, it is much more difficult to prove causation than it is to prove an association.

 

Should we just ignore associations?

 

No! Not at all!!! Not even close!!! Correlations are crucial for research and still need to be looked at and studied, especially in some areas of research like addiction. The reason is simple - We can't randomly give people drugs like methamphetamine as children and study their brain development to see how the stuff affects them, that would be unethical. So what we're left with is the study of what meth use (and use of other drugs) is associated with. It's for this reason that researchers use special statistical methods to assess associations, making certain that they are also considering other things that may be interfering with their results.

 

In the case of the marijuana article, the researchers ruled out a number of other interfering variables known to affect relationships, like aggression, gender, education, closeness with other family members, etc. By doing so, they did their best to assure that the association found between marijuana and relationship status was real. Obviously other possibilities exist, but as more researchers assess this relationship in different ways, we'll learn more about its true nature.

 

This is how research works.

It's also how we found out that smoking causes cancer. Through endlessly repeated findings showing an association. That turned out pretty well, I think..." (Psychology Today).

 

Understanding the difference between correlation and causation is crucial when analyzing the validity and reliability of health claims. Responsible researchers will go into detail about the limitations of their evidence.

 

6. Logic: "Finally, look for examples of faulty reasoning. The evidence may be extensive and well documented, but the conclusions drawn from the evidence might be faulty. Consider the following hypothetical example:

 

-The Masai people of Africa have a low incidence of heart disease despite a high consumption of meat. Therefore, North Americans would be advised to eat more meat as a protection against heart disease. (This is a correlation -Sarah)

 

This argument has several flaws.

-First, the low incidence of heart disease might be the result of other factors (such as lower level of urban stress, higher degree of social cohesion, or lower rate of cigarette smoking) than the meat consumption.

-Second, the findings might not be transferable to North America. It would be more convincing to learn that Canadians who eat large amounts of meat have lower rates of heart disease than their counterparts (of similar age, gender, and lifestyle) who eat less meat.

-The argument is also flawed by lack of detail, although this is not 'faulty reasoning.' It would be helpful to have meat consumption specified in kilograms per person per year, and for heart disease incidence to be reported in deaths per 1,000 people per year" (Brown, 2014, p. 31-32).

 

7. Sample size: if you're looking at the original source, you'll find a bit about the sample in the abstract, and more within the body of the article. For a researcher to draw generalizations from their data, the sample used in the study must be representative of people in the "real world." The sample must be randomly selected to account for individual differences, must include a variety of genders, races, and educational backgrounds, and must be large enough that any findings are not just due to chance (see the toy sketch below). Current POTS studies are mostly of young women, which technically makes the sample unrepresentative of the general population. However, since POTS mostly affects young women, the sample is representative of POTS in "the real world." Additionally, if an illness is rare, it will be more difficult to find enough people to volunteer for the study, resulting in a smaller sample size. In that case the research can still be valid and reliable; it just means we need to be aware of possible limitations. 
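For a rough sense of why sample size matters, here is a small, purely hypothetical Python simulation (made-up numbers): both groups are drawn from the exact same population, so any difference between them is pure chance, and that chance difference shrinks as the samples grow.

# Toy simulation with made-up numbers: both groups come from the SAME
# population (mean 100, SD 15), so any observed difference is pure chance.
# Small samples produce big chance differences; larger samples shrink them.
import numpy as np

rng = np.random.default_rng(42)
for n in (5, 50, 500):
    diffs = []
    for _ in range(1000):  # repeat many times to see the typical chance difference
        group_a = rng.normal(100, 15, size=n)
        group_b = rng.normal(100, 15, size=n)
        diffs.append(abs(group_a.mean() - group_b.mean()))
    print(f"n = {n:3d} per group -> typical chance difference: {np.mean(diffs):.1f}")

With only 5 people per group, the two identical populations routinely look several points apart; with 500 per group, the chance difference nearly disappears.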

 

8. Author qualifications: is the author qualified to be doing the research they're doing? Or is the health claim you're reading from a pop magazine written by a "John Doe" who's a journalist (not a doctor or researcher qualified to make, or support, the claim)? If you're reading an article from a pop magazine, you can often click the name of the author to learn more about them, or scroll down to the bottom, where the article often includes a picture of the author and a paragraph about them.

 

9. Peer-reviewed: "scholars prefer journals that are peer-reviewed. This means the articles have gone through a 'quality control' process. Peers are people like oneself [Dr. Stephen Brown]. [Dr. Stephen Brown's] peers are university teachers. The peer-reviewed journal Cardiology is aimed at heart doctors and heart scientists. It publishes articles that have been reviewed by other heart doctors and scientists" (Brown, 2014, p. 29-30).

 

10. Is the website making a claim that seems too good to be true: "if it seems too good to be true, it probably isn't true. For example, I wouldn't believe a claim that a product 'melts body fat from thighs' because I know from [Dr. Stephen Brown's] study of body weight and weight loss that fat is not lost preferentially from any area of the body. You can reduce the total amount of fat you have, but you can't direct your fat loss to any particular area of the body, even if that part has more fat than other parts" (Brown, 2014, p. 33).

 

Although I spend a lot of my time analyzing medical research, I am far from perfect. For example, just last year I bought bee pollen supplements without doing any research. When they arrived, I decided to do a quick Google search before trying them and was appalled by what I found. Bee pollen supplements are advertised as immune boosters by the companies that sell them, yet they have been reported to induce anaphylaxis in people with no previous bee allergy. There has been no reputable research on the ability of bee pollen supplements to improve our immune systems. I still have the (unopened) bottle to serve as a reminder, and also because I can't find anyone who thinks the small potential benefit is worth the risk!

 

However, there are some naturopathic remedies that I have tried (and continue to take/do) that seriously lack scientific evidence.

 

I live with conditions that have received very little research, and sometimes anecdotal evidence is enough to convince my desperate self to try something.

 

Whether it's acupuncture or treating "leaky gut" syndrome, I have tried many things lacking valid and reliable research, and some of these remedies have even helped! When the risk is low, I'm always willing to try. When the risk is moderate and there's little-to-no research, I have a lot of thinking and praying to do. But when the risk is high and there's little-to-no reliable or valid research on the treatment, I will not do it.

 

With these tips, hopefully you'll be more equipped to make health decisions like this for yourself.

 

 

 

 

 
