Things I don’t like… (list of fallacies)

Browsing the web over the years, I have had many occasions to get frustrated, and even irritated, by the conversations I have read. Here are some important points.

1. The fallacy of abusive generalization.

When I happen to cite data, some people respond that “it is not true, my experience shows otherwise”. So, when I once wrote that the leading cause of cancer is lifestyle and eating habits, the reply was “I have friends who ate well, and they died from cancer” (yet I never said that this was the only cause).

But anecdote is not data. And the plural of “anecdote” is not “data”. This should be taught in school. Really. I’m serious.

Testimony is merely anecdotal. The same is true of TV reports or newspaper stories. Suggesting that something is prevalent on the grounds that a few random citizens have said that it is “what they have seen” is quite misleading. This is called convenience sampling.

The fallacy of abusive generalization rests on the assumption that one’s personal experience is a case that generalizes. Suppose I live in Paris (France’s capital city). From what I have seen in town, I might estimate that less than 1% of the population is non-white, contrary to what the data show. This naive statement ignores the fact that areas and regions are not homogeneous; we need only look at the U.S. to see that African Americans are more likely to live in the southern states than in the northern states. Let’s take another example. One could argue that Paris is a great city, and that for this very reason rich and intelligent people tend to cluster there, so that in poor neighborhoods the proportion of non-white residents is much higher, say 50%. Similarly, a white man living in a poor neighborhood could put forward the idea that his experience shows that more than 50% of the French population is non-white, but that would not prove anything. Whether the data are falsified or not is another story.
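To make the point concrete, here is a minimal sketch with purely illustrative numbers (a fictional city of 2,000 residents split into “rich” and “poor” areas, not real demographic data) showing how a convenience sample drawn from one kind of neighborhood misestimates a city-wide proportion, while a random sample does not:

```python
import random

random.seed(42)

# Fictional city of 2,000 residents (all numbers are illustrative, not real
# demographic data): the group of interest makes up 5% of the "rich" areas
# and 25% of the "poor" areas, i.e. 15% of the city overall.
population = (
    [("rich", 1)] * 50 + [("rich", 0)] * 950 +
    [("poor", 1)] * 250 + [("poor", 0)] * 750
)

true_share = sum(x for _, x in population) / len(population)

# Convenience sample: an observer who only ever walks through rich areas.
rich_only = [x for area, x in population if area == "rich"]
convenience_share = sum(rich_only) / len(rich_only)

# Proper random sample drawn from the whole city.
random_sample = random.sample([x for _, x in population], 200)
random_share = sum(random_sample) / len(random_sample)

print(f"true share:             {true_share:.2f}")
print(f"convenience 'estimate': {convenience_share:.2f}")
print(f"random-sample estimate: {random_share:.2f}")
```

The convenience observer is not lying about what he has seen; his sample is simply not the population.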

2. The argument from authority.

Sometimes I see, in various conversations, scientists trying to counter-attack by arguing that you have no degree or qualification in the particular field, thereby subtly slipping in the idea that you have no legitimacy to discuss the issue; your only right is to keep your mouth shut and listen to the “experts”. Being more experienced or decorated is no excuse to play the braggart. The argument from authority consists in the foolish idea that the expert in question cannot be wrong: experts, as they are called, are always right. This sort of rejoinder seems to be an excuse to discredit the opponent and to put an end to any discussion when someone runs out of ideas.

This is usually what lazy and stupid people resort to. When they lack expertise in the field and/or in statistics, they usually “use” a renowned researcher, for example a renowned name who claimed that “IQ does not measure intelligence”. They then conclude that if that person says so, it must be true. And yet this is just an opinion; no research has proven the claim to be true. Just because someone says IQ is controversial does not mean it is.

3. Correlation does not imply causation.

Here is a famous sentence that seems to crop up in any slightly serious discussion. It is a formula that slyly insinuates that scientists who perform regression analyses to estimate the correlation between two variables have absolutely no interpretation of the results they obtain. Given the hundreds of papers I have read so far, in economics, sociology, and psychology, this is clearly not the case. Typically, a paper begins with an introduction that presents the general idea of the thesis and the previous work already done on the issue. The second section describes the data, methods, sample, etc. The third section presents the results. The fourth section is a “discussion” where the authors try to explain the results, the implications of their research and, possibly, the limitations of the study.

So where does the lazy comment “correlation is not causation” come from? Maybe the commentators have not bothered to read the paper in question. Or rather, the strategy is simply to spit out “correlation is not causation” when a result seems to invalidate their beliefs, and to conclude that the correlation is indeed causal when the result is in line with their beliefs. This is no more than a pitiful attempt to cut a discussion short when one runs out of ideas.

Because if the researchers in question have tried to put forward a theory, it is certainly not to be dismissed with a catchphrase. If you doubt the causality of the relationship, or the direction of causality, the thing to do is to discuss the theory. The sentence “correlation is not causation” adds nothing to a serious discussion. At this level, it is just trolling.
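The legitimate worry hiding behind the catchphrase is confounding, and it is worth seeing what that actually looks like. Here is a minimal simulation with made-up variables, where a hypothetical common cause `z` drives both `x` and `y`: the two are strongly correlated even though neither affects the other, and the association vanishes once `z` is held roughly fixed.

```python
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation, computed by hand to keep the sketch dependency-free.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical confounder z drives both x and y; x has no effect on y.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

r_all = corr(x, y)
print(f"corr(x, y) = {r_all:.2f}")  # clearly positive

# Within a narrow band of z, the association disappears.
band = [(xi, yi) for xi, yi, zi in zip(x, y, z) if abs(zi) < 0.1]
bx, by = zip(*band)
r_band = corr(bx, by)
print(f"corr(x, y | z near 0) = {r_band:.2f}")  # near zero
```

Whether such a `z` exists in a given study is precisely the kind of question the paper’s discussion section addresses, which is why the catchphrase alone settles nothing.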

4. Quoting out of context.

Another form of intellectual dishonesty is quoting out of context. This is what I have usually seen from one very special Keynesian, with his art of randomly googling key words to refute Austrian economics. In fact, this kind of mistake could turn out to be simple negligence; it can happen to anyone. But when such “misreadings” repeatedly come from the same person, and when the bias constantly points in the same direction, the probability that they arise by chance is considerably reduced. It becomes quite painful.

5. Speaking in bad faith.

Here is an example of intellectual dishonesty that makes me see red. Nothing else makes me so mad, and I have encountered it quite often. What am I referring to? When an author writes a book, he is sometimes wrong on a particular point, or two, or even three, among the fifty or hundred points (unrelated to each other) argued in the book. What ill-intentioned critics tend to do is focus on the conflicting points. They reject 2 or 3 of the book’s ideas and then conclude that the entire work is worthless, with an “utter garbage” if they are really coarse. In other words, when an author advances hypotheses A, B, C … X, Y, Z, having refuted only A and B somehow allows them to reject all the other hypotheses, even though the subjects are completely different and unrelated.

I cannot think of a better example of intellectual dishonesty.

6. Googling to conceal one’s own ignorance.

This is another widespread form of dishonesty, I think: citing references one has not bothered to read. For example, when an internet user cites a reference such as The Bell Curve, I have sometimes read answers like “The Bell Curve has been debunked”, along with a link to a Wikipedia page on books criticizing the work of Herrnstein and Murray. First, the fact that a criticism exists does not mean it is relevant. Quoting is good; reading is better. But the most telling detail is how the references were thrown on the table: no citation from the books in question, no page numbers, no comment, no summary. Have these people read the critics of The Bell Curve? Worse, have they actually read The Bell Curve itself? Why would they have, when they cannot cite even a single passage from the books in question?

Obviously, they have simply googled “Bell Curve refuted”, “Bell Curve criticism”, “Bell Curve myth”, or something like that. This strategy works like an argument from authority: cutting a discussion short in the most cowardly and despicable manner possible.

Incidentally, Richard Lynn, in “The Attacks on The Bell Curve” (1999), strongly criticized two of the books refuting The Bell Curve. His attack seems at first glance to be fatal. Ah, but of course: Richard Lynn is a damn racist, so his word cannot be trusted. However crazy it may be, this is a widespread argument.

7. Replication of a study, or the single-study syndrome.

I constantly see people, in an attempt to illustrate their point of view, citing just a single study, sometimes one with a small sample size, or one that is not representative, which is far worse. However, any study needs to be replicated to ensure that its result is generalizable. Similarly, when a study is conducted in, say, the U.S., the result is not necessarily replicable in other countries, which may have their own special characteristics. Non-replicability can also be expected when different statistical procedures are used.
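The small-sample point can be shown with a quick simulation (the effect size, sample sizes, and study counts are made-up assumptions, not data from any real literature): a single small study can report almost any effect, which is exactly why replication matters.

```python
import random

random.seed(1)

# Hypothetical true effect of some treatment: a modest mean difference of
# 0.2 standard deviations (this value is an illustrative assumption).
TRUE_EFFECT = 0.2

def one_study(n):
    """Simulate one study: observed mean difference, treatment minus control."""
    treat = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return sum(treat) / n - sum(control) / n

# A single small study (n = 20 per group) can land almost anywhere...
small = [one_study(20) for _ in range(1000)]
print(f"n=20 studies:  from {min(small):+.2f} to {max(small):+.2f}")

# ...while large replications cluster tightly around the true effect.
large = [one_study(500) for _ in range(1000)]
print(f"n=500 studies: from {min(large):+.2f} to {max(large):+.2f}")
```

Some of the small studies even get the sign of the effect wrong; citing one of them as if it settled the question is citing noise.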

Regarding meta-analyses, caution is also needed, since publication bias is frequent. Researchers may tend to select the studies that are consistent with their preconceived ideas, in order to produce a sexy result for their meta-analysis.
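Here is a minimal sketch of how such selection distorts a pooled estimate, under the deliberately artificial assumption that the true effect is exactly zero (the sample size, study count, and selection threshold are all illustrative): averaging only the studies that cleared a “notable result” bar manufactures a positive effect out of pure noise.

```python
import random

random.seed(2)

# Suppose the true effect is exactly zero; every "study" measures pure noise.
def study(n=30):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    return sum(xs) / n  # the study's estimated effect

studies = [study() for _ in range(200)]

# Honest meta-analysis: average all 200 results.
honest = sum(studies) / len(studies)

# Biased meta-analysis: keep only the "positive, notable" results.
selected = [s for s in studies if s > 0.2]
biased = sum(selected) / len(selected)

print(f"honest average over all studies: {honest:+.3f}")
print(f"biased average after selection:  {biased:+.3f} "
      f"({len(selected)} studies kept)")
```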

8. Newer is better.

Like “correlation is not causation”, the idea that “what is new is better” seems to be in vogue, and I am afraid mine is not an isolated experience, again. Studies are dismissed with a wave of the hand simply because they are dated. When two studies contradict each other, some people seem to consistently side with the most recent one, without taking a closer look at the nature of the contradiction and why it occurs. Thus, some new ideas and theories seem to me to receive more credit than they deserve. The fact that a theory is “newer” than another does not give it any validity. On the contrary, the longer a theory survives over time, the more likely it is to be right.

9. Conflicting statements as evidence of contradiction.

When the same person makes two incompatible statements, we hear that this is self-defeating. Not necessarily. Time has passed between the two statements, and it is possible that the person has simply changed his mind, i.e., that he now thinks the first statement is false. But obviously, endorsing the new belief or statement does not prove that it is the correct one either.
