Astrologers, homeopaths and economic forecasters cannot be trusted, because the so-called science they practice is fuzzy at best and hocus-pocus at worst.
Consider, for instance, this delightful anecdote recounted by a senior journalist at a popular Mumbai-based newspaper. Sometime in 1998 or 1999, he wrote on his Facebook wall, the hugely popular astrologer Marjorie Orr took a break. Fearing a backlash from readers, his editor suggested that the young man, along with two of his colleagues, ghost-write the column until Orr got back to work. After some trepidation, he got down to following orders, and kept at it for six weeks until the astrologer returned from her vacation. Not a single reader from the newspaper’s erudite, English-speaking community wrote in to complain that his predictions were off the mark.
The post attracted a barrage of amused comments from other journalists who have ghosted for popular astrologers like Peter Vidal in Indian newspapers. Vidal apparently recycles his column by replacing the text for one star sign with something he has written in the past for another star sign. Most people don’t notice. Then there is Panditji from Jaipur, who misses his deadlines every once in a while, leaving the poor sub-editor at the desk to think up “good” days of the week, “lucky” colours to be worn and “numbers to bet on” for each star sign.
“The trick," wrote this veteran desk hand at the magazine in response to the Facebook post, “was to make people believe they were on the verge of a quantum leap in life."
Then there is homeopathy: Generous column inches are devoted to it in newspapers and magazines, and practitioners make a pretty damn good living out of it. The premise around which this tub of crock continues to exist — in spite of a mountain of evidence to prove homeopathy is indeed crock — is that water molecules have memory.
But even Class VIII students exposed to elementary chemistry and concepts like Avogadro’s number can see the problem, because by then they have been taught the mechanics of dilution. Whatever botanical compound goes into making a homeopathic drug is diluted to such an extent that no trace of the molecule is left in the drug, except of course the sugary coating made of powdered lactose.
In homeopathic parlance, most compounds are diluted to 30C on average, which means a 1:100 dilution repeated 30 times over. That is a dilution factor of 10 to the power of 60, or one followed by 60 zeroes.
To understand how much of a dilution that is, imagine a sphere of water with a diameter of 150 million km, the distance from the earth to the sun (light takes eight minutes to cover it). A single molecule of the original substance dissolved in that much water is roughly what a 30C dilution leaves you with.
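The arithmetic is worth checking in a few lines. Here is a minimal back-of-the-envelope sketch; the starting quantity of one full mole of the active compound is an illustrative assumption, chosen to be generous:

```python
# Back-of-the-envelope check: how many molecules of the original
# compound survive a 30C homeopathic dilution?
AVOGADRO = 6.022e23       # molecules in one mole of a substance
DILUTION_30C = 10 ** 60   # 1:100 dilution, repeated 30 times

# Assume (generously) that we start with a full mole of the compound.
molecules_left = AVOGADRO / DILUTION_30C
print(molecules_left)     # about 6e-37: effectively zero molecules
```

Even starting with more molecules than there are stars in the observable universe, the expected number surviving the dilution is a vanishingly small fraction of one.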
It is entirely possible that Samuel Hahnemann, who thought up modern homeopathy, had figured the arithmetic was against him. That is perhaps why he insisted water has memory, arguing that even after dilutions of the kind he propagated, water retains a “spirit-like” essence of the original compound that is “no longer perceptible to the senses”.
Ben Goldacre writes sardonically in his superb book Bad Science: “If water has a memory, as homeopaths claim…water has been sloshing around the globe for a very long time, after all, and the water in my body as I sit here typing in London has been through plenty of other people’s bodies before mine. Maybe some of the water molecules sitting in my fingertips as I type this sentence are currently in your eyeball. Maybe some of the water molecules fleshing out my neurons as I decide whether to write ‘wee’ or ‘urine’ in this sentence are now in the Queen’s bladder (God bless her): water is the great leveler."
Both of these disciplines have a perverted cousin: economic forecasting, which draws from the worst of astrology and homeopathy. Astrology’s predictions are right about as often as a coin flip; homeopathy’s remedies work no better than placebos, because there is no relationship between cause and effect. Economic forecasting manages both failings at once, and there are suckers by the millions buying into it.
To sift through why economic forecasting is fraught with inconsistencies, Nate Silver’s outstanding book, The Signal and the Noise, is a good place to start. He argues the discipline faces three fundamental challenges:
“First, it is very hard to determine cause and effect from economic statistics alone. Second, the economy is always changing, so explanations of economic behaviour that hold in one business cycle may not apply to future ones. And third, as bad as their forecasts have been, the data that economists have to work with isn’t much good either."
To get a sense of how difficult this really is, consider the most reliable data we have: that on the US economy, because it is the most closely tracked. The US government puts out 45,000 economic indicators each year, and private data providers track as many as four million statistics. Since World War II, though, the US has witnessed only 11 recessions. How in the world is anybody to pick, from those millions of variables, the handful that actually caused 11 recessions?
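The trap this sets can be sketched with a toy simulation: generate thousands of pure-noise “indicators” and count how many appear to call a fixed set of 11 recession quarters by luck alone. Every number here (quarters tracked, hit rates, the threshold for a “call”) is an illustrative assumption, not Silver’s:

```python
import random

random.seed(1)
QUARTERS = 280          # roughly 70 years of quarterly data since WWII
N_INDICATORS = 10_000   # a small slice of the millions of series tracked

# A fictitious recession marker: 11 recession quarters, as in the US record.
recession = [1] * 11 + [0] * (QUARTERS - 11)
random.shuffle(recession)

def hits(series):
    """Count how many recession quarters a series happens to flag."""
    return sum(1 for s, r in zip(series, recession) if s == r == 1)

# Pure-noise indicators, each flagging ~10% of quarters at random.
lucky = 0
for _ in range(N_INDICATORS):
    indicator = [1 if random.random() < 0.1 else 0 for _ in range(QUARTERS)]
    if hits(indicator) >= 4:  # "called" at least 4 of the 11 recessions
        lucky += 1
print(lucky)  # well over a hundred meaningless series look prophetic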
By way of example, consider the maxim “correlation does not imply causation”. What it means is that just because two variables have a statistical relationship with each other, it does not follow that one is responsible for the other. For instance, Silver points out, ice cream sales and forest fires are correlated because both occur more often in summer. But there is no causation. You don’t set off a forest fire when you buy a tub of ice cream.
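A confounder like summer heat is easy to simulate: drive two otherwise unrelated series from the same hidden variable, and they correlate strongly even though neither causes the other. The figures below are invented purely for illustration:

```python
import random

random.seed(7)

# A shared hidden driver ("summer heat") feeds both series;
# neither series causes the other.
months = 120
heat = [random.uniform(0, 1) for _ in range(months)]
ice_cream = [h * 10 + random.uniform(-1, 1) for h in heat]    # tubs sold
fires = [h * 5 + random.uniform(-0.5, 0.5) for h in heat]     # fires started

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Strong correlation, yet buying ice cream starts no fires.
print(round(pearson(ice_cream, fires), 2))  # close to 1
```

Drop the hidden `heat` variable from your dataset, and an unwary forecaster would see only two series moving in lockstep, ripe for mistaking one as the cause of the other.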
On the face of it, there seem to be three reasons why economic forecasts go horribly wrong. The first is arrogance on the part of a large majority of economists. The Economic Cycle Research Institute (ECRI), based in New York and London, is widely respected. In 2011, the firm predicted a double-dip recession, which is a recession followed by a short recovery that leads into another recession.
When quizzed on the prediction, the firm threw up a lot of data that seemed incomprehensible to most people and obfuscated it with jargon that made no sense. Like this: “ECRI’s recession call isn’t based on just one or two leading indexes, but on dozens of specialized leading indexes including the US Long Leading Index…to be followed by downturns in the Weekly Leading Index and other shorter-leading indexes. In fact, the most reliable forward-looking indicators are now collectively behaving as they did on the cusp of full blown recessions."
Silver tracked their approach to a stance articulated to their clients as far back as 2004: “Just as you do not need to know exactly how a car engine works in order to drive safely, you do not need to understand all the intricacies of the economy to accurately read those gauges."
Looked at through this prism, the only thing that matters is data, and ever more complex data, with no attempt even to get at the story. “There were certainly reasons for economic pessimism in September 2011—for instance, the unfolding debt crisis in Europe—but ECRI wasn’t looking at those. Instead, it had a random soup of variables that mistook correlation for causation,” concludes Silver.
The second reason economic forecasts are notoriously unreliable is that the society and the systems they try to predict are dynamic. The economist F.A. Hayek, in his 1974 Nobel Prize acceptance speech, explained why such systems are so difficult to deal with.
Physical scientists can observe and measure the things that drive the systems they study. But society, and therefore the economy, is not a physical system. There are millions of variables that cannot be measured or seen. How, for instance, will an individual respond to a given set of stimuli in a given set of circumstances? There are no universal answers.
But economists, in their attempt to be rigorous, borrow techniques and models from the physical sciences. This is fundamentally flawed because, in doing so, they have to ignore whatever cannot be measured, in spite of it being integral to economics. The outcomes are incorrect predictions, and actions that can harm society.
The third is that humans and the institutions they build are inherently biased. As an experiment, Silver studied economic forecasts across organizations that engage in forecasting. Given the same data, he found that the lower an entity’s reputation, the wilder its predictions; and the higher its reputation in the markets, the more conservative its estimates tend to be.
The dichotomy is explained by incentives. If your reputation is on the lower side, you have little to lose; should you happen to hit the bull’s eye, your chances of drawing attention are that much higher. If your reputation is high, on the other hand, there is an incentive to protect it, and you would much rather err on the side of caution.
Perhaps these are the reasons why Ezra Solomon, an influential US economist and professor at Stanford University, once caustically said: “The only function of economic forecasting is to make astrology look respectable."
This piece was originally published in Mint on Dec 05, 201