Understanding, Analysing and Crossing The Limits of Evidence

 

Evidence produced within quantitative disciplines like economics and finance carries an aura of gospel. The numbers, models, and forecasts we see in economic reports and market analyses seem certain, authoritative, and unarguable.

Built on large data sets that are analyzed with widely accepted theories and tools, economic and financial evidence has become hugely influential in governance and business—so much so that more qualitative approaches have been sidelined.

Even political economy, the original economics, has been pushed aside in favor of what’s now called ‘evidence-based decision-making’. The presumption is that numerical data is the only solid information, and that the analytical tools used in economic and market analysis are reliable.

Of course, as we know now, this faith in economic evidence can be dangerous. As markets crashed around the world during the global financial crisis of 2007–2008, confidence in all kinds of quantitative modeling crashed with them. It became evident that society’s shepherds were not to be found in the financial industry.

Economics and finance need to be more skeptical about their evidence if they are to serve society well. What can be done?

The problem is not only human greed. Rather, as Nassim Nicholas Taleb points out in his book Fooled by Randomness, few people understand the limits of the statistical models that they create.

When people place unwarranted faith in their models, they can end up making worse decisions than if they had used no model at all. For example, according to Taleb, many models do not factor in unlikely events, preferring to generate averages and trends that exclude the real-world effects of anomalous or outlying events.

When these “black swans” do eventually occur, financial predictions fail and people lose money that they are not prepared to lose.
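
To make the tail-risk point concrete, here is a minimal sketch in Python (my illustration, not Taleb’s or the article’s) that compares how often a large daily loss occurs under a thin-tailed normal model versus a heavier-tailed Student’s t model with the same volatility. The distributions, parameters, and the −3% loss threshold are all illustrative assumptions.

```python
# Illustrative sketch only: how a thin-tailed model understates extreme losses.
import numpy as np

rng = np.random.default_rng(42)
n_days = 250 * 40  # roughly 40 years of trading days

# Model A: daily returns assumed normal with ~1% volatility
# (what many risk models effectively assume).
normal_returns = rng.normal(loc=0.0, scale=0.01, size=n_days)

# Model B: same volatility target, but Student's t (df=3) heavy tails,
# rescaled so its standard deviation is also ~1%.
df = 3
t_returns = rng.standard_t(df, size=n_days) * 0.01 / np.sqrt(df / (df - 2))

threshold = -0.03  # a "3-sigma" daily loss under the normal model

print("P(loss worse than -3%), normal model:", np.mean(normal_returns < threshold))
print("P(loss worse than -3%), heavy tails: ", np.mean(t_returns < threshold))
```

Over a long simulated horizon, the heavy-tailed model typically produces several times as many “3-sigma” losses as the normal model predicts—exactly the gap that a model built around averages and trends quietly ignores.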

For example, many Dutch pensioners saw their expected retirement funds plummet as a result of the global financial crisis of 2008. This created a public outcry, since consumers had been led to believe that their funds were stable and reliable. Dutch pension legislation has since required providers to be much more open with customers about investment risks. However, because the pension providers themselves seem not to have understood or accounted for this risk, the problem cannot be solved through disclosure to consumers alone. After all, the crisis caught financial professionals and economists around the world by surprise.

When it comes to modeling, the road to crisis is paved by good intentions as well as bad ones. No matter how complete and accurate your data, you cannot turn it into sound evidence unless you understand how reliability, risk, biases, and other contingencies (even unknown ones) affect a model’s accuracy.

If one problem is a shallow understanding of the origins, qualities, and limits of evidence itself, another problem is that the amount of evidence-based decision making actually taking place is grossly overestimated. Professionals working in economics and finance are just as human as the rest of us. Most business and government decisions, no matter how data-driven the sector, are determined by multiple factors: data, politics, intuition, practicality, and bias, to name a few.

Some people place high hopes in the ability of so-called artificial intelligence to make up for humanity’s deficiencies. They imagine AI will produce analyses and forecasts that are far more robust—and more socially friendly—than humans could ever make without it. There is good reason to doubt that this will prove possible.

In the meantime, there is no such thing as a decision made by a machine alone, or on the basis of purely quantitative data. The idea that quants pay no attention to context is somewhat overstated.

Academics working in finance and business schools, for example, rely on ready-made data sets to do their analysis, but generally pay attention to the human activities that the data represent in order to make their analyses as reliable as possible.

We need to take a vocational approach to our work, which means constantly challenging the frameworks we use to produce evidence and actively asking ourselves, “Is there another way to interpret this data? If I were looking at this data from a different perspective, what kind of evidence would I see?”

via Ethnography, Economics and the Limits of Evidence – EPIC
