I don't see scientific exhaustion as the reason for blocking criticism of neoclassical economics;
it is ideological fear. Much of the elite in economic science not only sees its work and school
of thought endangered; that elite long ago abandoned the objective observer's seat and is in fact actively
involved in economic policy-making. Much in line with this argument, the increasing influence
of capital on the financial foundations of science, and subsequently on its freedom of thought,
undermines or exhausts the capability of economic science as a whole. In more practical terms,
Harvard professors of economics either originate from 'the capital', get promoted by it,
have their chairs financed by it, or ultimately derive the majority of their income from private
contracts with 'the capital'. Such pragmatism may appear opportune in the short term,
but the value of science lies in its freedom from such influences.
There is nothing more frustrating for critics of neoclassical economics than the argument that neoclassical economics is a figment of their imagination; that, simply, there is scientific economics and there is speculative hand-waving (by those who have never really grasped the finer points of mainstream economic theory). In this sense, neoclassicism resembles racism: while ever present and dominant, no one claims to be guided by it.
Slightly exaggerating:
"Modern finance theory is a crock, peddled by charlatans at business schools who have managed to seal themselves off from the usual empirical tests of a theory" (from Richard Smith's review of Pablo Triana's book).
According to Joseph Stiglitz, “(Economics as taught) in America’s graduate schools .... bears testimony to a triumph of ideology over science.”
The current economic crisis and "Great Recession" may be viewed as a ‘natural experiment’ in the validity of economic models and theories.
The principal methodological problem here is whether the application of badly constructed mathematical models that distort reality is pseudoscience or not.
I think it is, especially when such models are used by people for destructive self-enrichment. One of many definitions of a Ponzi scheme is: transfer liabilities to unwilling others. Models detached from reality, like those used in neoclassical economics, fit this definition pretty well.
Also, every model abstracts from the world. As Greenspan (2008) reminded his readers and himself after the crash, "a model, of necessity, is an abstraction from the full detail of the real world".
So a model has limits of applicability that need to be thoroughly understood. Extending those limits to things outside the "natural realm" of the model is also a flavor of pseudoscience. The classic pseudo-scientist in this regard is Milton Friedman.
Moreover, as soon as any model is known to be used by some large firm, there will always be attempts to exploit its weaknesses (talk about an "Uncertainty Principle" in finance). In this respect human behavior and human values always introduce significant uncertainty, which limits the applicability of models that do not account for it.
That applies first of all to the so-called equilibrium models used in neoclassical economics. Neoclassical economics is the most blatant abuser of this very useful concept, which is intrinsically linked to stable states of dynamic systems. I remember my reaction on reading a neoclassical economics book: how primitive those jerks are (with their half-baked mathematical pretences), and why do they push their detached-from-reality models so hard into the heads of students unsuspecting of this intellectual fraud? Equilibrium models abstract from the flow of funds and the stocks of credit and debt, as well as the systemic risks implied in them; they focus on the optimization problems facing individuals.
As one commenter on the Brad DeLong post "But the Economics Profession Right Now Is Useless..." noted:
Anyone who's ever designed much of an electronic circuit with negative feedback and a little time lag would wonder that anyone would assume that economic systems are self-correcting. Underdamp it, and it can live in a state of permanent oscillation -- and those are simple systems. The bubble we just had (and the one before, and so on) proves that we've got quite a delay in the actions of our invisible hand; it is utterly faith-based to assert that it should ever set itself right, without some "interference". It's the delay that kills it -- in electronics, the addition of delay in the feedback can convert a regulator into an oscillator.
But perhaps economists have a better grasp of their field than us lowly (lapsed) electrical engineers.
Posted by: dr2chase
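dr2chase's point is easy to demonstrate in a few lines of code. This is my own toy loop, not anything from the thread: a controller that corrects toward a target works fine when it sees current data, and the same correction acting on stale data turns the regulator into an oscillator.

```python
# Toy negative-feedback loop (illustration only): the controller
# subtracts a fraction of a PAST reading from the current state.
def simulate(gain, delay, steps=120):
    """x[t+1] = x[t] - gain * x[t - delay], starting displaced at 1.0."""
    x = [1.0] * (delay + 1)              # constant initial history
    for t in range(delay, delay + steps):
        x.append(x[t] - gain * x[t - delay])
    return x

no_lag   = simulate(gain=0.5, delay=0)   # sees current data: damps to zero
with_lag = simulate(gain=0.5, delay=5)   # same gain, stale data: oscillates, grows

print(abs(no_lag[-1]))                        # essentially zero: the regulator regulates
print(max(abs(v) for v in with_lag[-20:]))    # large and still growing: an oscillator
```

The stability threshold depends on both the gain and the delay; the delayed run here is past it, so the swings grow without bound. That is the "regulator into oscillator" conversion in discrete form.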
On the subject Mark Blaug says: "Economics has increasingly become an intellectual game played for its own sake and not for its practical consequences for understanding the economic world. Economists have converted the subject into a sort of social mathematics in which analytical rigor is everything and practical relevance is nothing." ( Criticisms of neoclassical economics - Wikipedia)
This is the Black Swan gospel according to Triana. Taleb endorses it in a characteristically incendiary and intemperate foreword. He does come out all guns blazing, and you just have to go with that. Or chuck a glass of water over him, if he's in range, I suppose.
A quick recap for anyone who has spent the last two years in a coma: Taleb put together the beginnings of a rap sheet for modern mathematical finance theory in his book "The Black Swan", and rapidly attained worldwide celebrity when his criticisms appeared to be borne out by the recent financial crisis. The main tenet of Black Swan theory, rather dry sounding, but with dramatic consequences, is that price changes are not normally distributed (in the way that, say, human weight or height are), but follow a power law ('fat tails'). This implies much greater extremes of price movement than those predicted under the assumption of a normal distribution. The events that cause such price moves may be perfectly intelligible in hindsight, but are not necessarily predictable: like the existence of black swans.
The point about price distributions is actually quite an old one. Paul Levy made the same observation in the 1900s; Mandelbrot's studies of cotton prices, in the 60s, reached similar conclusions. What gives it contemporary relevance is that the finance theory underlying current regulatory practice, risk management, fund management and derivatives pricing all overwhelmingly assume that price changes are normally distributed. And they all failed at once in the recent financial crisis, when price changes were indeed far more extreme than a normal distribution implies. It doesn't look so good for orthodox financial theory just now.
So it is a good time for Triana to review modern finance theory's rap sheet, add items, and add more detail to the existing charges. It goes like this.
Chapter 2: modern finance theory is a crock, peddled by charlatans at business schools who have managed to seal themselves off from the usual empirical tests of a theory.
I'll admit I don't see what logical point there is in attacking the character of business school teachers in this manner, whether it is a correct assessment or not. However the empirical criticism really does stack up. Consider GS CFO David Viniar's notorious comments from August 2007 when the ABS meltdown got into full swing (Ch1, p12): "We were seeing things that were 25-standard-deviation moves, several days in a row". To which the rejoinder from an empirically-minded observer simply has to be "No you weren't, imbecile: those observations actually mean that your models are hopelessly wrong". There are several reasons why one can so insouciantly cheek such an august figure. If we assume Viniar means daily observations and a normal distribution, then (if the numbers I am cribbing are correct: I haven't gone back to the equations) one should expect to wait quite a lot longer than the age of the universe to see even a 16-standard deviation event, with a 25-standard deviation event taking many, many times longer than that. I suppose I should work out the exact number of years, just to see how big of a number it is: exercise for any readers with access to an arbitrary-precision mathematical engine.
You can find an old post by Yves on the subject that helped kick off some blogosphere chat.
Even if you assume (very charitably, I grimly suspect) that Viniar is not just parroting his VaR model outputs (more on that later), and is a bit more sophisticated about his distributions, he is still goofing, big time. And if Mandelbrot, and Taleb, his follower, and Triana, his follower, are right about the kind of distribution that underlies financial market price movements, there just isn't any such thing as a standard deviation of price movements, nor no correlation neither. Both standard deviation and correlation are defined in terms of variance. Since variance is infinite for stable distributions (other than the normal distribution), neither standard deviation nor correlation is defined for the distribution of market prices (a Lévy skew alpha-stable distribution, if you want the full geeky glory). On this theory, Viniar is talking about things that just don't exist. Not encouraging behavior in a CFO.
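If "infinite variance" sounds abstract, it is easy to watch it bite. In this toy demonstration of mine (the standard Cauchy standing in for a heavy-tailed stable distribution), the sample standard deviation of normal draws settles near the true value, while for Cauchy draws it is dominated by a handful of monster observations and never converges, no matter how much data you collect:

```python
import math, random

random.seed(7)

def sample_std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

n = 100_000
normal = [random.gauss(0, 1) for _ in range(n)]
# Standard Cauchy via the inverse-CDF trick: tan(pi * (U - 1/2))
cauchy = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

print(sample_std(normal))   # hovers near the true value, 1.0
print(sample_std(cauchy))   # blown out by a few huge draws; diverges as n grows
```

Quoting a "standard deviation" of such a process, as Viniar did, is quoting an estimate of a quantity that does not exist.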
So here is the bleedin' obvious: given its track record of ultra-wild underestimates of the frequency of sharp price moves, the assumption of normal distributions in stock price changes must be among the most lavishly disconfirmed scientific hypotheses of all time. No wonder, then, that Taleb and Triana are somewhat ratty with its various obstinately blithe proponents.
Chapter 3: Is a bit of a digression. According to Triana, the Quants who work at banks work mostly on bits of IT dealing infrastructure, which is useful, and less often than you might think on mathematical models used in trading. The Quants tend to be physicists and engineers rather than business school graduates. Models are used in a much more skeptical, provisional way on the trading floor than they are in academia.
I'll take his word for it. Evidently, skepticism of models doesn't extend to the risk management department. And, uh, actually it doesn't look as if that trading floor skepticism managed to avert 2007's monster trading screwups, either. Except, perhaps in the case of GS, who famously hedged a lot of MBS exposure starting late in 2006, to the great indignation of folk who don't understand where fiduciary duties stop and start for broker-dealers.
Now we get into the meaty detail chapters. The non-normalness of price distributions means that a whole bunch of financial orthodoxies are dubious on theoretical grounds, and, post meltdown, there are some nasty data points to back up the theory.
First up is the Gaussian copula (Chapter 4). This is a modeling device which was used to calculate default correlations, for MBS and other bonds, and thus to structure, price, rate, and hedge CDOs. I think we already know how well that went overall– but the detail of how the behavior of various tranches of CDOs diverged from predicted paths during the '07 meltdown is instructive. Triana leaves open the question of whether the Gaussian copula was adopted out of blind faith in its efficacy, or precisely because it underrated extreme events, and thus gave an excuse for assigning a high rating, and getting a good price. Were the ratings agencies knaves or fools in this respect? I doubt we'll find out any time soon. Anyhow, from the data and testimony Triana assembles, it looks as if the Gaussian copula is dead in the water as a structured finance tool. One wonders how Remics and re-Remics are to be priced and rated. Any NC readers want to buy one?
Chapter 5: Now we are into VaR, the risk management methodology that JP Morgan gave to the world back in the early 90's, in the sadly mistaken belief that being able to generate a firm wide "risk number" daily would be a useful contribution to financial risk management. Back then you were reasonably smug about your bank if central management actually knew what the firm's positions were at all (vide Barings, Sumitomo, then fast forward again to SocGen in – oh dear – 2007), so VaR was pretty cool. It was later endorsed by the Basel regulatory framework. Then the paint started to flake off.
The shortcomings of VaR have been a regular topic at NC. That pesky normal distribution assumption again. Note the reminder from practitioner Irene in the discussion thread though – the officially sanctioned VaR model may use a rolling 2 year price history rather than a normal distribution. This desperate kludge has its own perverse side effects: in times of increased volatility, the models all tell banks to stay on the sidelines at the same time. Once the volatile part of the price history rolls out, the models are all happy again. This is not a commonsense way to run banking businesses.
The other perversity of that approach to VaR is that it encourages herd behavior in volatile markets, before the banks have even made it to the sidelines. In other words, since all the models in all the banks are essentially the same model of the same data, they all start screaming 'fire' at the same time, with predictable consequences at the exits. All this and more is well covered by Triana: particularly the way that a long period of low volatility before 2007 meant that VaR endorsed massive positions in assets that were suddenly big loss makers, when things went sour.
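The mechanics of that perversity are easy to reproduce. Here is a bare-bones historical-simulation VaR on stylized data, my own illustration and not any bank's model: roughly two quiet years followed by a volatile stretch.

```python
import random
random.seed(3)

def historical_var(returns, confidence=0.99):
    """1-day historical-simulation VaR over a lookback window:
    the loss exceeded on only (1 - confidence) of its days."""
    losses = sorted(-r for r in returns)
    return losses[int(confidence * len(losses))]

# Stylized daily returns: ~2 quiet years, then a volatile stretch.
quiet    = [random.gauss(0, 0.005) for _ in range(500)]
stressed = [random.gauss(0, 0.03)  for _ in range(100)]
history  = quiet + stressed

window = 500
var_before = historical_var(history[:window])    # lookback is all calm
var_after  = historical_var(history[-window:])   # stress has rolled in

print(f"VaR with a calm lookback window:    {var_before:.4f}")
print(f"VaR after stress enters the window: {var_after:.4f}")
```

The measure reports its lowest risk numbers at the top, exactly when the big positions are being built, and only screams once the losses are already in the sample; and since every bank's window holds much the same data, they all scream together.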
Banks were Gadarene enough without VaR. VaR makes it worse.
Oh, one thing that bugs me about VaR as used is this: if price histories tell you nothing about future prices (EMH), why is it that price volatility histories tell you something about future price volatility (VaR)? I'm just asking.
Anyhow, Triana makes the challenging points, with persuasive evidence: first, VaR is perfectly useless (it works until you need it, and at that point, it packs up: it is the chocolate teapot of risk management); second, like MTM, it is actively procyclical.
Chapter 6 is a brisk injunction to business schools (specifically, MIT's Sloan school) to snap out of it and start teaching useful stuff.
In Chapter 7, we get to another polemic, against the Black-Scholes option pricing model. One can't fault the reasoning or evidence, but somehow this is the weakest part of the meat. It is built around a recent paper by Taleb and Haug in which they review the historical record on options market making and option pricing theory and announce that a) the parts of Black-Scholes theory that are correct are not original, having been long anticipated by Thorp-Bachelier option pricing b) the parts that are original are not correct (normal distributions are again assumed, and the model simply can't accommodate non-normal ones, unlike Thorp-Bachelier) c) no practitioners actually use Black-Scholes. The key item of evidence for (c) is the 'volatility smile' by which options traders systematically adjust option prices, so that the implied volatility (calculated according to Black Scholes methods) of options actually increases progressively for deeper and deeper out-of-the-money options. Under Black-Scholes pricing theory the implied volatility should be constant across all option strike prices. Traders don't do it that way: they are compensating for the way the BS model fails to accommodate fat tails. QED. And by the way, Triana adds, there's no such thing as implied volatility anyway, just supply and demand pushing prices around.
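To make the smile concrete, here is a minimal sketch of mine, with made-up numbers, not anything from the book: a Black-Scholes pricer plus a bisection solver for implied volatility. Feed the solver a model price and it returns the flat vol you put in; feed it a richer market price for a deep out-of-the-money option, of the kind crash-wary traders actually pay, and it returns a higher vol, one point on the smile.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0):
    """Back out sigma by bisection (the call price is increasing in sigma)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

S, T, r = 100.0, 0.25, 0.01

# Round trip at the money: price with a flat 20% vol, recover 20%.
atm_vol = implied_vol(bs_call(S, 100, T, r, 0.20), S, 100, T, r)

# Suppose the market pays 50% over the flat-vol price for a deep
# out-of-the-money call; the vol backed out of that richer price
# is higher -- one point on the "smile".
smile_vol = implied_vol(1.5 * bs_call(S, 120, T, r, 0.20), S, 120, T, r)

print(round(atm_vol, 4), round(smile_vol, 4))
```

Under the model's own assumptions both numbers should be identical; traders bidding up tail protection is what bends the curve.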
Well, OK to all that, so call implied volatility "demand premium" or something, and concede that Black-Scholes is a roundabout way to prices that can be reached more directly under other theories. So now what? Black-Scholes has become part of banking's infrastructure. Do we strip out all the Black-Scholes models and replace them with Thorp-Bachelier models? Will it make enough of a difference to options pricing or risk management to be worth it? Triana doesn't try to determine the ROI. Instead (in the Finale) he asks whether Merton and Scholes should be stripped of their Economics prizes, and eventually concludes that instead the Riksbank prize should be given a silly name, so that people know it is a bit of a crock. It is an amazingly lightweight way to round off an otherwise enlightening discussion. It doesn't come off like a joke fallen flat either: just cheesy.
While I'm carping, I'll add a comment on the style. When I started reading the book, I kept stumbling over awesome quasi-English barbarisms, such as "qualification-inundated resumes", "dangerously faulty mathematically charged steering"; also horrific neologisms like "analyticization", "nonenthusiastically", "impacting" (adj., I kid you not, and repeatedly), and "scientification aroma" (my favourite – I want some – either in a spray dispenser or roll-on form, not fussy). To my relief Triana (or his copy editor) gets more of a grip in later chapters and it's not such a terrible read in the end. Doubtless the same relief is reflected in the generous verdicts of Taleb ("lucid"), Tett ("readable") and Skypala ("a treat"). So, do not despair if you find yourself entangled in some pretty strange thickets of verbiage early in the piece: it does get better if you plough on.
Back to Chapters 8, 9 and 10 so that we can end on a note more favourable to the book (just skip the Finale).
Chapter 8 is a good one on the way models can be used as alibis or excuses by the lazy, reckless, or incompetent. Good reading for head traders, risk managers and regulators, I'd say; and buy-siders and pension fund trustees, come to that. Chapter 9 is a quick round up of how seductive the spurious certainty of mathematical models can be, largely illustrated by LTCM and by the confusion surrounding the meaning of the VIX.
In the end, the message of the book is that quantitative finance is a delusion, and that common sense is a better starting point for risk management. Accordingly Chapter 10 is a paean to Fat Tony, the street smart invention of Taleb in "The Black Swan", and a call to reverse the quantification of finance. The negative leg of the case is argued persuasively. It is discomfiting to recognise just how little there was to quantitative finance.
On the positive side of Triana's recommendation: well, you are welcome to make your own mind up about the reserves of common sense to be found in the banking industry just now.
The New York Times Sunday Magazine has a long piece by Joe Nocera on value at risk models, which tries to assess how much they can be held accountable for risk management failures on Wall Street.
The piece so badly misses the basics about VaR that it is hard to take it seriously, although many no doubt will.
The article mentions that VaR models (along with a lot of other risk measurement tools, such as the Black-Scholes options pricing model) assume that asset prices follow a "normal" distribution, or the classical bell curve. That sort of distribution is also known as Gaussian.
But it is well known that financial assets do not exhibit normal distributions. And NOWHERE, not once, does the article mention this fundamentally important fact.
The distributions of prices in financial markets are subject to both "skewness" and "kurtosis". Skewness means results are not symmetrical around the mean:
Stocks and bonds are subject to negative skewness (longer tails of negative outcomes) while commodities exhibit positive skewness (and that factor, in addition to their low correlation with financial asset returns, makes them a useful addition to a model portfolio).
Kurtosis is also known informally as "fat tails". It means that events far away from the mean are more likely to happen than a normal distribution would suggest. The first chart below is a normal distribution; the second, a so-called Cauchy distribution, which has fat tails:
Now when I say it is well known that trading markets do not exhibit Gaussian distributions, I mean it is REALLY well known. At around the time when the ideas of financial economists were being developed and taking hold (and key to their work was the idea that security prices were normally distributed), mathematician Benoit Mandelbrot learned that cotton had an unusually long price history (100 years of daily prices). Mandelbrot cut the data, and no matter what time period one used, the results were NOT normally distributed. His findings were initially pooh-poohed, but they have been confirmed repeatedly. Yet the math on which risk management and portfolio construction rests assumes a normal distribution!
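The charts aren't reproduced here, but the contrast is easy to put in numbers, using the standard Cauchy as the stand-in fat-tailed distribution (a sketch of mine; the two aren't on comparable scales since the Cauchy has no standard deviation, so only the rate of tail decay matters):

```python
from math import atan, erfc, pi, sqrt

def normal_tail(k):
    """P(|Z| > k) for a standard normal."""
    return erfc(k / sqrt(2))

def cauchy_tail(k):
    """P(|X| > k) for a standard Cauchy (the fat-tailed case)."""
    return 1 - 2 * atan(k) / pi

for k in (2, 5, 10, 25):
    print(f"{k}: normal {normal_tail(k):.2e}   cauchy {cauchy_tail(k):.2e}")
```

Two units out, the two distributions disagree by a factor of a few; twenty-five units out, the normal says "never in many lifetimes of the universe" while the fat-tailed distribution still assigns the event a couple of percent. That gap is the whole argument.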
Let us turn the mike over to the Financial Times' John Dizard:
As is customary, the risk managers were well-prepared for the previous war. For 20 years numerate investors have been complaining about measurements of portfolio risk that use the Gaussian distribution, or bell curve. Every four or five years, they are told, their portfolios suffer from a once-in-50-years event. Something is off here.

Models based on the Gaussian distribution are a pretty good way of managing day-to-day trading positions since, from one day to the next, risks will tend to be normally distributed. Also, they give a simple, one-number measure of risk, which makes it easier for the traders' managers to make decisions.
The "tails risk" ....becomes significant over longer periods of time. Traders who maintain good liquidity and fast reaction times can handle tails risk....Everyone has known, or should have known, this for a long time. There are terabytes of professional journal articles on how to measure and deal with tails risk....
A once-in-10-years-comet- wiping-out-the-dinosaurs disaster is a problem for the investor, not the manager-mammal who collects his compensation annually, in cash, thank you. He has what they call a "résumé put", not a term you will find in offering memoranda, and nine years of bonuses....
All this makes life easy for the financial journalist, since once you've been through one cycle, you can just dust off your old commentary.
But Nocera makes NO mention, zero, zip, nada, of how the models misrepresent the nature of risk. He does use the expressions "kurtosis" and "fat tails" but does not explain what they mean. He merely tells us that VaR measures the risk of what happens 99% of the time, and what happens in that remaining 1% could be catastrophic. That in fact understates the flaws of VaR: the 99% measurement is inaccurate too. Reliance on VaR and other tools based on the assumption of normal distributions leads to grotesque under-estimation of risk. As Paul De Grauwe, Leonardo Iania, and Pablo Rovira Kaltwasser pointed out in "How Abnormal Was the Stock Market in October 2008?":
We selected the six largest daily percentage changes in the Dow Jones Industrial Average during October, and asked the question of how frequent these changes occur assuming that, as is commonly done in finance models, these events are normally distributed. The results are truly astonishing. There were two daily changes of more than 10% during the month. With a standard deviation of daily changes of 1.032% (computed over the period 1971-2008) movements of such a magnitude can occur only once every 73 to 603 trillion billion years. Since our universe, according to most physicists, exists a mere 20 billion years we, finance theorists, would have had to wait for another trillion universes before one such change could be observed. Yet it happened twice during the same month. A truly miraculous event. The other four changes during the same month of October have a somewhat higher frequency, but surely we did not expect these to happen in our lifetimes.

Thus, Nocera's failure to do even a basic job of explaining the fundamental flaws in the construct of VaR renders the article grossly misleading. Yes, he mentions that VaR models were often based on a mere two years of data. That alone is shocking, but it is treated in an off-hand manner (as if it were OK because VaR was supposedly used for short-term measurements; well, that just isn't true, and it is not how regulators use it, nor, per Dizard, investors). Indeed the piece argues that the problem with VaR was not looking at historical data over a sufficiently long period:

This was one of Alan Greenspan's primary excuses when he made his mea culpa for the financial crisis before Congress a few months ago.
After pointing out that a Nobel Prize had been awarded for work that led to some of the theories behind derivative pricing and risk management, he said: "The whole intellectual edifice, however, collapsed in the summer of last year because the data input into the risk-management models generally covered only the past two decades, a period of euphoria. Had instead the models been fitted more appropriately to historic periods of stress, capital requirements would have been much higher and the financial world would be in far better shape today, in my judgment." Well, yes. That was also the point Taleb was making in his lecture when he referred to what he called future-blindness. People tend not to be able to anticipate a future they have never personally experienced.
Again, just plain wrong. Use of financial data series over long periods of time has, as we said above, repeatedly confirmed what Mandelbrot found: the risks are simply not normally distributed. More data will not fix this intrinsic failing.

By neglecting to expose this basic issue, the piece comes off as duelling experts, and with the noisiest critic of VaR, Nassim Nicholas Taleb, dismissive and not prone to explanation, the defenders get far more air time and come off sounding far more reasonable.
It similarly does not occur to Nocera to question the "one size fits all" approach to VaR. The same normal distribution is assumed for all asset types, when as we noted earlier, different types of investments exhibit different types of skewness. The fact that VaR allows for comparisons across investment types via force-fitting gets nary a mention.
He also fails to plumb the idea that reducing as complicated a matter as the risk management of internationally traded, multi-asset portfolios to a single metric is just plain dopey. No single construct can be adequate. Accordingly, large firms rely on multiple tools, although Nocera never mentions them. However, the group that does rely unduly on VaR as a proxy for risk is financial regulators. I have been told that banks would rather make less use of VaR, but its popularity among central bankers and other overseers means that firms need to keep it as a central metric.
Similarly, false confidence in VaR has meant that it has become a crutch. Rather than attempting to develop sufficient competence to enable them to have a better understanding of the issues and techniques involved in risk management and measurement (which would clearly require some staffers to have high-level math skills), regulators instead take false comfort in a single number that greatly understates the risk they should be most worried about, that of a major blow-up.
Even though some early readers have made positive noises about Nocera's recounting of the history of VaR, I see enough glitches to raise serious questions. For instance:
L.T.C.M.'s collapse would seem to make a pretty good case for Taleb's theories. What brought the firm down was a black swan it never saw coming: the twin financial crises in Asia and Russia. Indeed, so sure were the firm's partners that the market would revert to "normal" - which is what their model insisted would happen - that they continued to take on exposures that would destroy the firm as the crisis worsened, according to Roger Lowenstein's account of the debacle, "When Genius Failed." Oh, and another thing: among the risk models the firm relied on was VaR.

I am a big fan of Lowenstein's book, and this passage fails to represent it, or the collapse of LTCM, accurately. Lowenstein makes clear that after LTCM's initial, spectacular success, the firm started trading in markets where it lacked the data to do the sort of risk modeling that had been its hallmark. It was basically punting on a massive scale and thus deviating considerably from what had been its historical approach. In addition, the firm was taking very large positions in a lot of markets, yet was making NO allowance for liquidity risk (not overall market liquidity, but more basic ongoing trading liquidity, that is, the size of its positions relative to normal trading volumes). In other words, there was no way it could exit most of its positions without having a price impact (both directly, via the scale of its selling, and indirectly, by traders realizing that the big kahuna LTCM wanted out and taking advantage of its need to unload). That is a Trading 101 sort of mistake, yet LTCM perpetrated it in breathtakingly cavalier fashion.

Thus the point that Nocera asserts, that the LTCM debacle should have damaged VaR but didn't, reveals a lack of understanding of that episode. LTCM had managed to maintain the image of having sophisticated risk management up to the point of its failure, but it violated its own playbook and completely ignored position size versus normal trading liquidity.
Anyone involved in the debacle and unwind (and the Fed and all the big Wall Street houses were) would not see the LTCM failure as related to VaR. There were bigger, far more immediate causes.
So Nocera, by failing to dig deeply enough, winds up defending a failed orthodoxy. I suspect we are going to see a lot of that sort of thing in 2009.
I somehow missed this piece by Robert Nadeau in Scientific American when it came out earlier this year, and I thought it made for good Sunday/holiday reading.
Nadeau's criticisms are admittedly pretty broad and similar observations have been made elsewhere (although Nadeau does add some useful historical detail), and a short piece by a non-expert is always vulnerable to criticism. But that doesn't mean that Nadeau isn't on to something. The propensity of economics to start from abstraction is limiting, yet once certain constructs become codified via textbooks, they become part of the discipline's world view.
For instance, around the time of the release of the IPCC report and the Stern report (which endeavored to assess the economic cost of climate change), there was considerable discussion of how to properly characterize the costs and risks of inaction, and the failure of market-based approaches (Brad De Long had a fine post). There have been some debates within the profession about the neoclassical orthodoxy and heterodox economics (see here and here for examples).
Now if you want to read a fair-minded yet in some ways devastating critique, and a well-written, entertaining and informative one at that, you must go immediately to Deirdre McCloskey's essay, The Secret Sins of Economics. McCloskey is a real economist, a Professor of Economics, History, English, and Communication. Some academics I know regard the article as fundamental, yet also note it hasn't gotten the traction they think it deserves (is it because McCloskey is not only cross-disciplinary, but transgendered to boot?).
From Scientific American:
The 19th-century creators of neoclassical economics (the theory that now serves as the basis for coordinating activities in the global market system) are credited with transforming their field into a scientific discipline. But what is not widely known is that these now legendary economists (William Stanley Jevons, Léon Walras, Francis Edgeworth and Vilfredo Pareto) developed their theories by adapting equations from 19th-century physics that eventually became obsolete. Unfortunately, it is clear that neoclassical economics has also become outdated. The theory is based on unscientific assumptions that are hindering the implementation of viable economic solutions for global warming and other menacing environmental problems.

The physical theory that the creators of neoclassical economics used as a template was conceived in response to the inability of Newtonian physics to account for the phenomena of heat, light and electricity. In 1847 German physicist Hermann von Helmholtz formulated the conservation of energy principle and postulated the existence of a field of conserved energy that fills all space and unifies these phenomena. Later in the century James Maxwell, Ludwig Boltzmann and other physicists devised better explanations for electromagnetism and thermodynamics, but in the meantime, the economists had borrowed and altered Helmholtz's equations.
The strategy the economists used was as simple as it was absurd: they substituted economic variables for physical ones. Utility (a measure of economic well-being) took the place of energy; the sum of utility and expenditure replaced potential and kinetic energy. A number of well-known mathematicians and physicists told the economists that there was absolutely no basis for making these substitutions. But the economists ignored such criticisms and proceeded to claim that they had transformed their field of study into a rigorously mathematical scientific discipline.
Strangely enough, the origins of neoclassical economics in mid-19th century physics were forgotten. Subsequent generations of mainstream economists accepted the claim that this theory is scientific. These curious developments explain why the mathematical theories used by mainstream economists are predicated on the following unscientific assumptions:
The market system is a closed circular flow between production and consumption, with no inlets or outlets.
Natural resources exist in a domain that is separate and distinct from a closed market system, and the economic value of these resources can be determined only by the dynamics that operate within this system.
The costs of damage to the external natural environment by economic activities must be treated as costs that lie outside the closed market system or as costs that cannot be included in the pricing mechanisms that operate within the system.
The external resources of nature are largely inexhaustible, and those that are not can be replaced by other resources or by technologies that minimize the use of the exhaustible resources or that rely on other resources.
There are no biophysical limits to the growth of market systems.
Posted by Yves Smith at 3:59 AM
Topics: The dismal science
The link to McCloskey's article does not work. Also I think Deidra ends with an "e".
Been there
It served the same purpose that Christianity did for classical imperialism--to give some sort of philosophical or moral justification for cheap raw materials (oil, metals, agricultural goods, etc.) and expensive manufactured goods.
But just as classical imperialism became philosophically indefensible and faded from the globe in the latter 18th and early 19th centuries, so now neo-imperialism has become indefensible and is seeing its philosophical underpinnings crumble away.
Malthus, the Club of Rome, et al., will be proven right about their predictions in the near future as the 'low hanging fruit' of cheaply extracted oil diminishes while no attempt is made to curb the population growth of the earth. There is no substitute for the ~84 million barrels of crude that are now being consumed daily by the world.
River
Debunking Economics by Keen
More Heat Than Light by Mirowski
Foundations of Economics by Varoufakis
Anything by Hyman Minsky
The entire Post-Autistic Economics movement (horrible name) that grew out of French graduate students' dissatisfaction with the curricula they were being force fed.
The Cambridge Capital Controversy, in which the neoclassicals essentially admitted they lost but nevertheless decided to continue their program.
The list goes on and on.
If a system exists that can substantiate the viability of proposed economic solutions to a problem that has not itself been substantiated to any degree of predictability, then why the hell don't we apply it to all economics?
Economics is all about predictability, right? Then just how do we apply economics to a 'problem' that nobody has yet been able to accurately predict? Never mind that 'viable' is a value judgement.
It's silly.
The link to the article does work, I just checked it. On a Mac, it downloads the article, which opens in Adobe Acrobat Reader. Did you not see the download? You need to click the article name itself. McCloskey's name is a separate link to her bio.
And I did correct the name spelling (eek). Another example of my problems with proofreading.
http://www.amazon.com/Cult-Statistical-Significance-Economics-Cognition/dp/0472050079/ref=sr_1_1?ie=UTF8&s=books&qid=1211746986&sr=8-1
Fixing link and nit. And sorry re the nit. The essay itself said she was U of C, and she defends libertarianism, so I misinterpreted her webpage. But her cross-disciplinary post is pretty unusual, and you tend not to find those at big-name schools unless they are a named chair (i.e., endowed by someone), so I should have thought twice.
Some academics are morons, or uninformed, or both. Is this news?
FWIW I agree with her point 100%; I encounter these two "sins" in my work on a daily basis and it annoys me to no end. The second of her closely coupled sins, concerning the (mis-?)use of statistical significance, will, I believe, eventually be regarded as one piece of a great intellectual fraud concerning the practice and theory of statistics. I agree with her insinuation, too, that it is the real damage to people through its harm to medical research that will finally be its undoing.
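The "sin" of statistical significance is easy to demonstrate numerically. The sketch below is my own illustration, not from McCloskey's essay: a plain two-sample z-test shows how an economically negligible effect (1% of a standard deviation) becomes "highly significant" once the sample is large enough, which is exactly why significance alone says nothing about whether an effect matters.

```python
import math

def z_test_p(effect, sd, n):
    """Two-sided p-value of a two-sample z-test for a difference of
    means, with equal group sizes n and common standard deviation sd."""
    se = sd * math.sqrt(2.0 / n)          # standard error of the difference
    z = effect / se
    return math.erfc(abs(z) / math.sqrt(2.0))  # normal tail probability, both sides

tiny_effect = 0.01   # 1% of a standard deviation: practically negligible

# In a small study the same effect is nowhere near "significant" ...
print(z_test_p(tiny_effect, sd=1.0, n=100))        # p ≈ 0.94

# ... but with a huge sample it is "significant" at any conventional level
print(z_test_p(tiny_effect, sd=1.0, n=1_000_000))  # p < 1e-11
```

The effect size never changed; only the sample size did. Judging the result by the p-value alone, as McCloskey complains, mistakes precision of measurement for importance of the finding.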
BUT: her article is rather badly written. It rambles on and on as it aspires to a literary tone in lieu of clarity (tell me how long it takes for you to find out what the secret sins are!). Fair-minded or informed argumentation takes a back seat to cheap rhetorical tricks (e.g. the warning that economic insiders "simply can't grasp arguments that are plain to people not socialized in economics" - don't you, a presumed outsider, feel really special now?)
There are counterarguments and subtleties worthy of some respect. A fairer, and at the end of the day more convincing, argument either addresses these, with respect, or honestly ignores them; McCloskey throws up straw-man adversaries and arguments that are no more than caricatures playing into her rhetorical hands.
This is an argument of a type.
Her complaint is old and well known, to many economists but also in related fields where similar sins arise. There are strong social factors explaining why it is rejected, ignored, or acknowledged with lip service only - and that's a desperate shame. But if an "academic" thinks she's said something new and is being ignored for cross-disciplinary (or other idiosyncratic) reasons, said academic should do what academics are supposed to do, and understand/research the questions more deeply before making silly comments.
First, the speculation about her observations being ignored due to her various boundary-crossings was mine, not that of the academic in question (now that I know she teaches at a school most regard as second-tier, that is probably an even bigger reason). And in fairness, I said he regards her work as fundamental, not this essay in particular. I can check and find out what he meant, but I suspect he meant her book "The Rhetoric of Economics," which does have a chapter on statistical significance.
The academic in question is not a lightweight; he's co-authored papers with Nobel Prize winners. And he has reason to sympathize with McCloskey's view: he has written a fundamental attack on one of the popular mathematical approaches used in the social sciences. Everyone who has seen the paper is very uncomfortable with it (since it implies that much of what has been done needs to be rethought) and is unable to contest his argument. Needless to say, he hasn't been able to get the paper published. He's putting it in as a chapter in a book that will come out later this year, and when the book is out I will discuss it at greater length.
I'll grant her writing is self-indulgent. The bit about Cassandra at the end was too much, and after reading two-thirds I skipped forward to read what she thought the two big sins were. But in the humanities, some academics do aspire to a highbrow infotainment style when trying to reach a broader audience. Judging from a few pages of her "Rhetoric of Economics," she hews there to the more conventional style used for close readings of texts, which, if you don't routinely read that sort of thing, is tiring.
And it turns out she did teach at the University of Chicago for quite a few years before she got tenure at U of Illinois.
Rational Economic Man: A Philosophical Critique of Neo-Classical Economics. By Martin Hollis and Edward Nell. London: Cambridge University Press, 1975. Pp. 279. $14.95.
The cooperative effort of a philosopher and an economist, Rational Economic Man examines the philosophic foundations of neo-Classical theory. Modern economics is found wanting because its basic tenets are based on an "unsound" positivist theory of knowledge. Since behaviorist and probabilistic premises are also weak, the authors offer an alternative theory of truth in the rationalist tradition, culminating in a model of the Classical-Marxian type.
The opening chapters offer a summary of the positivist theory of knowledge. The heart of this theory is the distinction between analytic and synthetic propositions. Analytic statements derive their truth content from the conventions of language. "A triangle has three angles" is such an analytic statement. Positivism asserts that such statements are irrefutable by their very nature. Relationships of analytic statements (such as we have in mathematics and geometry) produce necessarily true relations. Synthetic judgments, instead, are refutable because they are about matters of fact. Since positivism makes successful predictions of facts the test of a theory, it naturally betrays a marked preference for synthetic-type statements.
This analytic-synthetic distinction determines the ground where positivists and anti-positivists battle each other. It is a dangerous ground, strewn with mines laid by the opposing schools and threatening to blow off at least one of the contestants. Positivists have strategically deployed powerful explosives containing the deadly "deductive problem": truths like those of logic, geometry, mathematics, and (as the authors imply) marginalist economics, being rooted in analytic statements, are devoid of factual content. They are purely formal and say nothing about the world of human experience. But the anti-positivists, too, have laid booby traps charged with the Humean "inductive problem": factual, synthetic statements are indeed about the world, but how do we know that what has happened in the past will recur in the future? When does an observed relationship become a scientific law? Toward the end of their book, Hollis and Nell claim to have found a way of defusing the deductive problem with a novel interpretation of a priori, deductive, analytic statements.
The largest part of the book is devoted to an examination of the fallacies and contradictions of neo-Classical economics which, they claim, follows positivist tenets. Since testing is the sole standard of truth according to positivism, the authors examine the process of testing in neo-Classical economics. They discover that true testing is dubious if not impossible. First of all, the data of experience must frequently be "adjusted" to make them consistent with the theoretical concepts. Second, the ceteris paribus clause converts all neo-Classical theorems into tautologies. Thus they conclude that economic theorems "predict not what will be observed to happen, but what would be observed, if the values of the variables were the true ones and if 'other things' were 'equal' " (p. 33). The various assumptions of economic theory (rationality, perfect knowledge, etc.) insulate it from reality so that a failure of prediction does not prove the theory incorrect, for the theory is a tortured tautology. Its alleged synthetic statements thus turn out to be analytic.
Economist Paul Krugman recently decried "zombie economics," policies advocated by "free-market fundamentalists [who] have been wrong about everything yet now dominate the political scene more thoroughly than ever." I share his chagrin, but suggest that the problem is that Krugman was wrong to also assert that "economics is not a morality play." In fact, I believe that defeating the zombie-like resilience of laissez faire capitalism will require directly refuting the moral belief in the inherent fairness of free market outcomes.
Consider a recent suggestion by Harvard economist Greg Mankiw, former Chairman of George W. Bush's Council of Economic Advisors, that tax policy should be based on a "Just Deserts Theory" under which "people should get what they deserve." This principle, a restatement of Equity Theory, proposed by psychologist John Adams in 1963 to explain how people evaluated distributional fairness, has long played a central role in tax debates, and is one that I, like many liberals, heartily endorse. Indeed, I think that widespread support for free markets is based more on belief in their inherent morality than on belief that they promote economic growth, potentially explaining the religious fervor of free-market fundamentalists defending their faith despite the considerable counter-evidence provided by recent events.
Mankiw concisely summarizes the theory underlying the ethical argument for market capitalism: "under a standard set of assumptions... the factors of production [i.e., workers] are paid the value of their marginal product... One might easily conclude that, under these idealized conditions, each person receives his just deserts." Mankiw's long-standing opposition to higher taxes on the wealthy suggests that he thinks these conditions usually pertain in the real world, too.
Consider me skeptical. The list of "standard assumptions" open to question is long, but two are particularly problematic (Northwestern economist Jonathan Weinstein has critiqued several others). First, how can we be sure that marginal productivity is the same as social contribution? A safe cracker in a criminal gang may indeed receive loot equal to his marginal productivity, but this doesn't mean that he is creating social wealth. Thus, financial industry profits accounted for over 40 percent of all corporate profits in 2004-5, but does anyone seriously contend that Wall Street created (rather than redistributed) 40 percent of wealth during that period?
The second problem is one that Mankiw himself acknowledges when he comments that the dramatic growth in income at the very top of the economic pyramid might be thought of as a lottery, with a few lucky winners reaping the lion's share of rewards. As economists Robert Frank and Philip Cook point out in their book The Winner Take All Society, technological change and ever-larger markets have caused small differences in ability, effort or luck to translate into large differences in income. Economic theory says that such "tournament rewards" create an incentive for individuals to exert maximal effort, consistent with just deserts as long as you don't mind that "losers" get much less despite trying nearly (or just) as hard. But theory also says that tournament rewards create an incentive for people to sabotage the efforts of others and to take on as much risk as possible. Given the role that excess risk played in Wall Street's meltdown, this is hardly a ringing endorsement for the fairness (or efficiency) of free market outcomes.
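Frank and Cook's mechanism can be made concrete with a toy calculation (every number below is invented purely for illustration, not taken from their book): pay proportional to ability keeps incomes nearly identical when abilities are nearly identical, while a rank-order tournament turns a 2% ability gap into a 45-fold income gap.

```python
# All numbers are hypothetical, chosen only to illustrate the mechanism.
abilities = [1.00, 0.99, 0.98]      # nearly identical talent
prize_pool = 300.0

# Pay proportional to ability: incomes differ by about 1% each.
total = sum(abilities)
proportional = [round(prize_pool * a / total, 2) for a in abilities]

# Tournament pay: rank decides almost everything (winner takes 90%).
shares = [0.90, 0.08, 0.02]
tournament = [prize_pool * s for s in shares]

print(proportional)   # [101.01, 100.0, 98.99]
print(tournament)     # [270.0, 24.0, 6.0]
```

Under the proportional rule, the best and worst performers differ by about 2%; under the tournament rule, the top prize is 45 times the bottom one even though the underlying abilities barely differ, which is the amplification the essay describes.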
So Mankiw's "easy" conclusion that markets deliver just deserts depends critically on his own moral intuition about what is just. Given humanity's well-known ability to convince ourselves that what is in our own self-interest is fair, it is hardly surprising that wealthy conservatives like Mankiw would believe that free market capitalism delivers fair outcomes. But it is noteworthy that in one real-world situation with tournament rewards -- lotteries -- society typically imposes taxes in excess of 50 percent, since winners pay regular income taxes on earnings already halved by the governmental sponsor's share of the pot.
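The effective lottery "tax" in that last sentence is simple compounding arithmetic; here is a minimal sketch (the 50% sponsor share and the 35% income-tax rate are assumed round figures for illustration, not actual lottery rules or tax law):

```python
# Illustrative arithmetic; the rates are hypothetical round numbers.
pot = 100.0
sponsor_share = 0.50        # the lottery sponsor keeps half the pot
income_tax = 0.35           # winner's assumed marginal income-tax rate

winnings = pot * (1 - sponsor_share)       # 50.0 paid out to the winner
after_tax = winnings * (1 - income_tax)    # 32.5 kept after income tax

effective_rate = 1 - after_tax / pot
print(effective_rate)       # about 0.675: an effective "tax" well above 50%
```

Because the sponsor's cut and the income tax compound, the winner keeps barely a third of the pot, which is the sense in which society already taxes this tournament reward at more than 50 percent.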
Moreover, a large body of laboratory research investigating moral intuitions regarding the division of a pool of money has demonstrated the powerful appeal of an equal split, a preference consistent with anthropological evidence that hunter-gatherer groups are remarkably and consistently egalitarian. While a handful of studies have demonstrated that preferences for equality in the laboratory are (slightly) reduced when subjects have to earn the money at stake, this involves experimenters (who provide the money in the first place) making it clear that they consider the earner to have made a commensurate contribution in the laboratory setting.
So, sure, people like just deserts when there is compelling evidence that they are indeed just. But the egalitarianism of hunter-gatherers, whose groups undoubtedly included considerable and obvious variation in individual abilities, suggests that the standard of proof for justifying inequality can be quite high.
I therefore think it likely that conservative icon Joe the Plumber favors lower taxes not simply because his own personal experience suggests that smarter and harder-working plumbers (granted, he isn't actually a plumber) tend to provide better services and to have proportionately higher incomes as a result, but also because authorities like Mankiw assert that a complicated mathematical theory says that this intuition is true throughout the economic system. To be sure, populist Joe might claim to disdain elite theory, but as Keynes once observed, "practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist."
Thus, Tea Party advocates sustain their belief in the market's fairness by blaming the government for bailing out Wall Street and interfering with the market's ethical magic, explaining why their initial targets were Republicans who supported the bailout (like Mankiw). Meanwhile, Democrats have completely failed to link higher taxes on the wealthy to populist anger at those who prospered while driving the economy into a ditch. To regain the initiative, I believe, progressives must directly challenge the claim that unfettered markets create just deserts. This won't be easy. Free market fundamentalists have the advantage of a simple message -- ending bailouts will deliver just deserts -- and of nearly limitless funds from rich folks who benefited from the bailout but are happy to claim that it should never happen again.
Let me therefore suggest one way to start: replace the estate tax with an inheritance tax. Republicans use the term "death" tax to imply that society is confiscating a lifetime of just deserts wealth. But if taxes are to be based on Mankiw's proposal that those "who contribute more to society deserve a higher income that reflects those greater contributions," then inheritors who have contributed nothing themselves should pay substantially higher rates (full disclosure: I am myself an inheritor).
I believe a debate about inheritance taxes will allow us to distinguish two arguments that appear similar but are critically different. The claim that people should get their just deserts is tricky to implement, but offers a valid moral principle to guide public debate. But the closely related argument that government should "keep its hands off my money" represents pure selfishness by people who refuse to acknowledge that public goods like education and defense are essential for the creation and protection of private wealth. Progressives have to make clear that the attempt to eliminate taxes on inheritors suggests that conservatives believe that all-you-can-eat socialism is fine for the rich as long as there is just-deserts capitalism for everyone else.
Criticisms of neoclassical economics - Wikipedia, the free encyclopedia
Rational Economic Man A Philosophical Critique of Neo-Classical Economics DeepDyve
Neoclassical Economics mad, bad, and dangerous to know – Steve Keen's Debtwatch
General equilibrium theory - Wikipedia, the free encyclopedia
An Austrian Critique of Neo-Classical Monopsony Theory Mises Wire
Knowledge AND Ignorance Critique of Neoclassical Economics
Critique of Neo-classical Economics - Mainstream Weekly
Veblen's Criticism Of Neo Classical Theory ~ ECONOMIC THEORIES
An Islamic Critique of Neoclassical Economics Asad Zaman - Academia.edu
Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.
This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...
Last modified: March 18, 2019