Thursday, May 12, 2016

A Dirty Little Secret About Capitalism

by Bryan J. Neva, Sr.

The Eastman Kodak Company, the iconic photography company started by George Eastman in 1880, has been in business for 136 years. In its heyday it held over 85% of the U.S. market for cameras and film, booked over $10 billion in annual sales, and employed over 145,000 people worldwide. But by 2012 it was in bankruptcy, and today Kodak is a shell of its former self. Despite inventing the digital camera in 1975, Kodak failed because it didn't transition to digital photography soon enough. By the time Kodak finally did get into the digital photography business, its competitors were decades ahead technologically.

A dirty little secret about capitalism is that, despite the best efforts of business managers, employees, and governments, all businesses will eventually fail! Like people, plants, and animals, nothing lasts forever. Kodak is just one of the millions and millions of businesses, small and large, that were born, lived, prospered, declined, and eventually died. This sobering fact probably keeps most business managers awake at night.

All of us know that our lifespans are finite; so why do business managers run their companies as if they were going to live forever?  One of the longest surviving corporations, the British East India Company (which ironically owned the ship that was the guest-of-honor at the Boston Tea Party in 1773), only lived for 274 years!

Most normal people want to live, to love, to learn, and to leave a lasting legacy for their loved ones, their employers, and their communities after they pass on. Since we all know we're not going to live forever, most of us do our best to raise healthy, well-adjusted children who can perhaps do a little better than we did and make the world a little better place to live in. Since we all know we're not going to work forever, most of us try to be good workers so that the companies we work for are a little better after we leave. And finally, most of us try to be good, law-abiding citizens so that our communities are a little better to live in after we're gone.

Companies should be more like normal people who know they're not going to live forever: they should strive to be more paternalistic and to leave a lasting legacy with their employees, their families, and their communities after they're gone, and in doing so, just maybe, make our world a little bit better place to live in.

American Capitalism’s Great Crisis by Rana Foroohar TIME May 12, 2016

The Curious Capitalist

How Wall Street is choking our economy and how to fix it

This represents more than just millennials not minding the label “socialist” or disaffected middle-aged Americans tiring of an anemic recovery. This is a majority of citizens being uncomfortable with the country’s economic foundation—a system that over hundreds of years turned a fledgling society of farmers and prospectors into the most prosperous nation in human history. To be sure, polls measure feelings, not hard market data. But public sentiment reflects day-to-day economic reality. And the data (more on that later) shows Americans have plenty of concrete reasons to question their system.

This crisis of faith has had no more severe expression than the 2016 presidential campaign, which has turned on the questions of who, exactly, the system is working for and against, as well as why, eight years and several trillion dollars of stimulus on from the financial crisis, the economy is still growing so slowly. All the candidates have prescriptions: Sanders talks of breaking up big banks; Trump says hedge funders should pay higher taxes; Clinton wants to strengthen existing financial regulation. In Congress, Republican House Speaker Paul Ryan remains committed to less regulation.
All of them are missing the point. America’s economic problems go far beyond rich bankers, too-big-to-fail financial institutions, hedge-fund billionaires, offshore tax avoidance or any particular outrage of the moment. In fact, each of these is symptomatic of a more nefarious condition that threatens, in equal measure, the very well-off and the very poor, the red and the blue. The U.S. system of market capitalism itself is broken. That problem, and what to do about it, is at the center of my book Makers and Takers: The Rise of Finance and the Fall of American Business, a three-year research and reporting effort from which this piece is adapted.
To understand how we got here, you have to understand the relationship between capital markets—meaning the financial system—and businesses. From the creation of a unified national bond and banking system in the U.S. in the late 1790s to the early 1970s, finance took individual and corporate savings and funneled them into productive enterprises, creating new jobs, new wealth and, ultimately, economic growth. Of course, there were plenty of blips along the way (most memorably the speculation leading up to the Great Depression, which was later curbed by regulation). But for the most part, finance—which today includes everything from banks and hedge funds to mutual funds, insurance firms, trading houses and such—essentially served business. It was a vital organ but not, for the most part, the central one.

Over the past few decades, finance has turned away from this traditional role. Academic research shows that only a fraction of all the money washing around the financial markets these days actually makes it to Main Street businesses. “The intermediation of household savings for productive investment in the business sector—the textbook description of the financial sector—constitutes only a minor share of the business of banking today,” according to academics Oscar Jorda, Alan Taylor and Moritz Schularick, who’ve studied the issue in detail. By their estimates and others, around 15% of capital coming from financial institutions today is used to fund business investments, whereas it would have been the majority of what banks did earlier in the 20th century.

“The trend varies slightly country by country, but the broad direction is clear,” says Adair Turner, a former British banking regulator and now chairman of the Institute for New Economic Thinking, a think tank backed by George Soros, among others. “Across all advanced economies, and the United States and the U.K. in particular, the role of the capital markets and the banking sector in funding new investment is decreasing.” Most of the money in the system is being used for lending against existing assets such as housing, stocks and bonds.

To get a sense of the size of this shift, consider that the financial sector now represents around 7% of the U.S. economy, up from about 4% in 1980. Despite currently taking around 25% of all corporate profits, it creates a mere 4% of all jobs. Trouble is, research by numerous academics as well as institutions like the Bank for International Settlements and the International Monetary Fund shows that when finance gets that big, it starts to suck the economic air out of the room. In fact, finance starts having this adverse effect when it’s only half the size that it currently is in the U.S. Thanks to these changes, our economy is gradually becoming “a zero-sum game between financial wealth holders and the rest of America,” says former Goldman Sachs banker Wallace Turbeville, who runs a multiyear project on the rise of finance at the New York City-based nonprofit Demos.

It’s not just an American problem, either. Most of the world’s leading market economies are grappling with aspects of the same disease. Globally, free-market capitalism is coming under fire, as countries across Europe question its merits and emerging markets like Brazil, China and Singapore run their own forms of state-directed capitalism. An ideologically broad range of financiers and elite business managers—Warren Buffett, BlackRock’s Larry Fink, Vanguard’s John Bogle, McKinsey’s Dominic Barton, Allianz’s Mohamed El-Erian and others—have started to speak out publicly about the need for a new and more inclusive type of capitalism, one that also helps businesses make better long-term decisions rather than focusing only on the next quarter. The Pope has become a vocal critic of modern market capitalism, lambasting the “idolatry of money and the dictatorship of an impersonal economy” in which “man is reduced to one of his needs alone: consumption.”

During my 23 years in business and economic journalism, I’ve long wondered why our market system doesn’t serve companies, workers and consumers better than it does. For some time now, finance has been thought by most to be at the very top of the economic hierarchy, the most aspirational part of an advanced service economy that graduated from agriculture and manufacturing. But research shows just how the unintended consequences of this misguided belief have endangered the very system America has prided itself on exporting around the world.

America’s economic illness has a name: financialization. It’s an academic term for the trend by which Wall Street and its methods have come to reign supreme in America, permeating not just the financial industry but also much of American business. It includes everything from the growth in size and scope of finance and financial activity in the economy; to the rise of debt-fueled speculation over productive lending; to the ascendancy of shareholder value as the sole model for corporate governance; to the proliferation of risky, selfish thinking in both the private and public sectors; to the increasing political power of financiers and the CEOs they enrich; to the way in which a “markets know best” ideology remains the status quo. Financialization is a big, unfriendly word with broad, disconcerting implications.

University of Michigan professor Gerald Davis, one of the pre-eminent scholars of the trend, likens financialization to a “Copernican revolution” in which business has reoriented its orbit around the financial sector. This revolution is often blamed on bankers. But it was facilitated by shifts in public policy, from both sides of the aisle, and crafted by the government leaders, policymakers and regulators entrusted with keeping markets operating smoothly. Greta Krippner, another University of Michigan scholar, who has written one of the most comprehensive books on financialization, believes this was the case when financialization began its fastest growth, in the decades from the late 1970s onward. According to Krippner, that shift encompasses Reagan-era deregulation, the unleashing of Wall Street and the rise of the so-called ownership society that promoted owning property and further tied individual health care and retirement to the stock market.

The changes were driven by the fact that in the 1970s, the growth that America had enjoyed following World War II began to slow. Rather than make tough decisions about how to bolster it (which would inevitably mean choosing among various interest groups), politicians decided to pass that responsibility to the financial markets. Little by little, the Depression-era regulation that had served America so well was rolled back, and finance grew to become the dominant force that it is today. The shifts were bipartisan, and to be fair they often seemed like good ideas at the time; but they also came with unintended consequences. The Carter-era deregulation of interest rates—something that was, in an echo of today’s overlapping left- and right-wing populism, supported by an assortment of odd political bedfellows from Ralph Nader to Walter Wriston, then head of Citibank—opened the door to a spate of financial “innovations” and a shift in bank function from lending to trading. Reaganomics famously led to a number of other economic policies that favored Wall Street. Clinton-era deregulation, which seemed a path out of the economic doldrums of the late 1980s, continued the trend. Loose monetary policy from the Alan Greenspan era onward created an environment in which easy money papered over underlying problems in the economy, so much so that it is now chronically dependent on near-zero interest rates to keep from falling back into recession.

This sickness, not so much the product of venal interests as of a complex and long-term web of changes in government and private industry, now manifests itself in myriad ways: a housing market that is bifurcated and dependent on government life support, a retirement system that has left millions insecure in their old age, a tax code that favors debt over equity. Debt is the lifeblood of finance; with the rise of the securities-and-trading portion of the industry came a rise in debt of all kinds, public and private. That’s bad news, since a wide range of academic research shows that rising debt and credit levels stoke financial instability. And yet, as finance has captured a greater and greater piece of the national pie, it has, perversely, all but ensured that debt is indispensable to maintaining any growth at all in an advanced economy like the U.S., where 70% of output is consumer spending. Debt-fueled finance has become a saccharine substitute for the real thing, an addiction that just gets worse. (The amount of credit offered to American consumers has doubled in real dollars since the 1980s, as have the fees they pay to their banks.)
As the economist Raghuram Rajan, one of the most prescient seers of the 2008 financial crisis, argues, credit has become a palliative to address the deeper anxieties of downward mobility in the middle class. In his words, “let them eat credit” could well summarize the mantra of the go-go years before the economic meltdown. And things have only deteriorated since, with global debt levels $57 trillion higher than they were in 2007.
The rise of finance has also distorted local economies. It’s the reason rents are rising in some communities where unemployment is still high. America’s housing market now favors cash buyers, since banks are still more interested in making profits by trading than by the traditional role of lending out our savings to people and businesses looking to make long-term investments (like buying a house), ensuring that younger people can’t get on the housing ladder. One perverse result: Blackstone, a private-equity firm, is currently the largest single-family-home landlord in America, since it had the money to buy up properties cheaply in bulk following the financial crisis. It’s at the heart of retirement insecurity, since fees from actively managed mutual funds “are likely to confiscate as much as 65% or more of the wealth that … investors could otherwise easily earn,” as Vanguard founder Bogle testified to Congress in 2014.
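Bogle's striking figure follows from simple compounding arithmetic: a modest annual fee, deducted every year, claims an ever-larger share of terminal wealth. A minimal sketch, assuming a 7% gross annual return, a 2% all-in annual fee, and a 50-year horizon (illustrative numbers, not figures from the article):

```python
# Back-of-the-envelope sketch of how a small annual fee compounds.
# All inputs below are illustrative assumptions.

def final_wealth(principal, annual_return, years):
    """Future value of a lump sum compounded annually."""
    return principal * (1 + annual_return) ** years

gross = final_wealth(10_000, 0.07, 50)  # what the market delivers
net = final_wealth(10_000, 0.05, 50)    # what survives a 2% annual fee
confiscated = 1 - net / gross           # share of potential wealth lost

print(f"Gross wealth: ${gross:,.0f}")
print(f"Net of fees:  ${net:,.0f}")
print(f"Lost to fees: {confiscated:.0%}")
```

Under these assumptions roughly 61% of the potential wealth goes to fees; stretch the horizon toward a full investing lifetime of 60 years and the share climbs past 65%, which is the order of magnitude Bogle cited.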

It’s even the reason companies in industries from autos to airlines are trying to move into the business of finance themselves. American companies across every sector today earn five times the revenue from financial activities—investing, hedging, tax optimizing and offering financial services, for example—that they did before 1980. Traditional hedging by energy and transport firms, for example, has been overtaken by profit-boosting speculation in oil futures, a shift that actually undermines their core business by creating more price volatility. Big tech companies have begun underwriting corporate bonds the way Goldman Sachs does. And top M.B.A. programs would likely encourage them to do just that; finance has become the center of all business education.

Washington, too, is so deeply tied to the ambassadors of the capital markets—six of the 10 biggest individual political donors this year are hedge-fund barons—that even well-meaning politicians and regulators don’t see how deep the problems are. When I asked one former high-level Obama Administration Treasury official back in 2013 why more stakeholders aside from bankers hadn’t been consulted about crafting the particulars of Dodd-Frank financial reform (93% of consultation on the Volcker Rule, for example, was taken with the financial industry itself), he said, “Who else should we have talked to?” The answer—to anybody not profoundly influenced by the way finance thinks—might have been the people banks are supposed to lend to, or the scholars who study the capital markets, or the civic leaders in communities decimated by the financial crisis.

Of course, there are other elements to the story of America’s slow-growth economy, including familiar trends from globalization to technology-related job destruction. These are clearly massive challenges in their own right. But the single biggest unexplored reason for long-term slower growth is that the financial system has stopped serving the real economy and now serves mainly itself. A lack of real fiscal action on the part of politicians forced the Fed to pump $4.5 trillion in monetary stimulus into the economy after 2008. This shows just how broken the model is, since the central bank’s best efforts have resulted in record stock prices (which enrich mainly the wealthiest 10% of the population that owns more than 80% of all stocks) but also a lackluster 2% economy with almost no income growth.
Now, as many top economists and investors predict an era of much lower asset-price returns over the next 30 years, America’s ability to offer up even the appearance of growth—via financially oriented strategies like low interest rates, more and more consumer credit, tax-deferred debt financing for businesses, and asset bubbles that make people feel richer than we really are, until they burst—is at an end.

This pinch is particularly evident in the tumult many American businesses face. Lending to small business has fallen particularly sharply, as has the number of startup firms. In the early 1980s, new companies made up half of all U.S. businesses. For all the talk of Silicon Valley startups, the number of new firms as a share of all businesses has actually shrunk. From 1978 to 2012 it declined by 44%, a trend that numerous researchers and even many investors and businesspeople link to the financial industry’s change in focus from lending to speculation. The wane in entrepreneurship means less economic vibrancy, given that new businesses are the nation’s foremost source of job creation and GDP growth. Buffett summed it up in his folksy way: “You’ve now got a body of people who’ve decided they’d rather go to the casino than the restaurant” of capitalism.

In lobbying for short-term share-boosting management, finance is also largely responsible for the drastic cutback in research-and-development outlays in corporate America, investments that are seed corn for future prosperity. Take share buybacks, in which a company—usually with some fanfare—goes to the stock market to purchase its own shares, usually at the top of the market, and often as a way of artificially bolstering share prices in order to enrich investors and executives paid largely in stock options. Indeed, if you were to chart the rise in money spent on share buybacks and the fall in corporate spending on productive investments like R&D, the two lines make a perfect X. The former has been going up since the 1980s, with S&P 500 firms now spending $1 trillion a year on buybacks and dividends—equal to about 95% of their net earnings—rather than investing that money back into research, product development or anything that could contribute to long-term company growth. No sector has been immune, not even the ones we think of as the most innovative. Many tech firms, for example, spend far more on share-price boosting than on R&D as a whole. The markets penalize them when they don’t. One case in point: back in March 2006, Microsoft announced major new technology investments, and its stock fell for two months. But in July of that same year, it embarked on $20 billion worth of stock buying, and the share price promptly rose by 7%. This kind of twisted incentive for CEOs and corporate officers has only grown since.
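The mechanics behind the buyback incentive described above are simple: repurchasing shares shrinks the share count, so earnings per share rise even when underlying profit is flat. A toy illustration with entirely hypothetical figures:

```python
# Toy illustration of buyback arithmetic: profit is unchanged, yet
# retiring shares mechanically raises earnings per share (EPS).
# All figures below are hypothetical.

profit = 1_000_000_000           # $1B net income, flat year over year
shares_before = 500_000_000
buyback_fraction = 0.10          # company repurchases 10% of its shares
shares_after = int(shares_before * (1 - buyback_fraction))

eps_before = profit / shares_before
eps_after = profit / shares_after

print(f"EPS before buyback: ${eps_before:.2f}")
print(f"EPS after buyback:  ${eps_after:.2f} "
      f"(+{eps_after / eps_before - 1:.0%} with zero profit growth)")
```

An executive paid in stock options captures that EPS bump without the company selling a single additional product, which is the "twisted incentive" the paragraph describes.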

As a result, business dynamism, which is at the root of economic growth, has suffered. The number of new initial public offerings (IPOs) is about a third of what it was 20 years ago. True, the dollar value of IPOs in 2014 was $74.4 billion, up from $47.1 billion in 1996. (The median IPO rose to $96 million from $30 million during the same period.) This may show investors want to make only the surest of bets, which is not necessarily the sign of a vibrant market. But there’s another, more disturbing reason: firms simply don’t want to go public, lest their work become dominated by playing by Wall Street’s rules rather than creating real value.

An IPO—a mechanism that once meant raising capital to fund new investment—is likely today to mark not the beginning of a new company’s greatness, but the end of it. According to a Stanford University study, innovation tails off by 40% at tech companies after they go public, often because of Wall Street pressure to keep jacking up the stock price, even if it means curbing the entrepreneurial verve that made the company hot in the first place.

A flat stock price can spell doom. It can get CEOs canned and turn companies into acquisition fodder, which often saps once innovative firms. Little wonder, then, that business optimism, as well as business creation, is lower than it was 30 years ago, or that wages are flat and inequality growing. Executives who receive as much as 82% of their compensation in stock naturally make shorter-term business decisions that might undermine growth in their companies even as they raise the value of their own options.

It’s no accident that corporate stock buybacks, corporate pay and the wealth gap have risen concurrently over the past four decades. There are any number of studies that illustrate this type of intersection between financialization and inequality. One of the most striking was by economists James Galbraith and Travis Hale, who showed how during the late 1990s, changing income inequality tracked the go-go Nasdaq stock index to a remarkable degree.

Recently, this pattern has become evident at a number of well-known U.S. companies. Take Apple, one of the most successful over the past 50 years. Apple has around $200 billion sitting in the bank, yet it has borrowed billions of dollars cheaply over the past several years, thanks to superlow interest rates (themselves a response to the financial crisis) to pay back investors in order to bolster its share price. Why borrow? In part because it’s cheaper than repatriating cash and paying U.S. taxes. All the financial engineering helped boost the California firm’s share price for a while. But it didn’t stop activist investor Carl Icahn, who had manically advocated for borrowing and buybacks, from dumping the stock the minute revenue growth took a turn for the worse in late April.
It is perhaps the ultimate irony that large, rich companies like Apple are most involved with financial markets at times when they don’t need any financing. Top-tier U.S. businesses have never enjoyed greater financial resources. They have a record $2 trillion in cash on their balance sheets—enough money combined to make them the 10th largest economy in the world. Yet in the bizarre order that finance has created, they are also taking on record amounts of debt to buy back their own stock, creating what may be the next debt bubble to burst.

You and I, whether we recognize it or not, are also part of a dysfunctional ecosystem that fuels short-term thinking in business. The people who manage our retirement money—fund managers working for asset-management firms—are typically compensated for delivering returns over a year or less. That means they use their financial clout (which is really our financial clout in aggregate) to push companies to produce quick-hit results rather than execute long-term strategies. Sometimes pension funds even invest with the activists who are buying up the companies we might work for—and those same activists look for quick cost cuts and potentially demand layoffs.
It’s a depressing state of affairs, no doubt. Yet America faces an opportunity right now: a rare second chance to do the work of refocusing and right-sizing the financial sector that should have been done in the years immediately following the 2008 crisis. And there are bright spots on the horizon.
Despite the lobbying power of the financial industry and the vested interests both in Washington and on Wall Street, there’s a growing push to put the financial system back in its rightful place, as a servant of business rather than its master. Surveys show that the majority of Americans would like to see the tax system reformed and the government take more direct action on job creation and poverty reduction, and address inequality in a meaningful way. Each candidate is crafting a message around this, which will keep the issue front and center through November.
The American public understands just how deeply and profoundly the economic order isn’t working for the majority of people. The key to reforming the U.S. system is comprehending why it isn’t working.
Remooring finance in the real economy isn’t as simple as splitting up the biggest banks (although that would be a good start). It’s about dismantling the hold of financial-oriented thinking in every corner of corporate America. It’s about reforming business education, which is still permeated with academics who resist challenges to the gospel of efficient markets in the same way that medieval clergy dismissed scientific evidence that might challenge the existence of God. It’s about changing a tax system that treats one-year investment gains the same as longer-term ones, and induces financial institutions to push overconsumption and speculation rather than healthy lending to small businesses and job creators. It’s about rethinking retirement, crafting smarter housing policy and restraining a money culture filled with lobbyists who violate America’s essential economic principles.
It’s also about starting a bigger conversation about all this, with a broader group of stakeholders. The structure of American capital markets and whether or not they are serving business is a topic that has traditionally been the sole domain of “experts”—the financiers and policymakers who often have a self-interested perspective to push, and who do so in complicated language that keeps outsiders out of the debate. When it comes to finance, as with so many issues in a democratic society, complexity breeds exclusion.
Finding solutions won’t be easy. There are no silver bullets, and nobody really knows the perfect model for a high-functioning, advanced market system in the 21st century. But capitalism’s legacy is too long, and the well-being of too many people is at stake, to do nothing in the face of our broken status quo. Neatly packaged technocratic tweaks cannot fix it. What is required now is lifesaving intervention.
Crises of faith like the one American capitalism is currently suffering can be a good thing if they lead to re-examination and reaffirmation of first principles. The right question here is in fact the simplest one: Are financial institutions doing things that provide a clear, measurable benefit to the real economy? Sadly, the answer at the moment is mostly no. But we can change things. Our system of market capitalism wasn’t handed down, in perfect form, on stone tablets. We wrote the rules. We broke them. And we can fix them.
Foroohar is an assistant managing editor at TIME and the magazine’s economics columnist. She’s the author of Makers and Takers: The Rise of Finance and the Fall of American Business.

Friday, April 29, 2016

Reexamining our Economic Theories by Allen Laudenslager and Bryan Neva

These are very strange economic times we're living in, with one of the slowest recoveries ever seen. More people are out of work, able to find jobs only at a fraction of their previous salaries, or working below the skill level of their previous jobs or training. The official unemployment rate of 5% doesn't account for the millions of discouraged workers who've left the labor force, or the millions of people who are underemployed.

Professional economists were asleep at the switch when the economic meltdown of 2008 occurred. Even Alan Greenspan, one of the most esteemed economists, didn't see the economic crisis coming. Following their best training and the collective wisdom of their profession, a lot of well-trained, very smart people made decisions that seemed quite rational at the time. Since those decisions led directly to the current economic crisis, we really need to understand what happened and why, so we can try to prevent this kind of economic crisis in the future.

Each of these experts, despite tens of thousands of hours of academic and on-the-job training in economics, made the same fundamental mistake: they all believed that unregulated, free-market capitalism would behave rationally. It was a fundamental misunderstanding of how things work in the real world, and too much reliance on theoretical models that didn't account for all the factors, the biggest being that humans don't always behave rationally, wisely, or altruistically. In fact, the history of the world teaches otherwise: humans in general are irrational, greedy, self-centered, and foolish. It's the exception, not the rule, that they'll behave otherwise, which is why we need to regulate capitalism.

It would make more sense if only a few of these experts had made this mistake. But that's not what happened: far too many of these experts, suffering from groupthink, came to the same erroneous conclusions. So which seems more likely: that economists all over the world independently made the same mistake, or that their academic training was flawed? We believe the latter to be the case. The 2008 economic meltdown was an example of a flawed plan that was brilliantly executed, and the result was a total disaster.

The famous American psychologist Abraham Maslow once said, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” This concept is known as the Golden Hammer Rule, and it's simply an over-reliance on a familiar tool. Medical doctors, for example, epitomize the Golden Hammer Rule with all their various specialties, each seeing problems through the lens of its own training. Economists likewise have only one Golden Hammer in their tool bag with which to make sense of the economic challenges we face.

For anyone to buy anything, they must have money. To have money, they must have a job that pays enough to buy things. So any economic theory that does not hold as its keystone the availability of jobs and the income level of those jobs is fundamentally flawed. Yes, existing economic measures and theories do include jobs, but only peripherally, not as the central measure of economic health. This may seem simplistic to someone trained in “classic” economic theory, but a layperson, in their simplicity and innocence, recognizes good-paying jobs as the key to a healthy economy.

The definition of insanity is doing the same thing the same way and expecting different results!  We've gotten where we are by following our current economic theories.  The only way to reverse our current economic morass is by reexamining our economic theories and charting a course to a more prosperous future.

Monday, April 25, 2016

Why Economists Failed to Predict the 2008 Financial Crisis


This article was first published in May 2009 by the Wharton School of Business and can be found at this link.  But I think it's still relevant today and worth your time to read.  I've highlighted some of the salient passages.
_________*_________
There is a long list of professions that failed to see the financial crisis brewing. Wall Street bankers and deal-makers top it, but banking regulators are on it as well, along with the Federal Reserve. Politicians and journalists have shared the blame, as have mortgage lenders and even real estate agents.
But what about economists? Of all the experts, weren’t they the best equipped to see around the corners and warn of impending disaster?
Indeed, a sense that they missed the call has led to soul searching among many economists. While some did warn that home prices were forming a bubble, others confess to a widespread failure to foresee the damage the bubble would cause when it burst. Some economists are harsher, arguing that a free-market bias in the profession, coupled with outmoded and simplistic analytical tools, blinded many of their colleagues to the danger.
“It’s not just that they missed it, they positively denied that it would happen,” says Wharton finance professor Franklin Allen, arguing that many economists used mathematical models that failed to account for the critical roles that banks and other financial institutions play in the economy. “Even a lot of the central banks in the world use these models,” Allen said. “That’s a large part of the issue. They simply didn’t believe the banks were important.”
Over the past 30 years or so, economics has been dominated by an “academic orthodoxy” which says economic cycles are driven by players in the “real economy” — producers and consumers of goods and services — while banks and other financial institutions have been assigned little importance, Allen says. “In many of the major economics departments, graduate students wouldn’t learn anything about banking in any of the courses.”
But it was the financial institutions that fomented the current crisis, by creating risky products, encouraging excessive borrowing among consumers and engaging in high-risk behavior themselves, like amassing huge positions in mortgage-backed securities, Allen says.
As computers have grown more powerful, academics have come to rely on mathematical models to figure how various economic forces will interact. But many of those models simply dispense with certain variables that stand in the way of clear conclusions, says Wharton management professor Sidney G. Winter. Commonly missing are hard-to-measure factors like human psychology and people’s expectations about the future, he notes.
Among the most damning examples of the blind spot this created, Winter says, was the failure by many economists and business people to acknowledge the common-sense fact that home prices could not continue rising faster than household incomes.
Says Winter: “The most remarkable fact is that serious people were willing to commit, both intellectually and financially, to the idea that housing prices would rise indefinitely, a really bizarre idea.”
Although many economists did spot the housing bubble, they failed to fully understand the implications, says Richard J. Herring, professor of international banking at Wharton. Among those were dangers building in the repo market, where securities backed by mortgages and other assets are used as collateral for loans. Because of the collateralization, these loans were thought to be safe, but the securities turned out to be riskier than borrowers and lenders had thought.
The Dahlem Report
In a highly critical paper titled, “The Financial Crisis and the Systemic Failure of Academic Economists,” eight American and European economists argue that academic economists were too disconnected from the real world to see the crisis forming. The authors are David Colander, Middlebury College; Hans Follmer, Humboldt University; Armin Haas, Potsdam Institute for Climate Impact Research; Michael Goldberg, University of New Hampshire; Katarina Juselius, University of Copenhagen; Alan Kirman, University d’Aix-Marseille; Thomas Lux, University of Kiel; and Brigitte Sloth, University of Southern Denmark.
“The economics profession appears to have been unaware of the long build-up to the current worldwide financial crisis and to have significantly underestimated its dimensions once it started to unfold,” they write. “In our view, this lack of understanding is due to a misallocation of research efforts in economics. We trace the deeper roots of this failure to the profession’s insistence on constructing models that, by design, disregard the key elements driving outcomes in real world markets.”
The paper, generally referred to as the Dahlem report, condemns a growing reliance over the past three decades on mathematical models that improperly assume markets and economies are inherently stable, and which disregard influences like differences in the way various economic players make decisions, revise their forecasting methods and are influenced by social factors. Standard analysis also failed, in part, because of the widespread use of new financial products that were poorly understood, and because economists did not firmly grasp the workings of the increasingly interconnected global financial system, the authors say.
One result of this, argues Winter, who is not one of the authors but agrees with much of what they say, is to build into models an assumption that all market participants — bankers, lenders, borrowers and consumers — behave rationally at all times, as if they were economists making the most financially favorable choices. Clearly, he says, rational behavior is not that dependable, or else people would not do self-destructive things like taking out mortgages they could not afford, a key factor in the financial crisis. Nor would completely rational executives at financial firms invest in securities backed by those risky mortgages, which they did.
By relying so heavily on the view of humans as rational, the paper’s authors argue, economists ignore evidence of irrational behavior that is well documented in other disciplines like psychology and sociology. Even if an individual does act rationally, economists are wrong to assume that large groups of people will react to given conditions as an individual would, because they often do not. “Economic modeling has to be compatible with insights from other branches of science on human behavior,” they write. “It is highly problematic to insist on a specific view of humans in economic settings that is irreconcilable with evidence.”
The authors say economists badly underestimated the risks of new types of derivatives, which are financial instruments whose value fluctuates, often to extremes, according to the changing values of underlying securities. Traditional derivatives such as stock options and commodities futures are well understood. But exotic derivatives devised in recent years, including securities built upon pools of mortgages, turned out to be poorly understood, the authors say. Credit default swaps, a form of derivative used to insure against a borrower’s failure to repay a loan, played a key role in the collapse of American International Group.
Rather than accurately analyzing the risks posed by new derivatives, many economists simply fell back on faith that creating new financial products is good, the authors write. According to this belief, which was promoted by former Federal Reserve chairman Alan Greenspan, a wider variety of financial products allows market participants to place ever more refined bets, so the markets as a whole better reflect the combined wisdom of all the players. But because there was not enough historical data to put into models used to price these new derivatives, risk and return assessments turned out to be wrong, the authors argue. These securities are now the “toxic assets” polluting the balance sheets of the nation’s largest banks.
“While the economic argument in favor of ever new derivatives is more one of persuasion rather than evidence, important negative effects have been neglected,” they write. “The idea that the system was made less risky with the development of more derivatives led to financial actors taking positions with extreme degrees of leverage, and the danger of this has not been emphasized enough.”
‘Control Illusion’
When certain price and risk models came into widespread use, they led many players to place the same kinds of bets, the authors continue. The market thus lost the benefit of having many participants, since there was no longer a variety of views offsetting one another. The same effect, the authors say, occurs if one player becomes dominant in one aspect of the market. The problem is exacerbated by the “control illusion,” an unjustified confidence based on the model’s apparent mathematical precision, the authors say. This problem is especially acute among people who use models they have not developed themselves, as they may be unaware of the models’ flaws, like reliance on uncertain assumptions.
Much of the financial crisis can be blamed on an overreliance on ratings agencies, which gave complex securities a seal of approval, says Wharton finance professor Marshall E. Blume. “The ratings agencies, of course, use models” which “grossly underestimated” risks.
“Any model is an abstraction of the world,” Blume adds. “The value of a model is to provide the essence of what is happening with a limited number of variables. If you think a variable is important, you include it, but you can’t have every variable in the world…. The models may not have had the right variables.”
The false security created by asset-pricing models led banks and hedge funds to use excessive leverage, borrowing money so they could make bigger bets, and laying the groundwork for bigger losses when bets went bad, according to the Dahlem report authors.
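The arithmetic behind this point is simple, and a toy sketch makes it concrete (the function name and numbers below are my own illustration, not from the Dahlem report): with a leverage ratio L, any move in asset prices is magnified L-fold on the firm's own equity.

```python
def equity_return(leverage: float, asset_return: float) -> float:
    """Return on equity when assets worth `leverage` times equity
    move by `asset_return`.  Borrowing costs are ignored here, so
    the equity return is just the asset return scaled by leverage."""
    return leverage * asset_return

# An unleveraged investor who loses 3% on assets loses 3% of equity;
# a firm levered 30:1 loses 90% of its equity on the very same move.
unlevered = equity_return(1, -0.03)
levered = equity_return(30, -0.03)
```

The same scaling works in reverse, of course, which is why leverage looked so attractive to banks and hedge funds while asset prices were still rising.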
At the time, few people knew that major financial institutions had become so heavily leveraged in real estate-related assets, says Wharton finance professor Jeremy J. Siegel. “Had they not been in that situation, we would not have had the crisis,” he says. “We may not even have had a recession…. Macro economists really hadn’t talked about it because these structured financial products were relatively new,” he adds, arguing that economists will have to scrutinize the balance sheets of major financial institutions more closely to detect mushrooming risks.
Lessons Not Learned
Prior to the latest crisis, there were two well-known occasions when exotic bets, leverage and inadequate modeling combined to create crises, the paper’s authors say, arguing that economists should therefore have known what could happen. The first case, the stock market crash of 1987, began with a small drop in prices which triggered an avalanche of sell orders in computerized trading programs, causing a further price decline that triggered more automatic sales.
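The feedback loop in the 1987 case can be sketched as a toy simulation (entirely my own illustration; the trigger and impact figures are arbitrary, not historical): an initial shock pushes prices past a threshold, programmed selling knocks them lower, and the deeper decline keeps the programs selling.

```python
def cascade(price: float, shock: float, trigger: float,
            sell_impact: float, rounds: int = 10) -> float:
    """Apply an initial price shock, then up to `rounds` waves of
    rule-based selling: whenever the cumulative decline exceeds
    `trigger`, programs sell, pushing the price down by `sell_impact`."""
    start = price
    price *= 1 - shock              # the initial drop
    for _ in range(rounds):
        decline = 1 - price / start
        if decline <= trigger:      # decline too small: no programs fire
            break
        price *= 1 - sell_impact    # automatic sell orders hit the market
    return price

# A 2% shock stays below a 5% trigger and is simply absorbed, while a
# 6% shock crosses it and each wave of selling keeps the decline above
# the trigger, compounding the fall.
small_dip = cascade(100.0, 0.02, 0.05, 0.03)
crash = cascade(100.0, 0.06, 0.05, 0.03)
```

The point of the sketch is the mechanism, not the numbers: once the selling rule and the price it triggers on are coupled, a small difference in the initial shock separates a routine dip from an avalanche.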
The second case was the 1998 collapse of the Long-Term Capital Management (LTCM) hedge fund. It had built up a huge position in government bonds from the U.S. and other countries, and was forced into a wave of selling after a Russian government bond default knocked bond prices down.
“When there’s a default in one kind of bond, it causes reassessment of all the risks,” says Wharton economics professor Richard Marston. “I don’t think we have really fully learned from the LTCM crisis, or from other crises, the extent to which things are illiquid.” These crises have shown that market participants can rely too heavily on the belief they can quickly unload securities that decline in price, he says. In fact, the downward spiral can be so rapid that it leaves investors with losses far larger than they had thought possible.
In the current crisis, he says, economists “should get blamed for the overall unwillingness to take into account liquidity risk. And I think it’s going to force us to reassess that.”
Academics also are beginning to reassess business-school curricula. Wharton management professor Stephen J. Kobrin recently moderated a faculty panel that talked about a wide range of possible responses to the crisis. Among the issues discussed, he says, was whether Wharton’s curriculum should include more on regulation and risk management, as well as executive education programs for regulators and other government officials.
Kobrin said he believes many academics share “an ideological fixation with free markets and lack of regulation” that should be reexamined. “Obviously, people missed the boat on a lot of the risks that a lot of financial instruments entailed,” he says. “We need to think about what changes are needed in the curriculum.”

Sunday, April 17, 2016

There is no such thing as a free market by Dr. Ha Joon Chang

Thing 1

There is no such thing as a free market

by Dr. Ha Joon Chang

What they tell you

Markets need to be free. When the government interferes to dictate what market participants can or cannot do, resources cannot flow to their most efficient use. If people cannot do the things that they find most profitable, they lose the incentive to invest and innovate. Thus, if the government puts a cap on house rents, landlords lose the incentive to maintain their properties or build new ones. Or, if the government restricts the kinds of financial products that can be sold, two contracting parties that may both have benefited from innovative transactions that fulfil their idiosyncratic needs cannot reap the potential gains of free contract. People must be left ‘free to choose’, as the title of free-market visionary Milton Friedman’s famous book goes.

What they don’t tell you

The free market doesn’t exist. Every market has some rules and boundaries that restrict freedom of choice. A market looks free only because we so unconditionally accept its underlying restrictions that we fail to see them. How ‘free’ a market is cannot be objectively defined. It is a political definition. The usual claim by free-market economists that they are trying to defend the market from politically motivated interference by the government is false. Government is always involved and those free-marketeers are as politically motivated as anyone. Overcoming the myth that there is such a thing as an objectively defined ‘free market’ is the first step towards understanding capitalism.

Labour ought to be free

In 1819 new legislation to regulate child labour, the Cotton Factories Regulation Act, was tabled in the British Parliament. The proposed regulation was incredibly ‘light touch’ by modern standards. It would ban the employment of young children – that is, those under the age of nine. Older children (aged between ten and sixteen) would still be allowed to work, but with their working hours restricted to twelve per day (yes, they were really going soft on those kids). The new rules applied only to cotton factories, which were recognized to be exceptionally hazardous to workers’ health.

The proposal caused huge controversy. Opponents saw it as undermining the sanctity of freedom of contract and thus destroying the very foundation of the free market. In debating this legislation, some members of the House of Lords objected to it on the grounds that ‘labour ought to be free’. Their argument said: the children want (and need) to work, and the factory owners want to employ them; what is the problem?

Today, even the most ardent free-market proponents in Britain or other rich countries would not think of bringing child labour back as part of the market liberalization package that they so want. However, until the late nineteenth or the early twentieth century, when the first serious child labour regulations were introduced in Europe and North America, many respectable people judged child labour regulation to be against the principles of the free market.

Thus seen, the ‘freedom’ of a market is, like beauty, in the eyes of the beholder. If you believe that the right of children not to have to work is more important than the right of factory owners to be able to hire whoever they find most profitable, you will not see a ban on child labour as an infringement on the freedom of the labour market. If you believe the opposite, you will see an ‘unfree’ market, shackled by a misguided government regulation.
We don’t have to go back two centuries to see regulations we take for granted (and accept as the ‘ambient noise’ within the free market) that were seriously challenged, when first introduced, as undermining the free market. When environmental regulations (e.g., regulations on car and factory emissions) appeared a few decades ago, they were opposed by many as serious infringements on our freedom to choose. Their opponents asked: if people want to drive in more polluting cars or if factories find more polluting production methods more profitable, why should the government prevent them from making such choices?

Today, most people accept these regulations as ‘natural’. They believe that actions that harm others, however unintentionally (such as pollution), need to be restricted. They also understand that it is sensible to make careful use of our energy resources, when many of them are non-renewable. They may believe that reducing human impact on climate change makes sense too. If the same market can be perceived to have varying degrees of freedom by different people, there is really no objective way to define how free that market is. In other words, the free market is an illusion. If some markets look free, it is only because we so totally accept the regulations that are propping them up that they become invisible.

Piano wires and kungfu masters

Like many people, as a child I was fascinated by all those gravity-defying kungfu masters in Hong Kong movies. Like many kids, I suspect, I was bitterly disappointed when I learned that those masters were actually hanging on piano wires. The free market is a bit like that. We accept the legitimacy of certain regulations so totally that we don’t see them. More carefully examined, markets are revealed to be propped up by rules – and many of them. To begin with, there is a huge range of restrictions on what can be traded; and not just bans on ‘obvious’ things such as narcotic drugs or human organs. Electoral votes, government jobs and legal decisions are not for sale, at least openly, in modern economies, although they were in most countries in the past. University places may not usually be sold, although in some nations money can buy them – either through (illegally) paying the selectors or (legally) donating money to the university. Many countries ban trading in firearms or alcohol. Usually medicines have to be explicitly licensed by the government, upon the proof of their safety, before they can be marketed. All these regulations are potentially controversial – just as the ban on selling human beings (the slave trade) was one and a half centuries ago. There are also restrictions on who can participate in markets. Child labour regulation now bans the entry of children into the labour market. Licences are required for professions that have significant impacts on human life, such as medical doctors or lawyers (which may sometimes be issued by professional associations rather than by the government). Many countries allow only companies with more than a certain amount of capital to set up banks. Even the stock market, whose under-regulation has been a cause of the 2008 global recession, has regulations on who can trade. You can’t just turn up in the New York Stock Exchange (NYSE) with a bag of shares and sell them. 
Companies must fulfil listing requirements, meeting stringent auditing standards over a certain number of years, before they can offer their shares for trading. Trading of shares is only conducted by licensed brokers and traders. Conditions of trade are specified too. One of the things that surprised me when I first moved to Britain in the mid 1980s was that one could demand a full refund for a product one didn’t like, even if it wasn’t faulty. At the time, you just couldn’t do that in Korea, except in the most exclusive department stores. In Britain, the consumer’s right to change her mind was considered more important than the right of the seller to avoid the cost involved in returning unwanted (yet functional) products to the manufacturer. There are many other rules regulating various aspects of the exchange process: product liability, failure in delivery, loan default, and so on. In many countries, there are also necessary permissions for the location of sales outlets – such as restrictions on street-vending or zoning laws that ban commercial activities in residential areas. Then there are price regulations. I am not talking here just about those highly visible phenomena such as rent controls or minimum wages that free-market economists love to hate. Wages in rich countries are determined more by immigration control than anything else, including any minimum wage legislation. How is the immigration maximum determined? Not by the ‘free’ labour market, which, if left alone, will end up replacing 80–90 per cent of native workers with cheaper, and often more productive, immigrants. Immigration is largely settled by politics. So, if you have any residual doubt about the massive role that the government plays in the economy’s free market, then pause to reflect that all our wages are, at root, politically determined (see Thing 3). 
Following the 2008 financial crisis, the prices of loans (if you can get one or if you already have a variable rate loan) have become a lot lower in many countries thanks to the continuous slashing of interest rates. Was that because suddenly people didn’t want loans and the banks needed to lower their prices to shift them? No, it was the result of political decisions to boost demand by cutting interest rates. Even in normal times, interest rates are set in most countries by the central bank, which means that political considerations creep in. In other words, interest rates are also determined by politics. If wages and interest rates are (to a significant extent) politically determined, then all the other prices are politically determined, as they affect all other prices.

Is free trade fair?

We see a regulation when we don’t endorse the moral values behind it. The nineteenth-century high-tariff restriction on free trade by the US federal government outraged slave-owners, who at the same time saw nothing wrong with trading people in a free market. To those who believed that people can be owned, banning trade in slaves was objectionable in the same way as restricting trade in manufactured goods. Korean shopkeepers of the 1980s would probably have thought the requirement for ‘unconditional return’ to be an unfairly burdensome government regulation restricting market freedom.

This clash of values also lies behind the contemporary debate on free trade vs. fair trade. Many Americans believe that China is engaged in international trade that may be free but is not fair. In their view, by paying workers unacceptably low wages and making them work in inhumane conditions, China competes unfairly. The Chinese, in turn, can riposte that it is unacceptable that rich countries, while advocating free trade, try to impose artificial barriers to China’s exports by attempting to restrict the import of ‘sweatshop’ products. They find it unjust to be prevented from exploiting the only resource they have in greatest abundance – cheap labour.

Of course, the difficulty here is that there is no objective way to define ‘unacceptably low wages’ or ‘inhumane working conditions’. With the huge international gaps that exist in the level of economic development and living standards, it is natural that what is a starvation wage in the US is a handsome wage in China (the average being 10 per cent that of the US) and a fortune in India (the average being 2 per cent that of the US). Indeed, most fair-trade-minded Americans would not have bought things made by their own grandfathers, who worked extremely long hours under inhumane conditions. Until the beginning of the twentieth century, the average work week in the US was around sixty hours. At the time (in 1905, to be more precise), it was a country in which the Supreme Court declared unconstitutional a New York state law limiting the working days of bakers to ten hours, on the grounds that it ‘deprived the baker of the liberty of working as long as he wished’. Thus seen, the debate about fair trade is essentially about moral values and political decisions, and not economics in the usual sense. Even though it is about an economic issue, it is not something economists with their technical tool kits are particularly well equipped to rule on.

All this does not mean that we need to take a relativist position and fail to criticize anyone because anything goes. We can (and I do) have a view on the acceptability of prevailing labour standards in China (or any other country, for that matter) and try to do something about it, without believing that those who have a different view are wrong in some absolute sense. Even though China cannot afford American wages or Swedish working conditions, it certainly can improve the wages and the working conditions of its workers. Indeed, many Chinese don’t accept the prevailing conditions and demand tougher regulations. But economic theory (at least free-market economics) cannot tell us what the ‘right’ wages and working conditions should be in China.

I don’t think we are in France any more

In July 2008, with the country’s financial system in meltdown, the US government poured $200 billion into Fannie Mae and Freddie Mac, the mortgage lenders, and nationalized them. On witnessing this, the Republican Senator Jim Bunning of Kentucky famously denounced the action as something that could only happen in a ‘socialist’ country like France. France was bad enough, but on 19 September 2008, Senator Bunning’s beloved country was turned into the Evil Empire itself by his own party leader. According to the plan announced that day by President George W. Bush and subsequently named TARP (Troubled Asset Relief Program), the US  government was to use at least $700 billion of taxpayers’ money to buy up the ‘toxic assets’ choking up the financial system. President Bush, however, did not see things quite that way. He argued that, rather than being ‘socialist’, the plan was simply a continuation of the American system of free enterprise, which ‘rests on the conviction that the federal government should interfere in the market place only when necessary’. Only that, in his view, nationalizing a huge chunk of the financial sector was just one of those necessary things. Mr Bush’s statement is, of course, an ultimate example of political double-speak – one of the biggest state interventions in human history is dressed up as another workaday market process. However, through these words Mr Bush exposed the flimsy foundation on which the myth of the free market stands. As the statement so clearly reveals, what is a necessary state intervention consistent with free-market capitalism is really a matter of opinion. There is no scientifically defined boundary for free market. If there is nothing sacred about any particular market boundaries that happen to exist, an attempt to change them is as legitimate as the attempt to defend them. Indeed, the history of capitalism has been a constant struggle over the boundaries of the market. 
A lot of the things that are outside the market today have been removed by political decision, rather than the market process itself – human beings, government jobs, electoral votes, legal decisions, university places or uncertified medicines. There are still attempts to buy at least some of these things illegally (bribing government officials, judges or voters) or legally (using expensive lawyers to win a lawsuit, donations to political parties, etc.), but, even though there have been movements in both directions, the trend has been towards less marketization. For goods that are still traded, more regulations have been introduced over time. Compared even to a few decades ago, now we have much more stringent regulations on who can produce what (e.g., certificates for organic or fair-trade producers), how they can be produced (e.g., restrictions on pollution or carbon emissions), and how they can be sold (e.g., rules on product labelling and on refunds).

Furthermore, reflecting its political nature, the process of re-drawing the boundaries of the market has sometimes been marked by violent conflicts. The Americans fought a civil war over free trade in slaves (although free trade in goods – or the tariffs issue – was also an important issue). The British government fought the Opium War against China to realize a free trade in opium. Regulations on the free market in child labour were implemented only because of the struggles by social reformers, as I discussed earlier. Making free markets in government jobs or votes illegal has been met with stiff resistance by political parties who bought votes and dished out government jobs to reward loyalists. These practices came to an end only through a combination of political activism, electoral reforms and changes in the rules regarding government hiring.
Recognizing that the boundaries of the market are ambiguous and cannot be determined in an objective way lets us realize that economics is not a science like physics or chemistry, but a political exercise. Free-market economists may want you to believe that the correct boundaries of the market can be scientifically determined, but this is incorrect. If the boundaries of what you are studying cannot be scientifically determined, what you are doing is not a science.

Thus seen, opposing a new regulation is saying that the status quo, however unjust from some people’s point of view, should not be changed. Saying that an existing regulation should be abolished is saying that the domain of the market should be expanded, which means that those who have money should be given more power in that area, as the market is run on a one-dollar-one-vote principle. So, when free-market economists say that a certain regulation should not be introduced because it would restrict the ‘freedom’ of a certain market, they are merely expressing a political opinion that they reject the rights that are to be defended by the proposed law. Their ideological cloak is to pretend that their politics is not really political, but rather is an objective economic truth, while other people’s politics is political. However, they are as politically motivated as their opponents. Breaking away from the illusion of market objectivity is the first step towards understanding capitalism.
