At the start of this blog, I predicted that the financial crash would cause the government to spend huge money bailing out the people who caused it, adding a great deal to the national debt and sparking a crisis of confidence in the country’s ability to repay it, which would lead to a policy of inflation so that all those dollars we have to pay back will be worth much less.
That was in early September 2008, and the thing has played out just about that way. As of this writing there’s a nonsensical debate in Congress about whether we should raise the nation’s “debt ceiling,” with Treasury Secretary Timothy Geithner dourly warning that default would be the direct result of failure to do so, and ugly for us humans. Coupla weeks back, Standard & Poor’s even downgraded its “outlook” on U.S. government debt, triggering much tongue-clucking, at least. (Yet essentially no mention of the irony. These guys are down-rating our debt? Aren’t they the ones who told us–as they were paid to–that subprime mortgage debt was, always and everywhere, AAA? Aren’t they the ones who, whenever states tried to curb the criminality at the core of the subprime business model, threatened not to rate mortgages made in those states, thus thwarting those attempted curbs? Why, yes. Yes they are!)
The continued credibility afforded to outfits like Standard & Poor’s, the Federal Reserve, Geithner, and, say, CNBC is depressing (to say nothing of Fox News, the Asshole Class’ private ministry of propaganda). With no end to it in sight, I hereby offer my final post in this series, the better to try to preserve what remains of my own sanity.
I did get some things wrong. I thought Chrysler would be in bankruptcy for years, and that Fed Chairman Ben Bernanke might be in trouble during his reconfirmation hearings. And I figured Fannie and Freddie would cost us about $200 billion. The amount is looking more like $375 billion, though the losses ($135 billion so far) could be somewhat less than that. And I didn’t say this, though I should have: Fannie and Freddie are, at least in terms of their purported public purpose, going away, and that portends badness.
The New York Times floated that trial balloon in early March:
The 30-year fixed-rate mortgage loan, the steady favorite of American borrowers since the 1950s, could become a luxury product, housing experts on both sides of the political aisle say.
Then again, a lot of things people took for granted in the 1950s are now “luxury products.”
As expensive as their forays into reckless speculation have been, the end of Fannie and Freddie’s public functions will be painful for people planning on what used to be thought of as a decent life in this nation. Figure on paying at least two more percentage points on your mortgage interest (in a couple of years, about 9.5 percent instead of the 7.5 percent you should be expecting otherwise), and a slow but steady whittling away of the mortgage-interest tax deduction. I can only hope it will be a wake-up call for folks. That awakening is decades overdue, since the model of society on which the 30-year-fixed mortgage was predicated has long since been quietly abolished.
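The cost of those two extra percentage points is easy to check with the standard fixed-rate amortization formula. A minimal sketch (the $200,000 principal is an illustrative assumption, not a figure from this post):

```python
def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-rate mortgage amortization formula."""
    r = annual_rate / 12           # monthly interest rate
    n = years * 12                 # total number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

p = 200_000
low = monthly_payment(p, 0.075)    # ~ $1,398/mo at 7.5%
high = monthly_payment(p, 0.095)   # ~ $1,682/mo at 9.5%
print(f"7.5%: ${low:,.0f}/mo  9.5%: ${high:,.0f}/mo  extra: ${high - low:,.0f}/mo")
```

Over 360 payments, that monthly difference of roughly $283 comes to more than $100,000 in extra interest on the same house.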
The original idea about home ownership—that home owners are a stabilizing influence on a community—proceeded from the notion that jobs and careers were stable and would continue to be. In 1955 the typical American working man could expect to spend 25 or 30 years at the same company, if not the same job, at the end of which he would receive a guaranteed pension. His job alone often paid enough to afford a mortgage, and the pension would allow him to maintain his family’s home in retirement, as it would often be fully paid-off by then. Passed to the children free and clear, the house was the cornerstone of middle-class wealth creation. In that world, owning a home made perfect sense and renting made little financial sense for most. The trick was getting the cost of financing low enough to tip a few more folks into the “owner” bucket.
Today the home-ownership-as-a-good mantra continues even though the foundation for it–steady, secure employment at homeowner wages–has long since disappeared. By 1983, the typical 50-year-old working man had a job tenure of 11 years. By 2006 it had been reduced to eight. Pensions have been effectively abolished, except for government workers (and it looks like they’re about to lose them too). This change did not go unnoticed by policymakers, though most chose to see it as “economic dynamism”—a good (or at least inevitable) thing–and few made the connection between it and home ownership, or the public subsidies undergirding it.
This shift in the nature of the job market, this destruction of job security and much of the rest of the social contract, is the major American story of the past 30 years. Aspects of it animate every facet of modern life, from what age we marry (or whether we marry) to where we live to whether we get to see a doctor when we’re sick to how much tax we pay to whether the roads get paved and the underground pipes that supply our water are maintained with those taxes (or aren’t and, instead, get handed to corporate interests). It affects whether we have children, whether they have serviceable schools–basically everything that, for a century, made the United States a desirable and accommodating place for decent people to live and work.
And as these things have changed, mainly for the worse, policymakers should have adjusted policies to reverse those trends. Instead they have accelerated them, all the while citing the major economic indicators to claim that things are actually getting better.
How could that happen?
Simply put, every measure of economic output and progress, every yardstick of general well-being, from the “poverty line” to Gross Domestic Product, has been rendered inaccurate.
All of these things were first calibrated in a world where more work meant better pay, career advancement was (usually) predicated on honesty and competence, and banks existed in order to aid factory owners and mining companies to produce useful things. That none of these things is true today—self-evident, I think, and I’ve gone into detail about it already–has thrown off all the models the big shots use to manage the economy. No one talks about this, but it’s a key cause of the disconnect between a Ben Bernanke, nattering on in his “historic press conference” about how inflation is not yet a threat, and any random Regular Person, who knows that she has not gotten a pay raise in four years and is now paying double for gas, and a lot of other things too.
“Core inflation” is not, exactly, a bogus number. But it helps to realize that when a guy like Bernanke says “inflation,” he mostly means “your pay.” Meanwhile, removing “volatile energy and food prices” from the measurement presupposes that those things always were and always will be volatile and always quickly revert to the mean.
So you may ask, well, haven’t oil prices always been spiky?
As this chart shows, the price of both crude oil and gasoline rose slowly and steadily from mid-century through the early 1970s. The ’73 and ’78 oil embargoes caused major disruptions, but the real day-to-day price swings came into vogue only in the 1990s and 2000s.
Over the past decade or so, the story of oil-price volatility quietly became the story of all commodity-price volatility, as the same forces that caused the housing bubble have deregulated the commodities markets and herded ever larger piles of dumb money into ever more esoteric and proprietary “commodities index funds,” so-called, in order to extract steady fees and—evidence suggests—manipulate the markets for even better profits.
A piece in last July’s Harper’s (updated for the current issue of Foreign Policy) wandered these thickets for a bit and drew the same conclusion (and prompted this carefully worded non-denial denial from Goldman Sachs), as did Rolling Stone’s Matt Taibbi in his book, Griftopia. But even if commodities markets are not being manipulated by the few big players (‘cause, as we know, those guys would never do anything underhanded or dishonest, and to suggest they would is just ignorant conspiracy theory), the fact remains that these markets are very different from those extant when the economic measures we use today—and the theories underlying them—were developed.
Consider the poverty threshold. Devised in 1964 as a quick and dirty yardstick for determining who was really poor, the poverty line starts with the cheapest of four food plans developed by the Department of Agriculture in 1955 (and never mind that nobody eats like that anymore, if they ever did), and then multiplies it by three to arrive at the figure below which one is considered poor. Because it calculated poverty using after-tax income but was (and still is) applied using before-tax income, the number began life as a lowball estimate. Food prices then declined radically as a share of a typical family’s budget, while housing and other costs continued to rise. Yet the formula, which by the early 1980s wildly understated poverty, has never been substantially adjusted.
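The mechanics of that 1964 yardstick fit in a few lines, and so does the reason it understates poverty. The dollar figure and the one-eighth food share below are illustrative assumptions, not the official series:

```python
def threshold(food_cost, food_share_of_budget):
    # Generalized Orshansky logic: if food is some fixed share of a
    # family's budget, total need = food cost / that share.
    return food_cost / food_share_of_budget

food = 2_500                 # illustrative annual cost of the cheapest food plan
old = threshold(food, 1/3)   # 1955 assumption (food = 1/3 of budget) -> food x 3 = $7,500
new = threshold(food, 1/8)   # if food is now ~1/8 of a typical budget -> $20,000
```

Multiplying by three only made sense while food really was a third of family spending; as food’s budget share shrank and housing’s grew, the same food cost implied a much higher level of total need than the frozen formula reports.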
The poverty threshold did increase, of course. But it was adjusted not according to food costs but according to general inflation, as measured by the Consumer Price Index. And the CPI has statistical quirks of its own that tend to make it understate inflation.
In January 1983 the housing-price component of the CPI was replaced with “owners’ equivalent rent,” because rents are more stable. Because house prices rose and fell far more than rents during the housing bubbles and crashes, housing’s effects on inflation and deflation are not reflected in the CPI. General inflation, meanwhile, has since about 2000 been calculated using the PCE—the “personal consumption expenditures” index—a measure confected specifically to yield a lower inflation reading than the CPI would give. We know what its purpose is because it is actually called the “deflator.”
The idea behind PCE is stunningly simple: As the prices of things we actually want increase, we substitute for them other, cheaper things we don’t want. In this way the “cost of living” is maintained more nearly constant. That this is also the very definition of a falling standard of living is both obvious and completely unacknowledged, as the reporting, each quarter, of a falling standard of living would be politically impractical. The PCE is the largest part of the Commerce Department’s report on the Gross Domestic Product—or GDP.
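The substitution effect shows up clearly in a toy two-good example. A fixed-basket (CPI-style, Laspeyres) index prices last year’s basket at today’s prices; a chained index of the general Fisher type the PCE uses lets the basket shift toward whatever got cheaper. All prices and quantities here are invented for illustration:

```python
from math import sqrt

# period 0: steak $10, chicken $5; household buys 10 of each
p0 = {"steak": 10, "chicken": 5}
q0 = {"steak": 10, "chicken": 10}

# period 1: steak jumps to $14; household substitutes toward chicken
p1 = {"steak": 14, "chicken": 5}
q1 = {"steak": 5, "chicken": 15}

def cost(p, q):
    return sum(p[g] * q[g] for g in p)

laspeyres = cost(p1, q0) / cost(p0, q0)   # old basket at new prices: 190/150
paasche   = cost(p1, q1) / cost(p0, q1)   # new basket at both prices: 145/125
fisher    = sqrt(laspeyres * paasche)     # chained, PCE-style compromise

print(f"fixed basket: {laspeyres - 1:.1%} inflation  chained: {fisher - 1:.1%}")
```

The chained index reports materially lower inflation (about 21 percent here versus 27 percent) precisely because the household stopped buying what it actually wanted.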
The GDP is supposed to count the value of all goods, services, and structures produced by everyone in the United States each quarter. Until 1991 the country measured its national output via GNP—Gross National Product—which is similar, but with an important difference. GNP counts everything produced by Americans and American-based companies, no matter where. GDP counts everything produced in the country, no matter by whom. So if Toyota opens a car manufacturing plant in Kentucky, that counts toward GDP but not GNP. As the Wiki on this helpfully explains, “this would make the use of GDP more attractive for politicians in countries with increasing national debt and decreasing assets.”
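The location-versus-nationality distinction reduces to which column you group by. A toy sketch, with invented firms and dollar values:

```python
# (producer nationality, production location, value in $bn) -- invented figures
output = [
    ("US",      "US",      100),  # e.g. a US-owned plant in Michigan
    ("foreign", "US",       20),  # e.g. a Toyota plant in Kentucky
    ("US",      "foreign",  30),  # e.g. a US-owned plant overseas
]

gdp = sum(v for nat, loc, v in output if loc == "US")   # group by location: 120
gnp = sum(v for nat, loc, v in output if nat == "US")   # group by nationality: 130
```

In this sketch GDP comes out higher than it would under a nationality-based count whenever foreign-owned domestic production exceeds American-owned production abroad, which is the politician-flattering property the Wiki passage describes.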
There is more to it than that, of course. Both GNP and GDP count as production every service rendered, so long as it was (or is expected to be) paid for. That means, for GDP’s purposes, it does not matter if you’re building an electric car that gets 200 miles on a three-hour charge, or a tank that drinks 6 gallons of diesel per mile, leaves a uranium slime trail 8 feet wide, and can’t go off road without exploding in flames. GDP doesn’t distinguish between an oncologist who earned $3.5 million last year saving the lives of 40 cancer patients and a Goldman commodity trader who raked off the same amount gambling on food prices with 40 other people’s money. The $3.5 million counts the same in the GDP figures.
The proliferation of Goldman commodity trader types in the U.S. economy has been much remarked upon, yet its implications have hardly begun to be analyzed, so dominant is the idea that these banker types provide valuable services. The employment of such folks, many of whom have big brains, in such a wastrel field of endeavor has exacted tremendous opportunity costs on the economy as a whole, also as yet unanalyzed. While conveying formerly unheard-of riches to an ever narrower slice of society since the 1980s, the fields of investment banking and technology have cost tens of millions of ordinary people their jobs while making everyone who is employed—including those in the technology and finance fields themselves—feel less secure.
Which brings us to the final key statistic that is no longer comparable with numbers published a few decades ago. In 1994 the Bureau of Labor Statistics revised and renumbered its several unemployment measures, while keeping in place the factors that have, more and more, skewed those numbers downward.
Here’s the Wiki on that. Notice that U-2 used to be “job losers” and became “job losers and those who completed temporary jobs.” The rise of such temporary jobs—by definition unstable—is hidden right there in plain sight. U-3 used to capture data for unemployed people 25 and over—basically what we used to think of as “adults.” U-3 is now labeled “total unemployment” and has become the Bureau’s headline, or “official” number. The old U-4 was total unemployed looking for full time work. The new U-4 is total unemployed plus discouraged workers.
Discouraged workers are people who don’t count, officially, because they didn’t look for a job during the past month.
U-5 used to be the Bureau’s main number, same as U-3 today. Today the U-5 number is the U-4 number—unemployed and discouraged, remember—plus “marginally attached workers.”
These folks are basically even more discouraged than discouraged workers. They say they’d work if someone knocked on their cardboard box and offered them a job.
The U-6 measure is somewhat improved. It used to consist of everyone looking for full-time work plus half of all those seeking part-time work, and half of everyone “employed part time for economic reasons,” as if any employment at all were not usually for “economic reasons.” The new U-6 is basically everyone counted by the new U-5 measure plus the part-timers who’d rather be working full-time. It’s the only measure of under-employment we have.
Some economists look at the late-model U-6 and see something useful, even an over-count, as it tallies up just about everyone who is not working as much as they want. The U-6 number—currently just under 16 percent—is thus comparable to the unemployment counts made in the depths of the Great Depression, some suggest.
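The gap between the headline number and U-6 is just arithmetic on the BLS categories. The counts below (in millions) are rough approximations of early-2011 levels, used only to illustrate the formulas:

```python
unemployed          = 13.7   # searched for work in the past four weeks
labor_force         = 153.0  # employed plus unemployed
marginally_attached = 2.4    # want work but haven't recently searched
part_time_econ      = 8.6    # part-time but want full-time work

# U-3: the "official" headline rate
u3 = unemployed / labor_force

# U-6: adds the marginally attached (to both numerator and denominator)
# and the involuntary part-timers
u6 = (unemployed + marginally_attached + part_time_econ) / (labor_force + marginally_attached)

print(f"U-3: {u3:.1%}  U-6: {u6:.1%}")
```

With these rough inputs U-3 lands near 9 percent and U-6 just under 16 percent, consistent with the figures discussed above.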
But it is no such thing. Today’s U-6 actually still undercounts unemployment, because it leaves out everyone in the military service or in prison.
In 1936, the United States had a standing army numbering 140,000 men. This represented about 0.25 percent of the available labor force of 53 million. Our prison population was also far lower than it is today.
In 2011 the United States has about 1.4 million people serving in the military (not counting contractors) and a labor force of 154 million, meaning nearly 1 percent of our potential workers are serving in the military. In simplest terms, the size of the labor force has tripled—but the size of the military has increased 10-fold.
The story is much the same with the prison population, which a pair of economists estimated had lowered the stated unemployment rate by 0.17 percent between 1985 and 1999. About 1.7 percent of the country’s potential labor force is in prison today. Summed up, active-duty military and prison populations sideline some 4 million able-bodied people who are not factored into the unemployment picture.
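As a rough upper-bound thought experiment—assuming, implausibly but in the spirit of the Depression-era comparison, that every one of those sidelined people would otherwise be counted as jobless—adding the 4 million back moves the headline rate noticeably. The unemployed and labor-force counts are approximate early-2011 figures:

```python
unemployed  = 13.7e6   # approximate early-2011 count
labor_force = 153e6
sidelined   = 4e6      # active-duty military plus prison population

official = unemployed / labor_force
adjusted = (unemployed + sidelined) / (labor_force + sidelined)

print(f"official: {official:.1%}  with sidelined counted as jobless: {adjusted:.1%}")
```

Under that deliberately extreme assumption the rate jumps from about 9 percent to about 11 percent; the honest point is narrower, namely that 4 million people simply vanish from the denominator.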
The official unemployment rate also does not count people collecting disability, a cohort that has expanded tremendously during the past decade.
The implications of all these statistical glitches have been overlooked by pretty much everyone in the business and economic press. Instead there is constant chatter about this or that “economic indicator” (Is GDP growth, revised down to a mere 1.8 percent for the first 90 days of 2011, a harbinger of a second recessionary dip?) and a pointless yet endless argument over whether the “stimulus worked.”
Lately there has been much debate between those purporting to tout the ideas of John Maynard Keynes, the early 20th century polymath and father of the “stimulus” policy, and the camp followers of Friedrich Hayek, the Austrian economist whose Markets Uber Alles theory is beloved by Libertarians and pseudo-Libertarians alike. All agree that we’ve gotten a dose of Keynes-lite (“Haste Great, More Shilling”), so the argument is about whether we’ve had too much or not enough of this medicine. As an illustrative distraction, here’s why crazy laissez-faire theories (and they were crazy from their inception, no doubt) are ascendant: better beats; better production values; cleverer presentation.
The Keynes vs. Hayek meme is cute and simple, and there is some truth to it, as far as it goes. But like the French generals who planned the Maginot Line, both camps overlook new conditions: they mainly take at face value the current economic indicators and the models from which those indicators arise.
Indeed, nearly every measure economists use today makes the same fundamental error, so we get unemployment readings that, at their bedrock, assume a factory economy marked by high unionization and temporary layoffs instead of what we really have, which is a deindustrialized system marked by low job security, zero pensions, high levels of informal (or “shadow economy”) off-the-books work, and unprecedented levels of permanent workforce dropout.
Here’s The Times’ Catherine Rampell’s chart showing the changing ratio of earned income and government transfer payments as a percentage of GDP:
Or consider this data point, marked in 2006, when corporate profits had broken records (they’re higher now, believe it): At 51 percent of GDP, U.S. wages were already lower than at almost any time since 1966:
There are many others. The details don’t matter so much as the theme, which is that every official way of looking at economic development, growth, and general well-being is distorted by the unstated, grossly erroneous assumption that the bottom-line numbers we see today mean roughly the same things they meant in our grandfather’s time—and so, then, do the remedies.
In the 1930s, and up through the 1970s, the nation’s banking system was closely linked to its industrial output, the way a transmission works with an engine. The so-called “real economy” depended on commercial banks for its working capital, services like stock sales, mergers, and the like, and the banks in turn had no function independent of serving those actually productive industries.
But the stagflation of the ’70s, new rules that allowed partnerships to turn themselves into stock corporations, the rise of Reaganite deregulatory fervor, and the unleashing of derivatives and bond traders decoupled the banks from the real economy.
As productive industries withered (masked partly by our invisible statistical sophistry and partly by the survivors’ use of outsourcing and fraudulent accounting, à la Sunbeam Corp.’s infamous “channel-stuffing” fiasco of the late 1990s), the banks quietly withered too, becoming hollow dens of compulsive gamblers utterly lacking the skills their fathers and grandfathers had.
If in the past bailing out the banks trickled down to “Main Street” through renewed lending (particularly when accompanied by massive “make-work” public works projects and then ambitiously ramped-up arms production), today it only recapitalizes the busted gamblers. It does nothing for the moribund “real economy” and, worse, it rewards, again, the sociopaths who caused this crash—and who have hollowed out the nation’s productive capacity since Jimmy Carter’s presidency.
The decision by George Bush and Henry Paulson and Ben Bernanke and Barack Obama to bail out the banks thus failed on every count—even as it is touted by liberal, Keynesian folks like Paul Krugman as a mild success.
What should have been done?
A policy directly punishing the culprits was never discussed, because the framework in which the crisis was seen was so distorted by both the interests of the presidents’ top economic advisers (all high rollers in, and/or owners of, the casino, all of whom insist there is no casino) and the general analytical malaise that has infected institutions such as the Federal Reserve, BLS, and others. But even absent better economic yardsticks, such a policy would not have been difficult to devise. The elements of it have been discussed over the years, both at the reformist fringes and even by such mainstream commentators as Krugman, who has long advocated higher margin requirements on stock traders, for example.
The first element of such a policy is taxation. To put it plainly, we must tax the assholes for everyone’s own good. So instead of a marginal top rate on capital gains (so called) of 20 percent, the rate on annual gains above, say, $1 million ought to be closer to 70 percent. Sacrilege, I know, but remember: “capital gains” used to be about the sale of a business built up over generations. Lately it’s much more likely to be winnings in a currency bet or an interest rate swap. Write a rule exempting real businesses, farms, and the like from the top tax, but find and employ the most vicious, bloodthirsty litigators to run down and rip the spines out of the pink-faced little charlatans who try to get around the tax on carry trades and commodity speculation.
The details necessary to accomplish this task are many, their complexity on par with the derivative instruments they would curb. There are many who could devise simpler ways of doing it, and they should. But the principle of the thing is clear: We need to tax the antiproductive amassment of gambling winnings because such stores of wealth have proven anathema to democracy and a functioning economy.
But the people who make all their money this way are in charge—why would they do this?
They won’t, of course. The rest of us have to force them. And that starts with understanding the problem. I’ve tried to lay it out, over the past two and a half years or so, in these posts. A few of you have written encouraging things, and I thank you. Some of you have suggested I’m naïve or worse, that my shitty outlook derives from having my head up my keister. And I thank you too.
To the rest, thanks for reading. I hope I’ve occasionally written something that surprised you, or caused you to hit a link to someone whose work is smarter and better than mine.