PPACA goes to SCOTUS: Health reform appears to be in danger.

Posted by on Mar 28, 2012

Filed Under (Health Care)

The argument over the fate of the Patient Protection and Affordable Care Act, a.k.a. “Obamacare,” is taking place before the United States Supreme Court this week.  Three questions are being considered.  The first is a technical question about whether the challenge to the law can be heard now, or whether it must wait until someone actually pays the penalty the law imposes.  Perhaps this is an interesting legal matter, but there isn’t much economics in it.  The second question is whether the individual mandate, which requires most Americans to buy health insurance or face a penalty, is a constitutional exercise of Congress’s power to regulate interstate commerce.  This is the key question: if the justices decide that Congress overstepped its powers in passing the law, the part of the law that produces nearly universal health insurance coverage could be struck down.  The third question is how much of the rest of the law would fall along with the individual mandate, should it be struck down.

As I said, the second question is the key, and SCOTUS heard arguments on this question yesterday.  By all accounts, the administration bumbled it.  Here’s an excerpt from the New York Times:

But several of the more conservative justices seemed unpersuaded that a ruling to uphold the law could be a limited one. Justice Alito said the market for burial services had features similar to the one for health care. Chief Justice Roberts asked why the government could not require people to buy cellphones to use to call emergency service providers.

Justice Scalia discussed the universal need to eat.

“Everybody has to buy food sooner or later, so you define the market as food,” he said. “Therefore, everybody is in the market. Therefore, you can make people buy broccoli.”

Justice Alito asked Mr. Verrilli to “express your limiting principle as succinctly as you possibly can.”

So, the justices wanted to know if they allow the individual mandate to stand, what won’t Congress be able to regulate.  The administration’s response:

Instead of a brisk summary of why a ruling upholding the law would not have intolerably broad consequences, Mr. Verrilli gave a convoluted answer. First of all, he said, Congress has the authority to enact a comprehensive response to a national economic crisis, and the mandate should be sustained as part of that response.

He added: “Congress can regulate the method of payment by imposing an insurance requirement in advance of the time in which the service is consumed when the class to which that requirement applies either is or virtually most certain to be in that market when the timing of one’s entry into that market and what you will need when you enter that market is uncertain and when you will get the care in that market, whether you can afford to pay for it or not and shift costs to other market participants.”

Huh?

Here’s what they should have said.  It is true that people need to buy burial services and food, but these markets differ from health insurance in that there is no threat of “adverse selection.”  Adverse selection has the potential to make it impossible for individuals to purchase insurance at reasonable rates (i.e., rates at which the sick pay less than their expected cost of care and the healthy pay more) unless the market is regulated.

Take the market for broccoli or burial services.  As the justices point out, it is true that everybody needs food or burial, eventually.  However, my ability to consume food or be buried does not depend in any crucial way on what others do.  In economic terms, we say that there is no “market failure” here.

Now, take the market for health care services.  Suppose there are two kinds of people.  Healthy people have expected annual health care costs of $1000, while sick people have expected annual health care costs of $11,000.  If there are equal numbers of healthy and sick people, then the average cost of caring for all people is (1000 + 11000)/2 = $6000.  Next, assume that individuals know whether they are healthy or sick, but health insurers don’t know whether a person is healthy or sick, or, as is done in the new health care law, are prohibited from using this information to charge different prices to healthy and sick people. 

Suppose the insurer charges a price of $6000.  If everyone purchased this insurance, the insurer would break even.  But a person who expects to incur only $1000 in health care costs would not be willing to purchase this coverage, since doing so means spending $6000 for something worth $1000.  A person who expects to spend $11,000 on health care would be willing to buy coverage.  But if the insurer expects that only sick people will buy the insurance, it will not be willing to sell it at a price of $6000, since it would lose $5000 on each policy.

In this example, the only sustainable outcome is where the insurer charges $11,000 and only the sick people buy insurance.  But, at this price, the sick people are no better off than they would be without insurance, and the insurer earns zero profit.  So, nobody benefits from this market.  In cases like these, we say that adverse selection (the fact that those who value a product the most are likely to be the most costly to serve) has led to a market failure.  In this case, the fact that the healthy are unwilling to purchase health insurance voluntarily makes it impossible for the sick to purchase it at a reasonable price.  It is this interdependence that makes health care markets and broccoli markets fundamentally different.

This market failure could be addressed by mandating that everybody buy insurance, as PPACA does.  In this case, the price of insurance would be $6000: firms would just break even, sick people would benefit from insurance, and healthy people would be forced to subsidize the sick against their will.
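The unraveling logic above, with and without the mandate, can be sketched in a few lines of code.  This is a toy model using only the $1000 and $11,000 figures from the example; the iteration simply re-prices the policy at the average cost of whoever actually buys it:

```python
# Toy adverse-selection model using the example's numbers.
# Each person knows their own expected annual cost; the insurer
# must charge everyone a single price (no health-based pricing).

HEALTHY_COST = 1_000    # expected annual cost, healthy person
SICK_COST = 11_000      # expected annual cost, sick person

def buyers(price, mandate=False):
    """Return the expected costs of the people who buy at this price."""
    if mandate:
        return [HEALTHY_COST, SICK_COST]  # everyone must buy
    # Voluntarily, a person buys only if coverage is worth at least the price.
    return [c for c in (HEALTHY_COST, SICK_COST) if c >= price]

def break_even_price(mandate=False):
    """Iterate until the price equals the average cost of the actual buyers."""
    price = (HEALTHY_COST + SICK_COST) / 2  # start at the pooled price, $6000
    while True:
        pool = buyers(price, mandate)
        new_price = sum(pool) / len(pool)
        if new_price == price:
            return price
        price = new_price

print(break_even_price(mandate=False))  # market unravels: only the sick buy -> 11000.0
print(break_even_price(mandate=True))   # with a mandate, the pooled price -> 6000.0
```

Without the mandate, the $6000 pooled price drives out the healthy, the pool's average cost jumps to $11,000, and the price follows; with the mandate, the pooled price is stable.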

Rather than focus on the distinction made above, the administration has argued that the key difference between health care and other markets is a degree of uncertainty about future use that other markets lack.  This case is empirically weak and, more importantly, does not describe a market failure that requires government intervention.  The administration has also argued that even those who choose not to purchase health insurance often consume health care, and that in many cases these costs are passed on to others.  This argument, however, would seem to fail the broccoli test: if not eating broccoli today means I’ll be less healthy and more likely to collect government benefits in the future, then, by extension, the government should be able to force me to eat broccoli.  I don’t think the administration wants to be making this argument (at least not today, in front of the Supreme Court).

The example I discussed above illustrates how the PPACA provisions that prohibit charging higher prices to sick individuals and the individual mandate work together.  The result is a situation where everyone is covered by private insurance.  However, there is clear redistribution from the healthy to the sick.  This may be desirable from a social perspective, and somewhere in the administration’s convoluted argument is the idea that the healthy are always at risk of becoming sick.  Whether you favor PPACA on social grounds depends on your individual preference for redistribution and, even if you are in favor of increased redistribution, on whether you think intervening in health insurance markets is the best way to do it.  However, one thing that is indisputable is that health insurance markets are different from broccoli and burial markets.  The administration’s failure to show convincingly that there is a bright line between the markets for health insurance and broccoli might result in PPACA being overturned.

Cheaper Gasoline, or Energy Independence: You Can’t Have Both

Posted by on Mar 23, 2012

Filed Under (Environmental Policy, Finance, U.S. Fiscal Policy)

Politicians like to say they want the U.S. to produce at least as much energy as it consumes – “energy independence”.  And they certainly want to reassure consumers that they are doing something about the high price of gasoline.  But the two goals are inconsistent.  You can’t have both.  Indeed, the current high price of oil is exactly what is now REDUCING our dependence on foreign oil!

We all know the price of gasoline has been increasing lately, now well over $4 per gallon in some locations, with five-dollar gasoline predicted by summer.  In addition, the New York Times just reported that our dependence on foreign oil is falling.  “In 2011, the country imported just 45 percent of the liquid fuels it used, down from a record high of 60 percent in 2005.”  The article points out that this strong new trend is based BOTH on the increase of U.S. production of oil AND on the decreased U.S. consumption of it.  And both of those factors are based on the recent increases in oil and gasoline prices.  Those higher prices are enough to induce producers to revisit old oil wells and to use new, more expensive technology to extract more oil from those same wells.  The higher prices are also enough to induce consumers to conserve.  Purchases of large cars and SUVs are down.  Many people are driving less, even in their existing cars.  A different article in the same day’s New York Times, on the same front page, reports that “many young consumers today just do not care that much about cars.”

Decreased dependence on foreign oil does sound like good news.   Actually, it is good for a number of reasons. (1)  It is good for business in oil-producing states, helping raise them out of the current economic slow-growth period.  (2) It is good for national energy security, not to have to depend on unstable governments around the rest of the world.  (3)  It reduces the overall U.S. trade deficit, of which the net import of oil was a big component.  And (4) the reduced consumption of gasoline is good for the environment. 

On the other hand, the increased U.S. production of oil is not good for the environment, as discussed in the same newspaper article just mentioned.   As an aside, I would prefer to do more to decrease U.S. consumption of oil – not only from increased fuel efficiency but also by the use of alternative non-fossil fuels – and perhaps less from increased U.S. production of oil from dirty sources such as shale or tar sands.  But that’s not the point for the moment.

The point for the moment is just that maybe the higher price of gasoline is a GOOD thing!  We can’t take even small steps toward decreasing U.S. dependence on foreign oil UNLESS oil and gas prices rise.  Any politician who tells you otherwise is pandering for your vote.  It is the high price of oil that is both increasing U.S. production and decreasing U.S. consumption.


Too Many Pets: a Supply-Side Problem

Posted by on Mar 9, 2012

Filed Under (Other Topics)

Bob Barker, former host of the long-running TV game show “The Price is Right”, used to end each daily broadcast with the plea (source): “Help control the pet population. Have your pets spayed or neutered.”  Drew Carey, his replacement after Bob Barker’s retirement, continues to end each show with the same admonition.  It is quite a curious thing to say at the end of a game show where someone has just won thousands of dollars in cash and prizes.  So why did Bob Barker say it?  What did he mean?

In short, the United States has a pet overpopulation problem resulting in the euthanizing of 3-4 million cats and dogs per year (source).  One animal every 8 seconds!  These animals are euthanized because they do not have homes and stray animals are deemed a public nuisance, so animal control laws often require stray animals to be euthanized if an owner is not quickly found.

In economics jargon, these cats and dogs are euthanized because the supply of companion animals outstrips the demand for them.  There is no doubt that a significant demand for companion animals exists in the United States.  The National Pet Owners Survey estimates the dog and cat population in U.S. households at 78.2 million and 86.4 million, respectively.  Clearly, Americans want companion animals, and many spend significant amounts of money to acquire one.

Indeed, pet stores and breeders sell companion animals in a market system, so why the over-supply of pets?

An important reason for the over-supply of dogs and cats comes from the fact that many pet owners do not spay or neuter their animals.  Inevitably, when those animals procreate, litters of kittens and puppies need to find new homes.  That is, when a pet is not spayed or neutered, the pet supply (likely!) increases in a non-market transaction.  When demand does not keep up with supply and these animals cannot find homes, they end up in shelters and are often euthanized.

As an animal lover, Bob Barker wants to minimize the number of animals euthanized.  The platform of a daily TV game show reaching millions of people allowed him to raise awareness about the need for pet owners to spay and neuter their animals.  In honor of Bob Barker’s efforts over the years, I remind all my readers that February 28th was National Spay & Neuter Day.  It is not too late to have your pet spayed or neutered!

Make the LEAP!

Posted by on Mar 2, 2012

Filed Under (Other Topics, U.S. Fiscal Policy)

Academic research is inherently a “public good,” which means that once a professor does all the research work and writes the paper, the social marginal cost of another reader is ZERO!  If the research is useful, then it could be useful to additional readers at no extra cost whatsoever.  Any charge for reading it would discourage readers who could benefit, even though an extra reader imposes no social cost whatsoever.  Thus, the optimal price to charge per reader is zero.

But that’s not what journals charge.  Non-profit associations might charge very little to subscribe to their journals, basically enough to cover their printing cost and mailing cost.  Now, however, any research paper can be provided even more cheaply on a website.  One useful purpose of an academic journal, still, is for the editor and reviewers to pass judgment on whether the research is good enough to be published, and to make further suggestions for improvement before publication.   So, each paper to be published has some cost to review it and some cost to post it on the web.  Even then, the social marginal cost of one additional person to read it is still zero!

How can a non-profit journal cover the cost of editing and reviewing the paper, and still provide free access?  Just as for many kinds of “public good”, the nonprofit organization might need donations!

Even worse is the still-huge number of academic journals that are published not by a non-profit research association or by a university press, but by a private for-profit company.  Those private publishers own the copyrights, and so they can charge a high enough price to make money, above and beyond their costs.  And even worse than most private for-profit publishers is Elsevier.

Elsevier had a good idea, years ago, when they founded a large number of field journals in economics and in other disciplines.  Elsevier now owns about 90% of the private for-profit academic journals, a virtual monopoly, so they charge huge prices and make huge profits.  Those journals have become prestigious, and so authors want to publish in them.  In order to “get in good” with the editors, those potential authors are willing to review other submitted papers for free.  Elsevier uses all this free help from university professors who are reviewers, to improve the quality of the product that they sell, in order to make even higher profits.

I don’t blame Elsevier, a private company, for trying to make money.  They have done a good job of it.  But as university professors, we do NOT need to provide free help to them!  I highly recommend reading a paper by Ted Bergstrom called “Free Labor for Costly Journals” in which he points out that we academic researchers at non-profit or state-run universities are helping private publishers make profits.  I would also recommend a new blog by Prof. Jacob Vigdor of Duke University.   

Mathematicians are organizing a boycott of Elsevier.  For another example, the nonprofit Association of Environmental and Resource Economists (AERE) is discussing whether to break away from Elsevier and start a new non-profit journal (read about the difficulties in an article starting on page 23 of the AERE Newsletter).  Finally, Ted Bergstrom has lots of info on his website.

We are stuck in a “bad equilibrium.”  University researchers want to publish in the prestigious journals, which are often journals of private publishers like Elsevier.  So those researchers review for free, for Elsevier, and they want their university to subscribe to those good journals of Elsevier.  And profits are made, by Elsevier.  We’d all be better off if we could “leap” to the “good equilibrium” where only non-profit associations and universities publish academic journals, at cost.  Then when we review papers for free for those journals, and when the universities subscribe to those journals, we are all contributing to a public purpose, the provision of a public good.
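The “bad equilibrium” language comes from game theory, and the situation can be sketched as a simple coordination game.  The payoff numbers below are purely illustrative assumptions: each researcher prefers to publish and review wherever everyone else already does (that is where the prestige is), but the all-nonprofit outcome is better for everyone because journals are then priced at cost:

```python
# Toy coordination game illustrating the "bad equilibrium."
# payoffs[(my_choice, everyone_else's_choice)] = my payoff (illustrative numbers)
payoffs = {
    ("forprofit", "forprofit"): 2,  # match the prestigious venue
    ("forprofit", "nonprofit"): 0,  # publish where nobody is looking
    ("nonprofit", "forprofit"): 0,
    ("nonprofit", "nonprofit"): 3,  # same prestige, journals priced at cost
}

def is_equilibrium(choice):
    """No single researcher gains by deviating when everyone plays `choice`."""
    other = "nonprofit" if choice == "forprofit" else "forprofit"
    return payoffs[(choice, choice)] >= payoffs[(other, choice)]

for c in ("forprofit", "nonprofit"):
    print(c, "is an equilibrium:", is_equilibrium(c))  # both print True
```

Both outcomes are self-sustaining: no individual researcher gains by unilaterally switching.  That is exactly why the move to the better (higher-payoff) equilibrium requires a coordinated “leap” rather than individual defections.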

Doc Fix: Time to Start Over

Posted by on Feb 22, 2012

Filed Under (Health Care)

Last week, Congress struck a deal to head off a pending 27 percent decrease in what Medicare pays to physicians.  Well, head it off until the end of the year.  Then we’ll be right back where we started from, except the amount of the pay cut will be even larger.

So, what’s it all about?  It all goes back to an attempt in the Balanced Budget Act of 1997 to slow the rate of growth in what Medicare pays to physicians.   Each year, Medicare decides how much to increase the fees it pays to physicians.  In order to reduce the rate of growth in these fees, the 1997 BBA instituted something called the Sustainable Growth Rate formula to help dictate what those increases should be.  In hindsight, the term has turned out to be quite ironic, since the growth rate it proposes has turned out to be anything but sustainable.  In fact, Congress often overrides the changes dictated by the SGR in what has become called a “doc fix.”

The SGR formula is too complicated to discuss in detail here, but its basic aim is to reduce the rate of growth of Medicare spending on physicians.  Each year, Medicare projects what it thinks it will cost to care for recipients based on past behavior, inflation, and population growth.  If actual spending turns out to be close to this projection, physicians are rewarded with an increase in fees the following year.  On the other hand, if actual spending comes in too far above the projection, the SGR formula kicks in and lowers fees across the board in an attempt, over time, to bring actual spending back in line with projections.

As usually happens, the formula worked fine in the early years.  Medical expenditures were in line with expectations, and docs got a small increase in fees.  However, in 2002, the SGR formula imposed a 5 percent cut in physician fees that was actually implemented.  Then, in 2003, when the SGR formula once again dictated a fee reduction, Congress stepped in and prevented the cut from happening.  This was the first Doc Fix.  Along with it, Congress included language specifying that the SGR formula in future years should continue to be calculated as if the Doc Fix had never happened.

In subsequent years, actual expenditure continued to be high relative to projections, and Congress continued to override the SGR formula.  Since past Doc Fixes were not taken into account, each year the size of the adjustment to physician fees needed to bring payments in line with the original SGR formula has grown until now it has reached a whopping 27%.  And, every year it becomes clearer that if Congress wasn’t going to let physician fees decrease by 5% or 10%, they’re certainly not going to let them decrease by 27% or 35%.
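The compounding at work here can be shown with a stylized sketch.  The real SGR formula is far more complex, and the 3 percent annual cut below is an assumed figure chosen only to illustrate the mechanism; a decade of overridden cuts of that size compounds to a required catch-up cut in the mid-20-percent range, the same order of magnitude as the 27% now on the table:

```python
# Stylized illustration of how deferred SGR cuts compound.
# The ~3% annual cut and the 10-year horizon are illustrative assumptions.

ANNUAL_CUT = 0.03   # hypothetical fee cut the formula dictates each year
YEARS = 10          # years of Congress overriding the cut ("doc fixes")

fees = 1.0    # fees actually paid, held flat by yearly doc fixes
target = 1.0  # fees the formula says should be paid
for _ in range(YEARS):
    target *= (1 - ANNUAL_CUT)  # the formula keeps cutting as if no fix happened

required_cut = 1 - target / fees
print(f"Cut needed to return to the formula after {YEARS} years: {required_cut:.0%}")
```

The point of the sketch is that each override does not erase a cut; it stacks it on top of next year's, which is why the gap only grows.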

So, what should we do about the Doc Fix?  The original intent of the SGR was a good one: slow down the rate of growth of healthcare spending. But, it is clear that the SGR approach doesn’t work.  At this point, physicians rightfully assume that eventually Congress will pass another Doc Fix, and they will continue to get paid higher rates than the SGR would dictate.  Consequently, the SGR formula has no power to persuade physicians to rein in spending.

Thus, I think the first step is to reset the SGR.  Instead of sticking to the original formula, which now requires a nearly thirty percent reduction in physician fees, we should re-base the formula in the short run, so that next year maintaining the SGR would require a much smaller decrease in fees (on the order of a few percentage points) if physicians do not reduce overall spending on their own.  This would restore the original intent of the SGR, applying pressure on providers to reduce overall spending.

Next, we need to rethink the way we approach the whole problem.  Even if Congress had the courage to enforce the payment reductions imposed by the SGR, the approach would still be fundamentally flawed, because it forces physicians to compete for a share of an ever-shrinking pie.  If physicians know that the total amount of money available to them is fixed, and they expect fees to be reduced as they are under the SGR, then a rational physician who wants to maintain income will respond by performing more procedures.  But all physicians have this incentive, so we should expect all of them to deliver more services (some of which may not be medically necessary), and this forces the SGR to lower physician fees even more.  The result is a vicious cycle in which more and more care is provided without substantially improving patient outcomes.

While it is clear the SGR has to go, it is less clear what it should be replaced with.  However, the fundamental problem – that the SGR actually encourages more care – would be alleviated if we switched a greater share of provider compensation from payments for the quantity of services provided to payments for the quality of outcomes.

Is Obama the “Damn Politician” that FDR Warned About?

Posted by on Feb 17, 2012

Filed Under (Retirement Policy, U.S. Fiscal Policy)

As posted on Forbes yesterday …

In 1941, President Franklin D. Roosevelt explained why he chose to fund Social Security through a payroll tax as follows:

“We put those payroll contributions there so as to give the contributors a legal, moral, and political right to collect their pensions and their unemployment benefits. With those taxes in there, no damn politician can ever scrap my social security program.”

For more than seven decades, FDR’s strategy has proven effective.  Talk to someone in or near retirement – even people who consider themselves small government conservatives – and you will often hear them state that they have a right to their Social Security benefit because they paid for it over their working life.

President Roosevelt knew that the key to the political sustainability of Social Security was the establishment of an entitlement mentality, and the key to establishing an entitlement mentality was the linkage between payroll contributions and benefits.  If Social Security were structured as a means-tested, welfare-style program – that is, if it were financed by a progressive income tax rather than through payroll contributions – it might never have lasted this long.

Given this, it is ironic that President Obama and Congress have just agreed to extend the payroll tax cut and to continue using budget gimmickry that turns Social Security into a partly general-revenue-financed program.

Here is how it works.  The 2% payroll tax cut reduces revenue to Social Security by about 15 percent.  But Social Security does not have a spare 15 percent of revenue lying around: rather, it is currently running quite close to break-even on a cash flow basis, and faces enormous long-run deficits.  To get around this, President Obama and Congress have decided to replace the lost payroll tax revenue by transferring money from general revenue (which derives primarily from the income tax) into the Social Security trust funds.
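The “about 15 percent” figure follows from back-of-the-envelope arithmetic.  The sketch below assumes the standard 12.4 percent combined employer-plus-employee OASDI payroll tax rate, with the 2-percentage-point cut applied to the same wage base; the result is close to the cited figure, with the exact number depending on details the sketch ignores:

```python
# Back-of-the-envelope check on the Social Security revenue lost to the
# payroll tax cut.  Assumes the 12.4% combined OASDI rate and a
# 2-percentage-point cut applied to the same wage base.

FULL_RATE = 0.124  # combined employer + employee Social Security tax rate
CUT = 0.02         # temporary reduction in the employee share

revenue_share_lost = CUT / FULL_RATE
print(f"Share of payroll tax revenue lost: {revenue_share_lost:.0%}")  # ~16%
```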

This budget gimmick has the short-term political benefit of making the Social Security trust funds seem unaffected by this tax cut.  But it also means that we are deviating substantially from FDR’s vision of a retirement program being paid for (on a pay-as-you-go basis) by participant contributions.  By moving down the path of general revenue financing of Social Security, we achieve the short-term “progressive” aim of increasing the degree of income-based redistribution (because income tax rates rise with income, whereas payroll tax rates do not).

But in the long-run, this has the potential to erode political support for the program.  By shifting the funding burden onto the income tax, the program starts to look more like a welfare program than a contributory social insurance program.

I am not the first to notice the irony of this.  My very good friend Chuck Blahous, who served eight years in the National Economic Council for President George W. Bush, and who was appointed by President Obama as one of two Public Trustees for Social Security, just released a paper explaining why this payroll tax cut is bad policy.  Among the seven reasons he provides is that doing so destroys the “historical Social Security compact.”  In a Washington Post article back in December, Dr. Blahous stated that these budget gimmicks are “a grave step for Social Security.”

This view is not limited to experts on the Republican side: the other Public Trustee of Social Security (a Democrat) – Robert Reischauer, the highly respected president of the Urban Institute — agrees with Dr. Blahous.  While Reischauer was more sympathetic to the tax cut, he also noted that it “could, if it continues for a substantial period of time, undermine one of the foundational arguments that makes the Social Security program inviolate.”

Perhaps the most succinct summary of the irony comes from Jason Fichtner, a Senior Research Fellow at the Mercatus Center and former Chief Economist and (acting) Deputy Commissioner of the Social Security Administration.  He summed up the situation in an email to me by noting that “in 15 years we might look back on this time in history and discuss how President Obama, as a Democrat, was the president that started the path to killing Social Security.”

So, maybe President Obama really is the Damn Politician that FDR was worried about?

Fiscal Sustainability AND Retirement Security: A Reform Proposal for the Illinois State Universities Retirement System (SURS)

Posted by on Feb 9, 2012

Filed Under (Retirement Policy, U.S. Fiscal Policy)

I have released a paper today that proposes a new plan for the State Universities Retirement System.  Co-authored with Robert Rich, the Director of IGPA, the paper proposes a hybrid system funded in part by both workers and universities.  It contains several components that reflect some of the ideas publicly discussed by state leaders in recent weeks.

 The proposal has four basic components: 

1) Create a new hybrid retirement system for new employees that would combine a scaled-down version of the existing SURS defined benefit plan with a new defined contribution plan that would include contributions from both employee and employer; 

2) Peg the SURS “Effective Rate of Interest” to market rates; 

3) Redistribute the SURS funding burden to include a modest increase in employee contributions and new direct contributions from universities, thereby reducing the burden on state government; and

4) Align pension vesting rules with the private sector, which would reduce the number of years that new employees hired after January 1, 2011 must work before their pension benefits vest.

The plan is intended to substantially reduce state expenditures on public pensions, while still providing a reasonable source of secure retirement income to university employees. 

Click here to read the full paper.

Reducing Regulatory Obstacles to Retirement Income Security

Posted by on Feb 7, 2012

Filed Under (Retirement Policy, U.S. Fiscal Policy)

With nearly 80 million baby boomers starting their march into retirement, many policy-makers have begun to focus on how to provide secure retirement income in a fiscally sustainable way.  This is no small challenge in an era of enormous deficits.

Although Social Security plays an important role in providing income that retirees cannot outlive, the benefits provided by Social Security are insufficient to ensure that most retirees can maintain their pre-retirement living standards.  However, increasing these benefits would be horrible fiscal policy: because the pay-as-you-go nature of Social Security has collided with an aging population, this program faces enormous fiscal problems that are going to require reductions – not increases – in the rate at which benefits grow.

Thus we have two opposing forces: a need for more retirement income, and a need to cut government spending on entitlement programs like Social Security.  What can be done?

Fortunately, the private sector can play an important role here, but only if the regulatory environment allows for it.  Presently, the regulatory landscape surrounding employer sponsorship of retirement plans is burdensome and enormously complex.  In many cases, the best thing the government can do to promote a greater role for the private sector in providing guaranteed retirement income is to “get out of the way.”  Ironically, however, there are other instances in which the best way the government can promote private sector solutions is to get more involved – if only by providing guidance on how plan sponsors can improve their plans without running afoul of existing regulations.  Getting guaranteed income options into 401(k)’s and other retirement plans is one such case.

In recent years, the financial services industry has increasingly focused on how to provide plan sponsors and plan participants with products that help to provide guaranteed lifetime income.  The resulting innovation over the past decade has been impressive, as companies have introduced a wide range of insurance and investment products that provide individuals with lifetime income.

However, employers that sponsor 401(k) plans have been slow to adopt these products.  As a result, most 401(k) participants in the U.S. still do not have access to annuities or other income products in their plans.  Although there are many reasons for this, there is little question that part of the reluctance stems from plan sponsors being scared off by regulatory and fiduciary concerns.

Last week, the Treasury Department proposed guidance to help address a few of the many issues that stand in the way of better private sector retirement plans.

In a nutshell, the proposed guidance does three things:

First, it makes it easier for plan sponsors to allow retirees to have a mix of lump-sum and annuity choices.  Put simply, it makes very little sense for most retirees to annuitize either 0% or 100% of their retirement assets.  Annuities provide guaranteed income, help to protect against outliving one’s assets, and help to guard against market volatility.  On the other hand, having some non-annuitized wealth available is extremely valuable when faced with uncertain expenses, such as those for long-term care.  Given that the optimal financial plan for most individuals includes some of both (i.e., some annuity income and some lump-sum wealth), it only makes sense for our regulatory infrastructure to encourage this.

Second, a number of academic papers have established the potential value of annuity products that have a deferred payoff structure.  That is, with a small fraction of one’s wealth at, say, age 65, one can buy a product that will start paying income at age 85.  In the industry, these are sometimes called “longevity insurance” (although the name is very unfortunate, because all life annuities – whether they are deferred or not – are providing insurance against the financial costs of longevity).  The proposed regulatory guidance would help ensure that these products are more easily available.

Third, Treasury issued two “revenue rulings” that clarify how rules designed to protect employees and their spouses apply when a plan offers an income option.

These rules are useful, but far from sufficient.  Looking ahead, plan sponsors and participants would be better off if policymakers also took at least three additional steps.

First, the Department of Labor needs to provide much greater clarity about how plan sponsors who wish to provide lifetime income options can do so while protecting themselves from fiduciary risk.  This could include providing a “safe harbor” rule for the selection of the annuity provider.  Too many plan sponsors continue to be spooked off by the specter of fiduciary liability if they choose an annuity provider that runs into financial distress in the future.

Second, Congress should reform the Required Minimum Distribution rules to eliminate the various implicit and explicit barriers to lifetime income.  These rules were written by tax lawyers to ensure that the IRS could eventually get its hands on tax-deferred savings.  If these rules were instead written with an eye towards retirement income security, they would look quite different.

Third, we should encourage plan sponsors to report 401(k) and other defined contribution (DC) balances in terms of the monthly income the plan will provide, rather than simply as an account balance.  The Lifetime Income Disclosure Act that received bipartisan sponsorship in the U.S. Senate last year would be a positive step in this direction.  (My Senate testimony on this Act is available online.)

This need not be a partisan issue.  Republicans should recognize that strengthening retirement income security in our private pension system will give us more freedom to address our burgeoning Social Security deficits.  Democrats should view this as an opportunity to ensure that employers “do the right thing” by providing retirement plans to employees that actually succeed in providing a secure retirement.

Expensive Houses for Low-Income Families?

Posted by on Feb 3, 2012

Filed Under (Environmental Policy, U.S. Fiscal Policy)

A recent NY Times article describes SOL Austin, an acronym for Solutions Oriented Living.  This housing development is interesting for at least two reasons.  First, the designs and materials are intended to be “sustainable” (whatever that means), but also “net zero” (which I gather means that the development will produce all of the energy it consumes).  The houses have solar panels and geothermal wells.

Second, however, it is interesting because it is in east Austin, the low-income part of town.  In fact, a 1928 “city plan” decided that east Austin would be “designated African-American”.  The 1962 construction of Interstate 35 further divided east from west.  The relatively flat east side of Austin had all the industrial blight, pollution, and low-income housing.  In fact, it was quite cheap!  The hilly west side of Austin had the fancy new upscale houses with views of the Hill Country.

One would think that the intellectual-academic, left-leaning, high-income households of west Austin might be more interested in sustainable housing that could go “off the grid.”  Why then are these developers building super-energy-efficient houses in east Austin?

Well, for one thing, the 2010 census showed a 40% increase in east Austin’s white population and a drop in minority population.  In correlated fashion, land prices in east Austin have risen considerably.  In fact, a different article in the NY Times tells about a study based on the 2010 census finding that residential segregation in U.S. cities has fallen significantly.  Cities are more racially integrated than at any time since 1910.  It finds that all-white enclaves “are effectively extinct”.  Black urban ghettos are shrinking. “An influx of immigrants and the gentrification of black neighborhoods contributed to the change, the study said, but suburbanization by blacks was even more instrumental.”

Since I’m visiting here in Austin, Texas, it is easy enough to go see the new development.  As you can see in the snapshot below, the houses have a modern box-like style.  They range from 1,000 to 1,800 square feet.  That explains the article’s reference to “matchbox” houses.  But the roofs are sloped enough to hold photovoltaic arrays and to channel rainwater into barrels.

The developers said they wanted to “examine sustainability on a more holistic level, that would not just look at green buildings, but in our interest in affordability, in the economic and social components of sustainability as well.”  As stated in the NY Times article, the developers “hammered out a plan with … the nonprofit Guadalupe Neighborhood Development Corporation, to sell 16 of the 40 homes to the organization.  The group, in turn, sold eight of the houses at a subsidized rate to low-income buyers (who typically were able to buy a house valued at more than $200,000 for half price).”  Each of those 16 subsidized homes has a photovoltaic array on the roof, though not necessarily large enough to produce all of the needed power for the house.

The “market-rate” houses sell at prices in the low $200,000s; eleven have been sold, and thirteen have yet to be built.  Because of the financial and housing crisis, however, the “holistic” development ideas have not worked perfectly.  Homeowners got rebates from Austin Energy and tax credits from the federal government. So far, however, only four market-rate house owners paid the extra $24,000 for photovoltaic arrays substantial enough to fully power a house.  Only one is also heated and cooled by a geothermal well.  But they all have thermally efficient windows, foam insulation, and Energy Star appliances.

So far, only one couple paid to install the geothermal well and the extra energy monitoring system:  a systems engineer and a microbiologist.  So, “sustainability” in low-income neighborhoods might still require some gentrification.

Warren Buffett is not the Oracle of Public Finance

Posted by on Feb 1, 2012

Filed Under (Finance, U.S. Fiscal Policy)

It is being reported today that Senator Sheldon Whitehouse (D-R.I.) is introducing a bill that would impose a minimum 30% tax on individuals earning more than $1 million per year.  This type of tax policy – which is essentially a new version of the Alternative Minimum Tax – has been dubbed the “Buffett Rule” due to the news last year that Warren Buffett had a lower tax rate than his secretary.

Warren Buffett claims to have a tax rate of 17.4 percent.  His claim, however, is only true if one ignores one of the most basic economic principles of tax analysis: that the person who writes the check is not necessarily the same as the person who bears the economic burden of a tax.  In economics, this distinction is known as the difference between “legal incidence” (i.e., the entity with legal responsibility for paying taxes) and “economic incidence” (i.e., a measure of who really bears the economic burden of the tax).

In almost any undergraduate public finance textbook, one can find simple examples of how these concepts diverge.  For example, politicians often make a big deal of the fact that the FICA payroll taxes used to support Social Security and Medicare are split evenly between employers and employees.  But economists tend to believe that nearly all of the economic burden of the payroll tax falls on workers.  In other words, even though employers pay their share of the FICA tax, in the long-run the result is that workers are paid less than they would be paid in the absence of the tax.  Thus, it is the workers and not the firms who are truly paying the tax, in spite of how it appears.
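The textbook intuition here can be made concrete with the standard partial-equilibrium incidence formula: the less elastic side of a market bears the larger share of a tax, regardless of who writes the check.  The sketch below is my own illustration, not from the post, and the elasticity numbers are purely hypothetical stand-ins for the stylized fact that labor supply is much less elastic than labor demand.

```python
def worker_share_of_tax(supply_elasticity: float, demand_elasticity: float) -> float:
    """Textbook partial-equilibrium incidence on the supply side of a market.

    The side with the lower elasticity bears the larger burden: sellers
    (here, workers supplying labor) bear a share equal to the demand
    elasticity divided by the sum of the two elasticities.
    """
    return demand_elasticity / (demand_elasticity + supply_elasticity)

# Hypothetical elasticities: labor supply is far less elastic (0.1)
# than labor demand (1.0), so workers bear most of the payroll tax,
# no matter how the statutory burden is split between firm and worker.
share = worker_share_of_tax(supply_elasticity=0.1, demand_elasticity=1.0)
print(f"Workers bear roughly {share:.0%} of the payroll tax")
```

With these illustrative numbers, workers bear about 91% of the tax, which is the sense in which the even statutory split of FICA tells us almost nothing about who really pays it.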

The discussion around Mr. Buffett’s taxes – as well as the more recent discussion around the release of Governor Romney’s tax returns – has completely missed this point.  Those discussions have focused solely on the legal incidence of the personal income tax system, and have failed to think through the economic incidence of the overall tax system.

How so?  It is not uncommon for wealthy individuals like Mr. Buffett to receive much of their income in the form of dividends and capital gains.  This type of income may appear as if it is receiving “preferential” tax treatment, but the reality is that it is taxed heavily.  This is driven by the fact that corporate income is taxed at the corporate level before it is available to be paid out as dividends (or used to repurchase shares, which can lead to capital gains for investors who retain their shares).  The U.S. imposes a very high – 35% – marginal tax rate on corporate income.  Thus, if a firm earns another $1000, it pays $350 in taxes, leaving only $650 to go to shareholders.  If those shareholders are then taxed at a 15% rate, that is another $97.50 that goes to the government.  This leaves only $552.50 in the pockets of shareholders for every $1000 of pre-tax earnings that are paid as dividends.  Thus, the effective marginal tax rate on this income is more like 44.75% than 15%.

Of course, there are at least two important caveats to this stylized example.  First, the economics profession has simply not been able to come up with a definitive estimate of who really bears the burden of the corporate income tax.  One of the leading tax scholars of our day – Alan Auerbach of the University of California at Berkeley – wrote a terrific summary of what we know on this topic back in 2005 (the paper, which was ultimately published in the NBER Tax Policy and the Economy series, is available as an NBER working paper).  He notes that one of the major lessons is that “for a variety of reasons, shareholders may bear a certain portion of the corporate tax burden … thus, the distribution of share ownership remains empirically quite relevant to corporate tax incidence analysis.”  This is hardly a ringing endorsement that we should assume the entire incidence falls on Warren Buffett and other shareholders, but it is quite clear that we should not be ignoring corporate taxes when making policy statements about the fairness of the tax system.

A second caveat is that not all corporations face a 35% marginal effective tax rate.  Corporate income taxation is nothing if not a complex labyrinth of rules, exceptions, and exceptions to the exceptions.  Again, however, we know that for most corporate earnings, the rate of corporate taxation is well above zero, which is the rate it would need to be for us to feel as if we can ignore it when making statements of the kind Mr. Buffett makes.

A fellow Forbes contributor, Josh Barro, points out a number of problems with the Buffett Rule, the most important of which is that it would exacerbate the already-existing tax distortion that favors debt over equity.  If Congress wants to do this, that is its prerogative.  But we should not allow it to justify potentially bad tax policy on the basis of a naïve and misleading understanding of tax incidence.