You are currently browsing the monthly archive for February 2010.
The Obama administration is planning to use the government’s enormous buying power to prod private companies to improve wages and benefits for millions of workers, according to White House officials and several interest groups briefed on the plan…. By altering how it awards $500 billion in contracts each year, the government would disqualify more companies with labor, environmental or other violations and give an edge to companies that offer better levels of pay, health coverage, pensions and other benefits.
I have a hard time seeing how this is justified over broad, direct wage subsidies by efficiency or fairness criteria.
The economic impact of drug legalization always sparks debate, and a commonly cited name is Harvard economist Jeffrey Miron. He has a new paper out in which he estimates that the legalization and taxation of all drugs would decrease government costs by $48.7 billion and increase revenues by $34.3 billion, which means we are forgoing a total of $83 billion. The paper also includes a state-by-state breakdown for those interested.
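As a back-of-the-envelope check, Miron’s two figures combine straightforwardly; this little sketch just adds the two headline numbers (in billions of dollars) to recover the $83 billion total:

```python
# Rough fiscal ballpark from Miron's headline estimates, in billions of dollars.
cost_savings = 48.7  # estimated decrease in government enforcement costs
new_revenue = 34.3   # estimated tax revenue from legalized, taxed drugs

total_fiscal_impact = cost_savings + new_revenue
print(round(total_fiscal_impact, 1))  # 83.0
```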
He calls his estimate a conservative ballpark that only addresses the impact on the government’s budget. A much more difficult, but I would guess economically more important, impact is the effect on the population’s productive ability: what economists call human capital. Short-sighted analysis focuses on the productivity impact of a more drugged-up country. I think those impacts would be absolutely dwarfed by the human capital destruction that occurs when you send a young man or woman to prison on a drug charge, both the short-term lost productivity and the long-term damage done to their productive ability.
Nonetheless, Miron’s numbers are an important fiscal impact ballpark to have.
Jack Rose was a part-time musician working different day jobs to stay afloat. In 2001, during an extended period of collecting unemployment checks, he sold his electric guitar and used the free time his unemployment income provided to master the fingerpicking style of acoustic guitar. The time was well spent, and he was able to become a full-time musician and spread the blues, ragtime, and Americana sounds that influenced him. Unfortunately, Jack passed away in February just before his highly anticipated record was to be released on a prominent independent label.
But Jack is a good example of what you can do with yourself when you’re collecting unemployment checks; that is, invest in a skill with which you can make a living. It’s the best thing for society and yourself. Obviously, not everyone (almost nobody, in fact) should spend their unemployment time learning fingerpicking guitar. But there are free online courses offered by MIT and hundreds of other places to educate yourself online. It is a great age to be an autodidact. There are also plenty of grants and free programs for continuing education that will pay for classes at community college.
What you shouldn’t do is sit around lamenting that the government won’t give you something to do. If collecting unemployment makes you feel like you need to contribute something to society, the best contribution you can make is to do whatever you can to make it less likely that you will need to rely on unemployment in the future.
I’m not judging people for being unemployed; many of them did all the right things you’re supposed to do to make yourself employable, and are unemployed through no fault of their own. But there are almost always ways to increase your human capital and learn valuable new skills. And even if you are perfectly trained for future employment, and you just need to wait for a cyclical bounceback in the economy, there are other ways to contribute to society. Volunteer at a charity, babysit your neighbor’s kids, or help people fill out job applications, improve their resumes, or do their own job training. Just don’t sit around complaining that the government isn’t giving you a ditch to dig.
A few months ago there was a large stir around a man who had been in a coma for 23 years who began communicating by way of a facilitator who “helped” him type on an electronic keypad, letting the world know that he was conscious. Skeptics noted that this was exactly the same sort of thing that people were falsely claiming to do for severely autistic children, and that without more evidence there was no reason to believe this was legit. Sadly, this turned out to be exactly the case:
Dr Steven Laureys, one of the doctors treating him, acknowledged that his patient could not make himself understood after all. Facilitated communication, the technique said to have made Houben’s apparent contact with the outside world possible, did not work, Laureys declared.
“We did not have all the facts before,” he said. “To me, it’s enough to say that this method doesn’t work.”
Greg Mankiw mocks Team Obama’s endorsement of price controls on health care.
Very, very strange. You would think that all those future Nobel-prize-winning economists working for the President would explain to him the history and economics of government price controls. Imposing price controls certainly wasn’t President Nixon’s finest hour.
Maybe President Obama should instead follow in President Ford’s footsteps and start wearing a WHINE button on his lapel, for Whip Healthcare Inflation Now, Egad.
It’s not immediately clear to me that this isn’t a viable strategy, however.
If you believe that a large portion of the health care industry is rent-dissipating, then putting clamps on it might not be such a bad idea. That is, we spend lots of money on health care because we want to buy “the best care available” regardless of what that care is. However, this just encourages the creation of new therapies that we now have to buy in order to have “the best care available.”
Here is a thought experiment that helps illustrate:
Your child is in the hospital and the doctor says that therapy X will cost $500K but will increase your child’s chance of survival by 23%. $500K is your life savings plus everything that you could borrow plus a little more you will have to get from friends and family. Maybe the church could help. Maybe . . .
For a millisecond perhaps you think that it’s not worth it. Your kid is still probably going to die. But you push that feeling down. My God, this is YOUR CHILD. It’s worth it.
Now imagine the same situation, but the doctor comes in and says the $500K treatment is going to increase your kid’s chance of survival by 17%. Do you say, “Well, I was almost on the fence at 23%, so I am going to let my kid go at 17%”?
I am betting not.
What you are buying is “the best shot,” not any particular chance of survival. Thus, if I spend billions increasing the best shot by only a few percentage points, you will buy my new treatment, even though the actual change in your family’s prospects is extremely small.
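To put rough numbers on the intuition (the $500K price and the survival gains are the thought experiment’s hypotheticals, not real figures), the implied cost per percentage point of survival rises by more than a third between the two scenarios, yet most parents’ choice doesn’t change:

```python
# Hypothetical numbers from the thought experiment above.
price = 500_000  # cost of therapy X, in dollars

# Implied price per percentage point of survival in each scenario.
cost_per_point_23 = price / 23  # therapy buys a 23-point gain in survival odds
cost_per_point_17 = price / 17  # therapy buys a 17-point gain in survival odds

print(round(cost_per_point_23))  # 21739 dollars per point
print(round(cost_per_point_17))  # 29412 dollars per point
```

The point is that willingness to pay tracks “the best shot available,” not the per-point price of survival.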
In many cases people will pay more just for uncertainty. Imagine the following: we have a treatment that we know is only effective in 5% of cases. That sounds pretty bleak. However, we have this brand new treatment that is not without risks. Yet, if it works, it could save 90% of patients. Many people would instinctively be willing to pay more for the second treatment.
However, the second treatment is new and risky. It’s possible that it won’t help anyone. It’s possible that it could make your child worse. However, the fact that you don’t have to admit to the bleak 5% odds makes the treatment enticing. Again, you are not buying life. You are buying a reprieve from thinking about the death of your child.
It’s important to remember that this argument doesn’t suggest that all health care is useless. It doesn’t even suggest that all of the “latest care” is useless. Only that it is less useful than the price might suggest.
My Deaton and Muellbauer is a little rusty, so I’d have to spend some time looking through the actual economics of a rationed market.
Nonetheless this seems like a step towards dividing the baby in half.
Greg Mankiw points to an article in the Economist about what unions want from Obama, and what he has and hasn’t given them. Here is one interesting fact:
But [Obama’s] biggest favour has been green, foldable and borrowed. For example, he encourages the use of “Project Labour Agreements” on big federal construction projects, whereby contractors must recruit through a union hiring hall. Such agreements inflate costs by 12-18%…
Per hour worked, state and local government workers enjoy 34% higher wages and 70% more benefits than their private-sector counterparts
As the Economist notes, and critics often complain, this means that we get fewer schools, roads, and other public services for our money.
I am interested in grand bargains between pragmatic moderate progressives, like Matt Yglesias and Ezra Klein, and pragmatic moderate libertarians, like Tyler Cowen. I think that a liberalization of public services aimed at getting the most for our money in exchange for greater and more widely distributed actual wage subsidies could be one of these bargains.
My question is this: how much (and what mix) of an increase in the earned income tax credit, food stamps, and other progressive income subsidies would it take for moderate progressives to sign off on a complete liberalization of government services? I’m talking repeal of Davis-Bacon, a federal right-to-work law for all public sector employment, all federal education dollars contingent on removal of strict charter school limitations, and a complete ban on all “made in America” or local sourcing requirements for all programs receiving any federal subsidies (that means you, Amtrak!).
I’ve only half thought through both the carrots and the sticks for progressives here, so I’m not wedded to this list. I’m just curious if there is a set of policies that generally accomplishes a wider, more fairly distributed welfare program in exchange for public sector liberalization. Would progressives be interested at all?
It seems to me that the authors of this study have identified, but not really controlled for, the endogeneity problem that gun control regulations are both caused by and a cause of gun prevalence. They report:
Access to lethal weapons is an important risk factor for suicide. Our study suggests that general barriers to firearm access created through state regulation can have a significant deterrent effect on male suicide rates in the United States. Permit requirements and bans on sales to minors were the most effective of the regulations analyzed.
My instinct is that decreasing access to guns will just lead to more non-gun suicides, which is what one study they cite actually found:
An evaluation of the 1996 National Firearms Agreement (NFA) in Australia documents a decline in firearm suicides after the implementation of the agreement (Klieve et al., 2009). However, these findings may be confounded with an overall decline in gun ownership that preceded the NFA. Additionally, there was some evidence of increased suicides by hanging.
However, I can imagine some people might be deterred from a temporary impulse to kill themselves, and that the extra time that buys them might be enough to get help, change their mind, etc. Nevertheless, it is an interesting question, and I think some clever scientist out there should be able to think of an actual instrumental variable to get a better answer to this question.
Offline I have been peppered by questions about the “uselessness,” “fudge factor,” and “downright dishonesty” of the Administration’s goal to create or save 4 million jobs.
This view, I argue, is wrongheaded. Either both terms are immeasurable or neither is.
The nerdier among you might be satisfied by my simply noting that “zero is a number.” For the rest, here is a longer rundown.
When we say a job was “saved” by the stimulus package we mean the following are all true:
- The job existed before the stimulus package was passed
- The job exists now
- The job would not exist now if the stimulus package had not passed
Now, what do we mean when we say a job was “created” by the stimulus package? We mean the following are all true:
- The job did not exist before the stimulus package was passed
- The job exists now
- The job would not exist now if the stimulus package had not passed
The only difference between these two sets of conditions is the first line. However, the first line is no less measurable under the conditions for “saved” than it is for the conditions for “created.”
In both cases the hard-to-measure condition is the third: whether or not the job would exist now if the stimulus package had not passed. However, this is just as hard to measure for jobs created as it is for jobs saved. Either both terms are meaningful or neither is.
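The symmetry between the two definitions can be made explicit in a toy sketch; the function and variable names are my own illustration, not anything from the Administration’s methodology. The two predicates differ only in the first condition, and the unobservable counterfactual input is the same in both:

```python
def saved(existed_before, exists_now, would_exist_without_stimulus):
    # "Saved": the job predates the stimulus, still exists,
    # and would not exist had the stimulus not passed.
    return existed_before and exists_now and not would_exist_without_stimulus

def created(existed_before, exists_now, would_exist_without_stimulus):
    # "Created": identical, except the job did NOT predate the stimulus.
    return (not existed_before) and exists_now and not would_exist_without_stimulus

# The hard-to-observe input -- the counterfactual third argument --
# enters both definitions in exactly the same way.
print(saved(True, True, False))    # True
print(created(False, True, False)) # True
```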
I think that the confusion comes because the term “saved” inherently carries the notion of “by the actions of the government,” in this case the stimulus package, while one could think of jobs “created” generally, without the concept of cause.
However, as soon as we have asserted the condition “the stimulus package will . . .” we are in the world of causation, and in that world condition (3) exists for both created and saved.
That is to say, you can either measure the impact of the stimulus package or you cannot. Whether the baseline is a positive number (above zero) or a negative number (below zero) is neither here nor there.
Note: Those who find Hennessey’s analysis compelling should note that he is exactly correct right up until the end, where he says:
Had he said that his Administration would create 4 million jobs, then we would have a simple metric: 134.3 million + 4 million = 138.3 million. Each month, we could compare the nonfarm employment level to 138.3 million to see if it was higher or lower than that goal.
Hennessey is implicitly assuming that zero jobs created is a meaningful baseline in the “created” case. However, zero ain’t nothin’ but a number. That is, there is no reason why we should assume that in the absence of administration policy exactly zero jobs would be created.
Note on Note: Yes, the additive identity is often a special number, just not in this case.
You should not hold someone to a lower standard because they are an important voice for something you believe. This is Joe Stiglitz reviewing Naomi Klein and not holding her to a reasonable standard of truth because he agrees with her ideological cause. It’s important to call out people defending things you believe when they are loose with the facts; otherwise they can create a caricature of good ideas in the public mind, and become a strawman for opponents to distract debate with.
This is my problem with John Stossel. Yes, he’s often an eloquent voice for libertarianism, and he promotes those ideas to a broad audience. But while his perspective may differ from the typical TV newsman, his gross oversimplification of complex issues, unfortunately, often does not.
Take a recent piece of his from Reason, where he defends school choice. Like a lot of what Stossel says and does it’s peppered with statements that are distracting oversimplifications:
So when will we permit competition and choice, which works great with everything else?
Seriously, John? Everything else? You couldn’t have said “almost everything” or “many other things”? A libertarian predisposed to believe in competition and choice will breeze over this sentence without distraction. But to a progressive, these sorts of sweeping generalizations about the power of competition and choice are exactly the extreme form of libertarianism they despise; to them, reading a statement like that is a distracting and off-putting jolt that detracts from the credibility of the rest of the article. To understand how this sentence feels to a progressive, imagine reading an article by a progressive who writes that “governments can always fix market failures and make everyone better off.” At that statement, they’ve lost you for good.
It’s because of glib statements like this that other libertarians have to constantly assure people that they don’t want the police to be privatized, that they do believe in public goods, and that there are limitations to what free markets can achieve.
Another example is his discussion of the Head Start program. He wants the reader to believe that Head Start has been proven to be a failure. I’m no expert on this, but the evidence is certainly more mixed than he portrays. For starters, the study he cites uses a single cohort of students from 2002-2003. With this one-year sample Stossel insinuates that the 45-year-old program in its entirety has been proven ineffective.
The study Stossel cites criticizes Head Start because its impact fades quickly:
The study showed that at the end of one program year, access to Head Start positively influenced children’s school readiness. When measured again at the end of kindergarten and first grade, however, the Head Start children and the control group children were at the same level on many of the measures studied.
Of course, this type of fade-out is understood to be common amongst educational interventions, and it also ignores potential longer-term benefits. Stossel’s clear-cut, absolute rebuke contrasts with this recent paper on Head Start by David Deming in the American Economic Journal: Applied Economics:
…some studies find evidence of fade-out for African American participants compared to their more advantaged white peers… if fade-out generalizes to all long-term impacts, the benefits of many of these interventions have been overstated. However, studies of model preschool interventions find dramatic improvements in long-term outcomes among program participants, despite rapid fade-out of test score gains.
In addition to the positive results of Deming’s study, his summation of the literature on long-term gains suggests that they are real:
The best evidence for the long-term impact of Head Start comes from two recent studies…Using different data sources and identification strategies, each finds long-term impacts of Head Start on outcomes such as educational attainment, crime, and mortality…
Now, the quick fade-out of short-term gains is an important point. If we are going to spend more money on the program, we should demand to know if and how it will be improved to prevent the short-run gains from the intervention from being lost. This isn’t what Stossel does, though. He instead uses these results to declare that Head Start has been proven a failure over its 45-year life. How is someone expected to believe that the rest of his article makes accurate claims?
Cafepress has apparently seen a surge in the demand for items featuring the above image. It makes me laugh and shudder at the same time. Can somebody please make a James Buchanan version of “Miss Me Yet?”?
I’ve always felt there’s an inherent tradeoff we face with greater federalization of education. On the one hand, higher degrees of federal government involvement could almost certainly improve education in our worst performing states. I’m sure federal standards and incentives could improve states with governments too beholden to teachers unions that go to outrageous lengths to limit charter schools (I’m looking at you, New York), and states that set low bars for standardized testing.
On the other hand, more local autonomy allows schools and districts to better tailor education to their local populations. For instance, federal standards for vocational education could restrict local districts’ ability to custom-tailor their vocational education programs to better match the local supply and demand for vocational work, which is vastly different in different parts of the country. Is it so hard to imagine a federal vocational standards board mandating a nationwide “green jobs” program, or an “organic, sustainable, urban agriculture” program?
In addition, local autonomy makes schools more accountable for their performance. Less local autonomy means less blame (or credit) that can be laid at the feet of local school districts, administrators, and even teachers for the failure (or success) of a school.
A new paper by Torberg Falch and Justina Fischer provides some new evidence on the effects of decentralized government on school performance. They use a dataset of standardized test scores as the measure of education performance, and the percent of local government spending relative to national government spending as the measure of government decentralization. The data are for 25 OECD countries from 1980 to 2000. They find that a 10% increase in decentralization increased test scores by 0.7 standard deviations. Interestingly, they also find that the total size of the public sector has a negative impact on scores.
There’s an obvious generalization problem with these results: areas with more local autonomy are probably areas that demand more local autonomy, which are also probably the places with the local knowledge and institutional capital to successfully manage a local school system. You can’t conclude from these results alone that public school performance can be improved by exogenously granting more local autonomy. It does suggest, perhaps, that you can improve public school performance by granting local autonomy to those areas that demand it, and that taking that autonomy away could reduce public school performance. This is important to consider as we move towards an increasingly federalized education system.
David Barker says that
The purchase of Alaska from Russia for $7.2 million, ridiculed in 1867 as “Seward’s Folly,” is now viewed as a shrewd business deal. A purely financial analysis of the transaction, however, shows that the price was greater than the net present value of cash flow from Alaska to the federal government from 1867 to 2007.
In short, the federal government sends more money to Alaska than we get back, plus we paid for the darn thing. Yes, Alaska has a lot of natural resources, but Alaska was likely to become a part of Canada anyway, and we could have bought those resources.
Of course Barker ignores the cost of Alaska’s most famous export.
HT: Alex Tabarrok
Also, if he’s going to praise all the jobs created by it, he should probably point out that he believes the project on net is going to destroy jobs. If that’s logically consistent enough to please his constituents, then great. But to omit those points is to imply more support for the project than is accurate. Did the congressmen make those disclaimers?
More importantly, the suggestion is that stimulus projects will destroy more productive private sector jobs through crowding out. More government sector = less private sector.
We have to ask: how will that crowding out take place?
Perhaps through the bond market. However, the cost of short-term government debt is essentially zero and long-term debt is at record lows. This of course is what encourages stimulus advocates.
Moreover, many of the same congressmen were in favor of tax cuts as stimulus, which should have the same bond market effects as government spending.
We have to think they mean that private sector jobs will be crowded out through the real goods market. That market is likely to have its most intense effect locally. Sure, spending on government construction projects in Portland, Oregon will raise the input prices that builders face in Louisville, Kentucky.
However, spending on government construction in Louisville, Kentucky is going to raise input prices in Louisville, Kentucky by a whole lot more.
If you were consistent, you would want the stimulus money to be spent as far away from your state as possible so as to cause the minimum economic dislocation.
Ultimately, that’s my problem with a lot of the anti-Keynesian rhetoric. Even the people who are spouting it, for the most part, don’t believe it. Most people actually think it’s better to have a government spending project in their town as opposed to someone else’s town.
Now, it could be that almost everyone is wrong. Fine. But you shouldn’t be attacking policy on grounds that you yourself don’t believe.
Greg Mankiw defends Republican congressmen who opposed the stimulus but are bragging back home about directing stimulus money to their districts. He points out that it is not logically inconsistent to oppose government spending as a means to fight a recession and also direct such spending to their constituents in the event that there is inevitably going to be stimulus.
I won’t disagree with the claim that the position is logical; once there’s definitely going to be a pie, it’s not wrong to fight for your share, and that’s important to point out. Furthermore, Mankiw admits, as will I, that he’s not familiar with the details of these supposed congressional hypocrites. I’ll go a step further though, and say that the details matter here, and without them you can’t really defend the congressmen.
For instance, when celebrating the greatness of a local stimulus project, a congressman owes the local constituents a disclaimer that if he could have things his way, he would rather this project not exist. Also, if he’s going to praise all the jobs created by it, he should probably point out that he believes the project on net is going to destroy jobs. If that’s logically consistent enough to please his constituents, then great. But to omit those points is to imply more support for the project than is accurate. Did the congressmen make those disclaimers?
Another problem I have with his defense is this metaphor of his:
Many Democratic congressmen opposed the Bush tax cuts… But once these tax cuts were passed, I bet these congressmen paid lower taxes. I bet they did not offer to hand the Treasury the extra taxes they would have owed at the previous tax rates. Would it make sense for the GOP to suggest that these Democrats were disingenuous or hypocritical? I don’t think so.
A more apt metaphor would be if Democratic congressmen who opposed the Bush tax cuts then went to their home towns and made stump speeches bragging about the jobs those tax cuts were going to create. Would it make sense for the GOP to suggest that these Democrats were disingenuous or hypocritical? I think Mankiw would agree that it would.
We forget that when Thomas Carlyle coined the epithet “the dismal science”, he was condemning not the gloomy Malthus but John Stuart Mill and his stubborn, infuriating opposition to slavery.
That’s Tim Harford, in the Financial Times, trying to convince his readers that economists aren’t bad people. I think economists are more frequently accused of being cold-hearted, emotionless, logicians who think with a heartless rationalism, rather than simply bad people. The economist’s best response to these accusations brings to mind a Simpsons quote:
Ned: You ugly, hate-filled man!
Moe: Hey, hey, I may be ugly and hate-filled, but I… um, what was the third thing you said?
The last few weeks and months have been an intellectual roller coaster. I have discovered that much of the academic training and thinking I have had and done up until now was a waste. [Caught up in the moment. This was clearly over the top. Let’s just say, sub-optimal.]
I had thought that by picking economics, I was picking the most important science. This is the science of choice. This is about evaluating what we do with our lives. Every science, every discipline, even the maintenance of a state capable of producing science depends on political economy.
Inside economics I thought I was picking the most important area: poverty and education. Well, in truth I thought the most important area was growth. In particular, how to get the third world growing. But I was choosing my subfield when China and India were surging. I thought that the problem of third world growth was, for practical purposes, likely solved. China and India would become so wealthy that together with Europe we could lift up Africa by brute force if necessary. The global growth future was bright.
So I thought, the problem of the 21st century will be what to do about individuals who are not getting all they could out of the modern global economy. What factors make individual poverty likely? What factors lead to such high poverty clusters? How do we increase individual productivity?
Now I see I was wrong. All of this assumes that we will continue in the same economic paradigm that we have had for the last 250 years or so. I see that this is much less likely than I would have assumed.
It seems likely that we are on the verge of a new economic paradigm, one defined by much more rapid technological expansion. The new problem is not making sure everyone is plugged into technology but making sure that the power of the technology itself does not overwhelm and destroy us.
I know this sounds a bit crazy, and if someone had just said it to me like this I would have been inclined to slough it off as well. However, having spent a significant amount of time reading people who have clearly done their homework, I am convinced.
What this means for my own intellectual journey, I am not yet sure.
So apparently I have been missing some of the most interesting arguments on Earth. (1) Basically, those made by Nick Bostrom, whom I had not heard of a few hours ago. I am more than a little pissed about this, but let’s leave that alone.
Bostrom argues that there are three possibilities:
1) Most human-like civilizations will never reach the point where they can simulate reality.
2) Most civilizations that do reach that stage will not want to create simulations.
3) We are probably living in a simulation.
When I first heard a sketch of the argument in passing, I heard it phrased as (3) being likely. My response was: what about (2)?
However, after having read Nick Bostrom’s paper (2) seems unlikely to me.
Why such a quick turn around?
Well, if a civilization becomes so advanced that it can create simulations, then it is highly unlikely that it has not thought of the simulation argument. If it has thought of the simulation argument then, presuming the people are anything like us, they will want to create the simulations as a test of the simulation argument.
That is, the desire to test (1) makes (2) unlikely.
Now, how likely is it that such people will want to test (1)?
This is admittedly very rough, but (2) seems unlikely to me because such a civilization has already progressed to the level of being able to create a simulation. This means that it has extensive knowledge of the workings of the universe. Currently, I cannot imagine how this could be done without some interest in the scientific process, that is, the desire to test hypotheses.
Now, of course this is a statement about my imagination, not reality. However, it still seems that my subjective estimate of the probability of (2) should be low unless I encounter some evidence that makes it seem more feasible that a sufficiently advanced civilization will not want to test (1).
So as of right now it seems that either (1) or (3) is most likely.
(1) Of course this comes with the caveat “that I now know of,” since it is obviously possible, indeed likely, that I am missing out on even more interesting arguments. Do not hesitate to post them if you know them.
I’ve heard Robert Wright mention on occasion the idea that we are most likely living in a simulation. I’ve only heard the idea in passing but as far as I can tell it goes like this:
At some point in time it will become possible to run a full-scale simulation of our region of the universe. After that time many simulations will be created. As a result, most of the people who exist will exist in simulations. Thus, odds are we are in a simulation.
Now again, I haven’t read the formal description of this argument but the problem for me was simply: why would someone want to construct a bunch of very detailed simulations of the past?
That is, since we can’t now create such simulations, we are imagining that in the “real world” it is much later than it is now. What then is the reason for lots of careful simulation of history? Maybe one such simulation I could see, but then that gives us at most a 50-50 chance of living in a simulation.
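The counting step behind “odds are we are in a simulation” can be sketched in a few lines. The number of simulations per real history is, of course, a made-up parameter; the single-simulation case is the 50-50 ceiling just mentioned:

```python
# Toy version of the counting argument: n simulated histories per real one.
def p_simulated(n_simulations_per_real_history):
    # Observers are split between 1 real history and n simulated ones,
    # so the chance a given observer is simulated is n / (n + 1).
    total_observers = 1 + n_simulations_per_real_history
    return n_simulations_per_real_history / total_observers

for n in (1, 10, 1000):
    print(n, round(p_simulated(n), 3))  # 0.5, 0.909, 0.999
```

As the number of simulations grows, the probability of being in one approaches certainty, which is why the argument turns on whether many simulations would ever be run at all.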
However, I’ve also been reading Eliezer Yudkowsky. So here is my far-fetched reasoning for some more ways we could be in a simulation.
At some point in time someone will create a self-improving artificial intelligence, hereafter known simply as AI.
Once this AI is created it will become incredibly powerful and its “desires” will determine the future of humanity. As a result it is necessary that the first AI be Friendly. That is, that the AI implement a world that is desirable for humans. Note that it isn’t enough for the AI not to be “mean.” It needs to actively try to create situations that are good for us because otherwise the AI is likely to hurt us by accident.
So far this is standard Yudkowsky.
Here is where I come in. In order for the AI to do this it has to know what ALL the consequences of its actions are. That means that in some sense it must simulate the future it will produce.
Moreover, because it is important that the very first AI be friendly, it is likely that the people working to develop it would not tell anyone about it. The creators would be afraid that some other untrustworthy group would use the knowledge to create un-Friendly AI.
So, perhaps someone has already created AI in the past. In which case we are the future of the AI. The AI needs to simulate its future and so is simulating us.
By now most of us accept that people are dealt different hands by nature. However, much of the commentariat speaks as if on some level it all balances out. Strengths in one area matched by weaknesses in another. Or at the very least the deck is shuffled so that the beautiful are no more likely to be brilliant. The naturally happy are no more likely to strike it rich.
Sadly nature is more cruel than that. All men are made of crooked timber but some timbers are more crooked than others.
Eric Barker has the goods:
[Criminals] are more likely to be ugly:
Being very attractive reduces a young adult’s propensity for criminal activity and being unattractive increases it. Being very attractive is also positively associated with wages and with adult vocabulary test scores, which implies that beauty may have an impact on human capital formation. The results suggest that a labor market penalty provides a direct incentive for unattractive individuals toward criminal activity. The level of beauty in high school is associated with criminal propensity seven to eight years later, which seems to be due to the impact of beauty in high school on human capital formation, although this avenue seems to be effective for females only.
A new paper by David Card, Martin Dooley, and Abigail Payne looks at Ontario’s unique public school system, which includes secular public schools and Catholic public schools open only to Catholics, to estimate the impacts of school choice and school competition:
For non-Catholics, the Ontario system functions like a typical public system in the U.S. with a single monopoly provider. For the 40% of children with Catholic backgrounds, however, the system is effectively a voucher program with two competing suppliers. Although choice is limited to Catholics, the financial incentives to compete for Catholic students potentially impact the quality of schooling for all students. Our goal is to measure the effects of these incentives using standardized student test score gains between 3rd and 6th grade.
They find that a more competitive school market, where a higher percentage of students are Catholic, leads to better performance in both Catholic and secular public schools. They estimate that if school choice were made available to all students, rather than just the Catholic students, overall 6th grade standardized test scores would increase by 6-8% of a standard deviation. These results reinforce the idea that evaluations of charter schools and voucher programs shouldn’t just compare the performance of students admitted to the programs to those that aren’t, since the main impact may be to increase test scores throughout the entire school system.
Robin Hanson contrasts the public’s demand for health care regardless of the evidence of its efficacy with the public’s demand for grocery stores and “car entitlements”:
If we were considering a vast new grocery store or car entitlement, the public would hardly “forget” to wonder if that would really give us more nutrition or a faster commute. But the US public has little religious-style fervor on grocery stores or cars.
If only it were so, Robin! The public doesn’t worry about whether a new grocery store will provide them more nutrition, but whether it will provide them with organic foods. In the same way that the public assumes more health care means more health, they assume organic foods provide more nutrition despite the fact that all evidence suggests that they don’t. The fervor for unproven food policies is at least as religious as it is for health care.
Also, I’m not sure what “car entitlements” are, but I’m pretty sure that the public would just want to know how many “green jobs” they would create.
One of the things that was striking about the recent diavlog between Razib Khan and Eliezer Yudkowsky is that Razib seemed intent on bringing up the issue of race but then didn’t do a lot with it. Razib’s views on race have generated controversy in the past and in my opinion have tended to distract from other important arguments he has made.
My question for readers is this: What are your thoughts on race realism? How important is it? Should serious discussions of race be downplayed or played up?
UPDATE: Study does have several controls including importantly mother’s education and BMI. It also attempts to measure intra-family effects. My previous analysis is invalid.
Television viewing may be a sedentary activity, but it is not for that reason that it is associated with obesity in children. The relationship between television viewing and obesity among children is limited to commercial television viewing and probably operates through the effect of advertising obesogenic foods on television.
There is a well known correlation between TV watching and childhood obesity. The study seeks to control for activity level and the type of TV watched by using time-diaries. Their results show that the type of TV watched is a predictor of childhood obesity but activity level is not. Kids who watched commercial-free television were less likely to be obese.
However, as far as I can tell they make no attempt to control for exogenous family characteristics. No socioeconomic factors. No family dummies. No randomization.
The simplest story then, is that parents who are conscientious are going to both monitor what their children watch on TV and what they eat. I, of course, would be sympathetic to more complex stories involving backwards causation from obesity to type of TV watching.
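To see why the conscientiousness story worried me, here is a toy simulation (my own sketch with made-up effect sizes, not anything from the study): give parents an unobserved conscientiousness trait that lowers both commercial TV time and obesity risk, and the two outcomes correlate even though neither causes the other.

```python
# Toy illustration of confounding (not from the study): an unobserved parental
# "conscientiousness" trait drives both commercial-TV time and obesity risk,
# producing a correlation with no direct causal link between the two.
import random

random.seed(0)
commercial_tv, obesity = [], []
for _ in range(10_000):
    conscientious = random.gauss(0, 1)               # unobserved parent trait
    tv = -0.7 * conscientious + random.gauss(0, 1)   # less monitoring -> more commercial TV
    fat = -0.7 * conscientious + random.gauss(0, 1)  # less monitoring -> worse diet
    commercial_tv.append(tv)
    obesity.append(fat)

# Sample correlation between the two outcomes
n = len(commercial_tv)
mx = sum(commercial_tv) / n
my = sum(obesity) / n
cov = sum((x - mx) * (y - my) for x, y in zip(commercial_tv, obesity)) / n
sx = (sum((x - mx) ** 2 for x in commercial_tv) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in obesity) / n) ** 0.5
print(round(cov / (sx * sy), 2))  # clearly positive despite no causal link
```

This is exactly the pattern that controls for mother's education and BMI, noted in the update above, are meant to address.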
When looking at childhood obesity we need to keep a few facts in the forefront of our minds. First, children and parents share the same genes and given the results of previous research we should be very conscious of potential genetic links between parent and child life outcomes.
Second, human personality is to a large extent determined by the interaction of the genes with the social hierarchy. In some sense we can think of much of human behavior as the genes doing the best they can with what they got, in an effort to climb the social ladder.
Obesity has strong social effects. Thus we should expect it to have strong effects on behavior. A fat kid is not going to face the same social incentive structure as a thin kid. A fat parent is not going to face the same social incentive structure as a thin parent. And a parent of a fat kid is not going to face the same social incentive structure as the parent of a thin kid.
Thus we should expect obesity in children to have strong effects on how the parents see themselves and how they interact with their children.
Arnold Kling likes to claim that economists underestimate the prevalence of price discrimination, while I have disagreed, arguing that economists tend to see price discrimination when costs are actually driving different prices for the same good. But I have a hard time coming up with a cost based explanation for this example, from Eric Barker, of price discrimination against poor people because their lack of cars prevents them from shopping around:
Even after controlling for store size and competition, prices are found to be 2%–5% higher in poor areas. It also finds that it is not the poverty level per se but access to cars that acts as a key determinant of consumers’ price search patterns.
When I read interesting literature like this on actual real world price setting behavior, much of it tends to be from marketing science rather than from economists. This is another great example… granted, though, they are using the Stiglitz-Salop model of price dispersion, so you’ve got to give that, at least, to economists.
Occasionally I will read Paul Krugman lament that Obama is not liberal enough, or someone at the National Review complain that he is too radical a liberal, and I am reminded to appreciate his moderation in the face of such critics. This was my reaction to a column by Harold Meyerson in the Washington Post who argues, believe it or not, that Obama has not been friendly enough to unions.
I saw the bailing out of G.M. and Chrysler, and the administration’s willingness to exempt union members’ health plans from higher taxes, as being very friendly to unions. Meyerson makes a persuasive case, though, that things have not been so great for unions thus far under Obama’s tenure:
Labor’s primary priority — the Employee Free Choice Act (EFCA) — died when the Democrats lost their 60-vote majority in the Senate. Labor’s normal priority — a functioning National Labor Relations Board — also seems out of reach… Other key legislation for which labor has lobbied, including health-care reform and financial regulations, languishes in the Senate.
True, these are all things the unions wanted, and it even seemed like they might get them. He further worries that the decline of private sector unions will lead to the decline of public sector unionization:
What will life be like in an America with almost no private-sector unions or collective bargaining? … a deunionized private sector won’t readily support — politically or economically — a unionized or expansive public sector.
I agree that this is probably true, but, unlike Meyerson, I see it as a benefit to the decline of private sector unions. One man’s cost is another man’s benefit I suppose.
Marc Ambinder has some interesting pieces up at the Atlantic on the First Lady’s efforts to combat childhood obesity. I obviously have a lot to say on that but am short on time. In the meantime I will point you to Robert Lustig’s Nature article on childhood obesity, which was a watershed for me personally.
The article is a bit dense for those without a science background, nonetheless, it is extremely valuable. I recommend opening Lustig in one window and Wikipedia in another and just making the hard slog.
Here is the abstract:
Childhood obesity has become epidemic over the past 30 years. The First Law of Thermodynamics is routinely interpreted to imply that weight gain is secondary to increased caloric intake and/or decreased energy expenditure, two behaviors that have been documented during this interval; nonetheless, lifestyle interventions are notoriously ineffective at promoting weight loss. Obesity is characterized by hyperinsulinemia. Although hyperinsulinemia is usually thought to be secondary to obesity, it can instead be primary, due to autonomic dysfunction. Obesity is also a state of leptin resistance, in which defective leptin signal transduction promotes excess energy intake, to maintain normal energy expenditure. Insulin and leptin share a common central signaling pathway, and it seems that insulin functions as an endogenous leptin antagonist. Suppressing insulin ameliorates leptin resistance, with ensuing reduction of caloric intake, increased spontaneous activity, and improved quality of life. Hyperinsulinemia also interferes with dopamine clearance in the ventral tegmental area and nucleus accumbens, promoting increased food reward. Accordingly, the First Law of Thermodynamics can be reinterpreted, such that the behaviors of increased caloric intake and decreased energy expenditure are secondary to obligate weight gain. This weight gain is driven by the hyperinsulinemic state, through three mechanisms: energy partitioning into adipose tissue; interference with leptin signal transduction; and interference with extinction of the hedonic response to food.
I don’t agree with all of Lustig’s conclusions as written in the article. This is in part because the accurate answer to some of the questions is: “I really have no idea and neither does anyone else.” Yet, that’s hard to get published.
Would legalizing drugs be good for economic growth? This is one of the suggestions on Richard Posner’s list of sensible actions the government could take to stimulate economic growth without increasing the deficit. Daniel Indiviglio finds Posner’s proposal to increase economic growth by legalizing drugs unconvincing, but I think Daniel’s critique is pretty far off base, and really misses the key issues.
Regarding the impact on federal and state budgets, Daniel writes:
“In 2010, federal prisons cost taxpayers $6.2 billion… And, as Posner mentions, drug-related crimes account for about half of the federal prisoners. So really, the effect would be only half of that tiny reduction, and again, there are surely some drug laws you’d want to remain intact. So the portion would be even smaller… Assuming similar costs on the state level, that would spread a maximum cost-savings of $7.8 billion over the 50 states. Again, that number is not insignificant, but it wouldn’t have a dramatic impact on state budgets.”
As I pointed out in the comments section to his post, this would be an accurate assessment of government spending on drug prohibition if all drug law violators were voluntarily turning themselves in at the prison gates with signed statements confessing their crimes and waiving their rights to a fair trial. Unfortunately, people are sneaky when they break the law, which means considerable numbers of state, local, and federal police officers are required to find them, apprehend them, and arrest and detain them; after which a team of judges, prosecutors, public defenders, and jurors have to show up and do their thing before you can put them in jail. And all that is before the first appeal.
Daniel points out that if there’s less stuff for the police -and I presume he would include judges, lawyers, and juries- to do, then there will be fewer jobs, which will increase unemployment, which is a bad thing during a recession. First off, there’s plenty of other crime out there to be prevented, and some of it’s probably even the kind of crime that, unlike voluntary drug use, actually does harm to a person or a person’s property against their will. Freeing up the criminal justice system to spend more time on the prevention and prosecution of crimes that actually have economic costs to society might even contribute to the positive economic impact of drug legalization.
But even if there were no criminal justice work for these newly redundant government employees to do, in the short run the lack of any real contribution to make certainly doesn’t mean that public servants would be laid off. They can clean parks or teach drivers ed if they’ve got nothing better to do. And in the long-run if the size of the criminal justice system is scaled back to reflect the decline in “crime”, this just allows governments to shift resources to higher value uses or lower taxes; you know, the kind of stuff that increases economic growth.
Indiviglio goes on in the comments to claim that if you only legalize marijuana it won’t matter much:
“I don’t think that much is really spent to prevent marijuana use, for example, compared to cocaine or heroin. (And if it is, then that’s just silly.)…The same applies to the violence. I don’t think there are too many marijuana-related crimes. So unless you’re talking about legalizing the really hard stuff, then I don’t see much decline in this variable either.”
I’m not sure why the hardness of the drug should be related to the violence associated with the production of it; the prohibition of alcohol, for instance, caused a good deal of violence from 1920-1933. For drug producers, violence is a means to settle disputes, fight for turf, and prevent market entry. Any drug that creates a stream of revenue that needs protecting by illegal means will generate violence. According to a recent Wall Street Journal article, marijuana accounts for 50% to 65% of Mexican cartel revenues. I don’t see any reason why the drug that provides a “steady source of income that allows cartels to meet payroll and fund other activities” wouldn’t be an important cause of drug related violence, especially since it contributes significantly to the employment of the people who create said violence.
As to whether significant resources are spent by law enforcement on marijuana prohibition, this is unquestionable. According to the FBI, in 2008 12.2% of all arrests were for drug violations; more any other category of crime. Of the 1,702,537 drug violation arrests, 82.3% were for possession, and more than half of those were for marijuana possession. There were more than twice as many arrests for marijuana possession than there were for the sale or manufacture of all drug types combined.
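The arithmetic behind that last comparison follows directly from the figures quoted above:

```python
# Checking the arrest comparison using only the figures cited above
# (2008 FBI totals as quoted in the post).
total_drug_arrests = 1_702_537                       # all drug violation arrests
possession = 0.823 * total_drug_arrests              # 82.3% were for possession
sale_or_manufacture = total_drug_arrests - possession
marijuana_possession = 0.5 * possession              # "more than half" -- a lower bound

# Even this lower bound exceeds twice all sale/manufacture arrests combined
print(round(marijuana_possession), round(2 * sale_or_manufacture))
```

Roughly 700,000 marijuana possession arrests against about 300,000 for the sale or manufacture of all drugs combined, so the claim holds even at the lower bound.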
Finally, Daniel worries about what all the prisoners are going to do for jobs when they get out of jail. I would remind Daniel that if he’s so worried about how unproductive released prisoners are then he should be all the more concerned about policies that every year turn more and more people into future released prisoners.
Andrew Sullivan is smitten:
A simplified tax code, consisting of a two-bracket income tax with a large standard deduction and a business consumption tax, would pay for a means-tested safety net, and a system of tax credits, risk pools and low-income subsidies would underwrite a free (or, well, somewhat freer) market in health care. In other words, Ryan would balance our books by shifting away from programs that shuffle money around within the middle and upper-middle classes — taking tax dollars with one hand and giving health-insurance deductions, college-tuition credits, home-mortgage deductions, Social Security checks and so forth with the other — and toward programs that tax the majority of Americans to fund means-tested support for the old, the sick, and the poor.
Paul Ryan is my new hero.
There is a potentially large libertarian downside as well: the creation of extremely high marginal tax rates at low income levels. Ryan suggests that we means test everything. From the point of view of a household though, means testing is no different from marginal taxation. Every extra dollar you earn reduces the amount of government benefits you receive and so the return from working or investing is muted.
Means-testing can be particularly sinister because the tax rate is not obvious. The result is that effective marginal tax rates could climb to 100% or more. I tend to think that the elasticity of labor supply is pretty low and so the effects of marginal taxation can be easily exaggerated. Still, marginal rates approaching 100% are going to have serious incentive effects.
In addition, even more reasonable but very high marginal tax rates can have strong disincentive effects for the poor. High enough effective marginal tax rates and entering the underground economy becomes profitable, investing in education becomes unprofitable, etc.
Mechanically, means-testing lots of credits is little different than giving everyone the credit and then paying for it with high income tax rates. In some ways explicitly high tax rates are preferable because they are transparent.
Further still, there is not much difference between a tax credit and a voucher system. There can be technical details between means tested credits, universal credits funded by marginal taxation, and universal vouchers funded by marginal taxation but the basic incentive effects are all the same.
Under a Ryanesque approach the accounting will show that government spending is smaller and taxes are smaller. Under a universal voucher approach the accounting will show that government spending is huge and taxes are huge. But the basic economic effects are exactly the same. We should not get lost in the accounting and think that we have done something special because we get different numbers for “taxes” and “spending.” What matters is the effect government has on the economy.
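To make the incentive point concrete, here is a toy calculation in which every benefit amount and phase-out rate is hypothetical: stack a 15% statutory rate with three means-tested credits that each phase out at 30 cents per dollar earned, and the effective marginal rate at low earnings exceeds 100%.

```python
# Toy illustration of how means-testing stacks into effective marginal rates.
# Every benefit amount and phase-out rate here is hypothetical.
def net_income(earnings: float) -> float:
    statutory_tax = 0.15 * earnings
    # Three means-tested credits, each phasing out at 30 cents per dollar earned
    health_credit = max(0.0, 5_000 - 0.30 * earnings)
    housing_credit = max(0.0, 4_000 - 0.30 * earnings)
    food_credit = max(0.0, 3_000 - 0.30 * earnings)
    return earnings - statutory_tax + health_credit + housing_credit + food_credit

# Effective marginal rate on one extra dollar earned at $5,000
extra = net_income(5_001) - net_income(5_000)
print(round(1 - extra, 2))  # 0.15 statutory + 3 * 0.30 in phase-outs = 1.05
```

The household literally loses money by earning the extra dollar, even though no single program looks punitive on its own.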
As usual Jeff Miller is indispensable. The money quote:
Anyone who does not understand and discuss the "imputation step" as part of the BLS job creation process is not a true expert. You should ignore that source.
You don’t want to misunderstand the “imputation step” do you? Read the whole thing.
The New York Times is reporting that Wall Street is sending more cash to the GOP this year than Democrats. It seems that at least one branch of the East Coast Elite is having second thoughts. Perhaps Nancy Pelosi should send them a gentle reminder on the 2008 TARP vote.
Pretty quickly, it seems. According to a new study, schools spent 38% of their stimulus money in the ’08-’09 school year, 48% this year, and have 14% left over for next year. From a Keynesian perspective, that seems like a pretty decent pace to spend $100 billion. Some states, in fact, managed to spend 100% of their education stimulus money “immediately”. Even better, some of the money that was tied to reforms, like the $4.3 billion Race to the Top program, will probably pay many future dividends as well.
The downside of spending the money so quickly is many states and districts are once again facing a budget crunch, this time without stimulus money to bail them out. If there’s going to be another stimulus bill- oh wait, I’m sorry, that’s “jobs bill”- then it seems like education money tied to reforms could provide good short-term and long-term returns. How about, instead of $4.3 billion out of $100 billion conditional on reforms, making the next round 100% conditional on reforms? What kind of reforms could $3 billion per state buy?
Over at The Atlantic Business Channel, Daniel Indiviglio tells Virginia Postrel that just because the marginal cost of producing e-books is close to zero does not mean that the price should be close to zero. He correctly points out that price only equals marginal cost when markets are perfectly competitive, and the book market is not perfectly competitive. But Postrel didn’t claim price should equal marginal cost, which you can see in her discussion of the importance of demand:
The other side of the equation is consumer response: How many more copies will people buy if the price goes down? Or, in economic lingo, what is the price elasticity of demand?
Her point is that since price elasticity is large the price should be closer to the marginal cost, because you sell a lot more books by lowering the price. What Indiviglio seems to miss is that if Postrel thought that the market were perfectly competitive, she wouldn’t be discussing the price elasticity of demand, since in perfectly competitive markets producers are price takers, i.e. the price is set by the market, which means that from the book seller’s perspective the price elasticity of demand would in fact be infinite.
Indiviglio’s point about price not being equal to marginal cost is an important one to remember though. Even if books could be made at no cost, authors would write them for the publishers for free, and there were no uncertainty about which books would be popular, the price of books would not equal zero.
From Arnold Kling:
Take a look at the top ten countries in the economic freedom index. Then look at the top ten countries by population. The United States is the only country in both lists, and at the rate things are going we will not be in the top ten in terms of economic freedom much longer.
My reading is that there are serious diseconomies of scale in governance. The larger the polity, the worse the ability to govern.
What do the citizens of the state of Massachusetts gain by being a part of the United States? My guess is that the sovereign nation of Massachusetts would greatly outperform the U.S. in policy and economic growth.
The authorities do not know exactly how many people have been killed warbling “My Way” in karaoke bars over the years in the Philippines, or how many fatal fights it has fueled. But the news media have recorded at least half a dozen victims in the past decade and includes them in a subcategory of crime dubbed the “My Way Killings.”
That is from a very entertaining and interesting story in the New York Times today that is ostensibly about the “My Way Killings” in the Philippines, but really covers the worldwide phenomenon of karaoke related violence and also the culture of the Philippines, and all in an amazingly short space of words too.
It goes without saying these murders are all very sad and tragic, and that these crimes are all disturbing… But you can’t help finding some twisted humor in the absurdity of John Denver driving someone into a murderous rage:
Karaoke-related killings are not limited to the Philippines. In the past two years alone, a Malaysian man was fatally stabbed for hogging the microphone at a bar and a Thai man killed eight of his neighbors in a rage after they sang John Denver’s “Take Me Home, Country Roads.” Karaoke-related assaults have also occurred in the United States, including at a Seattle bar where a woman punched a man for singing Coldplay’s “Yellow” after criticizing his version.
In a previous post on the efficient market hypothesis, I sympathized with Scott Sumner’s overly demanding definition of identifying a bubble, because his definition prevents people from taking credit for spotting a bubble an implausible number of years before it popped. The example I gave was Dean Baker, who I recalled as predicting the current housing bubble in 1948. Dean corrected me in the comments, reminding me that it was in fact 2002 when he “called” the bubble, and he links to the paper where he made the call.
I strongly suggest that anyone who harbors a notion that Dean called the bubble read his paper, from August of 2002, and consider whether he was calling the housing bubble that popped in 2006, or whether he was calling a housing bubble that hadn’t begun in any appreciable way.
Why don’t I think Dean actually called the bubble? First, in his paper Dean specifically refers to the period of time before we can expect prices to fall as “months”. In his conclusion, he sums up his predictions like this:
…it is likely that the HPI will follow the rent index in the months ahead, first showing considerably slower growth. In later months, it is likely that the HPI will fall in real terms, and possibly in nominal terms, until it is back near its pre-bubble position relative to the rent index.
He is referring here to slower growth in the “months ahead” and a fall in the house price index in “later months”. In fact, house prices didn’t peak until the second quarter of 2006, four years or 48 months after Dean wrote this paper. From his choice of words here he makes it pretty clear that he did not have anything like that in mind.
Another problem is that national house prices are still above where they were when Dean spotted a “bubble”. According to the most recent Case-Shiller house price indices, the U.S. national house price average is now where it was in late 2003, a year after Dean claimed that house prices were going to fall 11% to 22%. Prices falling well below where they were when you cried bubble is a bare minimum requirement for calling a bubble. Dean has not met this very minimal requirement.
You might think prices are going to fall another 25% and prove Dean right, but prices would need to fall much more than that. If Dean was correct that there was an 11% to 22% bubble in 2002, then after the huge buildup in housing supply that occurred from 2002 to 2006, you’d expect the eventual fall in prices to bring us much lower than 11% to 22% below the 2002 price level. To put it simply, if I said “the supply of gold right now is too high to support the prices we are seeing, prices are going to fall 10% from here”, and then the supply of gold quadruples, prices need to fall more than 10% to prove me correct. This is even more true when you consider that the popping of a huge housing bubble is more destructive than the popping of a small housing bubble, so prices should be lower than Dean forecast due to a much larger destruction of wealth than he expected.
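The gold example is easy to make concrete. With a toy constant-elasticity demand curve (the elasticity value is arbitrary, chosen only for illustration), a quadrupling of supply pushes the clearing price far below the original 10% call:

```python
# Toy illustration of the gold analogy: demand is fixed at Q = A * P^(-e),
# so a larger supply must clear at a lower price. Elasticity is arbitrary.
def clearing_price(supply: float, a: float = 100.0, elasticity: float = 2.0) -> float:
    """Price at which quantity demanded equals the given supply."""
    return (a / supply) ** (1 / elasticity)

p0 = clearing_price(100.0)        # initial price, normalized to 1.0
p_called = 0.90 * p0              # the "prices will fall 10%" call
p_after = clearing_price(400.0)   # supply quadruples before the fall arrives
print(p0, round(p_called, 2), round(p_after, 2))
```

Under these made-up numbers the eventual fall is 50%, not 10%: the original call has to be judged against the supply that accumulated after it was made.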
Dean’s a nice guy and all, but I think it really is inaccurate to characterize him as having called this bubble.
Bryan Caplan notices that during a blizzard the name brand items sell out ten times faster than the same item from a generic brand. This puzzles him:
“Of course the most popular stuff sells out first.” But that’s a feeble explanation. After all, if X is ten times more popular than Y, then you’d expect stores to simply carry ten times as much X as Y. Why would X sell out faster in a blizzard if stores have already taken its greater popularity into account?
The underlying question seems to be, why doesn’t shelf allocation better reflect actual demand? I can think of a couple reasons that this is the case.
The driving factor here is that the amount of shelf space devoted to a product has an impact on its sales. In marketing science this influence is referred to as space elasticity. One reason this occurs is exactly what Bryan observed: the more shelf space a product has, the less likely it is to be out of stock when you want it. More important than this, though, is that more product space gets people’s attention, and probably signals something about the demand for that product. If there’s a lot of space devoted to a product that means a lot of people buy it, which influences people via social proof. There’s an interesting behavioral story here about why space influences demand, but for now suffice it to say that it does.
This explains why shelf space matters to sales of individual products, but why would a grocery store, which is concerned with overall sales, care about relative sales of name brand vs generic products? One reason is that many grocery stores have their own generic brands that they want to sell. Another reason is that manufacturers care about how their products are displayed on the shelf, and often make explicit agreements about such things, including paying the grocery store for better shelf space allocation. In some sections of some stores, I am told, they let the manufacturers determine how their products will be displayed on the shelf, so that manufacturers are essentially leasing shelf space.
The overall reason why demand alone does not determine shelf space is that shelf space influences demand, so that manufacturers have incentive to bargain to influence shelf space.
Much of the disagreement about the truth or falsity of the efficient market hypothesis seems to stem from a disagreement about what constitutes identifiable. Scott Sumner, and a lot of other EMH proponents, define bubbles as identifiable only if the method of identification can reliably be profited from. Ryan Avent, Andrei Shleifer, and other EMH critics, argue that because of the risk in betting against a bubble, and the lack of perfect capital markets, one could have a relatively certain belief that a bubble exists but not be able to profit from it. A third route is to agree with Sumner that bubbles are identifiable only if they can reliably be profited from, and to argue that people do reliably profit from them, thus EMH is false.
I think this is part of the reason Sumner is frustrated by the anti-EMH crowd: he isn’t distinguishing between those that identify profitability as sufficient proof from those who don’t, so he is faced with some people arguing that “people reliably profit off of bubbles, therefore EMH is false!” and others arguing that “market irrationality prevents reliably profiting off of bubbles, and EMH is still false”. There may be people who hold both of these contradictory points at the same time, and they are wrong. But the existence of two separate arguments against EMH that happen to contradict each other is not evidence against either theory or for EMH.
Aside from the insistence on profitability, the other problem I have with Sumner’s definition of identification is the claim that identifying one bubble is not enough: you must also be able to identify all of them. This is a very econometrician-centric way of thinking about identification: you get a particular model of asset prices and estimate a set of parameters, which you can then use to forecast. If your forecast is not reliable, then your model or your parameters are wrong, and your identification is false. But why should a model that can identify a housing market bubble be capable of identifying a tulip bubble or a tech bubble? For that matter, why should an individual with a particular set of knowledge that has allowed him to identify a bubble in one market be held to the burden of proof of identifying the next bubble, which will likely be in a completely different market of which he has no knowledge? Take Calculated Risk, for instance. He clearly and specifically identified the housing bubble using his extensive knowledge of the entire housing industry- from realtors to securitizers. Are we really going to demand that he identify the next bubble in gold, oil, Beanie Babies, or some other market he knows nothing about before we accept that he identified the housing bubble? I see no reason to expect this to be true. Nor do I see any reason to expect that all bubbles be equally identifiable. Using Sumner’s criteria, I’m not sure how you tell the difference between a world in which 1/5 bubbles are identifiable and a world in which 0/5 bubbles are identifiable.
Look, I’m not coming down strongly on one side of this debate- my instinct actually tends to be that, in general, you can’t beat market prices, and we should have humility about disagreeing with them. I also understand why a definition of bubble identification that does not include a higher burden of proof is unsatisfactory; it lends itself to people like Dean Baker claiming they identified the current housing bubble in, I think, 1948. Nonetheless, I’ve obviously got disagreements with the way Sumner and other EMH proponents define bubble identification. I’m still open to persuasion on this.
In my view, the question is not whether you like vouchers or not. Vouchers are inevitable, given the alternatives. Alternative 1 is to keep what we have, which is an open-ended commitment to reimburse health care providers for all procedures performed on people over the age of 65. That is not feasible–the budget blows up. Alternative 2 is to have government impose strong rationing of medical services to seniors. I think that is an unlikely alternative. It’s not just that I think that government would do a poor job. When it comes down to it, do politicians really want to be put in that position?
I am strongly sympathetic to this view. On one level vouchers do seem like the most realistic form of rationing. But I wonder whether they are politically stable.
Are we not just committing to a regime in which voucher sizes are constantly raised? Won’t political ads in 2050 simply say: “The biggest problem facing America today is the failure of vouchers to keep up with insurance rates, leaving millions of hard working seniors without life saving care”?
I just don’t see how the government can credibly commit to low voucher levels. The government can’t even credibly commit to lower reimbursement rates for MDs. Some way or another we have to cut off excess price growth at the source and I am unconvinced that vouchers will do it.