You are currently browsing the category archive for the ‘Science’ category.
I don’t know enough about the science to tell whether this is a significant step forward for neural-interface systems in general, or just for their specific potential to aid paralyzed individuals, but this report from the NYT is encouraging in any case:
Two people who are virtually paralyzed from the neck down have learned to manipulate a robotic arm with just their thoughts, using it to reach out and grab objects. One of them, a woman, was able to retrieve a bottle containing coffee and drink it from a straw — the first time she had served herself since her stroke 15 years earlier, scientists reported on Wednesday….
Scientists have predicted for years that this brain-computer connection would one day allow people with injuries to the brain and spinal cord to live more independent lives. Previously, researchers had shown that humans could learn to move a computer cursor with their thoughts, and that monkeys could manipulate a robotic arm.
The technology is not yet ready for use outside the lab, experts said, but the new study is an important step forward, providing dramatic evidence that brain-controlled prosthetics are within reach.
I think there is a non-trivial probability that future computer interfaces using only our minds will be popular (I don’t think whether it will be possible is much of a question anymore). I’m not sure whether this will replace current inputs entirely, as the telephone did Morse code, or just complement them, as the mouse did the keyboard. In either case I think the adoption of such technology will go hand-in-hand with the continuing integration of brains and computers. Psychologically, I think controlling computers with your mind will make computer memories feel much more like actual memories, and will blur the line between the two further. After all, having no manual inputs means the entire process will occur internally, without the appearance of moving parts in the real world: simply think what you want to know, and have what you want to know appear floating in front of your face (augmented reality). I predict this will feel very different from even just waving your hands in the air Minority Report style.
ADDED: Mark Thoma has what looks like a very interesting video on all of this from the Milken Institute. Fast forward to around the 50 minute mark for some amazing footage.
ADDED II: From MathDR in the comments, apparently Doc Oc will be real some day:
I spoke with the DARPA program manager regarding this project (this was a while ago) and I remember his statement about the impact of this research: Basically when doing experiments on monkeys with no impairment of limbs (they constrained the monkey arm with a sling to inhibit movement and force it to use the robotic arm), the question was what would happen when the sling was removed.
The monkey responded by utilizing both of its arms normally AND the robotic arm as a THIRD arm. This implies (extending to humans) that we would be able to *extend* our anatomy to multiple appendages and maybe even other toolsets (surgical tools on the end of appendages, etc.)
Luckily I’ve mostly resisted the siren call of the Googleverse aside from their search engine. But I guess I’m starting to think that I should be more careful even there. Am I just being paranoid? Or should I start using some kind of add-on to prevent Google from tracking my activity? What says the hive mind?
Maybe there is something I am missing here but Google tracking my – and everyone else’s – activity seems awesome.
Now, when it comes to the government I can grant that there are some grand issues associated with privacy and the potential of authorities to abuse that information. Though, if we were to be honest I am not sure this is a serious practical concern.
Not, to be sure, because I don’t think the government would spy on us at every chance if it could. No, instead because for most practical purposes the information it collects is of little value.
My guess – and again I welcome Julian Sanchez to say why I am wrong – is that this extends from a basic instinctual drive for privacy which in turn extends from concerns about being ostracized from the larger group for abnormal behavior.
However, in modern America this is not really a threat. There is virtually no odd behavior that does not have an associated MeetUp.com group.
Now that being said there are particular types of information that will be potentially hurtful or embarrassing unless or until we readopt an extremely strict set of social conventions.
However, if Google breaks that information open, my guess is that you will indeed see a re-emergence of conventions that explicitly quarantine vice from the rest of professional and personal life.
In any case my core point is that it’s not possible to have a society in which the majority of people have their privacy regularly and publicly violated. Such regular violation will simply alter the terms on which people judge each other and therefore what is meaningfully private.
So in theory I am working towards a larger exposition on this issue, but as I get the chance I will shoot out various notes. Barry Ritholtz gives me such a chance.
In other words, yes, we can figure out actual causation.
Where I part ways with Hume is in looking at causation analysis as merely competing stories tying two facts together. It is larger than that, there is Causation-in-Fact. At the very least, we can eliminate the narratives that are demonstrably false. But to do so, we need to avoid the over simplifications, the correlation errors, the misapplied data, and recognize the complexity of causation in the world of finance.
Suppose you had figured out actual causation. How would you know?
I never asserted so absurd a proposition as that something could arise without a cause.
I on the other hand will assert something so absurd.
This is motivated by a number of posts I have seen on the methods, limitations and philosophy of science. However, before responding to those individually I think it’s worth getting out my general view.
My sense is that reality is like a giant block. This block extends in a number of dimensions but four are most relevant for our purposes. The three easily observable spatial dimensions and the time dimension.
At different points within the block some properties of the block vary. These are the fundamental fields. They give the block a composition.
Now, as it happens, the composition of the block is patterned, so that a given value for properties at one point in the block means that a range of values at nearby points is more likely than other values.
In our everyday experience this allows us to perceive objects. So if I look out and say “I have an arm” I am talking about a pattern in the block.
For example, I mean that if I can only perceive a portion of the block composing the “arm,” I nonetheless have some pretty strong guesses about what the nearby portion of the block is like. If I then perceive the nearby portions and they do not match my guess, this is surprising. It’s surprising because of the patterns.
We can call a specific pattern a shape, so that I can say my arm has a shape. Alright.
That’s all very natural in the spatial dimensions but the same thing happens in the time dimension. Objects have a shape in time.
The difference is that I perceive space and time in different ways. This possibly extends from the fact that time has a useful meta-pattern, but that is not fully clear.
In any case reality is completely described by the shape of all objects in space-time. This is because the shape of objects in space-time indicates the patterns in the block properties and the block and its properties are the whole of reality.
What I think of as science (though I have no interest in claiming the word science, so we can call it sclience if people object) is building a map of space-time for the purpose of being able to look up the shape of various objects.
Now for relatively large nearby objects within a particular slice through time this is trivially easy. I open my eyes and boom! I perceive a map of objects. All sorts of fancy stuff goes on in my brain to make this happen, but from my point of view it’s really, really easy.
Unfortunately, this doesn’t work for small objects, far away objects or objects at another slice in time. Which is of course a pain.
So I go about using whatever techniques at my disposal for expanding the map.
The attempt to do this in the time dimension has led people to posit the notion of causality: that if this thing happens then something else will happen. Really though this is just a discussion of the shape of things through time.
It’s no different than saying: if I have a forearm then I have a wrist. It’s not as if there is some meta-physical relationship between forearms and wrists. And, it wouldn’t really make sense to say having a forearm causes me to have a wrist. It’s just that the shape of arms is such that forearms are usually in direct proximity to wrists.
This is the same with the shape of things through time. In time things usually have a certain shape. However, trying to apply some sort of meta-physical specialness to this doesn’t really add anything.
Similarly, it’s not clear what one really adds with the phrase “everything must have a cause.” There is some shape of things through time, sure. However, are you saying that there must be some identifiable pattern to that shape? I don’t see why that has to be true, and it’s not even clear that it is in fact true.
There could be regions of space-time where all four dimensions are completely mixed-up with no pattern at all.
So, it seems to me that there is no point in getting all emotionally caught up in this notion of causes. We map space-time as best we can and that is that.
No, in fact I am suggesting that H. Pylori is the cause of all peptic ulcers.
~ Barry Marshall 1983 just before an entire auditorium walked out on his presentation on the causes of peptic ulcers, previously believed to be the manifestation of a large number of distinct underlying conditions.
From Wikipedia 2012
A peptic ulcer, also known as PUD or peptic ulcer disease, is the most common ulcer of an area of the gastrointestinal tract that is usually acidic and thus extremely painful. It is defined as mucosal erosions equal to or greater than 0.5 cm. As many as 70–90% of such ulcers are associated with Helicobacter pylori
Kevin Drum writes
Don’t forget lead! Lead lead lead lead. When is the connection between reduced lead levels and reduced crime levels finally going to penetrate the minds of American journalists? I know it’s not sexy and I know everyone wants to ignore it because you can’t tell heroic stories about lead, but it’s almost certainly the single biggest contributor to crime reductions nationwide.
I don’t know the research behind lead and crime. However, overwhelmingly the presumption should be that epidemics have a single precipitating factor.
People are sometimes confused by the fact that complex conditions have a long list of necessary factors. However, the odds against more than one necessary factor pushing the phenomenon across the line into epidemic at the exact same time are astronomical.
Not to go too far astray, but this is why recessions likely have a single precipitating factor. They spread too fast and burn out too quickly to be multi-causal. A fifty year period of off-and-on stagnation, that might be multi-causal. An 18% collapse in industrial production over 18 months? That has a vector.
A good rule of thumb – I believe – for epidemics economic, biological or social is this: If it spreads along lines of communication, it’s information. If it travels along major transportation routes, it’s microbial. If it spreads out like a fan, it’s an arthropod. If it’s everywhere, all at once, it’s a molecule.
I should like to die of consumption . . . because the ladies will say, “look at that poor [Lord] Byron, look how interesting he looks in dying.”
If there is scientific infighting more significant than that over macro-stabilization, it is that going on inside the Psychiatric community.
What makes it particularly hard, however, is that in most pursuits we can always lean back on the notion that if we hope to make the best world, then understanding the world as it is – not as we wish it to be – is our best hope. This is not true with mental disease.
In many different ways, not just in the debate over psychotropics, the truth may increase suffering. There are times when you know that forcing someone to acknowledge their insanity is nothing less than cruel.
Gary Greenberg gives an admirably even-handed take – given what I know his beliefs to be – on the run-up to the DSM-5. Of course, he still lobs the standard grenade:
The fact that diseases can be invented (or, as with homosexuality, uninvented) and their criteria tweaked in response to social conditions is exactly what worries critics like Frances about some of the disorders proposed for the DSM-5—not only attenuated psychotic symptoms syndrome but also binge eating disorder, temper dysregulation disorder, and other “sub-threshold” diagnoses. To harness the power of medicine in service of kids with hallucinations, or compulsive overeaters, or 8-year-olds who throw frequent tantrums, is to command attention and resources for suffering that is undeniable. But it is also to increase psychiatry’s intrusion into everyday life, even as it gives us tidy names for our eternally messy problems.
It’s standard to object to the medicalization of everyday disorders. Where does it stop, people ask? Is everything a disorder?
It never stops, I answer. And, it’s a disorder if you dislike it.
The normative element in non-mental health conditions is somewhat hidden because death, disability and pain are almost universally disliked. So, if I say cancer is a disease because it kills, no one is likely to object to this. It doesn’t matter that cancer is, almost as much as anything, a part of life.
As far as we can tell no human has ever been born without the propensity to develop cancer. That people don’t die of cancer is purely a function of the fact that they die of something else before the cancer gets them.
So, why is cancer not just a part of life? Part and parcel with being a multicellular organism? The simple answer is that it causes death, disability and pain. These are widely recognized as bad and so is cancer.
What about feeling sad? To my knowledge no human has ever been born without the propensity to feel sadness. Is sadness simply part and parcel with life? The answer from my corner is, not if you don’t want to be sad.
This is the rub in all mental illness. It is the malady of not wanting to experience the world as we do. And, it raises the deepest questions about what it means to improve wellness.
I stick firmly to the notion that we improve wellness when we alter physiology to produce a preferred state of being. Preference is in the eye of the patient.
However, I do know the question “Am I sick?” has moral meaning to people. Giving a name to a condition can bring comfort or despair, even when it doesn’t change the essential experiences of the person at all.
If by chance you don’t frequent the geekier side of the twitterverse, you might have missed the outpouring of wit-in-140 that followed this post by Gene Marks.
If I was a poor black kid I would first and most importantly work to make sure I got the best grades possible. I would make it my #1 priority to be able to read sufficiently. I wouldn’t care if I was a student at the worst public middle school in the worst inner city. Even the worst have their best. And the very best students, even at the worst schools, have more opportunities. Getting good grades is the key to having more options. With good grades you can choose different, better paths. If you do poorly in school, particularly in a lousy school, you’re severely limiting the limited opportunities you have.
As it so happens I was a poor black kid, and I built a rational expectations model of the very phenomenon Marks describes.
I look back on it and I see how it was the origin of the Smithian worldview I push today.
The problem is this: You want to build a model of the choices facing a poor black kid in a bad environment. You need to sketch out a decision tree and then turn that into a choice function and then – in my case – simulate the interaction between neighborhoods and student choice on a computer.
Significant insight can be gleaned from the closed form solutions but to really watch the magic you need numerical estimation.
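The kind of simulation described above can be sketched in a few lines. Everything here – the payoffs, the true success odds, the belief-updating rule – is an illustrative assumption of mine, not a reconstruction of the actual model:

```python
import random

# Hypothetical sketch: each generation of agents chooses between a risky
# "study" path and a safe path, estimating the risky payoff from the
# success rate they observe in their own neighborhood. All parameter
# values are illustrative assumptions.

def simulate(observed_success_rate, true_success_odds=0.6,
             payoff_study=10.0, payoff_safe=4.0,
             generations=20, n_agents=100):
    rate = observed_success_rate
    for _ in range(generations):
        successes = 0
        for _ in range(n_agents):
            # Study only if the locally estimated expected payoff beats the sure thing
            if rate * payoff_study > payoff_safe:
                if random.random() < true_success_odds:
                    successes += 1
        # The next generation's beliefs come from what this one saw locally
        rate = successes / n_agents
    return rate

random.seed(0)
# A neighborhood where few visibly succeed can rationally lock in to the
# safe choice: beliefs collapse and no one studies thereafter.
print(simulate(0.1))   # collapses to 0.0
print(simulate(0.9))   # settles near the true odds
```

The point of the numerical run, rather than a closed-form solution, is that you can watch the lock-in happen: once observed success falls below the break-even belief, no one studies, so no one ever observes success again.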
In building this model one thing became glaringly clear. The life choice that Marks outlines, and that is advocated as prudent and reasonable by society, is in fact incredibly risky.
I probably can’t convey the view-quakiness of this revelation because it’s now so entwined with the way I see the world. However, imagine the choice of a poor teenage girl deciding whether to have unprotected sex and possibly become pregnant, or to study hard, make good grades and stay in school.
Forget the unprotected sex itself, which we almost all find enticing.
The key is the pregnancy. For a 16 year-old girl regular unprotected sex will result in a full term pregnancy in the modern world with roughly probability one. There is little chance she will die in child birth. Late term miscarriages at her age are rare.
Now, just like any other parent the birth of that child will be the most important event in her life. And, the love of that child will be the most valuable thing she experiences. Some people say that looking back their career was more important than their children, but those people are few and far between.
So, if the girl has unprotected sex, the most important and valuable thing in life will happen to her right here, right now, with PROBABILITY ONE.
It’s difficult to get better than that. Waiting at all creates a risk that something will happen to prevent this. Even if you can be sure it won’t – and many couples find out unfortunately that you can’t be so sure – you still have to discount the time. You have to wait for the most valuable thing in your life.
Marks would have her set all that aside. Put away time that she will never get back – you must remember that no matter what you will never get these days back – for the chance that supposedly she will go on to college and get some job and meet some guy and then later have a different child under what might be better circumstances.
This is a risk. Taking Marks’s advice means that you lose a sure shot at the greatest thing in life. It means that you potentially waste time, and time is the currency of life. He wants to convince you that the gamble might pay off.
Yet, how is a Bayesian supposed to tackle this problem?
I look around in my neighborhood and by definition none of the folks here have done what Marks suggests. These people are like me. I have no reason to believe that I am different.
What kind of sense would it make for me to take this gamble when no one else does? No, it makes more sense to play it safe and take the sure thing.
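The logic above can be made concrete with a toy expected-value calculation. Every number below is my own illustrative assumption, appearing nowhere in Marks’s piece: a sure payoff now, versus the same payoff delayed a decade, discounted, and weighted by a success probability inferred from the neighborhood base rate:

```python
# Toy comparison: the sure thing now vs. the discounted, uncertain gamble.
# All numbers are illustrative assumptions.

def present_value(payoff, years, discount_rate):
    return payoff / (1 + discount_rate) ** years

payoff = 100.0       # value of "the most important thing in life"
p_success = 0.2      # success probability estimated from the neighborhood base rate
delay_years = 10
discount = 0.05      # annual time discount

option_sure = payoff  # probability one, right here, right now
option_gamble = p_success * present_value(payoff, delay_years, discount)

print(option_sure, round(option_gamble, 1))
# With these assumptions the sure thing dominates by a wide margin.
```

Under these assumptions the gamble is worth barely a tenth of the sure thing, which is the rational-choice version of the argument in the surrounding paragraphs.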
Now, of course teachers, parents and helpful people like Marks will tell me to do otherwise. Should I believe them?
Not on your life.
By their own admission they want to see me “succeed.” That is, they benefit from my gamble. Yet, they incur none of the risks. They don’t lose time with their child. They don’t risk their fertility. They don’t experience the disutility of social climbing.
Heads they win. Tails I lose.
Listening to them would be nothing short of foolish.
And, so of course the teenage girl does not listen. Not because she is irrational, but because she is rational.
Indeed, strategies to get her to change her mind hinge on coercion or on leveraging irrationality. Parents may threaten, as poor parents do not have the resources to bribe.
Some teachers will try to convince students that they can do anything if they try. Clearly they cannot. Others will significantly downplay the disutility of social climbing. They will cast crossing into the cultural unknown as uplifting, not depressing and possibly deeply lonely. These kinds of stories border on outright deception.
Everyone will try to get her to “believe in herself.” This is an attempt to induce Caplan-esque rational irrationality. That is, to attach an emotional preference to a belief about the natural world. This is epistemologically equivalent to the nationalistic fervor that accompanies “America First.”
And, if that doesn’t work some teachers will resort to honest guilt: “We took a chance on you, now you take a chance on yourself.”
However, all of these strategies come down not to encouraging prudence and rationality, because the prudent and rational thing to do is to get pregnant. They instead hinge on emotional appeals to irrationality, noble lies, and social, and sometimes physical, coercion.
If you were a poor black kid, that’s what you would face.
Halt the Crisis: The ECB should announce that it will become Lender of Last Resort for all Eurozone Countries. If I were running the show I would lead this off with a particular show of force. A short term bond issue for which there is heavy short interest I would have the ECB buy into specifically and dramatically to drag bond yields to 1.25%, in an effort to cause the shorts to miss their margin calls and take heavy losses on the issue.
This is not because I have anything against shorts, think they are acting irresponsibly or even initiating some type of implicit attack. Instead, it’s because I don’t need to waste the rest of the afternoon or, god forbid, the better part of a week with markets wondering whether or not I am serious. It will become abundantly clear within the first 10 minutes of trading that I am quite serious and prices need to adjust, lest there be more losses of this type.
Replace Individual Sovereign Debt with Eurobonds: The ECB will pull on to its balance sheet all Sovereign Debt of Eurozone Nations and replace that debt with Eurobonds. This process can proceed over the course of weeks or months. With prices stabilized it need not be done fast.
Independent Issues of Sovereign Debt are Banned: Any Sovereign requesting additional funding must come through the ECB and be sterilized with Eurobonds. Rollover issues may be funded at the overnight rate. New issues will be initially funded at the overnight rate plus 2%.
Sliding Scale on New Issues: New Issues of debt, that expand the total stock of debt held by a member country will be funded on a sliding scale voted on by the European Commission. The sliding scale will provide for a penalty rate based on the Primary Balance of the nation in question.
Issue Holds For Violation of EC Rules: A violation of European Commission rules can result in an increase in the penalty rate for new issues or a hold on the funding of new issues.
The Single List is Replaced By Eurobonds: The ECB will accept only Eurobonds as collateral for primary funding operations.
Super-Sovereignty for Eurosystem Banks: Banks which participate in the Eurosystem and receive liquidity under primary funding operations, are no longer under the jurisdiction of member states. They are under direct jurisdiction of the European Commission, with the ECB as their regulator.
Right now, the EC is attempting to use “market discipline” to impose reforms on peripheral countries. As I pointed out from the beginning this is a dangerous game.
It only works if you are actually willing to drive the bond markets over a cliff; but if you are actually willing to do this, and the markets believe you, then you go over the cliff automatically.
It is theoretically possible to thread the needle if you know that the peripheral countries are willing and, importantly, able to offer unconditional surrender. Then you can credibly halt the train before it goes over the cliff, and everyone knows that you can, and so you are safe.
Yet again, as we pointed out earlier, for this to fail all you need is incompetence, not intransigence, on the part of your counterparties. This is the problem with doomsday devices or credible commitments to be irrational, generally.
To solve this the EC should just abandon attempts at market discipline – which are unserious or in any case should not be used by serious people – and replace that with administrative discipline.
Simply require that member states come to the EC or the ECB for financing and that the financing may face a penalty or be outright refused. This means that the EC or the ECB can force a government shutdown without having to force a bond market crisis.
This is the power it really wants. It wants the power to say that Greek retirees will not get their pension checks unless Greece shapes up. It does not want the power to say Greek bondholders will not get paid. You see where that leads.
With this set up the only way a government can get out from underneath the EC’s thumb is by running a cash surplus (not simply a primary surplus) which is what the EC wants anyway.
The concern that people will raise is that this puts “tax payers on the hook” for debts in other countries. I don’t think this is in fact true, though one would have to sit down and look at the numbers.
What it does is put Eurozone economic growth on the hook for the total debt in all member countries. Remember that now that refinancing operations are conducted only in Eurobonds, they are “first at the table” in soaking up savings in the country.
Thus an excess of bonds over desired savings would imply inflation if the ECB took no action and higher interest rates generally if the ECB took action. The outstanding quantity of debt is not so high that I think you would ever need to raise taxes to finance it. The total debt is small enough that it can always be financed through crowding out.
And, since the EC effectively now has the means to force a cash surplus on member nations it can push the outstanding stock of debt down over time.
Some of my favorite commenters were puzzled by my post on the End of History.
Quick notes for those who haven’t followed me all the way on this multi-year journey:
1) The End of History is the notion popularized by Francis Fukuyama that Democratic-Republicanism is the ultimate form of government and that it will be universal in the near future. This represents the End of History in that our basic struggle over political structure of society will be settled.
People push back on this notion on multiple fronts but the front I push hardest on is that Democratic-Republicanism is not likely to be the optimal form of government in the future. Rather than the End of History we are in an odd phase defined by the explosive growth and extensive biological and cultural diversity among humans. These things are likely to come to an end and produce a society that is stable and has no use for democracy.
2) The second issue which is what the title of this post speaks to, is about how long we expect the human project to go on.
Putting probabilities on our extinction is hard. However, there are several lines of reasoning that suggest it might not be too far off. The simplest goes like this. If we extrapolate what seems to be clearly possible in terms of economic growth, peace and prosperity generally in the world, then we get a global economy growing at at least 2% per capita for several hundred years.
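The compounding involved is easy to check. The 2% rate comes from the extrapolation above; the 300-year horizon is just one reading of “several hundred years”:

```python
# 2% per-capita growth compounded over 300 years
growth_rate = 0.02
years = 300
multiple = (1 + growth_rate) ** years
print(round(multiple))  # incomes grow several-hundred-fold
```

A roughly 380-fold rise in per-capita income is the scale on which cheap space travel stops sounding fanciful.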
At that point space travel becomes relatively cheap and I don’t want to go into a lot of detail here but there is strong reason to believe that at least some group of humans will want to devote themselves to space travel and exploration.
Those humans will search out new worlds and come to cover a large portion of our visible universe.
Here is the problem. From the beginning of humanity until this scenario plays out is not very long. This suggests that once sentient life gets started it’s pretty easy to go out along this path.
So the question is: can we really be the first? Why isn’t our portion of the universe already filled with explorers?
One possible answer is that the process of getting to exploration involves developing technologies that ultimately lead to the destruction of the potential explorers. It’s not hard to see what these techs might be: Unfriendly AI, uncontrolled nanotech, highly developed chemical or biological weapons in the hands of terrorists, etc.
Thus, it might be the case that increasing technological sophistication is ultimately self-extincting because the probability of an extinction level mistake becomes very high when your technological sophistication becomes very high.
This is the Great Filter that prevents our universe from being filled with explorers and is why we don’t see any explorers right now. Since no one else has made it past this hurdle, it’s unlikely that we will either. We would have to “beat the odds” as it were.
Another possibility is that the universe is filled with explorers but they are purposefully hiding from us, hoping that we never actually make it off of earth. If we do make it off of earth there is reason to believe they would destroy us. I won’t go deep into that here, but I think the case for destroying potentially rival civilizations is pretty strong.
3) Malthusian stagnation. So the idea, which I feel pretty confident in, is that eventually stagnation will set back in. This is not because we will exhaust all of earth’s resources or something like that. While that could happen I don’t think of it as a serious possibility.
Instead the fundamental problem is that the “Grid of Reality” is discrete and bounded and therefore finite. That is, there are minimum sized particles that interact at minimum distances. Add to that the fact that at any moment our descendants are bound in a finite region of space by our light cone.
This implies that we are dealing with a finite number of possible configurations of reality. Once we have mastered the ability to manipulate those configurations there is literally nowhere else to grow.
That is, the possibilities for growth are literally not limitless because there are not an unlimited number of possible configurations for our portion of the universe to be in.
No matter how large that number is – and it is of course very large – a growing economy will hit the max at some point in the future. Thus growth simply cannot be forever.
Thus, even if you could overcome basic entropy problems you still run into the fact that exponential growth means that at some time T we will have mastered all of the configurations within our light cone and we would have to cross the light cone to continue at an exponential pace. This is impossible.
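As a rough illustration of how fast exponential growth reaches any finite bound: take 10^120 as a placeholder for the number of configurations (a figure of roughly the order sometimes quoted for the computational capacity of the observable universe, used here only as a stand-in for “very large but finite”):

```python
import math

# Years until an economy growing at rate g has grown by a factor of n_configs.
# The point: exponential growth reaches ANY finite bound in a historically
# short time, no matter how large the bound is.
def years_to_exhaust(n_configs, g=0.02):
    return math.log(n_configs) / math.log(1 + g)

print(round(years_to_exhaust(10**120)))  # roughly 14,000 years
```

Even with an absurdly generous bound, 2% growth runs out of room on a timescale comparable to the span of recorded human history so far.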
Knowing this, at some point each of our descendants will only be able to exercise command over some subset of possible configurations at the expense of some other descendant having command over that subset. This sets up the fundamental Malthusian tension.
Now, it is likely the case that we will hit practical constraints before we hit this “ultimate constraint”, however, knowing that there is an ultimate constraint tells us that no amount of technological progress, innovation or ingenious breakthrough can produce indefinite growth.
There is a discussion going on, on The Corner, about abortion that I like. Even though I think it’s a lot less “serious” than I would prefer, it’s much more serious than most takes I read.
By serious I mean: folks attempting to grapple with the issue rationally rather than simply identify themselves with stances that are sentimentally appealing.
Also, before I get started I want to specifically set aside issues related to “what is the scientific consensus,” because that draws us into arguments from authority when we actually have lots of observable information to grapple with.
Let’s grapple with that information first before making appeals that “smarter people than you think X.”
The debate was in part kicked off by this pair of posts. I am going to quote liberally.
First, from David French:
At long last — and against the strong headwinds of the anti-science ideologues — the law is finally catching up to biology. Next week, Mississippi voters will determine whether all human beings in the state of Mississippi are also “persons” under the law. Such a vote is a logical — if belated — concession to well-established science. Indeed, scientists are virtually unanimous in declaring that the result of conception is a human child with a distinct DNA different from his or her parents. This unanimity is the essence of “overwhelming consensus.”
Given this biological reality, is it logical, reasonable, or remotely moral to characterize some human beings as “persons” and others not? Are we not long past such outright quackery? I hope and expect that Mississippi voters will decisively reject the deniers in their midst and recognize the reality of personhood. After all, it’s a simple matter of science.
In part this is important because we can clearly make theological arguments about the morality of abortion and the notion of personhood. However, it’s dicey to know what the law should do about that because we have no official church in the United States and churches disagree on this issue.
So, from a legal standpoint it would be nice if there was some sort of secular means of handling this question. Also, for us agnostics and atheists it would be nice if there was a secular way of handling the fundamental morality of this issue.
French is suggesting that there is. After conception we have “a human child with distinct DNA.”
I think “human child” is not quite right, but I don’t really want to quibble over that because I think David really means “human being,” and that I readily concede.
The question is, are all human beings persons?
Robert VerBruggen returns the obvious reply, but with an example I usually don’t think of.
David — it is certainly true, as you write, that the result of conception is an embryo with “distinct DNA.”
What’s not clear to me, however, is why “distinct DNA” should be the criterion by which we judge personhood for moral and legal purposes. As Reason’s Ronald Bailey has pointed out, 60 to 80 percent of human embryos — post-conception, with distinct DNA — are naturally destroyed by the woman’s body. Are we to see this as a large-scale massacre of human beings, develop drugs to prevent it from happening, and require all women who have unprotected sex to take them? Certainly, we would be willing to take measures like this if post-birth infants were dying in comparable numbers.
What Robert is getting at here is what I term “revealed morality.” Which is to say: look, David, you certainly don’t act like you believe distinct DNA constitutes a moral person.
Otherwise you would see the prevalence of early miscarriages as one of the greatest natural tragedies in the world and probably the single most important issue facing the Developed World, if not humanity itself.
The point here is not to call David French a hypocrite, but to force him – and others – to consider what they actually believe. Do you believe that distinct DNA defines a new moral person, and thus that the prevalence of miscarriages is the most significant human tragedy in the Developed World?
What follows at The Corner is the typical devolution of the discussion once people are made to feel uncomfortable: accusations that Robert is calling people insensitive, and qualitatively meaningless undermining of Robert’s data and word choice. However, that’s fine. I am happy that it got this far.
There are other issues that I have with the notion of defining personhood as “distinct DNA.” I treat them lightly and if people are interested we can go into more depth.
First, the obvious issue that once conception is complete we have distinct DNA but we do not know how many people we are going to get. Robert brings up the case in which we get zero born people. This case is nice for highlighting the morality of the post-conception loss. However, from a theoretical standpoint the much thornier issues arise when we get more than one person and when we get fractional people.
Everyone is aware that it is possible for the egg to divide post conception and produce identical twins. I think most of us agree that identical twins are separate people. Thus, there must be at minimum some secondary process of personification, in which the single person becomes multiple people.
How does this take place? It’s important because the method by which secondary personification takes place might render the “distinct DNA” theory of personification superfluous.
To be more specific, if something like “secondary personification” always takes place but does not always result in twins, then why are we sure there is some meaning in the “primary personification” that takes place when a new human DNA strand is constructed?
Even more gnarly, however, is the case of fractional people. It is possible for two fertilized eggs, each with their own Distinct DNA, to merge into a single born human. The result is a human chimera.
What do we believe is happening here?
Are there two persons in the same body? Are the persons “merged”? Is one person killed in the process? If the latter, then which one? Again, answering these issues makes the question of primary personification at distinct DNA difficult.
If we believe that there are two persons, then how are we to morally deal with what seems to be a single adult? Are the cells descended from one fertilization event morally responsible for the actions of the cells descended from another fertilization event? And what are we to make of the fact that the adult seems to insist that he or she is in fact an integrated person?
If the persons are merged, then how does the process of “personificational integration” take place? Like secondary personification, is this an event that always happens irrespective of whether there are two persons? If not, how does the mixture of cells induce “personificational integration”? The DNAs are not joined in any way. The cells are, at a basic level, simply in close proximity to one another.
If one person is killed then which one? How could we tell?
The reason all of these questions are really gnarly is because perhaps a natural response is to give some sort of “preference” to the person represented by the mind of the adult human and/or to say that twins become separate persons because they have separate minds.
However, obviously if we are going there, then having a mind is the key point in personification. At a minimum, “mindness” induces secondary personification or personificational integration.
Yet, we strongly believe that there is a mind-brain connection.
We can talk more about this but I think even leaving aside any scientific consensus on the issue there are specific observations we can make that should strongly suggest to everyone that the mind and the brain are dually linked.
That is, it is not simply that the brain is the organ through which the mind manifests itself, but that the structure and chemical composition of the brain can be manipulated in ways that influence the mind. Thus the mind-brain connection must go both ways.
The most obvious of these observations is the influence that chemicals introduced into the brain seem to have on the mind of the person. If you ingest even alcohol, for instance, there is the strong sensation that the alcohol is affecting your mind.
It is not just weakening the mind-brain connection like a paralytic. One’s actual thought processes and emotions seem to change. This is an easy experiment to do, and almost everyone reports the same results.
Second, there is the problem of mutation. The distinct DNA of conception will mutate over time as cells divide. I am not sure anyone thinks of this as creating new persons. How are we to make the distinction?
While that issue could probably be patched fairly easily, the need to patch it raises questions over whether or not we should put particular emphasis on the generation of distinct DNA in the first place.
There are many other issues but the last one that I want to touch on is the connection between humanness and personhood in the first place. Is humanness necessary to being a person?
If we meet sentient aliens are they by definition not persons? If we develop intelligent machines, machines derived from human minds are they not persons? What if they can remember being a person?
Even if you are inclined to answer no to all of these on the grounds that humans are fundamentally special, then the silly-sounding but important question arises: how do you know the people you are interacting with are actually humans and not aliens or machines?
This is important because if you can’t tell the difference between a human and an alien who can perfectly impersonate a human, then we have to ask whether there is a moral difference between the two. What does it mean to be “really human” if we have no fundamental way of knowing that we are not being fooled?
There are obviously many other issues related to abortion and miscarriage. And, I know for some people’s taste I gave a very generous touch to the Distinct DNA dividing line.
However, I think the personification issue is an important question and a gentle touch is our best hope of coming to some consensus over an issue that naturally spawns strong emotional reactions.
Do my Austrian friends still think anarchy and capitalism mix?
Perhaps we should let the NYPD go home and see what Order Emerges from this? A peaceful, property-respecting contract among all men, one should suppose.
Bob Frank has an essay in the NYT adapted from his new book The Darwin Economy. The thrust of it seems to be that many goods are positional.
When the ability to achieve important goals depends on relative consumption, all bets on the efficacy of Smith’s invisible hand are off. As Darwin saw, many important aspects of life are graded on the curve, and in such cases, individual incentives often lead to mutually offsetting efforts.
The rat race is rent dissipation.
This is an idea I used to push strongly, until Justin Wolfers convinced me otherwise. I used to think happiness was largely zero sum and that the benefits of great wealth came only from the relief of great misery: infectious disease, deformation, major depression, etc.
Yet it seems real happiness does come from greater levels of consumption even if you don’t beat out your neighbors. Still, your neighbors do matter, interestingly enough, for things we wouldn’t think of, like longevity.
All of that being said, the larger point I want to make is how evolutionary thinking is taking hold in everything we do.
Sometimes I read articles from maybe 20 years ago about love or ambition or beauty that are rooted in concepts other than evolutionary psychology. It feels like reading an article on Wicca and its use in treating pneumonia.
As I have mentioned, to me the idea of the firm as some sort of conscious maximizer seems naive if not outright silly. The large-scale failure of corporate strategic planning was an obvious inevitability.
No, there is one idea that rules them all – selection. Things are what they are because the world culls out the other possibilities. Existence is the art of the possible, and science is the study of selection, on one level or another.
The world around us is filled with three kinds of particles, not because there are only three but because the others decay too fast. Molecules “seek” low energy states because in the ever-present jostling that is reality they are much more likely to fall from a high place to a low place than the other way around.
Even time itself has the arrow that it does because there are more high entropy states than low ones. Watch long enough and the high entropy ones are bound to dominate your observation.
That’s why I describe the effect in different terms – we will tend to observe those things which are highly observable. To understand our world then is to understand the rules of observation.
What are we likely to be able to see. That is what is likely to be.
I think this can reduce some great mysteries to absolute simplicity. Why, for instance, are we all alone? How can it be that we are the only intelligent species we know?
How can it be otherwise? There were once many species of homo, now there is but one. The same will be true for intelligent life in general. The average intelligent creature will look out into the world and see only others like her, because for her to exist the others must either have been killed or not have arrived to kill her yet.
Every thing about our world has been selected for. The unstable particles have decayed. The unstable species have gone extinct. And, the unstable economies have collapsed.
When you drop the chemical on the mutant mice nerve cells, their firing rate drops, by 30%, say. With the number of mice you have this difference is statistically significant, and so unlikely to be due to chance. That’s a useful finding, which you can maybe publish. When you drop the chemical on the normal mice nerve cells, there is a bit of a drop, but not as much – let’s say 15%, which doesn’t reach statistical significance.
But here’s the catch. You can say there is a statistically significant effect for your chemical reducing the firing rate in the mutant cells. And you can say there is no such statistically significant effect in the normal cells. But you can’t say mutant and normal cells respond to the chemical differently: to say that, you would have to do a third statistical test, specifically comparing the "difference in differences", the difference between the chemical-induced change in firing rate for the normal cells against the chemical-induced change in the mutant cells.
Again, this is one of those things that is difficult to frame, because I want to respond “Yes, but more importantly no, and if you really think about it, ‘eh.’”
Strictly speaking the author is right. You cannot say there is a statistically significant difference in the response rates between mutant mice and normal mice.
However, what you can say is that the response rate of mutant mice differs significantly from zero while the response rate of non-mutant mice does not.
That clears up everything, right?
The ultimate problem – I think – is getting too much of a bug up one’s bum about the threshold of statistical significance. You did an experiment; you got some evidence. That evidence alters the way you think. It’s not like “whoa, I discovered the next big thing” if I get something with a 5% significance level, but I just have a pile of poop if I get something with a 6% significance level.
However, because we concentrate on significance levels we say that the normal mice “didn’t respond”, while the mutant mice “did respond.” That sounds like you are talking about a fundamental difference in the mice. And, since you are talking about a fundamental difference in the mice you ought to be able to say the mice are fundamentally different, right?
Well, no, because it’s an artifact of our significance cut-off. That we use this cut-off is a problem.
However, doing the “difference-in-differences” stat doesn’t really help overcome that, because you have just applied the same falsely rigid standard to another measurement.
Indeed, one can imagine the following scenario. There are three types of mice: Normal, Mutant and All Fucked Up (AFU). The AFU mice are some ugly creatures and you really don’t want to get them mad. But, we’ll analyze their data anyway.
So using the numbers from the post: The normal mice see their firing rates drop by 15%. The mutant mice see their firing rates drop by 30%. And the AFU mice see their firing rates drop by 45%. Plus they turn green and eat the lab assistants! What kids won’t do for an RAship these days?
Now, let’s do difference-in-differences between normal and mutant. It fails, so we can’t say they are different.
Now, let’s do difference-in-differences between mutant and AFU. It fails, so we can’t say they are different.
Now, let’s do difference-in-differences between normal and AFU. It passes. Woo-hoo, we get our paper published. Too bad Sanjay got eaten before he got his first co-authorship. C’est la vie.
However, look at what we are saying. We can’t say that normals respond at all to the chemical. We can’t say that normals and mutants respond differently to the chemical. And, we can’t say that mutants and AFUs respond differently.
So are mutants more like normals or AFUs? We can’t say, because they are not significantly different from either. However, normals and AFUs are significantly different from each other. And mutants and AFUs share something in common: they both respond significantly to the chemical, whereas normals do not.
It’s like the stats are telling you nothing and everything all at the same time. That’s because they have arbitrary cut-off points in them. If you get wrapped up in the cut-off points you will be chasing your tail. If you accept that the cut-off points are arbitrary then you can make sense of the world.
You can look at the data on normal mice. You can look at the data on mutant mice. Then you might say: well, that normal mice data really looks like chance, and that mutant data really looks like there is something going on here. So I am going to tentatively say that mutants are different from normals in their response to the chemical.
But it’s all shades of gray that push our beliefs in one direction or another. There is no meaningful definitive cut-off that says yes there is an effect or no there is not.
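The scenario can be made concrete with a quick calculation. The firing-rate drops below are invented illustration numbers (not data from any actual study), picked so that the one-sample tests and the between-group comparison come apart exactly as argued above:

```python
from scipy import stats

# Hypothetical percentage drops in firing rate for 8 cells per group.
# These numbers are made up purely for illustration.
normal_drops = [50, -20, 45, -30, 40, 10, 35, -10]   # mean 16.25, very noisy
mutant_drops = [28, 31, 33, 30, 29, 27, 32, 30]      # mean 30.00, very tight

# One-sample tests: does each group's drop differ from zero?
_, p_normal = stats.ttest_1samp(normal_drops, popmean=0)
_, p_mutant = stats.ttest_1samp(mutant_drops, popmean=0)

# Two-sample Welch test: do the two groups differ from each other?
_, p_between = stats.ttest_ind(normal_drops, mutant_drops, equal_var=False)

print(f"normal vs zero:   p = {p_normal:.3f}")   # not significant
print(f"mutant vs zero:   p = {p_mutant:.3g}")   # highly significant
print(f"normal vs mutant: p = {p_between:.3f}")  # not significant
```

With these numbers the mutant cells “respond” (p far below 0.05), the normal cells “don’t” (p around 0.19), yet the direct comparison between groups also fails significance (p around 0.26). Under the usual cut-off conventions you can assert a response in one group, no response in the other, and no difference between them, all at once.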
People say no one would ever cut off their arm and replace it. If the technology gets there, which it looks like it will, people will think about it. They might be what you’d call an early adopter -a really early adopter- but people are going to have the option of having superior limbs, superior eyes at some point. So I think a lot of people will do it.
Someday, the ethical and legal controversies over whether bionically enhanced individuals can compete in existing sports leagues may actually make paying attention to sports interesting. We’re going to see interesting John Henry type contests in the future, except instead of competing against a steam hammer, he will be competing against a man with a steam hammer bionic arm.
Brian Palmer attempts a takedown of twin studies that shows the opposite of what he suggests.
Pause for a moment to examine that astonishing claim—that Americans’ stubborn insistence on disagreeing over hot-button national issues is the result neither of two parties adjusting their views to appeal to a shifting political center nor of the fact that the issues on which we have learned to agree simply fall out of the political debate. (How many slavery proponents do you know? I hear there used to be a few of them in the U.S. Senate.) Nope. It has to be the series of base pairs in our cellular nuclei.
Twin studies rest on two fundamental assumptions: 1) Monozygotic twins are genetically identical, and 2) the world treats monozygotic and dizygotic twins equivalently (the so-called "equal environments assumption"). The first is demonstrably and absolutely untrue, while the second has never been proven.
So not to get too much into the first paragraph but the suggestion here is not that “abolitionism is in your genes” so much as abolitionism isn’t about abolitionism per se. Abolitionism is about – say – the ability to empathize with “the other” and that is in your genes.
Indeed, if you believed that the issues were really about the issues, then a deeper puzzle would be why people disagree at all. People who cared only about the issue itself should tend to converge on compromise views, if for no other reason than that the very fact that some other rational person holds conflicting views should cause you to doubt the veracity of your own convictions.
No, people are likely biased towards things that “feel right” and what feels right is probably heavily influenced by genes.
Now on to the twin studies. Palmer points out correctly that identical twins are not, in fact, genetically identical. However, this implies that twin studies underestimate the influence of genes.
That is, the way twin studies work is that one assumes that any difference between identical twins must be due to something other than genes, because they have the same genes. However, if they don’t actually have the same genes, then this causes you to attribute things which are really genetic in source to non-genetic causes.
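To see why, it helps to look at the classical estimator itself. A common back-of-envelope version is Falconer’s formula, which puts heritability at twice the gap between monozygotic and dizygotic twin correlations. The correlation values below are hypothetical illustrations, not real data:

```python
# A minimal sketch of the classical twin-study logic (Falconer's formula).
# The correlations used here are invented for illustration only.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate heritability as twice the MZ-DZ correlation gap."""
    return 2 * (r_mz - r_dz)

# Suppose a trait correlates 0.80 across MZ pairs and 0.50 across DZ pairs:
h2 = falconer_heritability(0.80, 0.50)
print(f"Estimated heritability: {h2:.2f}")

# The post's point: if "identical" twins are not genetically identical,
# genetic differences between MZ twins push r_mz DOWN and get counted as
# environment, so the estimate is biased toward zero. That is, twin
# studies would UNDERSTATE the influence of genes, not overstate it.
```

Lowering the assumed MZ correlation (say to 0.75) while holding the DZ correlation fixed shrinks the estimate, which is the direction of bias the paragraph describes.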
In particular, the more we think the brains of identical twins are different, the more the “No Two Alike” puzzle melts away. That puzzle is how it is possible that conjoined twins can be so different. They have the same genes because they are monozygotic, and they have largely the same experiences because they are literally joined at the hip (or head or chest).
How then do we explain the sometimes large disagreements between conjoined twins? Can being “on the left” really be that much of a life changer?
Now what about the similarities between mono- and dizygotic twins? Maybe being treated as an identical twin makes you different. That could be true, but it is beside the point of genetic research. What you really want to say is that being treated as a twin makes you more similar.
Perhaps. This sounds plausible. But, it sounds equally plausible that identical twins might want to distinguish themselves. That constantly being confused with one another would cause them to invent differences. We really don’t know.
Importantly, however, concern about this issue is undercut by another one of Brian’s points:
There have been numerous studies showing that dizygotic twins who look similar have more personality traits in common than those who are easily distinguishable.
Yet doesn’t this say that these personality traits are genetic in origin? It’s simply that the genetic influence is mediated by the fact that people respond to you based on the way you look.
We know that taller men make more money. Maybe that’s the result of the higher marginal product stemming from height. More likely it’s because tall men are treated differently. However, that shows that height, and hence genetics, is a determinant of life outcomes.
I think the twin research is still compelling evidence that gives us a deep insight not just into the effect of genes but on “how genetic” a given trait may be.
And as a last note, I would say that I sense that some of the push back against genetic determinism is push back against determinism more broadly. However, unless you want to reject the block universe then rejecting twin research and genetic determinism doesn’t get you out of the determinist box.
A tabula rasa world is just as deterministic if you think that life experiences make the man.
Andrew Gelman recently wondered how so many economists can hold two seemingly contradictory beliefs:
1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.
2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.
This, he suggests, raises a puzzle:
“How is it that economics-writers such as Levitt are so comfortable flipping back and forth between argument 1 (people are rational) and argument 2 (economists are rational, most people are not)?”
He provides several examples. First, he quotes Steven Levitt, who argues that people are irrational and “closed-minded” with respect to repugnant ideas, whereas economists are not. But why, he asks, is this irrationality rather than disagreement?
Another example is Emily Oster, who argues:
“anthropologists, sociologists, and public-health officials . . . believe that cultural differences–differences in how entire groups of people think and act–account for broader social and regional trends. AIDS became a disaster in Africa, the thinking goes, because Africans didn’t know how to deal with it.
Economists like me [Oster] don’t trust that argument. We assume everyone is fundamentally alike; we believe circumstances, not culture, drive people’s decisions, including decisions about sex and disease”
How is it that economists both “assume everyone is fundamentally alike” but also have different beliefs about how people think and act than “anthropologists, sociologists, and public health officials”? That is, how can everyone be fundamentally alike (rational) if economists have different beliefs than everyone else, and are therefore fundamentally not like everyone else?
Gelman thinks the answer is economists like to associate themselves with rationality, because rational is “good”, or what economists might call high status. They do this by celebrating the rationality of people and by patting economists on the back for their rationality. He says “both are ways of associating oneself with rationality. It’s almost like the important thing is to be in the same room with rationality; it hardly matters whether you yourself are the exemplar of rationality, or whether you’re celebrating the rationality of others”.
I think there’s certainly something to this. But I think a better explanation for some of what Gelman is puzzling over can be found in Bryan Caplan’s Myth of the Rational Voter. Here Bryan agrees with Gelman that rationality is overassumed by many academics. His explanation, however, is that people are rationally irrational. But what does that mean?
Like most economists, Bryan defends the notion that people are rational a lot of the time. But the reason is not some inherent rationality, but because being irrational costs them. So when we are talking about why movie theaters charge so much for candy (one of Gelman’s examples) it’s likely they are being rational because irrational pricing would cost them money.
But when people have no cost to irrationality they may embrace it because they have preferences over beliefs. For instance, when we’re talking about what someone believes about repugnant ideas, they don’t have any monetary incentive to have rational beliefs, and so they don’t.
Bryan argues that the systematic difference between economist and non-economist beliefs comes from four fundamental biases: anti-foreign bias, make-work bias, pessimistic bias, and anti-market bias. (I think you could add an anti-repugnant bias in there as well and explain the Levitt quote that Gelman puzzles over.) So when people don’t have costs to believing irrationally they satisfy their preferences over beliefs, and this leads them to have beliefs that conform to these biases.
Satisfying one’s preferences for irrational beliefs when there is no cost to doing so is of course rational, thus giving us rational irrationality.
Economists believe themselves to be more rational when it comes to economic topics because they have incentives to think rationally in the area where they are experts. When it comes to toxicology, in contrast, it is toxicologists who will be rational. But as Gelman has argued before, economics is an imperialistic science, and economists are likely to believe themselves to be experts in just about any subject area where incentives matter. But how does that explain the different beliefs of, say, economists and the sociologists, anthropologists, and public health officials in the Emily Oster example above? Aren’t they all experts of a sort when it comes to “broader social and regional trends”?
Well the disagreement surely comes from the fact that all three fields tend to operate with different frameworks and ways of looking at things, but quite honestly I don’t know how to account for different frameworks of overlapping experts in the rationally irrational model of belief. Nor can I explain why economists have less biased and more rational beliefs.
As always, I struggle between my desire to push what I think are important points to my overwhelmingly well-informed readership and my basic belief that there are some conversations that should not be had in public.
I understand that the conversation is being had whether I approve of it or not. Nonetheless, participating is an issue of personal ethics I haven’t completely worked out.
In the past I’ve chosen obtuseness as the middle path. I’ll stick with that for now.
For those inclined to look, Professor Kramer of Brown University has a take in a major daily that mirrors much of my own on the issue. Indeed, the problem of improperly selecting test subjects is likely responsible for the general rise in “relative ineffectiveness” we see across the drug spectrum.
In any case, I think the attention this issue gets in terms of medical funding and research is vastly inadequate relative to issues like cancer and heart disease. The landscape is too dominated by folks with strong opinions and axes to grind, while the issue itself is frankly more important than bodily health.
To update C.S. Lewis, you don’t have a mental state. You are a mental state. You have a body.
This 2006 post from Will that I recently happened upon for the first time manages to say with much more clarity and eloquence something I sort of tried to say recently:
Economic laws, like the principles of all the “special sciences,” are ceteris paribus generalizations: generalizations that are true other things being equal. Econ 101 lays out the basic laws and explains what follows from them ceteris paribus. Later, students learn about cases when other things are not equal — when there are exceptions to the generalization.
….you will be utterly hopeless in reliably identifying exceptions to a ceteris paribus law when you never grasped its logic in the first place. Exhortations to mind your Econ 101 generally aren’t exhortations to stop being so darn advanced. They are exhortations to actually comprehend the principles upon which advancement depends. And, second, the fact that a law is ceteris paribus does not mean you can deny its applicability whenever you want to. Political convenience tends not to be an appropriate auxiliary condition. You can’t wave your hands and just hope that a good argument is in some upper-level textbook you haven’t read.
If you want to say that a wage floor is not going to throw some low-wage workers out of their jobs (or prevent them from getting jobs), you’ve got to say, in a principled way, why not. The burden is on those who predict an exception to an immensely reliable regularity.
This is not meant to be a post about the minimum wage, but a post about econ 101 and its exceptions.
The Future of Humanity Institute recently reported the results of a survey conducted at their 2011 Winter Intelligence conference. The survey asked participants, who came from fields like philosophy, computer science and engineering, and AI and robotics, several questions about the future of machine intelligence, and one of the results is somewhat worrying. Participants were asked the following question:
How positive or negative are the ultimate consequences of the creation of a human‐level (and beyond human-level) machine intelligence likely be?
They were asked to assign probabilities to: extremely good, good, neutral, bad, and extremely bad. Here is a box-and-whisker plot of the results.
The most likely outcome is extremely bad. Eyeing it up, it looks like a good outcome of any degree (extremely good + good) is less likely than a bad outcome of any degree (extremely bad + bad). Given that these experts think that the result is most likely very bad, why do we hear so little discussion about how to stop intelligent machines from being invented? In response to a question about what kind of organization was most likely to develop machine intelligence, the most probable answer was the military. This means we have something of a lever with which to try to slow them down. Should DARPA be shut down?
Participants were also asked when human-level machine intelligence would likely be developed. The cumulative distribution below shows their responses:
The median estimate for when there is a 50% chance of human-level machine intelligence is 2050. That suggests we have around 40 years to enjoy before the extremely bad outcome of human-level robot intelligence arrives. The report presents a list of milestones which participants said will let us know that human-level intelligence is within 5 years. I suppose this will be a useful guide for when we should start panicking. A sample of these includes:
- Winning an Oxford union‐style debate
- World's best chess playing AI was written by an AI
- Emulation/development of mouse level machine intelligence
- Full dog emulation…
- Whole brain emulation, semantic web
- Turing test or whole brain emulation of a primate
- Toddler AGI
- An AI that is a human level AI researcher
- Gradual identification of objects: from an undifferentiated set of unknown size- parking spaces, dining chairs, students in a class‐ recognition of particular objects amongst them with no re‐conceptualization
- Large scale (1024) bit quantum computing (assuming cost effective for researchers), exaflop per dollar conventional computers, toddler level intelligence
- Already passed, otherwise such discussion among ourselves would not have been funded, let alone be tangible, observable and accordable on this scale: as soon as such a thought is considered a ‘reasonable’ thought to have
There you have it. These are things to look out for, which may foretell a robot disaster is on the horizon. Of course if that last respondent is right, it’s probably too late already.
Anyone arguing that employers in some industry should raise wages should have to sit down with a labor market supply and demand graph and explain what it is they’re asking for. As it is, most people calling for so-and-so to raise wages don’t seem to be thinking about labor markets, and instead think of wages as something that can always be raised without any consequences, such as losses to workers via unemployment.
You can see this in the common refrain that workers would be better off if we had a vastly more unionized economy. Maybe workers would be better off, meaning those that can get jobs would gain. But if you’re going to hold wages above market levels you’re going to decrease employment, so you’ll benefit some workers at the expense of those who can’t get a job or who must take a lower paying job. Sure, there are exceptions to this, like when a union counters monopsony power. Or when unionization provides workers voice in a way that allows them to communicate more efficiently with management and raise productivity. But to prevent unemployment you need the cartelization of the labor market and the subsequent higher wages to be either just enough to offset the lower wage resulting from monopsony power, or you need them to be fully offset by productivity increases. If wages go above this at all, then there will be unemployment. Rarely if ever is any time taken to argue why x% higher wages are the right amount to offset monopsony power. This difficulty is mostly just ignored, and it is simply presumed that cartelized higher wages are an unmitigated good.
Tom Philpott, writing at Grist, provides another example of ignoring the costs of higher wages. He tells us about the ”penny-per-pound” movement that is asking for Burger King, McDonalds, grocery stores, and other food companies to pay a penny more for each pound of tomatoes so that a particular group of tomato workers can have their wages raised from $7.25 to $13.25. But what does Philpott think will happen to labor demand if the price of labor doubles? Even if the tomato farmers promise in the short run not to fire anyone, in the long run does he really think that wages can remain at double the market level without downstream buyers gradually shifting to tomatoes grown in Mexico, California, Canada, or from the increasingly competitive greenhouse tomato industry? What will doubled wages do to farmers’ incentives to replace workers with machines? Consider the following from Philip Martin:
There were many other labor-saving changes in the mid-1960s in response to rising farm wages. Cesar Chavez and the United Farm Workers won a 40 percent wage increase for grape pickers in their first contract in 1966, increasing entry-level wages from $1.25 to $1.75 an hour at a time when the federal minimum wage was $1.25. Farm worker earnings rose faster than nonfarm earnings between the mid-1960s and late 1970s, prompting the use of bulk bins and forklifts in fields and orchards that eliminated thousands of jobs. Conveyor belts moving slowly down rows of vegetables made it possible to pick and pack lettuce, broccoli, and other vegetables in the field, eliminating jobs in packing houses.
Immigration critics like CIS, for whom Martin wrote his paper, are quick to point out that there are many labor-saving technologies that farmers could employ but haven’t due to low wages. In 1960 there were 45,000 workers picking California’s tomatoes. A decade later, the use of machines allowed the work to be done by fewer than 5,000. Would a doubling of wages cause tomato pickers to be replaced by machines? I don’t know. But it does put more pressure on the whole supply chain to shift away from these workers. Labor-replacing technologies and production shifting to Mexico when that minimizes true costs don’t concern me per se, but they are a problem for the workers Philpott wants to help, and an issue he has to contend with.
My biggest problem with Philpott’s viewpoint, however, is his contention that “low” agriculture wages tell us that we should oppose the cost-minimizing efficiencies and low prices pursued by Walmart and the agriculture system in general:
The creed of “Everyday Low Prices,” the zeal to churn out profit by maximizing sales volume and minimizing cost, lies at the root of our food-system dysfunction. Relentless cost-cutting means pressure to move environmental destruction off of corporate balance sheets, creating ecological sacrifice zones. It also drives companies to pay workers as little as possible, creating a vicious circle in which we need cheap, low-quality food in order to feed millions of low-wage workers.
This reflects a view of the world where the predominant way that companies lower prices is through grinding the working man ever lower. But the history of agriculture and retail productivity in this country is an amazing success story, and the gains are so large that it would literally be impossible for a large percentage of them to have come from driving wages down. Consider a few statistics. In 1950 food consumed at home took up 22% of an average household’s budget, and by 1998 this had fallen to 7%. The retail price of food fell by around 25% from 1900 to 2000, and the farm price fell by over half. The current labor force dedicated to agriculture is 1/3 of what it was in 1900, but the level of output is seven times higher. The welfare gains to households that these numbers represent are massive.
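As a rough sanity check on the scale of those gains, the labor and output figures alone imply something like a 21-fold increase in output per agricultural worker. This is a back-of-envelope sketch of that implication, ignoring changes in hours, capital, and land:

```python
# Implied agricultural productivity gain from the figures in the post.
labor_force_ratio = 1 / 3   # today's farm labor force relative to 1900
output_ratio = 7            # today's farm output relative to 1900

# Output per worker today, relative to 1900.
output_per_worker_ratio = output_ratio / labor_force_ratio
print(f"{output_per_worker_ratio:.0f}x")  # roughly 21x the output per worker of 1900
```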
Could these drastic changes have plausibly been driven by, or outweighed by, the “pauperization” of farm labor? First, note that over the same time period in which food budgets fell from 22% to 7%, the average income for farm families went from below to above that of nonfarm families. It is true that wages for hired farmworkers are extremely low. But they have always been low. You’d never get that sense from Philpott’s piece, but the figure below from Bruce Gardner’s “American Agriculture in the Twentieth Century” shows the ratio of average hired farmworker wages to manufacturing wages from 1920 to 2000. It’s hard to find a “pauperization” of farm workers in there, or a golden age of farmworkers for that matter.
The problem with trying to raise farm workers’ wages by opposing Walmart and low prices in general is that farm worker wages are a very small part of the total bill. The total amount of farm worker wages in the U.S. is just over $20 billion, while total traditional food store sales in 2009 were around $550 billion. Not all the food we buy comes from U.S. farms, and not all food grown on U.S. farms gets eaten by us, but the relative sizes here do give you some idea of the difference in magnitude. If these stores were somehow holding wages down 50%, that would account for less than 2% of the retail price of food.
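The back-of-envelope arithmetic behind that “less than 2%” figure can be made explicit. This sketch just restates the post’s numbers, under my reading that “holding wages down 50%” means the suppressed amount is half of the current wage bill:

```python
# Back-of-envelope check of the wage-share arithmetic (figures from the post).
farm_worker_wages = 20e9   # total U.S. farm worker wages: just over $20 billion
food_store_sales = 550e9   # traditional food store sales, 2009: ~$550 billion

# If stores were somehow holding wages down by 50%, the suppressed amount
# would be roughly half of the current wage bill.
suppressed_wages = 0.5 * farm_worker_wages

share_of_retail = suppressed_wages / food_store_sales
print(f"{share_of_retail:.1%}")  # about 1.8%, i.e. less than 2% of retail sales
```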
In contrast, Jerry Hausmann estimated that by offering lower prices and driving competitors to lower prices, big box stores, including Walmart, have made consumers better off by 25% of their annual food spending. A study by Global Insight commissioned by Walmart quantified their impact on prices overall:
It estimated that “the expansion of Wal-Mart over the 1985-2004 period can be associated with a cumulative decline of 9.1% in food-at-home prices, a 4.2% decline in commodities (goods) prices, and a 3.1% decline in overall consumer prices… This amounts to a total consumer savings of $263 billion by 2004.”
Walmart doesn’t – and couldn’t – lower prices this much by “pauperizing” labor. They do it through operational efficiencies and driving down the profits of competitors and suppliers. After all, is it really plausible that Walmart began entering the grocery market because they realized they could beat their competition by driving down agricultural wages that constitute 2% of the food bill? Or did they enter because existing supermarkets had large profit margins due to a lack of competition? Hausmann, at least, believes the latter is true, and provides this graph of rising supermarket profit margins throughout the 90s as evidence:
Farm work is not a well-paid or pleasant job. But if you want to advocate for higher wages for these workers you need to recognize the tradeoffs. And trying to improve the lot of these workers by opposing Walmart and lower food prices is the most expensive and ultimately unproductive way to go about it.
Another problem with Philpott’s argument is that he is making a huge mistake, one common to environmentalists, in attacking efficiency. There is no greater friend to environmentalism than efficiency. Pollution is private benefits paid for with public costs, and as such is inefficient. Policies that mitigate pollution smartly are efficient. This is why so many economists, even conservative ones, favor a carbon tax: because it’s efficient. Environmentalists should not cede the banner of efficiency to those that oppose regulating pollution. They should not ignore the fact that taxing and regulating pollution can be a low cost policy, and that the right costs to consider are not just private but public as well.
By making themselves the enemies of low costs and efficiency, greens are on the wrong side of history, and they’re ignoring the massive gains in human welfare that falling prices and rising productivity have wrought.
Robin Hanson argued some time ago that politics isn’t about policy. This was his theory for why we have so many excessive regulations that “make everyone act like high status folks act, regardless of how appropriate that is for their situation”. Here are some of his examples:
“Consider one-size-fits-all building codes, food and drug regulations, safety rules, professional licensing, and medical insurance regulations. Such rules tend to make sure that a typical rich person wouldn’t accidentally buy a product or service of a much lower quality than they would desire.”
His favored theory for why these types of laws are pervasive is that politics isn’t about policy, in that:
“We (unconsciously) don’t care much about the consequences of such policies – we instead support policies to make ourselves look good. If our support for regulations pushing high status actions is taken as a signal of our personal status, then we can want to support such regulation regardless of what results when such regulations are implemented.”
How would we know if Robin was wrong? I think that no matter what your policy priors are, there are some obvious things policy should incorporate if in fact we did care about policy outcomes. The lack of these policy features suggests to me that Robin is correct.
For one thing, policies should be designed so that we can tell whether they worked. One way to do this would be to incorporate randomization into their design. State and county differences in laws are already used as instruments for economic studies, but endogeneity of policy is always a concern. The experimentalist school of economics is growing more and more prominent, and governments could hire staffs of these economists to design very subtle and smart ways to insert randomization into policies and tease out causality.
A state that wants to raise the minimum wage, for instance, could raise it in randomly selected counties. Or consider what could be learned about payday lending if some state or group of states hired Dean Karlan and Jonathan Zinman to design a study-oriented policy. It certainly would have been a good use of money to randomize some of the first round stimulus package. If it worked as proponents claim then it would make it easier to get a second round of it, and if it didn’t as critics claim then it would be much less likely we would waste money on future stimulus.
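To make the randomization idea concrete, here is a minimal sketch of assigning counties to treatment and control groups. The county names, the 50/50 split, and the fixed seed are hypothetical illustration choices, not anything proposed in the post:

```python
import random

def assign_treatment(counties, seed=0):
    """Randomly split counties into a treatment group (policy applied,
    e.g. the higher minimum wage) and a control group (policy unchanged)."""
    rng = random.Random(seed)  # fixed seed so the assignment is auditable
    shuffled = counties[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": sorted(shuffled[:half]),
            "control": sorted(shuffled[half:])}

# Hypothetical example: ten counties, five treated and five untreated.
groups = assign_treatment([f"county_{i}" for i in range(10)])
print(groups["treatment"])
```

With assignment randomized up front, the policy's effect can later be estimated by comparing outcomes across the two groups rather than relying on endogenous cross-state variation.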
Of course this would not work with all policies. Sometimes the effects are significantly long-run, indirect, or they impact hard to measure outcomes, such that it would be difficult or impossible to empirically determine whether they’ve worked. Here we must be guided by theory. But surely there are many laws where this is not the case.
Another thing you would observe if politics were about policy is sunset provisions in laws, under which efficacy is examined some predetermined number of years out and the law is automatically repealed unless it is shown to have had a demonstrable effect. This would go hand in hand with policies designed so that effectiveness can be measured.
Policies that are designed so that we know whether they worked as intended, and that are automatically repealed if they didn’t: why is such a common sense idea so foreign, and what does that tell you about politics and policy?
(Thanks to the always insightful Sister Y for inspiring this post)
Some time ago I challenged those who don’t believe that paternalistic regulation is characterized by a slippery slope to provide some examples of regulation that would prove them wrong. The problem I saw was that paternalism fans always deny the slippery slope exists by claiming that new regulations are just reasonable policies. But of course this is how the slippery slope works, as today’s new policies will be used to justify future policies and to make them look reasonable. After all, every new step is only a small distance from where we are currently standing, but what are we walking towards? Nobody took the challenge, but pivoting off of San Francisco’s Happy Meal ban I did make some predictions about likely future paternalism:
Making fast food less attractive may protect parents when they happen to be near a McDonalds with their kids, but it doesn’t protect them from having McDonalds reach out to children in the first place and getting it into their heads that their food and toys are awesome. If you’re going to stop this problem, it must be at the root. One way to do this is to ban advertising of fast food targeted at children. This would probably start with children specific magazines and TV shows, but move to a general ban.
Now regulators are helping to make my predictions come true, as they attempt to place limits on advertising by food companies to children. Here is how Ad Age describes the guidelines:
…the rules would start in 2016 and only allow foods that contain no trans fat and not more than one gram of saturated fat and 13 grams of added sugar per “eating occasion” to be marketed to children. Also, the foods could not contain more than 210 milligrams of sodium per serving. The sodium restrictions would tighten by 2021. In a concession to industry, the rules do not include “naturally occurring” nutrients. Additionally, the foods must provide a “meaningful contribution to a healthful diet,” including from at least one major healthy food group such as fruit, vegetables, whole grain, fish, eggs and beans.
The guidelines are said to be “voluntary”, but as Ad Age points out this is a little murky:
Although not binding, whatever emerges in the final report to Congress will likely be adhered to in some fashion because the rules are put forth by a quartet of agencies that have strong sway over marketers, including the FTC, Food and Drug Administration, Centers for Disease Control and Prevention and Department of Agriculture. “Despite calling these proposals ‘voluntary,’ the government clearly is trying to place major pressure on the food, beverage and restaurant industries on what can and cannot be advertised,” the ANA said in a statement.
I would be interested in reading more about the “strong sway over marketers” that these agencies have, and exactly how the nominally voluntary guidelines would be non-voluntary in practice. This will probably come to light, as Ad Age says that this announcement is only an “opening salvo in what will be a lengthy debate between government and industry on how to solve the growing childhood obesity crisis”.
If paternalists truly were concerned about reducing childhood obesity and not simply trying to make themselves feel good, then they should be willing to include in these regulations a sunset provision that repeals them if they don’t have a demonstrable impact on childhood obesity rates in 5 years. My guess is that paternalists wouldn’t go for this, because deep down they know this isn’t going to make much if any difference in children’s health and are really interested in banning something they find distasteful.
The slippery slope from here is pretty obvious: strictly non-voluntary guidelines that require any food packaging or advertising to be approved by a regulatory agency and subject to standards similar to those above. But we know that advertising isn’t the only way that companies influence purchasing decisions. Why shouldn’t the color of packaging be regulated? I’m sure behaviorists can tell us which colors children like most, and I’m sure regulators would be happy to insist on gray boxes for unhealthy foods. Children are also probably more drawn to items low on grocery shelves or in the checkout aisle, so why shouldn’t regulators determine where in a store products can be placed?
I’ll repeat my challenge to paternalists: if this isn’t evidence of the slippery slope of paternalism, then what would be?
In her 2010 polemic, The Death and Life of the Great American School System, Diane Ravitch has praise and criticism for KIPP charter schools. On the one hand, she recognizes the organization improves scores for students. On the other hand, she attributes this success, in part, to the schools’ ability to kick out hard-to-educate students:
…KIPP schools often have a high attrition rate. Apparently many students and their parents are unable or unwilling to comply with KIPP’s stringent demands. A 2008 study of KIPP schools in San Francisco’s Bay Area found that 60 percent of the students who started in fifth grade were gone by the end of eighth grade. The students who quit tended to be lower-performing students. The exit of such a large proportion of low-performing students – for whatever reason – makes it difficult to analyze the performance of KIPP students in higher grades. In addition, teacher turnover is high at KIPP schools as well as other charter schools, no doubt because of the unusually long hours. Thus, while the KIPP schools obtain impressive results for the students who remain enrolled for four years, the high levels of student attrition and teacher turnover raise questions about the applicability of the KIPP model to the regular public schools.
Note that teacher turnover is considered a problem in and of itself. I find it baffling that success alongside high turnover is read as evidence of a limitation of that success, rather than as evidence that turnover is not necessarily a problem.
The italicized portion of her quote is of particular interest, since it is directly contradicted by a study from Mathematica Policy Institute showing that KIPP improves test scores for students who ever attend KIPP, including those who leave early.
As she does throughout the book, Ravitch drives the point home with a rhetorical flourish befitting of a speech at an NEA pep rally, lamenting the unfair advantages that charter schools have, and how easy that makes it for them compared to the underdog public schools:
Regular public schools must accept everyone who applies, including the students who leave KIPP schools. They can’t throw out the kids who do not work hard or the kids who have many absences or the kids who are disrespectful or the kids whose parents are absent or inattentive. They have to find ways to educate even those students who don’t want to be there. That’s the dilemma of public education.
Ravitch creates the image of KIPP schools taking better students from public schools, and simply kicking out bad students, sending them back into the public school system. This negative model of charter success is an important theme in the book. However, another recent study by Mathematica Policy Institute shows that her claims here are also false. They found that students leave KIPP schools at the same rate as students leave nearby public schools. In fact, for black and Hispanic students, the attrition rates for KIPP were lower.
Ravitch also credits the lottery admissions process for KIPP’s success. Her argument is that:
“Like other successful charter schools, KIPP admits students by lottery; by definition, only the most motivated families apply for a slot. Charters with lotteries tend to attract the best students in poor neighborhoods, leaving the public schools in the same neighborhoods worse off because they have lost some of their top-performing students. They also tend to enroll fewer of the students with high needs – English-language learners and those needing special education.”
This complaint puzzles me. Ravitch once was a supporter of charter schools. But if lotteries are “by definition” going to cream skim and advantage charter schools, how did she ever support them? Her argument here is definitional, and not a matter of data. When criticized for changing positions on education reform Ravitch likes to quote Keynes who, perhaps apocryphally, said ”When the facts change, I change my mind. What do you do, sir?”, but have the definitions changed as well?
Furthermore, the Mathematica study found that KIPP did not admit the “best students”. On average, KIPP entrants were “not more advantaged than other students in their communities, as measured by poverty and prior achievement levels”. For instance, 84% of students who attended the sample of KIPP schools qualified for a free lunch program, compared to 64% in KIPP host districts, and 72% in the elementary schools that send any children to KIPP. They enroll more minorities, and they enroll students with lower test scores than the district average and the same as the average for the public schools that KIPP students came from.
They do find that KIPP tends to enroll fewer ESL students and students with disabilities, but this is not the same as admitting the “best students”. Ravitch clearly agrees, since in the quote above she includes the complaint that KIPP doesn’t admit enough ESL and disabled students as distinct from, and in addition to, the complaint about only letting in “the best students”.
These results are important because if, as Ravitch claims, KIPP cream-skimming higher achieving students would make public schools worse off, then by the same logic the fact that KIPP takes lower performing students must make public schools better off. This doesn’t just remove a complaint about charters, then; it actually represents a benefit to the public school system. This positive impact of more charters and choice on public schools is reinforced by studies that have shown public schools improve in response to more competition, an entire vein of literature ignored in Ravitch’s book.
These are clear examples of the facts disproving Ravitch’s claims. Will future editions of her book correct this? Will she call attention to this fact and publicly reverse her opinion of KIPP?
I just got around to watching Robin’s diavlog with Brian Christian, on Christian’s book The Most Human Human. I find the comments section fascinating.
Here is one quote
Seems to me Brian raises a valid concern. Don’t act like an animal, and don’t act like a machine. Robin seems to have a hard time understanding the concept, maybe it would help if he’d read EF Schumacher’s book with the very telling (to Brian’s argument) "Small is Beautiful– Economics as if people mattered."
It seemed to me that Christian’s point was that shallow people suck and deep people rock, and that Robin attacked this immediately.
The core split between Christian’s perspective and that of economists is that Christian implicitly assumes that good systems elevate “good people”.
So for example, if the system of an agrarian economy was so good, then why did the kings and queens of that world spend their time playing at being hunter-gatherers?
Further, scripts, whether in sales, dating, or politics, cannot be good if they allow people who are fundamentally shallow to rise to prominence.
We can go on to analogize that systems which allow computers – which are lesser – to fake at being human are not good, because they are elevating the lesser.
Of course, the reply is that this makes sense if you have all of the qualities that make a person good. But what if you don’t? What if you are a less human, human?
Then this entire philosophy is demeaning.
In short, “don’t act like an animal or a machine” is a nice sentiment if you are the type of person who is very different from an animal or a machine. If you are the type of person who is more like an animal or a machine, or who enjoys being more like one, then it is just telling you that you suck.
Recently a panel of experts was convened by the FDA to re-examine whether artificial food coloring causes hyperactivity in children. They concluded that evidence did not show a link between the two, stating the following:
Based on our review of the data from published literature, FDA concludes that a causal relationship between exposure to color additives and hyperactivity in children in the general population has not been established
Marion Nestle, a frequently quoted expert on food policy and Professor of Public Health and Sociology at NYU, wrote about the issue on her blog and at The Atlantic. It was unclear to me from what she wrote whether or not Dr. Nestle agreed with the panel’s decision to not ban these products, so I emailed her to see if she would answer a few questions for me, and she kindly complied. I think the exchange is illustrative of two very different ways of thinking about regulation, and what regulators should consider. Below is a lightly edited version of our email exchange:
AO: I’ve been reading what you’ve written on food coloring, it’s not clear to me whether you’d support a ban on food coloring or not. I was hoping you could tell me what your position on the policy is.
MN: Since they are unnecessary and deceptive, I can’t see any reason to do anything to protect their use.
AO: You say that food coloring is “unnecessary and deceptive”. But couldn’t you say the same thing of essentially any garnish or cooking technique designed to make food appear more appealing without physically modifying the flavor?
MN: The issue is artificial. Food garnishes and cooking techniques are usually not.
AO: You say that food additives aren’t “needed”, but there are many ingredients and foods which aren’t “needed” given the variety of substitutes and choices we have. If you’re looking at how much a product is worth to consumers, and trying to understand how consumers will be harmed by banning it, isn’t “valued” a more appropriate criterion than “needed”? Shouldn’t that be what regulators consider?
MN: Valued by whom? Industry, certainly. Food is fine as it is. It doesn’t need artificial enhancements. Foods that “need” artificial dyes are not really food. They are “food-like objects.”
AO: You imply in your blog post that if this food coloring is banned, people will eat less of the unhealthy foods that use it. Why would people eat less of these foods when artificial coloring is taken out if they didn’t value that coloring? Doesn’t it have to be the case that they like it less, or that prices go up? And in either case don’t consumers have less of something they value?
MN: Surely, artificial food dyes can be replaced by something better.
AO: If a parent wants to know whether a food contains coloring, can they find out that information today?
MN: To some extent, but the labeling rules leave lots of room for loopholes.
AO: In your blog you also say that parents of hyperactive kids can easily do their own experiments. Are the available labels sufficient for this? Or are clearer labels needed?
MN: My advice to everyone (only slightly facetious) is not to buy foods from the center aisles of supermarkets, and to avoid buying anything with more than five ingredients, anything they can’t pronounce, anything artificial, and anything with a cartoon on the package. That should take care of most problems.
I confess, I did not see this one coming: the Center for Science in the Public Interest has asked the government to ban food coloring. They argue that the coloring worsens hyperactivity in some children. Marion Nestle recently provided a rundown of the science behind food coloring and hyperactivity, and I think you’ll agree with me that the evidence is less than overwhelming. She only discusses two studies in detail. The first had problems, and the second found that 1 out of 23 kids showed a reaction. She links to another, more recent study but doesn’t discuss it. You would think we would need clear and strong evidence of a serious effect before we talked about banning a product.
Nevertheless, whether or not food coloring causes hyperactivity in some children is absolutely beside the point. Surely a cup of black coffee would cause hyperactivity in children, and yet we haven’t banned it. The absolute most this implies is a clear labeling of products that include food coloring. I say “clear labeling” because I was under the impression that product packages already had to list their ingredients, including food coloring. Am I mistaken?
Food paternalists may find this unimaginably barbaric, but some people like a little color on their cakes, and prefer their cheese curls orange. In fact, given the prevalence of orange cheese curls, colored cakes, and a million other uses for food coloring it would appear that lots of people really do like them. But the fact that people prefer them is exactly why food paternalists are targeting them, and the hyperactivity claim is really just an excuse. You can see this in the quote from Marion Nestle:
“These dyes have no purpose whatsoever other than to sell junk food,” said Marion Nestle, a professor of nutrition, food studies and public health at New York University.
This issue isn’t really about hyperactivity, it’s about another cudgel with which to try and get people to eat healthier foods. This is an invasive, overreaching, and dishonest attempt at regulating food. I hope that the more extreme the proposals get, the more people will hesitate to support groups like CSPI when they call for bans on stuff they don’t like. Because today they may be coming for a product or ingredient you don’t value, but rest assured, tomorrow they’ll be after something you do.
This one is probably of interest to Stata nerds only, so I’ll put it below the fold.
In fact, radiation is a far less potent carcinogen than other toxic substances. Studies of more than 80,000 survivors of the Hiroshima and Nagasaki blasts have found that about 9,000 people subsequently died of some form of cancer. But only about 500 of those cases could be attributed to the radiation exposure the people experienced.
The average amount of radiation that victims in Hiroshima and Nagasaki were exposed to would increase the risk of dying from lung cancer by about 40 percent, Boice said. Smoking a pack of cigarettes a day increases the risk of dying of lung cancer by about 400 percent.
“Radiation is a universal carcinogen, but it’s a very weak carcinogen compared to other carcinogens,” Boice said. “Even when you are exposed, it’s very unlikely you will get an adverse effect. But fear of radiation is very strong.”
From Wikipedia I see that the range of estimated deaths resulting from Chernobyl is wide, but the most credible numbers appear quite low:
The Chernobyl Forum is a regular meeting of IAEA, other United Nations organizations (FAO, UN-OCHA, UNDP, UNEP, UNSCEAR, WHO, and the World Bank), and the governments of Belarus, Russia, and Ukraine that issues regular scientific assessments of the evidence for health effects of the Chernobyl accident. The Chernobyl Forum concluded that twenty-eight emergency workers died from acute radiation syndrome including beta burns and 15 patients died from thyroid cancer, and it roughly estimated that cancer deaths caused by Chernobyl may reach a total of about 4,000 among the 600,000 people having received the greatest exposures. It also concluded that a greater risk than the long-term effects of radiation exposure is the risk to mental health of exaggerated fears about the effects of radiation: The designation of the affected population as “victims” rather than “survivors” has led them to perceive themselves as helpless, weak and lacking control over their future. This, in turn, has led either to over cautious behavior and exaggerated health concerns, or to reckless conduct, such as consumption of mushrooms, berries and game from areas still designated as highly contaminated, overuse of alcohol and tobacco, and unprotected promiscuous sexual activity.
Fred Mettler commented that 20 years later: The population remains largely unsure of what the effects of radiation actually are and retain a sense of foreboding. A number of adolescents and young adults who have been exposed to modest or small amounts of radiation feel that they are somehow fatally flawed and there is no downside to using illicit drugs or having unprotected sex. To reverse such attitudes and behaviors will likely take years although some youth groups have begun programs that have promise.
Will Wilkinson takes apart the hypothesis that homophobia is in our genes:
So, okay. This is a fine hypothesis. Is there any evidence for it? Well, no. There isn’t. This is not to say that Gallup conducted no studies in the attempt to test his hypotheses. He did a bunch of them fifteen or so years ago. Bering lays these out in detail, resurrecting what had been a dormant line of argument in the hope that “it might spark new research.” Noting that Gallup’s “studies are imperfect,” he goes on to praise Gallup for his courageous willingness to do science that is “exceedingly rude—unpalatable, even,” implying, it seems, that there has been little follow-up on this question due to the weak-kneed liberal fear that experimental confirmation would help “antisocial conservatives to promote further intolerance against gays.”
But it could also be that Gallup’s hypothesis lacks merit in the way that hypotheses in evolutionary psychology so often do.
Beating up on Ev-Psych is a favorite hobby of Will’s; however, there is an intriguing question to be answered: why is homophobia so widespread, and why does it exist in such varied cultures? It’s not obvious why this should be. If there were widespread violent vitriol against freckles, wouldn’t you wonder why?
Indeed, in admittedly amateurish speculation, I wondered if homophobia evolved in order to increase the prevalence of homosexuals. Or, more to the point, to increase the prevalence of genes linked to homosexuality. Presumably those genes carry some benefit, since homosexuality has a heritable component despite an obvious reproductive cost.
Do I have any evidence for this whatsoever? No. However, unlike Will, I do think it’s useful to encourage folks to dig along these lines.
I want to pivot off of some of Karl’s comments on experiments in economics. The value of experimental methods versus other econometric approaches in economics is a pretty hot topic, and some economists are pushing back in the same way that Karl does. I would argue that the value of randomization depends on the context of the problem. Sometimes experiments are very useful, or perfect for answering a limited question; other times non-experimental methods are equally or more valid.
One recent example of pushback comes from economist Michael Keane. His paper is interesting because of how it contrasts with Jim Manzi’s article in NRO today that Karl was responding to below, both of whom are writing about a similar problem in economics and marketing: estimating elasticities. Let me quote Jim at length, since he explains it better than I could summarize:
Suppose… you wanted to predict the effect of “disrupting the supply” of Snickers bars on the sale of other candy in your chain of convenience stores. The “elasticities” here are how much more candy of what other types you would sell if you stopped selling Snickers….
The best, and most obvious, way to establish the elasticities is to take a random sample of your stores, stop selling Snickers in them and measure what happens to other candy sales. This is the so-called scientific gold standard method for measuring them. Even this, of course, does not produce absolute philosophical certainty, but a series of replications of such experiments establishes what we mean by scientific validation.
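The store experiment Jim describes is simple enough to sketch as a simulation. This is a hypothetical toy, not real data: the store count, baseline sales, and true substitution effect are all invented for illustration. The point it shows is that random assignment makes the treated and control stores comparable on average, so a simple difference in means recovers the effect.

```python
import random
import statistics

random.seed(0)

# Invented parameters for the sketch.
BASE_SALES = 100.0    # average weekly other-candy sales per store
SUBSTITUTION = 15.0   # true lift in other-candy sales when Snickers is removed
NOISE = 10.0          # store-to-store variation

def other_candy_sales(snickers_removed):
    """Simulated weekly other-candy sales for one store."""
    sales = random.gauss(BASE_SALES, NOISE)
    return sales + SUBSTITUTION if snickers_removed else sales

stores = range(200)
treated = set(random.sample(list(stores), 100))  # random half stop selling Snickers

treated_sales = [other_candy_sales(True) for s in stores if s in treated]
control_sales = [other_candy_sales(False) for s in stores if s not in treated]

# With random assignment, the difference in mean sales between the two
# groups is an unbiased estimate of the substitution effect.
effect = statistics.mean(treated_sales) - statistics.mean(control_sales)
print(f"estimated substitution effect: {effect:.1f} units/week")
```

Replicating the experiment across more stores or more weeks shrinks the noise around the estimate, which is what Jim means by a series of replications establishing scientific validation.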
Michael Keane discusses these and similar issues and talks about the need for structural models, and the limitations of the experimentalist approach:
Interestingly, it is easy to do natural experiments in marketing. Historically, firms were quite willing to manipulate prices experimentally to facilitate study of demand elasticities. But it is now widely accepted by firms and academics that such exercises are of limited use. Just knowing how much demand goes up when you cut prices is not very interesting. The interesting questions are things like: Of the increase in sales achieved by a temporary price cut, what fraction is due to stealing from competitors vs. category expansion vs. cannibalization of your own future sales? How much do price cuts reduce your brand equity? How would profits under an every-day-low-price policy compare to a policy of frequent promotion? It is widely accepted that these kinds of questions can only be addressed using structural models—meaning researchers actually need to estimate the structural parameters of consumers’ utility functions. As a result, the “experimentalist” approach has never caught on.
He goes on to describe how non-experimental data was used in an influential paper of his in marketing:
However, the fact that we should pay close attention to sources of identifying variation in the data is not an argument for abandoning structural econometrics. Plausibly exogenous variation in variables of interest is a desideratum in all empirical work—not an argument for one approach over another. Consider Erdem and Keane (1996). That paper introduced the structural approach into marketing, where it rapidly became quite pervasive. But why was the paper so influential? One factor is that many found the structural model appealing…
….But at least as important is that the paper produced a big result: it provided a reliable estimate of the long run effect of advertising on brand equity and consumer demand. This had been a “holy grail” of marketing research, but prior work had failed to uncover reliable evidence that advertising affected demand at all—an embarrassing state of affairs for marketers!…
Why did we find evidence of long-run advertising effects when others had not? Was it the use of a structural model? I think that helped, but the key reason is that we had great data. Specifically, we had scanner data where households were followed for years, and their televisions were monitored so we could see which commercials each household saw. If you are willing to believe that tastes for brands of detergent are uncorrelated with tastes for television shows (which seems fairly plausible), this is a great source of exogenous variation in ad exposures. I agree that all econometric work, whether structural or not, should ideally be based on such plausibly exogenous variation in the data.
So there is one influential example where non-experimental data was successful where previous researchers in a field where experiments are popular had long been unsuccessful. There are times when experimental data is much better suited to answering a question than non-experimental data, and there are times when the converse is true.
Adam posts on epistemic humility and consistency. Jim Manzi responds:
I think that Ozimek’s demand for consistency is fair, and as I said at the start of this post, important. My upcoming book is focused on (i) making the case that this kind of epistemic humility is justified on the evidence, and (ii) trying to work out some of the practical implications of this observation.
Manzi also says:
At the level of semantics, I couldn’t care less what label is applied to economics. I think that the operational issue in front of us is what degree of rational deference we should give to propositions put forward by the economics profession.
I think semantics and perhaps methodologism is where Manzi and I, at least, get hung up.
My first point is that economics is and should be a science in the sense that we can and should use reason and evidence to further our understanding and predictive abilities about the economic world around us. To the extent that Manzi isn’t disputing that, the question of “is it science” is probably not meaningful.
On humility. I agree with Adam and Manzi, though this extends beyond economics and climate models. There is only so much existential doubt that most people can stand, but it seems highly unlikely that we are suffering from a lack of it.
“How would I know if I were wrong?” is a question we should be asking on virtually all matters, from the stimulus to whether or not I am actually sitting at a computer right now.
How much time we devote to this question depends in part on how much evidence we have and in part how important the question is. Yet, social science doesn’t represent a stark departure from physical science. In all areas humility is crucial.
Which brings me to my next point: controlled random experiments are not radically different from careful observation.
A random trial is simply a physically controlled regression analysis. There is no fundamental difference between performing a regression on data collected in the field and data generated in the lab. It is simply that in the lab you hope that you have performed all the necessary controls physically rather than statistically.
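The equivalence claimed here can be illustrated with a small simulation, with all parameters invented for the sketch: regressing the outcome on a randomly assigned treatment recovers the true effect directly, because randomization controls for the confounder physically; with observational data the same number is recovered only after controlling for the confounder statistically (here via residualization, in the Frisch-Waugh style).

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 5.0   # the treatment effect we are trying to recover
n = 10_000

def ols_slope(x, y):
    """Slope of a simple regression of y on x: cov(x, y) / var(x)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

confounder = [random.gauss(0, 1) for _ in range(n)]

# 1) Randomized trial: treatment assigned by coin flip, so it is
#    independent of the confounder by construction (the "physical" control).
treat_rand = [1.0 if random.random() < 0.5 else 0.0 for _ in range(n)]
y_rand = [TRUE_EFFECT * t + 2.0 * c + random.gauss(0, 1)
          for t, c in zip(treat_rand, confounder)]
rand_effect = ols_slope(treat_rand, y_rand)  # close to TRUE_EFFECT

# 2) Observational data: treatment now depends on the confounder,
#    so the naive regression is biased upward.
treat_obs = [1.0 if c + random.gauss(0, 1) > 0 else 0.0 for c in confounder]
y_obs = [TRUE_EFFECT * t + 2.0 * c + random.gauss(0, 1)
         for t, c in zip(treat_obs, confounder)]
naive = ols_slope(treat_obs, y_obs)  # biased well above TRUE_EFFECT

# Statistical control: residualize treatment and outcome on the
# confounder, then regress residual on residual (Frisch-Waugh).
g = ols_slope(confounder, treat_obs)
h = ols_slope(confounder, y_obs)
t_resid = [t - g * c for t, c in zip(treat_obs, confounder)]
y_resid = [yi - h * c for yi, c in zip(y_obs, confounder)]
controlled = ols_slope(t_resid, y_resid)  # close to TRUE_EFFECT again

print(f"randomized: {rand_effect:.2f}  naive: {naive:.2f}  controlled: {controlled:.2f}")
```

Both routes target the same parameter; the difference is whether the confounder is neutralized by the assignment mechanism or by the econometrics, which is exactly the sense in which the lab and the field are on a continuum.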
However, the physical controls can still fail. Notably, double-blind experiments are an attempt in medicine to go beyond simple randomization, because simple randomization is not enough. Even with double-blinding, however, results are often not generalizable.
On the practical implications. My impression is that Manzi fears a society in which social scientists are allowed to determine every aspect of policy, and policy is given free rein over people’s lives.
I’m not sure I think that’s as big a danger as Manzi does. We might agree on how bad it would be, but I suspect I think it’s less likely. Ironically, I think it’s less likely in part because Hayek won this argument among economists, and economists have endeavored to keep those observations alive.
More generally though, the problem of people thinking they know what’s best for other people is by no means a new one and by no means limited to social science.
To an extent social science stands as a countervailing force to this tendency because one acquires fame and influence in the social sciences by saying that what everyone else said before was wrong. This incentivizes the young to criticize the old.
In contrast, religious, military and communalist belief structures reward people for showing deference to established views.
Even still, I do recognize the reasoning behind being wary of authority generally, and I hope to see Manzi as a partner in figuring out what to do about overconfidence.
I hope to have a more complete reply to Jim Manzi’s assessment, but I wanted to make a couple of remarks off the cuff.
First, Manzi says:
Economists will sometimes make explicit claims that “the economic science says X,” and will more frequently make implicit claims for scientific knowledge by flatly asserting the known truth of some predictive assertion. This is normally a statement made around some specific policy question – we should (or should not) execute the following stimulus program; we should (or should not) raise the minimum wage right now, etc.
. . . all we have is an informed opinion of the type we might have from an expert historian rendering an opinion about something like the likelihood that Libya would revert to an authoritarian government within ten years if it overthrew Gaddafi
It’s important to distinguish between economics as science and economics as a policy driver. Manzi is focusing on economic statements that are made as policy drivers and saying they are only informed opinion. Yet informed opinion is exactly what is being offered. Economic science is a different enterprise.
Saying that we should execute the following stimulus program is much different than saying that we have an established scientific principle that stimulus has such-and-such effect. Contrary to intuition the first statement is far, far weaker. This is why “economic science says” is heard less frequently than “we should.”
Even setting aside personal values, “we should” is, by its very nature, a statement about subjective probability distributions. It is saying I believe that the distribution of possible effects in the stimulus world is – by some metric – superior or inferior to the distribution of possible effects in the non-stimulus world.
This requires only that you have some evidence – any evidence – that the stimulus is more likely than not to do things that you judge to be good or bad.
As such, saying that “we should” do something does not make an implicit claim about factual knowledge. Moreover, no one advising the government acts as if it does. Hardcore proponents of democracy assert that “we should” follow the advice of a group of people on the grounds that they have all survived to the age of 18. This is hardly a claim to any sort of scientific knowledge.
Now, I know Manzi’s complaint will be that economists come waving models and multipliers as if their recommendations were based on well-established science. However, this is not how economists have traditionally offered their evidence. Economists are famous for refusing to draw firm conclusions and for offering loads of caveats. As Harry Truman famously said:
Give me a one-handed economist! All my economists say, “On the one hand… on the other.”
That scarcely represents overselling policy recommendations as scientific knowledge. In recent years some economists, myself included, have responded to the near relentless pressure for clear concise statements with
Our model suggests . . .
I believe you will find this statement repeated over and over again in congressional testimony. No “there is”, “there will be”, “it is a scientific fact.”
“Our model suggests” gives you one interpretation. Many economists would love to stand before legislatures and give an hours-long lecture on all of the evidence and competing possibilities. However, you get 20 minutes, and the audience will demand that your story have a moral.
At the end of a presentation, more than once, the very first question has been this exact phrase: That’s all very interesting professor, but are you saying we should do this or not?
It’s a joke among my friends and family that I begin the answer with “Well, . . . “ Again, no false mantle of scientific certainty.
Now, let’s consider economists as pundits. Aren’t they asserting models as facts? Very rarely.
Take Paul Krugman.
Conservatives will no doubt have noticed that one of Krugman’s major themes is that their point of view is stupid. One might be inclined to think that this is a rude way of saying “you do not have access to the scientific knowledge that I do.”
It is not. It is a statement about what he thinks of your intelligence and ability to draw well formed conclusions.
He is not saying, I have such a deep understanding into the nature of the economy that everyone should listen to me. He is quite literally saying that the statements of conservatives convey such a shallow and imbecilic understanding of the economy that no one should consider listening to them.
He is not claiming the mantle of science, he is claiming the mantle of not being a moron.
On the opposite side of the spectrum, go back and look at the public statements of Milton Friedman. How many times did he lean on the fact that economics had established scientific knowledge that shouldn’t be questioned?
He always began with very simple facts and then drew out a small, compelling story. At worst, he would say “look at the evidence” and then proceed to offer the type of simple statistics that wouldn’t be uncommon in an Op-Ed.
Again, no implicit claim to scientific knowledge, only an implicit claim that reasonable informed people would agree with him.
The public policy statements of economists aren’t assertions of scientific knowledge. They are informed argument and economists present them as such. There is no doubt that economists use their knowledge of economic science to inform their policy arguments. Yet, in making those arguments they are not claiming scientific knowledge that they do not possess.
What you can argue is that economists think that they are smarter than everyone else. Indeed, economists across the political spectrum have made precisely that point. From Greg Mankiw
“President Summers asked me, didn’t I agree that, in general, economists are smarter than political scientists, and political scientists are smarter than sociologists?” [former dean Peter] Ellison told the Globe.
Here (via Mark Perry, posted two days ago) are GRE scores by field. Economists rank number 4. Political scientists are number 17, and sociologists are number 23.
In short, Manzi’s true point shouldn’t be that economists falsely assert scientific knowledge where there is none. It should be that we are arrogant pricks. I think many economists would agree.
This is a question that popped up on twitter last night and I want to address it somewhat. First, I want to say up front that there is more agreement among economists than the blogosphere and popular media would suggest. Here is Tyler Cowen making the same point, and I’d venture that most of the economist bloggers, even those who disagree with the mainstream on a lot, would agree with that.
That’s not to say there isn’t something to the critique. One of the strongest proponents of the un-scientific nature of economics is George Mason economist Russ Roberts. But I think even his criticism of the field is much more limited than most econ critics might think. His argument is that much of economics is a science, but a science like Darwinian biology:
Is economics a science because it is like Darwinian biology? Darwinian biology is very different from the physical sciences. Like economics it is a very useful way to organize your thinking about complex phenomena. But it is not a predictive or very precise science or whatever you want to call it…. Darwinism, like much of economics, exploits tautological reasoning. If the fossil record is incomplete or shows no change over vast periods or the pace of change is inconsistent with the fossil record, the theory is not discarded but modified with the concept of punctuated equilibrium. Is punctuated equilibrium true? There is no real way of knowing. It is our best hypothesis given very limited data. Is it a science? Sure. But it is a science that is unlike physics. That’s OK. It is still a very useful way of organizing one’s thinking about evolution. And the “imperfection” of biology is fine unless you really want to know when the elephant got his trunk. Then you are in unscientific territory. It doesn’t matter whether our understanding of natural selection is imperfect or that we simply don’t have enough fossil data. Biologists understand the limits of their field.
This is to say economics is a science, but a more difficult one, and with real limitations on knowability. Russ argues that economists should have more humility about the precision and certainty of their estimates, and that to do otherwise is to engage in what he calls “scientism”. I am a skeptic by nature, and so I find Russ’ strong statements of epistemic humility appealing. I also think that he is generally correct that economists, and all people for that matter, have too much confidence in their beliefs. We should be humble about what we know, and what our studies have shown.
But the question is what beliefs does such skepticism leave you with? If one is going to have a very skeptical take on economics, then one should extend an appropriate (although not necessarily symmetric) amount of skepticism to other sciences. For instance, here is Russ applying his skepticism to climate science and comparing it to macroeconomics:
I remain agnostic on AGW. I am not a climate scientist. But I know something about multiple regression analyses with complex phenomena. It is my impression that like macro models, these models do not perform well with out-of-sample predictions. That is, they are fitted to the past and then used to make predictions about the future. When the future does not turn out to be like the past predicted, the models are tweaked (improved!). The problem with this methodology is that the tweakers of the models are prone to confirmation bias.
But even Russ falls short of applying his own rigorous skepticism. For example, here is Robin Hanson chiding Russ for his uneven skepticism when it comes to his belief that handgun ownership deters crime.
The lesson to take from this isn’t that if you don’t believe economics is a science then you must reject climate science or believe guns deter crime. It’s that if you’re going to hold economics research to an extremely high burden of proof, then you should be prepared to subject all of your beliefs to such standards. What this will leave you with is mostly weak beliefs about the world for a lot of stuff that matters to you, whether it be about medicine, history, biology, psychology, criminal justice, climate science, or economics. Maybe widespread weak beliefs are a better approximation of the truth, I don’t know, but I do know that very few people reason like that consistently, or are willing to. Maybe they should. But even here, the vast majority of humanity has more belief changing to do than economists.
A final and related point I want to make is that macroeconomics is both science and engineering, as Greg Mankiw has argued in a paper that should be read (it’s very light reading, seriously) if you’re interested in delineating between the scientific and un-scientific parts of macro. Economics is also history, and moral philosophy, and contains many individual studies which in-and-of-themselves are not scientific. But economics as a field is a science in that all claims about reality must ultimately be rooted in empiricism, and models and paradigms must be falsifiable and eventually tested against reality.
It can oftentimes be difficult to see the scientific process at work in economics, as in other fields. Sometimes we are stuck at impasses where we are left with little more than theory to guide us, and sometimes empiricism is limited to testing particular model parameters, and ultimately our confidence should be limited by this. And sometimes what looks like pointless or tautological theorizing is really theorists attempting to build tools and lay groundwork for empiricists. It’s easy to look at some of this and think it un-scientific, but not all steps of the scientific process look like science.
Another question is, if economics weren’t a science, would previous paradigms have been so thoroughly done in by empirical outcomes? The old Keynesian Phillips Curve held that there was a tradeoff between inflation and unemployment. When that relationship broke down during the stagflation of the 70s, the Phillips Curve was invalidated, and this helped shift macro away from old Keynesianism and towards the new classical paradigm. Real Business Cycle models of the 80s were also invalidated by reality: it was clear that money mattered, and in the real world it was hard to find technology shocks to explain actual recessions.
The point here is that in the long-run economic paradigms and methodologies are judged by their ability to explain the real world. Even if individual contributions within the field may contain what looks like un-scientific analysis, the field proceeds as a science. Yes, it is a big field, and it isn’t hard to point to parts of sub-fields that are lacking in empiricism. But to wave your hand at economics in general and say it’s unscientific is to diminish a lot of important and useful work by researchers who’ve spent more time thinking hard about causality and empiricism than almost anyone who would make such a criticism. If you’re trying to convince people that economists should show more humility about what they know, then that is an awfully arrogant way of going about it. People making such claims should be forced to sit down with James Heckman, John List, or Esther Duflo and explain to them why what they’re doing isn’t science and how they aren’t scientists.
I want to tie together two separate posts on Marginal Revolution that together make a point I’ve been meaning to make. Recently, Alex wrote about how Genetic Engineering may help humans compete against AI in future labor markets. He also points to other human advancing technologies as well:
…In the not so long run it’s not about computers substituting for labor or even complementing labor, it’s about designing labor to complement computers (and vice-versa). Think about how quickly the phone has migrated from the desk, to the hand, to the ear, to the ear canal. The technology to enhance humanity with access to the internet is literally burying itself into our heads, call it I-fi. There is more to come.
The problem is that we are framing the question as being about how we would compete with AI, and we see ourselves as quite helpless. But how would a librarian circa 1950 compete today against Google at the task of helping a student find relevant information quickly? They wouldn’t stand a chance, as they slowly shuffled through card catalogs based on the Dewey Decimal System. But how does a librarian today, equipped with all of their modern tools and databases, compete against Google? In many instances, Google serves the student best. But today’s librarians, equipped with all their modern training and tools, are still extremely useful resources for students doing research, despite the existence of Google and dozens of other similar tools. The point is we shouldn’t think about our current selves competing against AI, but our future selves and descendants, with all of the computer-based knowledge and skills they will have.
This brings me to the second point from MR, this time from Tyler, about playing chess with and against computers:
If the computer is set at 2200 strength, “me plus the computer” (I override it every now and then) almost always beats “the computer alone.” Often we beat “the computer alone” very badly. If the computer is set at full strength, my counsel is worth much less, although it is not valueless.
The future will not be just you against AI in the labor markets, but you and AI against AI alone. One way to be more successful in the future will be to learn to work well with atomistic decision machines that are efficiently and logically maximizing some objective criteria in a raw emotionless manner. Both Tyler and Alex have a good head start, having spent so much time with Robin Hanson.
As the last person in the world to review Tyler Cowen’s The Great Stagnation everything I have to say will likely have been said already. You might say there is little low-hanging criticism fruit left. So I’ll keep it short.
I think Tyler is missing a lot in his solutions chapter, but that’s probably because his diagnosis directly implies so much about what we shouldn’t do policy-wise that simply convincing people that this part of the argument is correct would already accomplish a lot in this vein. To the left he says that we cannot redistribute our way to growth, and to the right he says we can’t tax cut our way to it. Policy is what we do and what we don’t do, and if he could convince people that a) slow growth is a problem, and b) these policies aren’t solutions, then it would represent a huge and positive change-of-course. Going too far into policy recommendations beyond that runs the risk of distracting readers and critics from these key points.
That said, I don’t disagree much with his diagnosis, so despite understanding why he may have done so, I’m going to address some possible solutions I think he ignored. First and foremost is one of the key low-hanging fruits of yore: skilled immigration. If my count is correct, Tyler mentions immigration three times. Twice it comes in the form of what could be called his thesis statement:
“In a figurative sense, the American economy has enjoyed lots of low-hanging fruit since at least the seventeenth century, whether it be free land, lots of immigrant labor, or powerful new technologies. Yet during the last forty years, that low-hanging fruit started disappearing, and we started pretending it was still there.”
That free land is no longer available is obvious, and Tyler spends a lot of the rest of the book discussing technology, but what about the “lots of immigrant labor” that was so important before? Tyler discusses the possibility for improvement here in passing in the section on education:
“I’m also heartened by how many students from foreign countries wish to study in the United States, if only they could get the visa.”
Our current immigration policies could do a lot more to encourage skilled immigration, and worldwide immigration restrictions and inefficient immigration policies are preventing human capital from moving toward its best and highest use, which reduces global economic growth. Why was our past immigration one of the top low-hanging fruit available, but the possibility of more and better immigration today barely worth mentioning? I’m not sure if Tyler does not go into this because he is not optimistic that this represents low-hanging fruit, or, as I mentioned above, because he does not want to alienate the reader with unpopular proposals for reform.
Another major issue ignored is, as Matt Yglesias has pointed out, intellectual property law. Is this an issue that has limited ability to improve economic growth? Or, as I suggested above, is Tyler ignoring divisive issues in order to focus readers on his central arguments? One piece of evidence against this hypothesis is Tyler’s inclusion, as one of the trends that should make us optimistic, of the fact that democrats are turning against teachers unions, and how that makes more K-12 reform possible. Perhaps this seemed like a much less divisive point when Tyler wrote the book than it does today. In the end I am left unsure whether Tyler’s omissions reflect his pessimism, or if he’s just being strategic.
Brad Delong got me interested in the details of a few of these cases:
You can sleep easy if you play by the rules even if you think the rules are non-optimal, as long as you point that out. That’s Milton Friedman.
You cannot sleep easy if you play by the rules if you think the rules give you a license to steal. That’s Robert Nozick, Robert Bork, and Ayn Rand.
That’s the difference between utilitarian and deontological theories. Deontology is a bitch.
To catch up: Robert Nozick freely entered into a lease with his landlord, Erich Segal. After living in the apartment for a year or so, Nozick then sued Segal for violating rent control laws and further refused to move out unless paid additional compensation. According to his own moral theory, this constituted extortion.
Ayn Rand received Social Security and possibly Medicare payments to cover lung cancer treatment. This is despite her characterization of the welfare state as theft, and a particularly egregious form of theft at that, because it is legal.
Robert Bork sued the Yale Club after suffering a slip and fall, despite arguing against frivolous lawsuits. I couldn’t find enough information on Bork – in the short time I looked – to get a real sense of his moral philosophy concerning slip and falls.
For Nozick and Rand, however, these are clear breaches of the most common interpretations of their moral philosophy. Does this undermine their philosophy at all?
On one level we are of course tempted to say: no, what is true is true regardless of whether the popularizer of those truths honors them. On the other hand, “ought” implies “can.” If not even Nozick and Rand can hold to these principles, are they a meaningful guide to how we ought to structure our society? While these are by no means view-killing breaches, they do raise the question: is anyone capable of living according to these maxims?
I looked a little into Nozick’s and Rand’s responses. By my reading, Nozick’s offers a fair degree of absolution for his philosophy, while Rand’s leaves me scratching my head.
Nozick, via Julian Sanchez:
I knew at the time that when I let my intense irritation with representatives of Erich Segal lead me to invoke against him rent control laws that I opposed and disapproved of, that I would later come to regret it, but sometimes you have to do what you have to do.
This reads to me as this: Yes, what I did was wrong. I knew it at the time, but I was pissed.
This statement moves the onus from the philosophy to the individual. Had Nozick dithered and said “Well, but Segal deserved it” that would be different. Instead, he seems to admit that he acted immorally.
Said another way, it’s one thing to abandon your principles when you find that they are inconvenient to you. It’s another to fall victim to weakness of will and do something you know you will later regret. We don’t have any philosophy, save perhaps hedonism, that protects people from weakness of will.
Rand, on the other hand, claimed:
It is obvious, in such cases, that a man receives his own money which was taken from him by force, directly and specifically, without his consent, against his own choice. Those who advocated such laws are morally guilty, since they assumed the “right” to force employers and unwilling co-workers. But the victims, who opposed such laws, have a clear right to any refund of their own money—and they would not advance the cause of freedom if they left their money, unclaimed, for the benefit of the welfare-state administration.
This is much iffier. Here she does seem to be saying that different rules apply to her followers simply because they are her followers. This has the feel of ad hocery. There might be significantly more to it, but it seems to be a more eloquent way of saying “We were just sticking it to the man that was sticking it to us.”
Doesn’t the taking of benefits imply that more resources will have to be confiscated to support the program? And, while appealing for a refund makes perfect sense, simply using the system without a guarantee that you are matching funds put-in with funds taken-out and certainly without the express permission of the people who are currently being taxed seems morally ambiguous in Rand’s own terms.
Don’t do it, Ezra. You just calmly finish drinking that diet soda and don’t concern yourself for one second. I know you’re worried, because you fearfully tweeted earlier today about this Tom Philpott story at Grist about how diet soda causes cancer. But the best thing you can do for your health is not listen to Tom Philpott, because the unnecessary stress caused by worrying about the aspartame in your diet soda is far more dangerous for you than the aspartame in your soda. Philpott brings two pieces of evidence to bear in his argument that diet soda is bad, neither of which will be unfamiliar to you if you’ve followed the debate (I use the term “debate” loosely, in the same sense that we’re “debating” whether 9/11 was an inside job). The first is the old story hippies tell each other around the campfire about how Donald Rumsfeld and the Reagan-fueled ’80s snuck aspartame past the FDA with all sorts of hijinks. Not only is this story old, so too is its debunking. The GAO was asked in 1987 to do a full retrospective study of the approval of aspartame by the FDA, and here is what they reported:
FDA adequately followed its food additive approval process in approving aspartame for marketing by reviewing all of Searle’s aspartame studies, holding a public board of inquiry to discuss safety issues surrounding aspartame’s approval, and forming a panel to advise the Commissioner on those issues. Furthermore, when questions were raised about the Searle studies, FDA had an outside group of pathologists review crucial aspartame studies.
The second piece of evidence is a study by the Ramazzini Foundation, whose name is also familiar to anyone following the aspartame “debate”. Here is what the FDA had to say about this study in September of 2010:
FDA could not conduct a complete and definitive review of the study because ERF did not provide the full study data. Based on the available data, however, we have identified significant shortcomings in the design, conduct, reporting, and interpretation of this study. FDA finds that the reliability and interpretation of the study outcome is compromised by these shortcomings and uncontrolled variables, such as the presence of infection in the test animals.
Additionally, the data that were provided to FDA do not appear to support the aspartame-related findings reported by ERF. Based on our review, pathological changes were incidental and appeared spontaneously in the study animals, and none of the histopathological changes reported appear to be related to treatment with aspartame.
The FDA aren’t alone in believing aspartame doesn’t cause cancer. They’re joined in this conclusion by the American Cancer Society, the National Cancer Institute, the National Institutes of Health, and the Mayo Clinic. So you can trust Tom Philpott and the Ramazzini Foundation, or you can trust the most highly esteemed medical institutions in the United States. Like I said, Ezra, drink up.
If you want even more details about this tall tale (let’s face it, you don’t), here is an older post where I dig in much more.
An excerpt from a book on the Jeopardy playing computer appeared in the Wall Street Journal this week. In it, the leader of the Watson team at IBM talks about what their creation means for humanity, and the kinds of questions he hopes to inspire:
Over the next four years, Mr. Ferrucci set about creating a world in which people and their machines often appeared to switch roles. He didn’t know, he later said, whether humans would ever be able to “create a sentient being.” But when he looked at fellow humans through the eyes of a computer scientist, he saw patterns of behaviors that often appeared to be pre-programmed: the zombie-like commutes, the near-identical routines, from tooth-brushing to feeding the animals, the retreat to the same chair, the hand reaching for the TV remote. “It’s more interesting,” he said, “when humans delve inside themselves and say, ‘Why am I doing this? And why is it relevant and important to be human?’ “
His machine, if successful, would nudge people toward that line of inquiry. Even with an avatar for a face and a robotic voice, the “Jeopardy” machine would invite comparisons to the other two contestants on the stage. This was inevitable. And whether it won or lost on a winter evening in 2011, the computer might lead millions of spectators to rethink the nature, and probe the potential, of their own humanity.
He seems to be arguing that the moment when humans seriously question what separates us from machines will have as much to do with our having become more machine-like in the rote of our habits, as it does with machines becoming more human-like in their behavior.
I don’t think it is likely, or that Ferrucci is claiming it, but one could write a science fiction tale where our recognition of genuine A.I. comes purely as a result of a lower and lower bar for what we consider human intelligence, and actually occurs during a static state of machine intelligence.
Much less pessimistically, Ferrucci is hoping that competition with, and mimicry by, machines will move us to better ourselves and be less machine-like. Mr. Ferrucci apparently did not see Wall-E, where technology and A.I. allows humans to become more machine-like and lazy. For sure, spell-check has made us spend less time working on being good at spelling, and calculators have allowed us to become less skilled with basic numeracy. But will machines that can do all simple minded tasks allow us to specialize entirely out of simple-minded tasks?
This mostly makes me envision a future where 60% of the U.S. are public sector funded artists, bloggers, and service reviewers, since subjectivity will be the last place machines can compete with us. People will spend all day reviewing Netflix movies and contributing to other crowd-sourced clouds of subjectivity, where there will be at least some value to our actions which machines can’t replicate. Reviewing and ranking will be all the more important given that so many people will be artists “for a living”, and the amount of content will be massive. The art of science, from knowing what is a meaningful hypothesis to envisioning a novel way to test it, also tells me we’ll have many more freelance scientists. That doesn’t sound like so bad a world.
Kevin Drum pushes back:
The healthcare front is harder to judge. I agree with Tyler that we waste a lot of money on healthcare, but at the same time, I think a lot of people seriously underrate the value of modern improvements in healthcare. It’s not just vaccines, antibiotics, sterilization and anesthesia. Hip replacements really, truly improve your life quality, far more than a better car does. Ditto for antidepressants, blood pressure meds, cancer treatments, arthritis medication, and much more. The fact that we waste lots of money on useless end-of-life treatments doesn’t make this other stuff any less real.
Matt Yglesias cosigns:
I think that’s spot on. The consumer surplus involved in successful medical treatments is gigantic. Indeed, I would say that’s probably a good start at an explanation for why there’s so much waste. But from a policy point of view this is why I often find myself moored between the impulse to “control costs” and the impulse to “expand access.” What I really want to do is promote good health and there are an awful lot of things we could do to do that at very low cost.
Points well taken and I’ll both backpedal a bit and clarify a bit.
First, yes there are big quality of life improvements that don’t show up in our life expectancy data. Treating pain and emotional distress, of many different forms, is at least as important as extending life and our system has made great strides in doing that.
I want to completely concede that the treatment of pain has improved drastically and that has made a world of difference.
Second, I want to clarify about major breakthroughs. People talk about statins, beta-blockers, chemotherapy, radiation therapy, etc. in the treatment of our two major killers, cardiovascular disease and cancer. However, the actual dent these things make in life expectancy is small.
Take statins, which are one of the major weapons in the fight against heart disease. Even the published results in JAMA suggest that the number of people who have to be on statins to prevent a single person from having a single coronary event is between 44 and 258.
However, real world results are almost always worse than trials, not all coronary events are fatal, and the prevention of a coronary event only extends life expectancy if the patient doesn’t die of something else in the meantime. Which makes our frontline weapon not that effective in actually extending life.
Contrast this to penicillin in the treatment of Scarlet Fever. The number to treat is basically one. See a case of Scarlet Fever, administer penicillin. Nearly a fifth of all the people who contract the disease would die from it with no treatment. With treatment almost no one does. The extension in life can be many decades if the disease is contracted in the teens or twenties.
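The contrast comes straight out of the number-needed-to-treat arithmetic: NNT is just one over the absolute risk reduction. A minimal sketch, using illustrative rates I’ve assumed for the example rather than figures from the studies above:

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / (control_event_rate - treated_event_rate)

# Statin-like case (assumed rates): coronary events fall from 4.0% to 2.7%.
# Dozens of people must be treated to prevent a single event.
print(round(nnt(0.040, 0.027)))  # ~77

# Scarlet-fever-like case (assumed rates): untreated mortality ~18%,
# treated mortality near zero. Counting only deaths prevented the NNT
# is about 6; counting cures of the infection itself, it is basically 1.
print(round(nnt(0.18, 0.0)))  # ~6
```

Small absolute risk reductions are what drive the 44-to-258 range quoted above: even an impressive-sounding relative risk reduction translates into a large NNT when the baseline event rate is low.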
Statins don’t do this. Beta-blockers don’t do this. Chemotherapy in the treatment of cancer doesn’t do this. And, keep in mind these are treatments. Much of our health care dollars are spent on diagnostics. For diagnostics to have any life extending value at all you have to find something, which usually you don’t.
Diagnostics are particularly vulnerable to my critique. People feel reassured when the doctor walks in and says “the MRI was clear.” However, the docs could rephrase this as “I just spent $2000 of your money and it’s going to make no difference in your health outcome whatsoever, yeah!”
I think I have complained about decision trees before, but people confuse changes in their information set with changes in the state of nature. That a diagnostic procedure comes back negative doesn’t make you healthy, it just reduces your uncertainty about a predetermined fact.
Whatever was true before is true now. We just have more information. Was that information worth $2000? That depends crucially on what we do with it. However, if what we do with it is usually nothing, and sometimes to initiate a treatment which will only have an effect in 1 out of 50 cases, then we really have to question what we are doing here.
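Bayes’ rule makes the “changes in your information set, not the state of nature” point concrete. A minimal sketch with assumed numbers (a 2% prior probability of disease, a test with 95% sensitivity and 90% specificity; none of these figures come from the post):

```python
def posterior_given_negative(prior, sensitivity, specificity):
    """P(disease | negative test) via Bayes' rule."""
    p_neg_given_disease = 1 - sensitivity  # false negative rate
    p_neg_given_healthy = specificity      # true negative rate
    p_negative = (p_neg_given_disease * prior
                  + p_neg_given_healthy * (1 - prior))
    return p_neg_given_disease * prior / p_negative

prior = 0.02
post = posterior_given_negative(prior, sensitivity=0.95, specificity=0.90)
print(f"P(disease) before the scan: {prior:.3f}")   # 0.020
print(f"P(disease) after a clear scan: {post:.4f}")  # ~0.0011
```

With these assumed numbers the clear scan moves you from 98% likely healthy to about 99.9% likely healthy. Whether that sliver of reduced uncertainty was worth $2000 depends entirely on whether anything you’d do differs across that sliver.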
Free traders like to point out that technology likely destroys far more American jobs than globalization, and yet globalization skeptics do not complain when this happens. Furthermore, we like to add, why should individuals whose jobs are offshored be entitled to a better safety net than individuals whose jobs are made redundant by technology? Aside from being absolutely true, free traders like myself engage in these arguments because they bolster the case for free trade by pointing out the logical inconsistency between people’s intuitively positive feelings about technological progress and their intuitively negative feelings about free trade.
But what happens in the future if artificial intelligence means that human-like robots start replacing jobs? When the machine that replaces you has a voice and a name, like Watson, it will feel different than when the machine is a big metal contraption that attaches widget A to widget B. I suspect that the more human-like the technology that replaces human work, the more people will begin to finally heed the arguments of free traders and reconcile their feelings towards technology driven versus globalization driven job destruction. Unfortunately, this won’t be in the direction we want. Instead, people will begin to see technological progress as a “they” who is “taking our jobs”.
Because it is true, I don’t think free traders should stop drawing attention to the connection between technology and free trade, but I do worry that one day it will come back to bite us as it makes the popular adoption of techno-phobic* beliefs that much easier.
*We will need a new word that reflects a bias towards favoring humans, sort of like nationalism or nativism except favoring humans instead of favoring one’s nation or its natives.