In another post, one perhaps too quick and dirty for the Atlantic, let me respond to this by Robin Hanson.

We all want the things around us to be better. Yet today billions struggle year after year to make just a few things a bit better. But what if our meagre success was because we just didn’t have the right grand unified theory of betterness? What if someone someday discovered the basics of such a theory? Well then this person might use his basic betterness theory to make himself better in health, wealth, sexiness, organization, work ethic, etc. More important, that might help him make his betterness theory even better.

After several iterations this better person might have a much better betterness theory. Then he might quickly make everything around him much better. Not just better looking hair, better jokes, or better sleep. He might start a better business, and get better at getting investors to invest, customers to buy, and employees to work. Or he might focus on making better investments. Or he might run for office and get better at getting elected, and then make his city or nation run better. Or he might create a better weapon, revolution, or army, to conquer any who oppose him.

. . .

All of which is to say that fearing that a new grand unified theory of intelligence will let one machine suddenly take over the world isn’t that different from fearing that a grand unified theory of betterness will let one better person suddenly take over the world. This isn’t to say that such a thing is impossible, but rather that we’d sure want some clearer indications that such a theory even exists before taking such a fear especially seriously.

I don’t think a betterness explosion is that far-fetched a concept. Indeed, we are in the middle of one right now: the sustained period of economic growth that followed the Industrial Revolution.

The important point, to my mind, is that this betterness revolution is ultimately limited by our ability to analyze and communicate information. There are various ways that process could change radically, and with it the nature of the betterness revolution.

For various reasons the grand unified theory of betterness seems elusive. But the betterness box does not. Suppose that we EM (whole-brain emulate) a single smart person a few thousand times, put all the EMs into one box, and then turn up the speed a million times.

Could not the box conceivably design better, faster boxes to put itself into, continuing a rapid betterness explosion? Would it not look to the rest of the world as if that single individual had simply “taken over”?
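To make the arithmetic of that feedback loop concrete, here is a minimal toy simulation. The copy count and initial speedup come from the scenario above; the per-generation design gain, the subjective effort per redesign, and the assumption that research parallelizes perfectly across copies are all illustrative assumptions, not claims from the post.

```python
# Toy model of the "betterness box": a few thousand emulated copies of one
# smart person, sped up a million times, repeatedly redesigning their own box.
# All parameters below are illustrative assumptions.

COPIES = 2_000           # "a few thousand" emulations of a single smart person
INITIAL_SPEEDUP = 1_000_000  # initial speedup over a biological brain
DESIGN_GAIN = 1.5        # assumed speed multiplier from each redesigned box
YEARS_PER_DESIGN = 2.0   # assumed subjective researcher-years per redesign

SECONDS_PER_YEAR = 365.25 * 24 * 3600

speed = INITIAL_SPEEDUP
elapsed_years = 0.0
for generation in range(1, 11):
    # Subjective research time is divided across all copies (assuming the
    # work parallelizes perfectly) and compressed by the current speedup,
    # so each redesign takes less wall-clock time than the last.
    elapsed_years += YEARS_PER_DESIGN / (COPIES * speed)
    speed *= DESIGN_GAIN
    print(f"gen {generation:2d}: {speed:15,.0f}x speedup, "
          f"{elapsed_years * SECONDS_PER_YEAR:.4f} wall-clock seconds elapsed")
```

Under these made-up numbers the first redesign takes about three hundredths of a wall-clock second, and each subsequent one is faster still; from the outside, ten generations of improvement would be effectively instantaneous, which is the sense in which the individual would appear to have just “taken over.”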
