Robin Hanson doesn’t think that self-improving AI is a big deal. Usually this type of thing is discussed in technologist terms, but in a blatant attempt at turf-grabbing I will throw out a simple toy growth model.

Suppose we have an economy described by

Yt = Tt * F(Kt, Lt), where K and L are capital and labor and the usual Inada conditions apply

dK/dt = s*Yt – δ*Kt, where s is a constant savings fraction and δ is a depreciation rate

dL/dt = p(Yt/Lt), where p() is an increasing concave function that produces the demographic transition

dT/dt = i()*Tt, where i() is the intelligence function described below

Before self-improving AI, i() is a function of Lt and Yt/Lt: we might assume that the more people there are and the wealthier each one is, the greater collective human intelligence. However, we wouldn’t simply say that intelligence is a function of Yt, because a society of 10 billion people living at subsistence is probably less “intelligent” than a society of 1 million living with 10,000 times the resources per capita. In other words, di/dL < di/d(Y/L). We will also assume that i() is constant returns to scale.
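To make this concrete, here is a minimal numerical sketch of the pre-AI system. Every functional form and parameter value below is my own illustrative assumption (Cobb-Douglas for F, a saturating p(), a constant-returns Cobb-Douglas i() tilted toward Y/L, and p() read as a per-capita growth rate); nothing in the model above pins them down.

```python
# Minimal sketch of the pre-AI system. Every functional form and
# parameter here is an illustrative assumption, not part of the model.

def F(K, L, alpha=0.3):
    """Cobb-Douglas production; satisfies the usual Inada conditions."""
    return K**alpha * L**(1 - alpha)

def p(y, a=0.04, b=3.0, m=0.005):
    """Population growth as a function of income per capita y = Y/L.
    Increasing and concave; it saturates, so Y/L can eventually pull
    ahead of population growth. Read as a per-capita rate (an assumption)."""
    return a * y / (b + y) - m

def i_pre_ai(L, y, c=0.001, beta=0.3):
    """Collective intelligence before self-improving AI: constant returns
    to scale in (L, y); beta < 1/2 tilts it toward income per capita,
    in the spirit of di/dL < di/d(Y/L)."""
    return c * L**beta * y**(1 - beta)

def simulate(T=1.0, K=1.0, L=1.0, s=0.2, delta=0.05, dt=0.1, steps=100_000):
    """Forward-Euler integration of the three laws of motion."""
    t = 0.0
    for _ in range(steps):
        Y = T * F(K, L)
        y = Y / L
        K += dt * (s * Y - delta * K)   # dK/dt = s*Y - delta*K
        L += dt * p(y) * L              # dL/dt = p(Y/L), per-capita reading
        T += dt * i_pre_ai(L, y) * T    # dT/dt = i() * T
        t += dt
        if T > 1e6:                     # stop once growth visibly takes off
            break
    return t, T, K, L

t, T, K, L = simulate()
print(f"stopped at t={t:.0f}: T={T:.3g}, K={K:.3g}, L={L:.3g}")
```

The saturation of p() is what eventually lets Yt/Lt climb, which is the slowdown described in the next paragraphs.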

What this implies is that technology improves, which increases population and slowly increases collective intelligence. However, this effect is limited because population expansion drives Yt/Lt down and hence keeps intelligence from taking off.

At some point, however, p() slows down enough that Yt/Lt rises. This induces a rise in intelligence, and hence a rise in income, which produces a further rise in intelligence. This explosion is what we call the Industrial Revolution.

Now, however, p() is flattening out, so that collective intelligence is held back by the limited number of humans doing the thinking.

A Yudkowsky-type intelligence explosion would have i() change modes so that i is a function of T. Yudkowsky suggests that i() under AGI would be increasing returns to scale; intelligence in and of itself would then go critical and we would reach infinite intelligence in finite time. Of course, in reality that just means we hit the ceiling on intelligence in finite time.
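To see the blowup explicitly, suppose for illustration that increasing returns takes the form i(T) = c*T^(1+e) for some e > 0 (my assumed functional form, not one Yudkowsky specifies). Then dT/dt = c*T^(2+e), and separating variables gives

T(t) = (T0^(–(1+e)) – (1+e)*c*t)^(–1/(1+e))

which diverges at the finite time t* = 1/((1+e)*c*T0^(1+e)).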

I haven’t worked through the math, but glancing at the system I don’t think this is necessary. If i() simply goes constant returns to scale in T, then the entire economy Yt will go critical in its reproducible factors and the system will still reach infinite wealth in finite time. Or, in the real world, the wealth ceiling.
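For what it’s worth, the simplest constant-returns form makes this easy to check. With i(T) = c*T (again my illustrative assumption), the technology equation alone becomes dT/dt = c*Tt^2, whose solution is

T(t) = T0 / (1 – c*T0*t)

This already diverges at t* = 1/(c*T0), dragging Yt = Tt*F(Kt, Lt) up with it; the feedback through s*Yt into capital accumulation only accelerates the approach.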

Of course, as an economist I ultimately care about wealth, not intelligence or technology.

So what’s the lesson? Right now, i() is determined by human minds, which means that it is limited either by the total number of humans or by the average wealth of humans, and these two things are inversely related. We can have more minds, but then they are poorer minds with less time to sit and think of new techs.

However, if we can break the mind gap, so that i() depends only on the current level of technology, then our system can go critical.

Also, a second lesson is that we only need the economy to go critical in its reproducible factors. Technology is one of them, but so is ordinary capital.
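The same separation-of-variables trick shows how. Suppose, purely for illustration, that output has increasing returns in capital, Yt = A*Kt^a with a > 1. Then dK/dt = s*A*Kt^a – δ*Kt, and once K passes the break-even point where saving outruns depreciation, the first term dominates and K reaches infinity in finite time, exactly as T did above.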
