39 Comments

I think this is impressive. But I am starting to question the locations of different "singularities." So for example, maybe we just view the human species like some extreme environmentalists do--as just one very successful animal species. Then maybe there are no singularities because humanity and our GDP increases are no more significant than dinosaur population increases.

Or, maybe more likely, we do acknowledge that humans are special but deny that any change amounting to "the end of the human era," as Vinge says, is predictable from human GDP, so that there is only one singularity: the evolution of humanity. This might make sense in that human evolution was the one of your singularities that totally changed the optimization process, from exchanging DNA to exchanging thoughts using words. AI might cause another such leap.


If it walks like a duck and quacks like a duck, we normally call it a duck. It doesn't matter whether it's hidden Markov models or a man in a Chinese room. Predicting the future correctly is a sign of intelligence no matter how it's implemented. Communicating with and understanding people is a sign of intelligence no matter how it's implemented.


The reason Google was able to predict that you wanted to know when the Superbowl started was not because of some generally applicable intelligence; it was because Google noticed that a lot of the people who started out typing "what" on that day went on to type "time does the superbowl start". That trained a Hidden Markov Model somewhere, and when you came along, it had a pretty good prediction going. The math here is a lot easier than training a computer to play chess, or most of the other classic AI feats that looked more intelligent than they were.

No understanding of language was necessary for this.
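
For concreteness, here is a minimal sketch, in Python, of the kind of frequency counting described above. It is my own illustration, not Google's actual system; plain prefix counts stand in for the Hidden Markov Model, and the query log is invented:

    from collections import Counter, defaultdict

    def train(query_log):
        # Map every prefix of every logged query to counts of the
        # full queries that followed it.
        completions = defaultdict(Counter)
        for query in query_log:
            for i in range(1, len(query)):
                completions[query[:i]][query] += 1
        return completions

    # Invented log: on game day, most people typing "what" wanted kickoff time.
    log = ["what time does the superbowl start"] * 50 + \
          ["what is a hidden markov model"] * 5
    model = train(log)
    print(model["what"].most_common(1)[0][0])
    # -> what time does the superbowl start

No notion of language enters anywhere; the "prediction" is just a lookup of the most frequent continuation of the typed prefix.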


Superhuman intelligence has been experimented with. The moon shot of the 1960s is a fabulous example of superhuman intelligence: hundreds of thousands of people organized and collaborated to achieve a goal that no one of them could have come close to alone.

The need to organize that complexity did not swamp the benefits of the increased complexity.

To Tim's point, arguably Google has built the first AI worthy of that name. Last year, on Super Bowl Sunday, while wondering what time the game started, I started to ask Google "what..." and it figured out that the question I probably wanted to ask was "what time does the superbowl start".

Up to this point, understanding language has really been what AI is all about. So now we will probably redefine AI so that computers actually have to show creativity in order to be considered intelligent.


I finally got around to finishing up this piece from the Spectrum, and I have to say the third page with all the predictions is incredibly stupid. I cannot imagine a larger collection of stupid statements about the robotic future from a smart guy. To name a few examples: what possible gain would AIs get from inhabiting mm-size bodies, rather than solely existing online? Why would "copying... make robot immortality feasible in principle, [but] few robots would be able to afford it" when copying is already so cheap today? The next sentence, about how "few robots would be able to afford robot versions of human children," I cannot parse at all. The future will be a highly complex interaction of so many effects that it's extremely hard to have any idea today how it will all play out. Yet the predictions Robin makes are so dumb and slipshod that it's easy to see it will not play out that way. These predictions are best viewed as insight into Robin's haphazard understanding of economics and technology more than anything else.


Writing is a possible cause of any 4,000-5,000 BC growth spurt.

The timing doesn't really favour farming, since that arose more like 10,000-15,000 years ago. Also, logically, the ability to transmit ideas reliably across generations is really the more significant evolutionary development.

http://en.wikipedia.org/wiki/Tărtăria_tablets


Disagree. The first AIs worthy of the name will most likely be built by Google/NSA/DARPA or similar - and they will probably be huge entities which play on a global scale.

Uploading is irrelevant. AIs will come a long time before that becomes possible - and once you have AI, uploading becomes a pretty pointless exercise. There are easier ways to simulate a human, if you really want to do that for some reason.

Similarly, practically nobody builds mechanical birds to fly items about. We have aeroplanes and helicopters for that. The funding fell out of the drive to make mechanical birds a long time ago.

The only reason to discuss uploading is as a proof-of-concept of the idea of AI not being too far away these days. The idea of uploading as an implementation plan is way out there: surely nobody in their right mind would deliberately create such an unmaintainable, incomprehensible mess for any practical purpose.


Tim, the issue is timescale. Diaspora is set centuries later, while my forecasts are for the early period after uploads become possible. Yes, it is unlikely that the ultimate optimal mind size is human, but minds would start out human-sized, with coordination gains to interacting with minds of a similar size.


Ben G - on the bizarre idea of insect AIs:

"This, I guess, is one of the oddest things about the digital minds in "Diaspora". After all those centuries, it's still optimal to have computer memory partitioned off into minds roughly the size of an individual human mind? How come entities with the memory & brain-power of 50,000 humans weren't experimented with, and didn't become dominant?"

- http://www.sl4.org/archive/...

The idea may make sense if you are crafting a novel for 20th century human readers - so they can identify with the characters - but I can't see how or why anyone would take it seriously as futurism.

God may love beetles - but he also made whales - and Google.


My impression is that this is not a Euro-centric version of history. Maybe stretch it to 600 BC, but that's pretty much the maximum period of change everywhere prior to 1700. Maybe it's Euro-centric not to note that globally, though not in Europe, there were other periods of less rapid but still rapid change: between, say, 850 and 1150, and between, say, 2100 BC and 1600 BC?

In any event, I am pretty strongly suspicious of all pre-modern population numbers. We don't even have confident estimates of the population of contemporary Afghanistan to within a factor of two! Native American population uncertainty is more like a factor of 30. Historical population figures look to me largely like compromises between two camps: people who guess by summing up the land areas we know were heavily cultivated and fairly casually estimating the efficacy of the agricultural techniques available, and people who simply want to assert long-term progress and extrapolate backwards, globally, simple trends that work in Europe from 1500 to 1700.


Phil, that it is regarded as the greatest period of growth is rather Euro-centric; the population estimates are not. Not that looking just at Greece or the Mediterranean will solve that problem.


Robin, my surprise is because that period - and most especially 500 BC-300 BC - is generally regarded as the greatest period of growth, in terms of knowledge and civilization, of all history up until perhaps the 17th century. It should show a steep rise, not a flattening out, unless economic growth is negatively correlated with cultural and technological growth. Something is deeply wrong with either that part of the chart or with our understanding of how economics and civilization interact.


Phil, historians disagree about population estimates for that period.


Robin Hanson's "Economics of the Singularity," just published in the IEEE's SPECTRUM ONLINE, suggests that "Wages would fall so far that most humans could not live on them." This model seems to predict the extinction of humans-as-we-know-them and does absolutely nothing to overcome bias (unless it leads you to decide that singularities are unlikely in a continuum ;)

Consider my own niche: 10 years ago I was the only one in the world who knew how to grease the gears and keep this particular econometric subsystem running. Then our organization embarked on an effort to clone my skills into a more automated system, with a more relational database and 'friendly', object-oriented use cases, so that the process was understandable to, and relevant to, a larger set of 'consumers' (in terms of a bureaucracy that could make their living off of this process ;) But while the new system certainly improves the functionality, that improvement is hardly equal to the increase in labor needed to coordinate the more complex system! Each efficiency advantage seems to create new problems that take more effort to resolve.

Biosemiotician Stan Salthe talks about extropy being achieved through the refinement of paths for harvesting the energy gradient. But you have not shown that copying brains into machines will increase productivity until you deal with the synchronization issues. These, I've found in automating the system I work on, are decidedly non-trivial.

Actually, this probably supports your case: Each input-constrained brain copy is not necessarily free to go off and innovate on whatever it likes, nor does it necessarily have the variety of stimulatory modes available to an ideally autonomous biological being ;) So, bottom-line speculation: each machine-mind may require a correlated biological-being. In short, wetter & wetter meta-minds may be a necessary counter-balance to what some 'radical philosophers' refer to as the "Noise, Pestilence & Darkness" of the increasingly abstract machine. In which case, this Singularity is not necessarily the eugenicist's wet dream. One of my assumptions is that there is an affective 'brain', alongside the rational 'brain', in biological beings. Gerd Gigerenzer's GUT FEELINGS: THE INTELLIGENCE OF THE UNCONSCIOUS may be useful here.

I'm afraid my idea of superrationality is constrained by the image of Greg Egan's superintelligence that loses itself in a metaphysical CAVE akin to a fractal replication of obsessive-compulsive calculation of prime numbers (an apt metaphor for hypercapitalism / the obvious-advantages-of-increased-productivity?) In short, if I may babble some half-baked econ-101, you have not shown how exponential increases in production functions would be matched by corresponding increases in demand functions. Over the long run, I have no doubt that this will occur - it's the short-run cost to conscious beings and their stratified entropic niches that I worry about (the anti-eugenicist's nightmare ;) THIS is the 'bias' (or danger) that must, somehow, be overcome (at least for the manufacturing of convincing arguments prior to the Singularity ;)

- Mark (suspecting that "life as a robot" still raises issues of Bucky Fuller's "pirates of the high seize" with which I previously taunted you (and maybe even perturbed the undaunted wave-front of the 15-year-old Eliezer ;))
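
Regarding the production-versus-demand point above: here is a toy sketch of the supply-and-demand logic behind the quoted wage prediction, under assumptions of my own (a linear labor-demand curve and a fixed hardware cost per emulated worker; none of these numbers come from Hanson's article). If workers can be copied whenever the wage exceeds the cost of running a copy, copies enter until the wage is driven down to that cost:

    # Toy numbers, purely illustrative.
    a, b = 100.0, 0.01       # labor demand: wage = a - b * total_labor
    human_labor = 5000.0     # fixed human labor force
    em_cost = 2.0            # assumed cost of running one emulated worker

    wage_humans_only = a - b * human_labor  # 50.0

    # Copies keep being made while the wage exceeds em_cost, so in
    # equilibrium the wage falls to em_cost (if below the humans-only wage).
    em_labor = max(0.0, (a - em_cost) / b - human_labor)
    wage_with_ems = a - b * (human_labor + em_labor)

    print(wage_humans_only, wage_with_ems)  # 50.0 2.0

On this toy model, growth in demand shifts the curve but not the conclusion: so long as copies are cheap to make, the wage stays pinned near the copying cost, which is the sense in which wages could fall below human subsistence.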


I am puzzled that the flattest spot on DeLong's growth chart covers, almost exactly, the golden age of Greece through the end of the Roman empire. I checked it against estimates of world population at http://www.census.gov/ipc/w..., since I expected that graph to track population. It does - except for the period 500 BC to 500 AD, which is a 1000-year population-doubling period, like many before it.
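
A quick check of that doubling arithmetic (my own sketch; the populations are illustrative round numbers in the spirit of the census.gov table, not exact figures from it):

    from math import log

    def doubling_time(p_start, p_end, years):
        # Years to double at the constant growth rate implied by two estimates.
        return years * log(2) / log(p_end / p_start)

    # Roughly 100 million people in 500 BC growing to 200 million by 500 AD:
    print(doubling_time(100e6, 200e6, 1000))  # 1000.0 years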


Tim, you are right. It wasn't CNN's fault, though: the original National Geographic press release itself mentioned the 2,000 figure. Here's an instructive post by John Hawks on the issue.
