Together with the provocative Jaan Tallinn (Skype super-developer), I’ll speak on em econ next Saturday, 2-5pm, in London: In this extended (3-hour) session, Robin Hanson and Jaan Tallinn will revisit and expand the material from their ground-breaking presentations at the Singularity Summit 2012 – presentations that Vernor Vinge, commenting shortly afterwards, described as refutations of the saying that “there is nothing new under the sun”.
@f26939f398e5b2e21ea353b06370c426:disqus You say it's a "question" and then go on to defend the position it reflects. One can't have a discussion that way.
Completely disagree. You're attributing to me a position I do not hold - that it is impossible, in principle, to predict the future several decades out. That's not my claim. However, I'm still within my rights to defend another position: I have no reason to believe that the ability to predict the future decades hence is currently within anyone's power.
If I had been around in the mid-1950s and someone in that era had told me they predicted a superpower would send men to the Moon and return them to Earth by the end of the next decade, then responding "how could you possibly predict that?" and denying that anyone has the ability to make that prediction would not mean I deny that such a prediction could, in principle, be made successfully. It would simply mean I see no evidence that the ability to make a successful prediction of that nature exists.
"Many things have been conceived before they are accomplished."
Vastly fewer than the number of things conceived that never see the light of day. This reminds me of Jacque Fresco, a man who designs lots of cool, futuristic-looking stuff and then labels what he does 'the future', as if calling something the future would make it come true.
Amdahl's law, fortunately, does not affect brain simulation as the brain is also a parallel system.
Moore's law is about to plateau, though, likely in fewer than 4 doublings. The past improvements also came from decreasing cost: photolithography is immensely cheaper per component than discrete elements, discrete transistors are much cheaper than vacuum tubes, vacuum tubes are cheaper than electromechanical relays, etc. But there is no replacement in sight that would be cheaper than photolithography, and that sort of thing doesn't come around overnight.
GDP or processing power, or *any* other thing that people talk about when they talk about "growth."
I'm not an economist, so I don't know anything non-obvious about modeling growth. The first Google result for "growth model" gives Solow-Swan, which seems to work fine and be popular.
In this model, if capital can substitute completely for labor, then there are constant returns to capital. So the capital stock will begin growing exponentially, independent of population growth, at a rate which depends directly on multifactor productivity. (Though now multifactor productivity reduces to average capital productivity, or whatever.)
If multifactor productivity is increasing, then the rate of exponential growth would increase. Multifactor productivity has been increasing exponentially for practically all of history as far as I can tell, so you would minimally expect an exponentially increasing rate of exponential progress.
If tech progress depended only on output and we kept up the historical relationship between output and tech progress, then you would get an equation of the form d K / d T = K^(1 + alpha) for some small alpha. And that sends K to infinity, i.e. the model breaks down. But this might well give you a period of very fast growth.
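To make the blow-up concrete, here is a minimal sketch (illustrative constants only, not a forecast): separating variables in dK/dt = K^(1+alpha) gives the closed-form solution K(t) = (K0^(-alpha) - alpha*t)^(-1/alpha), which diverges at the finite time t* = K0^(-alpha)/alpha.

```python
# Sketch of dK/dt = K**(1 + alpha) with illustrative constants.
# Separating variables gives K(t) = (K0**(-alpha) - alpha*t)**(-1/alpha),
# which diverges at the finite time t* = K0**(-alpha) / alpha.

def blowup_time(K0: float, alpha: float) -> float:
    """Finite time at which K(t) reaches infinity."""
    return K0 ** (-alpha) / alpha

def K(t: float, K0: float, alpha: float) -> float:
    """Closed-form solution of dK/dt = K**(1 + alpha)."""
    assert t < blowup_time(K0, alpha), "t is past the singularity"
    return (K0 ** (-alpha) - alpha * t) ** (-1.0 / alpha)

K0, alpha = 1.0, 0.05
t_star = blowup_time(K0, alpha)  # 20.0 time units for these values
print(K(0.5 * t_star, K0, alpha))   # ~1e6: growth already enormous halfway in
print(K(0.99 * t_star, K0, alpha))  # ~1e40: arbitrarily fast near t*
```

For small alpha the growth looks nearly exponential for a long stretch, which is exactly the "period of very fast growth before the model breaks down" described above.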
A rhetorical question! I mean, come on! Hanson presents arguments; hence "conceives" the possibility. Many things have been conceived before they are accomplished. The argument is almost too silly to address.
Anyway, the serial speed of computers has pretty much plateaued, and improvements due to parallelization are limited by Amdahl's law.
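For context, Amdahl's law bounds the gain from parallelization: if a fraction s of a task is inherently serial, n processors give a speedup of at most 1/(s + (1-s)/n), which approaches 1/s no matter how large n gets. A minimal sketch with illustrative values:

```python
# Amdahl's law: with serial fraction s, n processors give a speedup of
# 1 / (s + (1 - s) / n), which can never exceed 1/s. Illustrative values only.

def amdahl_speedup(serial_fraction: float, n_processors: int) -> float:
    """Upper bound on speedup when only the (1 - s) fraction parallelizes."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_processors)

# Even a 5% serial fraction caps the speedup at 20x:
print(amdahl_speedup(0.05, 16))          # ~9.1x on 16 processors
print(amdahl_speedup(0.05, 1_000_000))   # ~20x, no matter how many more
```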
In simple models, you may get superexponential growth when machines become good enough substitutes for knowledge workers, or when population growth is proportional to economic output for some other reason.
Growth of what? GDP? Computer processing power? Intelligence?
I would think that super-exponential models are extremely controversial. What model do you have in mind?
Well, the Kurzweil kind of singularitarians take the existing exponential growth - computer speeds doubling every 1.5 years - and then argue that in the glorious future the doubling time will itself keep halving, which indeed hits a singularity in the mathematical sense. This ignores the fact that the existing rapid exponential growth is already the result of computers speeding up their own development (and recruiting more humans).
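For what it's worth, the arithmetic behind that "mathematical singularity" is just a geometric series: if each doubling takes half as long as the previous one, infinitely many doublings complete in finite time. A minimal sketch with illustrative numbers:

```python
# If the first doubling takes T years, the next T/2, then T/4, ..., the
# total time for infinitely many doublings is T*(1 + 1/2 + 1/4 + ...) = 2*T:
# a finite-time singularity in the mathematical sense. Illustrative numbers.

def time_for_n_doublings(first_doubling_years: float, n: int) -> float:
    """Elapsed time if each doubling takes half as long as the previous one."""
    return sum(first_doubling_years / 2**k for k in range(n))

T = 1.5  # first doubling time in years (the classic Moore's-law figure)
print(time_for_n_doublings(T, 10))   # ~2.997 years for 10 doublings
print(time_for_n_doublings(T, 100))  # approaches 2*T = 3.0, never exceeds it
```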
With regard to the singularity as a limit of prediction: that's the pipe dream of futurists - to ingrain the assumption that their limit of prediction is not 'yesterday' but some interesting timeframe in the future. There are no non-trivial predictions that can be made even without any speed-ups to progress; speeding up progress merely makes that abundantly clear over shorter timespans. We can't predict whether there will be practical fusion energy after the singularity, right? Well, that's not because of the singularity - it's because you can't even tell whether it was invented yesterday. If you are a futurist, you have to invent some singularity just to claim you can make predictions at all.
Engineers may be less likely than the technically uneducated population to buy into "nutty" beliefs, but more likely than scientists. This has been noted anecdotally for Creationism (the so-called Salem hypothesis) and, IIRC, it has been observed with actual evidence for Islamic fundamentalism.
You need actual rates, though. There are probably a lot more engineers than scientists, so among the educated who believe in whatever, engineers dominate.
Philosophers have become a very self-selected bunch nowadays - most of what was formerly philosophy is now science, and what remains of philosophy is an incredibly narrow field limited to questions whose answers we can't check, since such checks have shown its complete inefficacy everywhere else. Trying to answer hard, grand questions with methods that are unable to answer correctly any question whose answer you can check is a very odd quest. If you are curious about qualia, study mathematics - with some luck, you may make a small step towards understanding how they arise. If the question of qualia merely bugs you, do philosophy: you'll have a fake answer, or a feeling of knowing more, but you'll have it now, not maybe in 100 years, maybe in 1,000 years.
Yudkowsky can be considered an amateurish philosopher, Chalmers and Bostrom are professional philosophers.
Strange bedfellows. Chalmers occupies the extreme anti-materialist niche in the philosophy of mind.
Yudkowsky has few kind words for philosophy, but his subject matter is the same. Yet his method is in one respect opposite. Whereas professional philosophers are overconcerned with responding to other professional philosophers, Yudkowsky prides himself on total ignorance of what anybody else had to say on these subjects. The result is that his positions are an eclectic combination of personal prejudices.
"Perhaps you accord excessive importance to acts of individual genius."
That is an attitude (of mine) more than a prediction based on historical patterns of invention. It is the 'use-value' of assuming moments of genius will be required - rather than assuming any inevitability, let alone a timetable - that I'm interested in. While you might be fairly harsh on the specific philosophies of the Singulatarians, my only philosophic comment on this is that we should be focused on the journey, not the destination.
"At least taken literally, that's a poor argument."
Perhaps because it was a question?
"...the fallacy that ... amounts to inferring that something isn't so from your own present inability to conceive it."
How could I, or anyone, conceive of the ability to make accurate long-term science/tech predictions, if no one has ever demonstrated that ability? Sure, I can imagine someone with this ability, but that's where it stops - I'm not a religious type.
"How have the best futurological predictions fared?"
I'm with you on this, in that I'm not particularly interested in the whole subject. Futurology should be limited to looking at Intel's processor roadmap for the next few years, and similar short-range projections.
On the other hand, I think you and a few other regular commenters here are too keen on psychoanalyzing those you disagree with philosophically and/or politically.
> On a planet of 7 billions. Where the resources required for that uploading and running would have supported several biological humans. That diversion of resources is not a speed up.
Indeed, if uploads are much more expensive than humans then their economic relevance is limited. We are specifically thinking about changes that occur as ems replace humans. This may well happen continuously (though probably fairly quickly) as the cost of running emulations falls and eventually reaches parity with humans (as has been discussed many times here and elsewhere).
The time required to build a computer and start running something on it is much shorter than the time required to raise a human.
> One thing that is pretty damn stupid, in my opinion, is the transition from exponential growth to some faster than exponential growth in some dramatic fashion that didn't already happen.
In simple models, you may get superexponential growth when machines become good enough substitutes for knowledge workers, or when population growth is proportional to economic output for some other reason. Of course this only works if returns to technology diminish slowly enough, and eventually it's got to stop as you run up against physical limits. But I don't see why a period of superexponential growth is stupid. Even very simple models wouldn't predict superexponential growth so far, unless they have increasing returns to capital (which is an orthogonal issue). I don't know much economics, but this seems to be fairly straightforward and simple. Computers being used to speed up the development of computers is very obviously not the important criterion.
(Incidentally, Robin is talking about faster exponential growth, which I think is uncontroversial if machines substitute for humans, and I don't think the distinction makes much difference to Jaan's point. Maybe you are talking about something else.)
That is the key to its appeal - merging breakthroughs into milestones to the point that exponential development appears to be the only requirement for the miracle to occur - no genius Eureka moments required.
I think their outlook is that it's a milestone as far as AI happening, a breakthrough that it be "safe."
Perhaps you accord excessive importance to acts of individual genius. I think those who have studied the question find that simultaneous or near-simultaneous invention is common; the question of the extent to which inventions are the result of the history of prior invention, which will "inevitably" occur in the mind of some genius or another, is somewhat an open question today.
What I find most amazing is that the Singulatarians, the vanguard of the next scientific and social revolution, embrace the most backward philosophical ideas: moral realism ["morality" being part of the "utility function" programmed into an AI]; compatibilist free will; and the actual existence of phenomenological experience, these falsehoods constituting absolute roadblocks precluding success in their venture. What makes me nearly certain the leaders aren't serious [that there's a bit of a scam element] is that they don't seem to care. (In a recent discussion, Yudkowsky said Muehlhauser never understood the former's interminable screed on morality; Muehlhauser said perhaps Yudkowsky will explain it to him one afternoon.)
On what conceivable basis can anyone make serious predictions of major scientific breakthroughs or milestones* 8 decades into the future?
At least taken literally, that's a poor argument. There's a Latin name for the fallacy that I won't try to retrieve, but it amounts to inferring that something isn't so from your own present inability to conceive it. Hanson has given reasons in this blog.
What you're saying brings the whole "discipline" of "futurology" into question if there's an in-principle reason why predictions can't be made far into the future. If there is one, you should state it.
Maybe an empirical approach is appropriate. How have the best futurological predictions fared? I know science-fiction writers used to generally assume that by now our automobiles would fly. Surely they would drive themselves without human assistance. Since I'm not a futurological enthusiast, someone would have to show that the field itself has a track record before I'd be beguiled into considering specific arguments (unless the arguments themselves prove to be interesting).
Some people say that the Singularity is a religious concept. I disagree, but whatever the underlying motivation for it, consider that the Western world outside of the United States is now only mildly religious. Reaction to The Great Stagnation is probably a better explanation (and definitely a more ironic one). The Singularity is self-medication for those suffering the disappointment of retarded technological and scientific progress. It maintains our self-belief ... for a while.
Singularity thinking is fueled by first-generation atheists, a peculiarly American phenomenon exactly because of the country's religiosity.
But I think, like you, that it's (also) fueled by economic stagnation (although our diagnoses are different as to their cause), although that doesn't distinguish it from religions: "Religion is the opium of the people." Religion has always arisen as compensation for worldly deprivation.
That's not magical thinking: in principle there's no need for any giant breakthroughs in our understanding, just gradual progress, with technological advances like the serial block-face scanning electron microscope (I'm actually working on software for processing data from it). (Unless you're speaking of the really nutty crowd that equates uploading with understanding and some form of transcendence.)
The point is that mind uploading and running a simulation in 2092 is a possibility, while mind uploading and running a simulation in 2030 is not even a possibility - so Tallinn argues for the inevitability of what's merely unlikely, while Kurzweil argues for the inevitability of the impossible.