
I don't think that AI is an existential risk. It is going to be more of a golden opportunity, though for some, not for all.

Given that most people oppose AI on various grounds (religious, economic), chances are it will be implemented within a small group, and very few people will get to benefit from it. Wealthy people would probably be the first to use it.

This isn't a regular technology, and it will not go first to the rich and then to everybody else, as happened with phones or computers within a couple of decades. This is where Kurzweil is wrong.

Can someone imagine the dynamics of a group that has access to AI for 20-30 years?

I doubt that after 20 or 30 years, heck, even after 10, they would need any money, so the assumption that it will be shared with the rest of the world for financial reasons doesn't seem well founded.

So I am trying to save, and to figure out what the cost of entry to this club would be.

Any thoughts on that?


Joshua: you don't even need an intelligence explosion for AI to be cataclysmic. Just digital human-level intelligence is enough; no need to invoke either strong or weak superintelligence.

Imagine a human-level AI running on $100,000 a year of hardware, and imagine Moore's law has completely shut down. You copy the premier patent-law attorney, the premier oncologist, etc. Suddenly, those markets go from their current oligopolies to hyper-competitive, winner-take-all markets reminiscent of FLOSS. (Why settle for an expensive, inferior human, or Lawyer 1.2, when you can buy/rent Lawyer 2.0?)
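A rough back-of-envelope sketch of that pressure (the billing rate and hours below are made-up assumptions; only the $100,000/year hardware figure is from above):

```python
# Back-of-envelope on the pricing pressure described above. The hardware
# figure comes from the comment; the billing numbers are illustrative
# assumptions, not data.

human_rate = 800             # $/hour billed by a premier attorney (assumed)
hours_per_year = 2000        # full-time billable hours (assumed)
ai_hardware_cost = 100_000   # $/year to run one human-level AI copy

human_cost = human_rate * hours_per_year   # $1,600,000 per year
print(f"Human: ${human_cost:,}/yr vs. AI copy: ${ai_hardware_cost:,}/yr")
print(f"Cost ratio: {human_cost / ai_hardware_cost:.0f}x")

# Unlike the human, the copy scales: n copies serve n clients at the
# same marginal hardware cost, which is what tips the market toward
# winner-take-all.
n_copies = 10
print(f"{n_copies} copies: ${n_copies * ai_hardware_cost:,}/yr total")
```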

And this can apply to most, if not all, of the white-collar professions. Even surgeons have been preparing their replacements with tele-surgery robots.

So, the blue-collar laborers get squeezed from below by machines, white-collar workers get squeezed from above by copies of the #1 in their profession, and that leaves not much in between. It may be a net win for humanity, but the 'crack of a future dawn' scenario will still be very painful for very many.

(As far as SA goes, I go with the dishonest-forecast and ignorance explanations. I'm not too sure what one could do in the crack scenario, though. Buy equities? Try to change careers to something status-related that forbids copying?)


Non-trivial possibility: the length of each segment is driven by some editorial demand other than importance, such as space.


Well, most of you focus on the possible consequences, but the impact considered by SA also has a time frame, so the impact of each technology/event should appear within that frame. Nevertheless, I do believe in AI coming out of the labs by that time. And I do NOT believe a polar meltdown would have any dramatic consequences for humankind or the way we live our lives. Not to mention that it is very unlikely to happen by 2050, given the average temperatures there and the rate of global warming, even if warming could hold its pace for the next four decades (which is extremely unlikely).

On a side note, I would rate a deadly pandemic as the no. 1 threat (certain within that time frame, and with disastrous consequences), and we're already witnessing it today. The name of the illness is socialism. It spreads extremely fast all around the world, with the EU in the lead and the USA running fast (like on steroids) to catch up.


I still subscribe to SA, but I have not taken it very seriously for the past few years, since one of the issues had almost every story and editorial about how global warming (sorry, now "climate change") is going to kill us all, and in every possible manner (more earthquakes, volcanoes, mass species extinction, floods, and locusts). That, paired with absolutely no reporting on advances in nuclear fission and fusion, which could actually solve any CO2 problem. So it does not surprise me that SA sees a greater chance of a nuclear exchange than of advances in nuclear energy; their bias is that nuclear = bad.


I think fusion is quite likely. Thinking outside the box, aneutronic nuclear fusion could be the cutting edge that yields the solution.


Bah... one blind man ridiculing another blind man over their picture of the elephant. Perhaps they just aren't sold on the singularity idea of a "powerfully intelligent AI" (whatever that means).


There is an acceleration of collaboration between many humans and many machines. Any human-level AI will be, at best, a boost to the present trend.

Having hyped, high hopes is a feature of intelligence; I'm sure such AIs would have them too.


Their target audience is at most SL1, so these predictions are not surprising at all. Writing above the heads of an audience doesn't sell magazines.


While Captain O. is right that Robin is a bit too hasty to affirm the huge consequences of successful AI, that doesn't change the inadequacy of SciAm's treatment, unless one goes so far as to say that Robin is very likely wrong about the consequences. Suppose, for example, that we assign P(Singularity | AI) = 0.5 and P(Gradual change | AI) = 0.5. Then the intelligent-machine scenario is still far and away the one with the greatest expected effect.
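Spelled out with stand-in magnitudes (the effect sizes below are my own illustrative assumptions, in arbitrary units):

```python
# Expected effect of the intelligent-machine scenario under the 50/50
# split above. Effect magnitudes are arbitrary illustrative units.

p_singularity = 0.5          # P(Singularity | AI)
p_gradual = 0.5              # P(Gradual change | AI)

effect_singularity = 1000    # civilization-remaking (assumed scale)
effect_gradual = 10          # on par with other major technologies (assumed)
effect_other_item = 10       # e.g., fusion power or polar meltdown (assumed)

expected_ai = p_singularity * effect_singularity + p_gradual * effect_gradual
print(expected_ai)           # 505.0 -- dwarfs any single other item
print(effect_other_item)     # 10
```

Even if the singularity branch is discounted far below 0.5, the expected effect still dominates as long as its magnitude is orders of magnitude larger than the other items on the list.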

In reply to Lord's comments: AI is not necessarily disembodied, even today. Much manufacturing is computer-driven, and robots build many products, including machinery.


A disembodied intelligence would not necessarily even have the same interests as us. The idea that it could solve fusion merely by thinking about it, even if it wanted to, seems naive. Even developing real-world interfaces more sophisticated than webcams, speech synthesizers, and text analyzers would be difficult, and until then it would be reliant on human-provided data. Synthetic humans may provide the best hope of that kind of interaction, but they may have many of the same limitations as humans. That is why there is probably no singularity, only a gradual adaptation and working-out of innumerable limitations and problems.


Seems to me the slow step in improvement would be confirming that a change is an improvement via field testing, and that can take time. You can test in a simulation, but you need to confirm the simulation's fidelity to the real world, and that itself involves something like field testing.

Even if a machine could reproduce instantly, it would still need to learn and compete in the environment and that takes time.

The question of what can be simulated at sufficiently high fidelity by what date might set a speed limit on the process of improvement.
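A toy model of that speed limit (all the durations and the speedup below are assumptions for illustration only):

```python
# Toy model: even with instant copying, each candidate change must be
# validated against the real world before it can be trusted, so
# validation time bounds the rate of improvement. Numbers are assumed.

field_test_days = 90        # to confirm one change is an improvement
sim_speedup = 10            # simulation runs 10x faster than reality
sim_validation_days = 30    # field time to confirm the sim's fidelity

# Best case per cycle: run the test in simulation, then spend real-world
# time re-validating the simulation itself.
days_per_cycle = field_test_days / sim_speedup + sim_validation_days
print(f"{365 / days_per_cycle:.1f} validated improvements per year")
```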


I forgot to close the strong tag after "very".


Nuclear exchange is NOT an existential threat. Pandemics theoretically could be, but I have seen no evidence that the horror-movie scenarios being tossed around are actually plausible. The odds of an asteroid strike occurring in the next forty years are ridiculously small. Their rating an asteroid collision as unlikely but fusion power as very unlikely is stupid.


"Either they don't really believe their >50% number, they don't understand its enormous civilization-remaking consequences, or they (and their readers) don't find such vast consequences several decades hence of much interest. Which is it?"

I'm not actually disagreeing about the possible impact of AI, but I have to point out that you're missing one "possibility": perhaps others have thought it through and reached different conclusions, and you are simply wrong about the impact!

I'm not attempting to assess the likelihood that AI will happen, or that its impact will be large or small if it does, but I find it interesting that you're not even willing to consider that you might be wrong (at least this is merely "overcoming bias", not "less wrong"!).

You might want to think about that, Robin...


Oops, you're right. I interpreted that as the more colloquial meaning of "expect".
