Monthly Archives: November 2008

Disappointment in the Future

This seems worth posting around now…  As I’ve previously observed, futuristic visions are produced as entertainment, sold today and consumed today.  A TV station interviewing an economic or diplomatic pundit doesn’t bother to show what that pundit predicted three years ago and how the predictions turned out.  Why would they?  Futurism Isn’t About Prediction.

But someone on the ImmInst forum actually went and compiled a list of Ray Kurzweil’s 1999 predictions for the years 2000-2009.  We’re not out of 2009 yet, but right now it’s not looking good…

· Individuals primarily use portable computers
· Portable computers have become dramatically lighter and thinner
· Personal computers are available in a wide range of sizes and shapes, and are commonly embedded in clothing and jewelry, like wrist watches, rings, earrings and other body ornaments
· Computers with a high-resolution visual interface range from rings, pins, and credit cards up to the size of a thin book. People typically have at least a dozen computers on and around their bodies, networked using body LANs (local area networks)
· These computers monitor body functions, provide automated identity to conduct financial transactions and allow entry into secure areas. They also provide directions for navigation, and a variety of other services.
· Most portable computers do not have keyboards

Continue reading "Disappointment in the Future" »


Stuck In Throat

Let me try again to summarize Eliezer’s position, as I understand it, and what about it seems hard to swallow.  I take Eliezer as saying: 

Sometime in the next few decades a human-level AI will probably be made by having a stupid AI make itself smarter.  Such a process starts very slow and quiet, but eventually "fooms" very fast and then loud. It is likely to go from much stupider to much smarter than humans in less than a week.  While stupid, it can be rather invisible to the world.  Once smart, it can suddenly and without warning take over the world. 

The reason an AI can foom so much faster than its society is that an AI can change its basic mental architecture, and humans can’t.  How long any one AI takes to do this depends crucially on its initial architecture.  Current architectures are so bad that an AI starting with them would take an eternity to foom.  Success will come from hard math-like (and Bayes-net-like) thinking that produces deep insights giving much better architectures.

A much smarter than human AI is basically impossible to contain or control; if it wants to it will take over the world, and then it will achieve whatever ends it has.  One should have little confidence that one knows what those ends are from its behavior as a much less than human AI (e.g., as part of some evolutionary competition).  Unless you have carefully proven that it wants what you think it wants, you have no idea what it wants. 

In such a situation, if one cannot prevent AI attempts by all others, then the only reasonable strategy is to try to be the first with a "friendly" AI, i.e., one where you really do know what it wants, and where what it wants is something carefully chosen to be as reasonable as possible. 

I don’t disagree with this last paragraph.  But I do have trouble swallowing the prior ones.  The hardest to believe, I think, is that the AI will get smart so very rapidly, with a growth rate (e.g., doubling in an hour) so far out of proportion to prior growth rates, to what prior trends would suggest, and to what most other AI researchers I’ve talked to think.  The key issues come from this timescale being so much shorter than team lead times and reaction times.  This is the key point on which I await Eliezer’s more detailed arguments.
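
To make the scale mismatch concrete (the hourly doubling is my example above; the arithmetic and the economic comparison are mine): a capability that doubles every hour grows in a single day by a factor of

\[ 2^{24} \approx 1.7 \times 10^{7}, \]

while the world economy, our fastest-growing large-scale system, currently doubles only about every fifteen years.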

Since I do accept that architectures can influence growth rates, I must also have trouble believing humans could find new AI architectures anytime soon that make this much difference.  Some other doubts: 

  • Does a single "smarts" parameter really summarize most of the capability of diverse AIs?
  • Could an AI’s creators see what it wants by slowing down its growth as it approaches human level?
  • Might faster brain emulations find it easier to track and manage an AI foom?

Singletons Rule OK

Reply to: Total Tech Wars

How does one end up with a persistent disagreement between two rationalist-wannabes who are both aware of Aumann’s Agreement Theorem and its implications?

Such a case is likely to turn around two axes: object-level incredulity ("no matter what AAT says, proposition X can’t really be true") and meta-level distrust ("they’re trying to be rational despite their emotional commitment, but are they really capable of that?").
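
For reference, the theorem at issue (Aumann 1976) says, roughly: if two agents update from a common prior, and their posterior probabilities for an event A are common knowledge between them, then those posteriors must be equal,

\[ q_1 = P(A \mid \mathcal{I}_1) = P(A \mid \mathcal{I}_2) = q_2 . \]

Honest Bayesians with a common prior cannot agree to disagree, which is what makes a persistent disagreement between two aspiring rationalists anomalous in the first place.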

So far, Robin and I have focused on the object level in trying to hash out our disagreement.  Technically, I can’t speak for Robin; but at least in my own case, I’ve acted thus because I anticipate that a meta-level argument about trustworthiness wouldn’t lead anywhere interesting.  Behind the scenes, I’m doing what I can to make sure my brain is actually capable of updating, and presumably Robin is doing the same.

(The linchpin of my own current effort in this area is to tell myself that I ought to be learning something while having this conversation, and that I shouldn’t miss any scrap of original thought in it – the Incremental Update technique. Because I can genuinely believe that a conversation like this should produce new thoughts, I can turn that feeling into genuine attentiveness.)

Yesterday, Robin inveighed hard against what he called "total tech wars", and what I call "winner-take-all" scenarios:

Robin:  "If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury."

Robin and I both have emotional commitments and we both acknowledge the danger of that.  There’s nothing irrational about feeling, per se; only failure to update is blameworthy.  But Robin seems to be very strongly against winner-take-all technological scenarios, and I don’t understand why.

Among other things, I would like to ask if Robin has a Line of Retreat set up here – if, regardless of how he estimates the probabilities, he can visualize what he would do if a winner-take-all scenario were true.

Continue reading "Singletons Rule OK" »


Total Tech Wars

Eliezer Thursday:

Suppose … the first state to develop working researchers-on-a-chip, only has a one-day lead time. …  If there’s already full-scale nanotechnology around when this happens … in an hour … the ems may be able to upgrade themselves to a hundred thousand times human speed, … and in another hour, …  get the factor up to a million times human speed, and start working on intelligence enhancement. … One could, of course, voluntarily publish the improved-upload protocols to the world, and give everyone else a chance to join in.  But you’d have to trust that not a single one of your partners were holding back a trick that lets them run uploads at ten times your own maximum speed.
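
To translate those multipliers into felt time (the speeds are from the quote above; the conversion is mine): at a hundred thousand times human speed, each wall-clock hour contains

\[ \frac{10^{5} \ \text{subjective hours}}{8766 \ \text{hours per year}} \approx 11 \ \text{subjective years}, \]

and at a million times, roughly 114 subjective years; a one-day lead compounds into subjective millennia of uninterrupted research.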

Carl Shulman Saturday and Monday:

I very much doubt that any U.S. or Chinese President who understood the issues would fail to nationalize a for-profit firm under those circumstances. … It’s also how a bunch of social democrats, or libertarians, or utilitarians, might run a project, knowing that a very likely alternative is the crack of a future dawn and burning the cosmic commons, with a lot of inequality in access to the future, and perhaps worse. Any state with a lead on bot development that can ensure the bot population is made up of nationalists or ideologues (who could monitor each other) could disarm the world’s dictatorships, solve collective action problems … [For] biological humans [to] retain their wealth as capital-holders in his scenario, ems must be obedient and controllable enough … But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.

Every new technology brings social disruption. While new techs (broadly conceived) tend to increase the total pie, some folks gain more than others, and some even lose overall.  The tech’s inventors may gain intellectual property, it may fit better with some forms of capital than others, and those who first foresee its implications may profit from compatible investments.  So any new tech can be framed as a conflict between opponents in a race or war.

Every conflict can be framed as a total war. If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury.  All resources must be devoted to growing more resources and to fighting them in every possible way.

Continue reading "Total Tech Wars" »


Chaotic Inversion

I was recently having a conversation with some friends on the topic of hour-by-hour productivity and willpower maintenance – something I’ve struggled with my whole life.

I can avoid running away from a hard problem the first time I see it (perseverance on a timescale of seconds), and I can stick to the same problem for years; but to keep working on a timescale of hours is a constant battle for me.  It goes without saying that I’ve already read reams and reams of advice; and the most help I got from it was realizing that a sizable fraction of other creative professionals had the same problem, and couldn’t beat it either, no matter how reasonable all the advice sounds.

"What do you do when you can’t work?" my friends asked me.  (Conversation probably not accurate, this is a very loose gist.)

And I replied that I usually browse random websites, or watch a short video.

"Well," they said, "if you know you can’t work for a while, you should watch a movie or something."

"Unfortunately," I replied, "I have to do something whose time comes in short units, like browsing the Web or watching short videos, because I might become able to work again at any time, and I can’t predict when -"

And then I stopped, because I’d just had a revelation.

Continue reading "Chaotic Inversion" »


Luck Pessimism

While we tend to be optimistic about our abilities, we are pessimistic about our luck:

We analyze the answers of a sample of 1,540 individuals to the following question: "Imagine that a coin will be flipped 10 times. Each time, if heads, you win 10€. How many times do you think that you will win?" The average answer is surprisingly about 3.9, which is below the average 5, and we interpret this as a pessimistic bias. We find that women are more "pessimistic" than men, as are old people relative to young.

Added:  Benja Fallenstein notes, "if there is no [personal] gain associated to the coin tossing, the average [guess] is 4.9, and 90% answer 5."
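
For reference, the rational benchmark here is just the binomial expectation (standard arithmetic, not a claim from the paper):

\[ E[\text{wins}] = np = 10 \times \tfrac{1}{2} = 5, \]

so the reported mean of 3.9 sits a full flip below it, while the no-stakes mean of 4.9 is almost perfectly calibrated.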


Thanksgiving Prayer

At tonight’s Thanksgiving, Erin remarked on how this was her first real Thanksgiving dinner away from her family, and that it was an odd feeling to just sit down and eat without any prayer beforehand.  (Yes, she’s a solid atheist in no danger whatsoever, thank you for asking.)

And as she said this, it reminded me of how wrong it is to give gratitude to God for blessings that actually come from our fellow human beings putting in a great deal of work.

So I at once put my hands together and said,

"Dear Global Economy, we thank thee for thy economies of scale, thy professional specialization, and thy international networks of trade under Ricardo’s Law of Comparative Advantage, without which we would all starve to death while trying to assemble the ingredients for such a dinner as this.  Amen."


Dreams of Autarky

Selections from my 1999 essay "Dreams of Autarky":

[Here is] an important common bias on "our" side, i.e., among those who expect specific very large changes. … Futurists tend to expect an unrealistic degree of autarky, or independence, within future technological and social systems.  The cells in our bodies are largely-autonomous devices and manufacturing plants, producing most of what they need internally. … Small tribes themselves were quite autonomous. … Most people are not very aware of, and so have not fully come to terms with, their new interdependence.  For example, people are surprisingly willing to restrict trade between nations, not realizing how much their wealth depends on such trade. … Futurists commonly neglect this interdependence … they picture their future political and economic unit to be the largely self-sufficient small tribe of our evolutionary heritage.  … [Here are] some examples. …

Continue reading "Dreams of Autarky" »


Modern Depressions

To make you extra thankful today, an excellent summary of what a depression today would look like:

The lines wouldn’t be outside soup kitchens but at emergency rooms, and rather than itinerant farmers we could see waves of laid-off office workers leaving homes to foreclosure and heading for areas of the country where there’s more work – or just a relative with a free room over the garage.  Already hollowed-out manufacturing cities could be all but deserted, and suburban neighborhoods left checkerboarded, with abandoned houses next to overcrowded ones. … The flickering glow of millions of televisions glimpsed through living room windows, as the nation’s unemployed sit at home filling their days with the cheapest form of distraction available. …

Continue reading "Modern Depressions" »


Total Nano Domination

Followup to: Engelbart: Insufficiently Recursive

The computer revolution had cascades and insights aplenty.  Computer tools are routinely used to create tools, from using a C compiler to write a Python interpreter, to using theorem-proving software to help design computer chips.  I would not yet rate computers as being very deeply recursive – I don’t think they’ve improved our own thinking processes even so much as the Scientific Revolution did.  But some of the ways that computers are used to improve computers verge on being repeatable (cyclic).

Yet no individual, no localized group, nor even a country, managed to get a sustained advantage in computing power, compound the interest on cascades, and take over the world.  There was never a Manhattan moment when a computing advantage temporarily gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2.  In computing there was no equivalent of "We’ve just crossed the sharp threshold of criticality, and now our pile doubles its neutron output every two minutes, so we can produce lots of plutonium and you can’t."
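
For scale (the two-minute figure is from the analogy; the compounding is mine): output that doubles every two minutes multiplies by

\[ 2^{30} \approx 1.1 \times 10^{9} \]

in a single hour. That is what crossing a sharp criticality threshold buys, and what computing never had.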

Will the development of nanotechnology go the same way as computers – a smooth, steady developmental curve spread across many countries, no one project taking into itself a substantial fraction of the world’s whole progress?  Will it be more like the Manhattan Project, one country gaining a (temporary?) huge advantage at huge cost?  Or could a small group with an initial advantage cascade and outrun the world?

Continue reading "Total Nano Domination" »
