Tiptoe Or Dash To Future?

To test who should TA elite classes, new physics grad students at U Chicago in ’81 were asked to pick a physics problem and explain its solution to the group.  I did well by picking the question: should you walk fast or slow if you want to get the least water on your front while moving a certain distance in the rain?  The answer: move as fast as you can.

On the other hand, I’m told that when working one’s way across a minefield, one is well advised to move slowly; in that case the extra time to look closely for mines pays off in a lower chance of tripping mines.  So whether you want to move fast or slow through a destructive region depends on the details of the region.

Humanity is now moving through a dangerous region in tech/econ growth. It will be very hard to squash us once we are spread across space with strong, robust, advanced abilities, but we are now small, dumb, and weak.  Between here and there is a minefield of disasters that could destroy us; should we tiptoe slow or run fast?  That is, if the world economy now grows at 4% a year, should we prefer to slow it to 2%, speed it to 8%, or what?  The answer depends on which factors dominate:

  • Natural resources – Today’s tech uses certain natural resources most heavily, while tomorrow’s tech will probably use different resources.  If we run out of today’s resources before we can reach the next tech level, we risk not being able to grow to reach that level.  This factor says go fast.
  • Crazy Outbreaks – Our political and business organizations usually work tolerably, but every once in a while some crazy takes over one and all hell breaks loose.  (Similarly for natural disasters like asteroids.)  We want a minimum of such events between here and there.  A faster-growing economy might release such crazies faster, but as long as the rate of such outbreaks less than doubles when growth doubles, this factor says go fast (a toy illustration follows this list).
  • Pundit Foresight – If we have a limited number of thoughtful pundits who can consider the implications of new upcoming techs and changes, then the fewer changes that arrive per year the more thought our pundits might give to each change.  If more pundit thought per change leads to better policies to avoid terrible change, this factor says go slow.
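
To make the tradeoff concrete, here is a minimal toy calculation.  It is only a sketch: the hazard rates, the amount of progress needed, and the assumption that the per-year "crazy" risk scales sublinearly with growth are all made-up illustrative values, not estimates.

```python
# Toy model: probability of crossing the "minefield" as a function of growth rate.
# All parameter values are illustrative assumptions, not estimates.
import math

def survival_probability(growth_rate,
                         progress_needed=50.0,          # total "distance" of tech progress to cross
                         time_hazard=0.0005,            # per-year hazard independent of progress (asteroids, etc.)
                         crazy_hazard_base=0.001,       # per-year "crazy outbreak" hazard at 4% growth
                         crazy_growth_elasticity=0.5):  # how outbreak rate scales with growth (<1 means sublinear)
    """P(surviving the crossing) when progress per year equals the growth rate."""
    years_needed = progress_needed / growth_rate
    crazy_hazard = crazy_hazard_base * (growth_rate / 0.04) ** crazy_growth_elasticity
    total_hazard_per_year = time_hazard + crazy_hazard
    return math.exp(-total_hazard_per_year * years_needed)

for g in (0.02, 0.04, 0.08):
    print(f"growth {g:.0%}: P(survive) ~ {survival_probability(g):.3f}")
# With these assumptions faster growth wins: hazards that accrue per year shrink with a
# shorter crossing faster than the growth-linked hazards rise.
```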

My guess is that going fast is better.  But it seems an open question.  So what other factors say to grow fast or slow, to survive?

  • http://www.rationalmechanisms.com richard silliker

    Timing.

  • http://ssmag.wordpress.com PeterW

    We can probably make things go slower, but I don’t see how we can make things go faster in any meaningful way.

    • Mario

      We could stop doing things that make it go slower. A more efficient tax system is certainly possible, and spending or investing the efficiency gains would boost economic growth (but I’m not sure by how much).
      —-
      Concerning natural resources, wouldn’t that factor suggest slower growth? By under-performing now, we would be leaving more resources available to future generations, when they could be used to greater effect. I don’t think I understand your point.

      I would say that another factor suggesting we grow as quickly as possible is global poverty. Even ignoring the moral issues, every child that receives a substandard education is a wasted resource. Since there is no benefit to saving this resource for future exploitation, we should be making the most of it right now.

      [If someone wants to assume that generating more scientists would intolerably increase our risk of disaster, then I think that person would also have to accept that we should probably slow our production of scientists in developed countries too.]

      • Mario

        OK, I’m not sure if I misread or you rewrote the Natural Resources section (probably the former, it happens a lot to me), but I get what you mean now. Nevermind.

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      Peter W, I understand your point.
      Since we currently do things (regulation) to make things go slower, we can stop doing some of those things to make tech innovation go faster.

      An example is the option of repealing laws like this:

      http://en.wikipedia.org/wiki/National_Research_Act

      • http://www.rationalmechanisms.com richard silliker

        I read http://en.wikipedia.org/wiki/National_Research_Act and came away with a feeling as to why this law passed. What are you going to propose to take its place?

        Primarily we legislate to control people’s behaviours, not to make things go slower. Only humans, as complex mechanisms, require legislation to restrain or constrain their behaviours. The problem will always be one of human behaviour, and so the laws that are implemented must be rational, “thou shalt”, brittle, and adhere to cause and effect, form and function. Civilizing humans is necessary if we are to get along, and to do this we need a common language.

        The question whether we should go fast or slow is irrelevant. The question should read: are we going to move into the future rationally or irrationally?

    • Robert Wiblin

      Higher savings rates.

    • John Maxwell IV

      Improving the scope and quality of education would probably make things go faster, as would working to improve technology.

  • phane

    Run as fast as we can while still watching where we’re going.

    • http://robertwiblin.wordpress.com Robert Wiblin

      You’re describing the tradeoff we face, not the optimum point on that tradeoff.

  • http://sophia.smith.edu/~jdmiller/resume.pdf James Miller

    An individual’s self-interested answer to this question depends on his age, because the older you are, the faster technological growth has to be for you to live long enough to live forever.

    • michael vassar

      I think that this is a critical concern for those not using cryonics. Less critical for those using. In practice though, it seems to influence both groups a lot.

      • Jeffrey Soreff

        As a 51-year-old Alcor member, I mostly agree with Miller, and see the chance of cryonics actually reviving any given member as a small correction. One other term in the equation, though: ignoring existential risk and just considering personal risk, if the minefield is dense enough, one may not want to see forward movement, even if slow examination buys no extra safety. My current expectation jumped in the negative direction on seeing IBM’s DeepQA plans. If this really happens, then general AI that can make use of an unstructured environment may be a few years away, not decades.

  • Robert Wiblin

    “My guess is that going fast is better. But it seems an open question.”

    When we spoke about this a few months ago you seemed very confident that faster was better. Has something changed your mind recently?

  • washbash

    I instinctively think go faster. Not because I think this is better for the world. Why should I care about the world when I am dead and gone? I want it to go fast, damn it! This increases the chance I have of experiencing a more technologically advanced future.

    • Petr Hudeček

      Your comment is now famous! You are in Nick Bostrom’s Superintelligence and he presents your comment in talks!

  • Patri Friedman

    I’ll state the obvious since you didn’t list it: the faster we go, the less chance we get wiped out by non-human-generated existential risks like asteroids before developing the robustness to withstand them. (Human-generated risks, of course, are unclear, since our ability to harm and defend both rise).

    • michael vassar

      But those are ridiculously minor unless we are mostly here due to anthropics.

      • John Maxwell IV

        Huh?

  • Æ

    What’s the basic model for the rain problem?

    • Pierre-Andre

      We assume the rain falls at a constant rate.

      If you stand motionless in the rain for some time, you will get wet from the top proportionally to the area you expose from the top (head, shoulders…) and to the time elapsed.

      If you stop time, you can see droplets floating in the air. If you walk to your destination while “time is frozen”, you will get wet from those droplets floating in the air. The number of droplets you hit depends on the total area of your “front side” (walking sideways, with one of your sides at the front, would reduce this area). This “frozen time scenario” corresponds to the “infinite speed limit” in regular time. Here you receive no droplets from the top.

      Now even when time flows, there is on average always the same amount of water in one cubic metre, since the rain falls at a constant rate. Whatever your walking/running speed, the rain you receive from the front will be the same as in the infinite-speed limit. However, the faster you go, the less rain you receive from the top. Hence, going as fast as you can is the best strategy.
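
      A minimal numeric sketch of this model follows; the areas, rain density, and fall speed are made-up illustrative figures, not measurements.

```python
# Sketch of the model above.  Total water = front-swept volume (fixed by distance)
#                                         + top exposure (proportional to time in rain).
def water_collected(speed_m_per_s,
                    distance_m=100.0,
                    front_area_m2=0.7,              # assumed frontal cross-section
                    top_area_m2=0.1,                # assumed head-and-shoulders area
                    water_per_m3_air=1e-6,          # assumed volume of water per cubic metre of air
                    rain_fall_speed_m_per_s=8.0):   # assumed fall speed of the drops
    time_s = distance_m / speed_m_per_s
    front = front_area_m2 * distance_m * water_per_m3_air                    # independent of speed
    top = top_area_m2 * rain_fall_speed_m_per_s * time_s * water_per_m3_air  # shrinks as speed rises
    return front + top

for v in (1.0, 3.0, 8.0):   # walk, jog, sprint
    print(f"{v} m/s: {water_collected(v) * 1e6:.0f} millilitres")
# The front term is constant and the top term falls as 1/speed, so faster is always drier.
```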

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    Tricky question. I’d prefer it to be framed around maximizing the persistence odds of those of us currently alive and who care (but incorporating solving aging, cryonics, etc.).

  • http://williambswift.blogspot.com/ billswift

    “If more pundit thought per change leads to better policies to manage change, this factor says go slow.”

    Is there any evidence that pundits have helped in the past? If not (and I can’t think of any off-hand), is there any reason to expect them to help in the future?

    • John Maxwell IV

      You don’t hear about it much, but we used to be a very environmentally unfriendly civilization, and that wasn’t a good thing. See

      http://en.wikipedia.org/wiki/Great_Smog

      I wouldn’t be surprised if pundits played a role in changing that. I can’t name any specific pundits that played a role, but I can’t name any specific pundits from that era, period.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    If FAI assumptions are right, then as slow as possible or even backwards is good (focusing only and specifically on sustainability), because it helps to prevent high-tech existential disasters while developing FAI theory, which may well take 150 years, but won’t require technology until the very end.

  • TranshumanReflector

    Aside from the difficult question of how to determine whether a speed, or a specific path of research, is closer to optimal, we have no choice but to try to adapt to the way we actually do things.

    If the U.S. were to ban a subclass of nanotech research, someone elsewhere would continue the process of discovery, with their chosen methodologies, at their chosen speed.

    We think we have control over these things, and we are highly skilled at rationalizing counterexamples to that sense of control: “Well, we made a mistake”, “this group or that group didn’t cooperate”, “we didn’t have sufficient funding”, “we didn’t know that tsunami was going to hit”, etc.

    All we can do, as far as the rate of discovery or change goes, is adapt to what is.

  • haig

    Surely it’s obvious to state that the faster we get to the ‘future’ (ie positive singularity) the better. That goes without saying.

    I guess the question then is whether the rate of techno-social progression (and its d/dt) will increase or decrease the probability of positive or negative future outcomes. That might be a moot point as long as civilization remains out of control the way it is now. Maybe a better question to ask would be: “How can we, as a society, take control of the speed and direction we are moving in order to guide us safely to our destination?”

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      “Surely it’s obvious” – It would be an impressive trifecta if you incorporated “clearly” into that clause.

  • http://hanson.gmu.edu Robin Hanson

    Robert, I’m 80% sure, which is still open.

    Patri, yes, I’d started with that concept, then realized crazies are more likely. I’ve added in a parenthetical comment there.

    Bill, I too am skeptical of the pundit track record.

    Vladimir, that is an example of pundit foresight helping.

  • http://www.hegemonicon.com hegemonicon

    Related to natural resources, if future governmental changes will allow better/more efficient resource allocation, then the answer seems to be go slow. If they will cause more waste and less efficient resource allocation, then the answer seems to be go fast (assuming social/cultural change is at least somewhat independent of technological/economic change).

  • Bill

    It is a market. We don’t control the speed of anything. If we slow down, others speed up. It is a competitive universe.

    It is also a cheapening universe. Things that can destroy a lot of people and things can be made cheaply. You don’t need an atomic bomb and missile delivery systems to cause a lot of damage.

    So, what you are left with is what you always had: the need to develop coalitions so that network benefits are denied to those who choose to be outliers, with the opportunity for them to change and come in, verifiably.

  • Stuart Armstrong

    Good, thought-provoking post, like the previous one was.

  • http://ynglingasaga.wordpress.com Rolf Andreassen

    The natural resources point seems strange to me; I think it’s necessary to factorise. There’s growth in the sense of drilling more oilfields, clearing arable land, and building infrastructure with existing technology, thus increasing gains from specialisation and trade; and there’s growth in the sense of innovation. If we want to get to a point where we don’t need the resources we currently use a lot of, it seems we should go slow on the former kind but fast on the latter.

  • ECM

    This quandary sounds a lot like playing a game of Civ.

  • jn

    This depends on how humans react to rapid change. If rapid change is socially disruptive it could conceivably increase the rate of “crazy” production and, worse, weaken our ability to defend against crazies. So unless the crazy issue is completely irrelevant, as is the problem of social order, there is surely an “optimal” rate of change that moves us fast without overwhelming us. But not knowing this possibly non-linear relationship between tech progress/growth and social order makes it hard to decide on a priori grounds what the right degree of growth is.

  • Julian Morrison

    The run-in-the-rain thing looks like an open question to me. Go fast, you impact the horizontal column of rain. Go dead slow, you impact the vertical column of rain. Which is smaller? Pragmatically, usually the horizontal, but it may vary depending on what horizontal distance you have to travel and how much rain you guess is above you.

    As to speed of the economy: you did miss something. Growth pushes a standing wave of surplus, and there are surplus-minima for all specializations both human and industrial. That means there are industries now that would have been suicidal overspending in previous eras, and there are industries now that we don’t spend upon, but would – and could probably benefit by – if there were only more growth. This says go fast.

    • Randall Randall

      The rain collected on your front isn’t affected by your forward speed, unless you stop or the rain stops. That is, to the extent that it’s affected by going slower, it’s part of the “top”, and you can reduce top exposure to rain by going faster.

      • Julian Morrison

        Imagine a rainstorm 100 miles wide and 5 seconds in duration. Charging forward in a rocket-plane at 20 miles per second, you impact the entire horizontal length. Ambling on foot, you impact the entire 5-second vertical column above you. One is smaller than the other – in this case, the vertical.
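
        A numeric sketch of this scenario follows; the figures are assumed for illustration, roughly matching the comment’s 100-mile, 5-second storm. Once the storm can end before a slow traveller arrives, only the time actually spent in the rain counts.

```python
# Variation on the constant-rain model above: the storm lasts only a fixed duration.
# All figures are illustrative assumptions.
def water_finite_storm(speed_m_per_s,
                       distance_m=160_000.0,     # roughly 100 miles
                       storm_duration_s=5.0,
                       front_area_m2=0.7,
                       top_area_m2=0.1,
                       water_per_m3_air=1e-6,
                       rain_fall_speed_m_per_s=8.0):
    time_to_arrive_s = distance_m / speed_m_per_s
    time_in_rain_s = min(time_to_arrive_s, storm_duration_s)
    distance_in_rain_m = speed_m_per_s * time_in_rain_s
    front = front_area_m2 * distance_in_rain_m * water_per_m3_air
    top = top_area_m2 * rain_fall_speed_m_per_s * time_in_rain_s * water_per_m3_air
    return front + top

print(water_finite_storm(32_000.0))   # "rocket-plane" at ~20 miles/s: sweeps the whole storm front
print(water_finite_storm(1.5))        # ambling on foot: only 5 seconds of rain from above
# Here waiting out the short storm beats outrunning it; with an indefinitely long
# storm the faster-is-drier result returns.
```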

  • Jayson Virissimo

    How, outside of a totalitarian political system, would you control the rate of progress of society?

    • http://omniorthogonal.blogspot.com mtraven

      Fundamental and long-term research is generally funded by governments, not private industry (with some exceptions for corporations that enjoy monopoly power, like AT&T back in the day, but those are essentially quasi-government entities). So governments have a variable they control, namely, the amount of money going into research. Nothing totalitarian about varying that.

      • Jayson Virissimo

        Can you point me to some evidence that government funded research is one of the primary sources of societal progress? Thanks in advance.

    • Nick Tarleton

      I assume any policy affecting economic growth would have an effect.

  • http://blog.contriving.net Dustin

    Mythbusters did the running in the rain thing…twice.

    http://www.tvsquad.com/2005/10/17/mythbusters-mythbusters-revisited/

  • Robert Wiblin

    “* Natural resources – Today’s tech uses certain natural resources most heavily, while tomorrow’s tech will probably use different resources. If we run out of today’s resources before we can reach the next tech level, we risk not being able to grow to reach that level. This factor says go fast.”

    This factor says to use those natural resources efficiently. We want maximum growth per unit of those limited natural resources used. That does not necessarily mean fast.

    “If we have a limited number of thoughtful pundits”

    Isn’t the more likely limiting factor here not pundits but the rate at which the legislative process can effectively regulate new dangerous tech?

  • 2999

    It seems to me that we should try to go faster than our current speed, but not full tilt either. This is based on the fact that I am somewhat concerned about stuff like resource shortages (peak oil/coal/uranium).

    It seems that the regulatory framework in place is more precautionary than proactive, causing suboptimal levels of investment in potentially high-yield areas.

  • consider

    “If we run out of today’s resources before we reach the next level…”

    Psst… economists are supposed to remember that prices fluctuate….

  • mjgeddes

    Hard to see how the rate of change could be slowed at this point; with the info explosion you can be sure that thousands of hackers in their basements the world over are madly sprinting full pelt to get AGI first.

    Look at the ever more powerful suite of on-line tools that are appearing – as these tools come on-line the IQ barrier is plunging.

    Two examples:

    (1) Scirus

    The most powerful specialized scientific search engine already enables one to perform advanced searches for over 370 million scientific items – I can instantly call up thousands of papers on any science topic.

    (2) Wolfram Alpha

    Wolfram Alpha will make available the full power of Mathematica, enabling me to perform sophisticated hard math well beyond what I would be capable of with my unaided IQ; for instance, it can already flawlessly perform complex calculus and even print out all the steps for me!

    Current on-line tools even partially write science papers for you, trawling wikipedia and dumping info to a pdf file to form the skeleton for a custom science paper.

    So consider the current situation. A hacker can already use (1) to instantly call up the state-of-the-art knowledge on AI via thousands of science papers and (2) to instantly perform all the complex math he needs.

    All this suggests it’s game over very soon. Far sooner than anyone thinks.

    • TranshumanReflector

      “…All this suggests it’s game over very soon. Far sooner…”

      I agree with what you’re saying here, mjgeddes, about the finish line possibly coming up soon, and about the importance of Wolfram Alpha or other knowledge engines we might not have heard about yet.

      Although affordable voice recognition which recognizes multiple voices out of the box is not widely available yet, the VoiceRec that already exists is pretty powerful. Some training with one’s voice is necessary, but after training, you can select from several third-party modules that allow one to create user-defined commands. One of these packages has some built-in commands for searching Wolfram Alpha.

      There are all sorts of automation tools out there. And a myriad of different ways to search for specific knowledge. And there are specific types of AI modules, some commercial but many open source or free.

      It could well be that, yesterday or 3 minutes ago, or 5 minutes from now or next month or next year, someone will put it all together.

      It’s very easy to imagine some individual or a collaboration playing around with Voice Recognition, Wolfram Alpha, a couple of AI modules, a language parser – and effectively creating an emergent superintelligence. In fact, so many people are working on this, even without knowing it, that it could easily happen in several different areas around the globe.

      And the great news is that, because of pride or profit or a spirit of sharing or other reasons, there are more people who are willing to immediately share their findings, than there are people who want to keep it all to themselves. It could all happen in the next minute.

      Regarding Wolfram Alpha, one of its stated purposes is to take the place of experts, and make them available to non-experts. We don’t know if they will actually accomplish this in a large number of areas anytime soon; but it’s possible. Then, we won’t have to trust human experts, who come with their own self-interested but understandable baggage.

      Good points, mjgeddes. More good reasons to hope.

      • mjgeddes

        Democratizing power of the web eh?

        Some arseholes no doubt thought their high IQs made them superior to everyone else; sadly for them, with the explosion of ever more powerful web-apps, what they didn’t realize is that soon everyone else will in principle be their equal in terms of raw *optimization power* – for any specific domain you can now (or will soon be able to) find a web-app which duplicates the effect of optimization power (i.e. high IQ combined with decision making).

        What can’t be automated? Well, I think, as you mentioned, the ability to ‘put everything together’ – the ability to see the big picture and form analogies and cross-domain connections between things. I’m confident this will always require genuine creativity and consciousness (only a sentient AI could have these abilities).

        So the *real* advantage is rapidly swinging away from near mode thinkers with a love of details and system (conventional high-IQers), and towards far mode thinkers with a love of the ‘big picture’ and with the ability to integrate multiple domains – folks with traits like creativity, imagination and analogy making – people like me 😉

  • Jay

    Your optimism about space travel seems a bit naive. I don’t think that we can get to the stars in any meaningful sense. SF writer Charles Stross lays out some of the math here:

    http://www.antipope.org/charlie/blog-static/2007/06/the_high_frontier_redux.html

    I expect that strong AI could help optimize our economy, but optimization only goes so far. The real impact of AI may not be its ability to improve efficiency, but its ability to subject people to constant scrutiny. The political implications of IT are still not worked out, and potentially pretty worrying.

    • Robert Wiblin

      Go to space once we’re brain emulations running in computers. Simplifies things a lot.

      • Jay

        I think that the idea of “computer-people” is a fallacy.

        First, a historical analogy. Back in the early 20th century, a lot of SF writers were fascinated by the idea of humanoid robots. The human body is a good general purpose machine, and that’s what they thought robots would look like.

        We now know that robots don’t work that way. Robots are built as special purpose machines. It’s much easier and more efficient to build a robot that vacuums, or that operates a machine tool, or that stocks a warehouse, than to build a general purpose robot. A “printing robot” doesn’t look like a metal man at a printing press; it looks like a printer.

        The brain is also a complex system with many separate subfunctions. Some of these subfunctions (calculation, memory, data search) can be done today by machines better than by people. Some are still human dominated (facial recognition), and some haven’t been replicated electronically yet. But I don’t think the AI of the future will have all the same parts in the same proportions as the brain. I see AI as a set of tools for efficiently solving discrete sets of problems, more Roomba than C-3PO.

        More specifically, I doubt that the brain subsystems that, in humans, pass for “free will” would often be mimicked by AI designers. Whoever builds the system will want the machine to satisfy its owner’s needs, not go off on tangents of its own.

        PS. The outer space environment contains a fair bit of high energy, difficult-to-shield radiation, and may not be, overall, much more congenial to semiconductor crystals than it is to biological life.

      • Robert Wiblin

        Read more of Robin’s upload work. I was converted.

  • Noumenon

    I really like this post, Robin. It’s a big issue and it’s hard to have something to say about it, so it wouldn’t get talked about at all if you didn’t come up with all these options and minefield analogies.

  • lemmy caution

    It is surprisingly hard to get experimental proof to go the right way on the walk versus run in the rain thing:

    Over a hundred-yard course, it is better to walk in the rain than run in the rain, if your goal is to not get wet. The difference isn’t huge, but over eight trials the running person got wetter.

    The mythbuster guys were kind of embarrassed by this.

  • Valkyrie Ice

    I posted this as a response over at JoSH’s blog, but figured it would be best to post it here too.

    Dash to the future? Hell no. We need to strap a dozen JATO units to the back of the car, and punch it.

    To Robin, K. Eric, Mike and all the other ultra-cautious “let’s go slow and make a trillion safeguards” types, let me say something I have wanted to say to you all since I first read Engines.

    Slow will kill us.

    Our only hope is to ride the rocket. There are 6 billion people on the planet, and no two of them share the exact same ethics and morality as any other. One man’s evil is another man’s good. You want to take it slow, make ten million safety checks, make sure that every contingency has been planned for? Well, I pity you when Al-Qaeda perfects their nanobot that will kill you for not being Shi’ite Muslim, or their superbug that will slaughter every non-Arab.

    Simply put, ethics sounds nice. Morality sounds nice. Caution sounds nice. And it will get us all killed by the people whose ethics, morality, and sense of what’s right and wrong are totally different from yours. Banning stem cell research didn’t stop research in the rest of the world. Banning cloning didn’t stop it either.

    You say “let’s all be friends and play nice together”, and they will say “We will bury you.”

    Technology doesn’t wait on consensus. It doesn’t wait for everyone to agree on whether it should or shouldn’t be created. K. Eric had one thing right: you cannot put the Genie back in the bottle. The Genie is out, and it will serve whoever masters it first. At least if we become its master, there’s a better than even chance it will benefit the entire human race, but if we hem and haw and worry about how best to control the Genie, you can bet someone else will beat us to it. Japan? Not so worried; Giant Mecha would be cool. China? Not so certain there. The Taliban? We can kiss our asses goodbye.

    We can’t afford to debate, we can’t afford to slow down, we can’t afford to do anything but full speed ahead and damn the torpedoes.

    The sole consolation I have is the knowledge that for all your worry and caution, for twenty years now I’ve watched the makers and creators of the technology ignore you.

    It may kill us, yes. We may grey goo ourselves, make Skynet, turn our planet into a new asteroid field, or any number of other horrible things. But it’s the only hope we have of getting out of childhood alive. We’ve been walking a razor’s edge between heaven and hell since Einstein thought up E=mc², and we have had a sword hanging over our heads for all of our existence. Once Drexler proposed a means to create the salvation of our race, it should have been the sole project of all of science to make it happen.

    We’re racing down an ever steeper slope to a future beyond imagining. Between us and it are a thousand pitfalls, terrorists, luddites, and crazies of all descriptions. If we slow down for even a fraction of a second, they will tear us from the sled and rip us to pieces. Speed is the only sane course. Some of us are going to die along the way. There’s nothing we can do about that, but the sooner we reach that light at the end of the path, the more of us will survive to enjoy our victory.

  • Valkyrie Ice

    Hmm. It also seems that your take on it here is rather different from what your commentary on Foresight indicated. While I stand by the need for speed, and agree completely with JoSH, it does not seem that you are in quite the same “take every precaution and safeguard” boat that Treder and Drexler have been in, Robin. Please note that I am addressing every “go slow” proponent with a general “you”, not you, Robin, specifically.

    • http://hanson.gmu.edu Robin Hanson

      I do sometimes wonder how many folks comment on a post without reading it all.

      • Valkyrie Ice

        In my case, since I was responding to the blog at Foresight, my response was written based on what was there, and re-posted here for fairness. I do normally read through the entire blog post and commentary prior to responding. Your take on the issue, per your responses at Foresight, indicated disagreement with the need to go as quickly as we can in order to shorten the period of danger between now and fully mature nanotech, which can defend us against the dangers of misuse of nanotech.

  • Pingback: Overcoming Bias : Hurry Or Delay Ems?