Fund UberTool?

Some companies specialize in making or servicing tools, and some even specialize in redesigning and inventing tools.  All these tool companies use tools themselves.  Let us say that tool type A "aids" tool type B if tools of type A are used when improving tools of type B.  The aiding graph can have cycles, such as when A aids B aids C aids D aids A. 

Such tool aid cycles contribute to progress and growth.  Sometimes a set of tool types will stumble into conditions especially favorable for mutual improvement.  When the aiding cycles are short and the aiding relations are strong, a set of tools may improve together especially quickly.  Such favorable storms of mutual improvement usually run out quickly, however, and in all of human history no more than three storms have had a large and sustained enough impact to substantially change world economic growth rates. 

Imagine you are a venture capitalist reviewing a proposed business plan.  UberTool Corp has identified a candidate set of mutually aiding tools, and plans to spend millions pushing those tools through a mutual improvement storm.  While UberTool may sell some minor patents along the way, UberTool will keep its main improvements to itself and focus on developing tools that improve the productivity of its team of tool developers. 

In fact, UberTool thinks that its tool set is so fantastically capable of mutual improvement, and that improved versions of its tools would be so fantastically valuable and broadly applicable, that UberTool does not plan to stop its closed self-improvement process until it is in a position to suddenly burst out and basically "take over the world."  That is, at that point its costs would be so low it could enter and dominate most industries.   

Now given such enormous potential gains, even a very tiny probability that UberTool could do what it planned might entice you to invest.  But even so, just what exactly would it take to convince you UberTool had even such a tiny chance of achieving such incredible gains?

  • Keith

    Most important to me would be a map or model of the development feedback loop. The fewer improvement cycles required to reach the goal of an uber-toolset, the better. Even at 80% probability of success per improvement, the probability of a successful project diminishes as the number of improvements required goes up. It’s an iterated cascade of conditional probabilities.
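
    Keith’s cascade can be sketched numerically. A minimal illustration (the 80% per-cycle figure is his example number, and independence between cycles is an added assumption):

```python
# Probability that an n-cycle improvement cascade fully succeeds,
# assuming each cycle succeeds independently with probability p.
def cascade_success(p: float, n: int) -> float:
    return p ** n

# Even with an optimistic 80% chance per cycle, the odds of
# completing the whole cascade fall off quickly:
for n in (1, 5, 10, 20):
    print(n, round(cascade_success(0.8, n), 3))
# 1  -> 0.8
# 5  -> 0.328
# 10 -> 0.107
# 20 -> 0.012
```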

  • Jef Allbright

    I would require evidence that they had discovered not only some fantastic new capability, but that they had discovered a new principle amounting to an as yet undiscovered physical law, sufficiently foundational in scope that it would be seen as directly supporting something like n factorial novel inferences, where n corresponds roughly to the number of functional relationships in the competitive landscape supporting discoveries through the “usual” evolutionary processes of innovation. Otherwise, as a potential investor, I must assess the probability of such innovation in isolation—effectively a self-supporting pyramid of Black Swans—as virtually nil. [Note that my use of “factorial” above is only a stand-in for a more complex combinatorial function which I lack the knowledge to specify. As far as I know, the mathematics to model this is a highly relevant and as yet open question.]

    Thanks Robin, for what appears to me to be virtually the first topical post on OvercomingBias to illuminate the leading edge rather than to review and reinforce infrastructure.

  • Tim Tyler

    Heh: this seems like a bit of a dig at SIAI. Will co-blogger E.Y. get riled? Stay tuned 😉

  • burger flipper

    I would require them to threaten harm to me or my family if I disclosed anything, before I would believe that they believed.
    And then I still cannot imagine believing what they believe. If they’ve progressed to the point where they can demonstrate it, they don’t need me.

  • bambi

    If this awesome storm of innovation can’t produce anything worth buying in reasonably short order — to demonstrate progress, help me get my money back quickly, and reduce the total investment required — it seems likely to me that it’s all hot air.

    As to whether they have a “tiny” chance of being legit, who really throws money at tiny chances? Only people with no money think that is a good strategy.

  • http://occludedsun.wordpress.com Caledonian

    Seconding burger flipper. If they could do what they claim, they wouldn’t need me.

    My response to UberToolCorp is essentially the same as my responses to the television ads promising people they can make great money by working at home – “If it’s such a great opportunity, why are you recruiting me instead of taking advantage of it yourself?”

    I can accept that someone might develop a useful technique or understanding and still accept dubious economic or political beliefs. But the technique couldn’t be part of the economic or political spheres.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Tim, Robin and I are leading up to our Singularity disagreement.

    I’ll offer my own intuitive answer to the above question: You’ve got to be doing something that’s the same order of Cool as the invention of “animal brains, human brains, farming, and industry”. I think this is the wrong list, really; “farming” sets too low a standard. And certainly venture capitalists have a tendency and a motive to exaggerate how neat their projects are.

    But if, without exaggeration, you find yourself saying “Well that looks like a much larger innovation than farming” – so as to leave some safety margin – then why shouldn’t it have at least that large an impact?

    However, I would be highly skeptical of an UberTool Corp that talked about discounted future cash flows and return on investment. I would be suspicious that they weren’t acting the way I would expect someone to act if they really believed in their UberTool.

  • Russell Wallace

    “UberTool will keep its main improvements to itself and focus on developing tools that improve the productivity of its team of tool developers.”

    That’s not how tool storms work. In every case, including (especially) the big ones, the process depends on interaction with (embedding in) the outside world, not only for funding but also for feedback. There’s no such thing as progress in a vacuum.

    So the venture capitalist’s criteria should include a credible road map of useful intermediate stages.

  • frelkins

    No, the deal as described is unfundable and violates the now-standard VC or even Angel process.

    As an Angel or Incubator, I want to see the working prototype or proof-of-concept and testimony from outside experts on the tech. I also want to see the team’s personal balance sheet – I want to ensure all principals have themselves invested at least 20% of their own net worth in the project and have a time-horizon plan for moving it through the “Valley of Death” to a real VC. Angels may consider themselves value-add folks, but if I can’t move the start-up on to VC, it’s not an investment, it’s a charity. Pass.

    As a real VC, I want to see an advanced prototype, with testimony and possibly a patent study to ensure we can actually achieve the IP. I also expect a real, grown-up business plan, identifying the specific industry you first intend to target, trends in that industry, market size, unique value proposition, barriers to entry and how to overcome them, switching issues, and competitive threats in that industry, a marketing plan to reach that industry, and a time-horizon for ROI.

    Further, I want to see reasonable projections of revenue and profit, an expected market cap, and the proposed ownership stake division. You get my money for a limited time, kiddies, and then I’m out at a profit. And if I don’t like the rate at which you’re progressing, I will replace the management.

    I see the deal as proposed – selling only minor patents for little revenue and apparently a plan to emerge as a cost leader at some indefinite date – as highly unlikely to succeed. With no advanced prototype, it’s a joke. Wouldn’t even screen it. Next.

  • James Andrix

    It would have to convince me that it had first developed a viable method for finding sets of mutually aiding tools, rather than having stumbled upon it. In other terminology: They would have to show that they had applied a lot of optimization pressure to get the bits that they had.

    I think most of the people here think that some such set of tools exist, and that they will come together and change the world in the relatively near future. Plucking them out of the potential tool-space is a different matter.

    We think the singularity will happen because some new advancements (say, computers) can be applied to potentially improving the entire rest of the tool set (agriculture, medicine, materials science, psychology, culture, particle physics, clean energy, and millions of particulars). If any of those other technologies are improved in ways that feed back to the original technology, or to another broadly applicable tool-aiding tool, then the cycle can repeat. The widely useful tools give many opportunities for creating more widely useful tools.

    We’re pretty confident that the technology tree is up there, but we currently have no way to predict which branches go highest.

    If UberTool has a methodology, we can evaluate that. If they just come up and say this is how we can make the singularity, then no. If there are trillions of paths, and millions of them would probably work, then billions of them could be made to sound amazing. They have to start with a filter, and an explanation of why the filter will work.

    Now, each step of this might require massive investments, which would justify funding of UberTool. (We can’t assume that because they know the path, they can make it up the mountain.) In the public-technology singularity model, this investment is made by many different investors on the expectation of selling new technology. UberTool has to go all the way up the tech tree with little outside input of resources.

    Also, the investment has got to really reframe what you’re buying. When UberTool is more powerful than all the governments, ‘shareholder’ is not going to be an enforceable notion. You’ve got to have some guarantee that the whole operation is going to be Friendly, or at least Friendly to You.

  • http://michaelkenny.blogspot.com Mike Kenny

    I think historical cases probably are the strongest way of demonstrating a small possibility of something big happening.

  • http://hanson.gmu.edu Robin Hanson

    A great set of responses so far. Keep them in mind for my post tomorrow.

  • http://transhumangoodness.blogspot.com Roko

    I love this!

    Allow me to speculate: Robin is about to post that, because of FHI’s brain emulation roadmap and the comments such as

    “I want to see the working prototype or proof-of-concept and testimony from outside experts on the tech”

    “So the venture capitalist’s criteria should include a credible road map of useful intermediate stages.”

    and, of course, the complete “shot in the dark” that AGI represents, that no-one is going to invest in AGI when brain emulation is there as an alternative.

    And I quite agree; the only thing that could upset this prediction is if brain emulation turns out to be moderately harder than we expect – and according to the Road Map there are ways that this could happen – whilst someone comes up with a great innovation in AI, for example a really good way of representing knowledge, or theoretical progress on self-improving AI. Given the large number of research years spent on AGI, this is only moderately likely by, say, 2040.

    So somebody needs to start thinking very hard about the social and ethical consequences of an upload singularity. How does this change things?

  • homunq

    If UberTool = capitalism, then that is not looking quite as foolproof over the course of this century so far. If UberTool = human brains, agriculture, or industry, there were definitely millennia, centuries, or decades when things didn’t go very far (and for agriculture, you can definitely argue that there was not any real upward trend from Tenochtitlan to just before the Industrial Revolution, which is about 7 centuries running).

    What I’m saying is that extrapolating a double-exponential, which may be purely an anthropic artifact, into the future is very shaky ground without some strong supporting evidence.

  • frelkins

    Robin, to be fair, I suppose I could mention that a more clever hopeful might try to pitch an open-source play on a services and white-label model.

  • homunq

    There are three futures worth discussing: continued human existence over a reasonably long scale (thousands of years) without clearly transhuman developments; friendly transhumanism; or unfriendly transhumanism. Each of the three may or may not be possible, giving 2³ = 8 configurations our universe might have. The question of “do you want to invest in UberTool” only makes any difference in the 3 of them where non-transhumanism and transhumanism are both possible; and it only potentially deeply matters in the one where all three possibilities hold.

    (OT) A big topic of this blog is taming AI to be friendly. The assumption is that thinking philosophically about how that might be possible is the most promising path for achieving FAI. I think that it would be just as promising, and far more reliably productive, to try to tame-to-friendliness the other advancements which are seen as analogous to AI – that is, human minds and society, agriculture, and industrial capitalism. All three are showing increasing signs of leading towards decidedly unfriendly outcomes, especially (but not exclusively) when considered in light of their predecessors’ value systems. If the AI outcome is path-dependent, I find it stretches plausibility that the friendliness or otherwise of these larger systems will not play a corresponding role in determining that path.

  • http://transhumangoodness.blogspot.com Roko

    @ homunq: “tame-to-friendliness the other advancements which are seen as analogous to AI – that is, human minds and society,”

    Indeed.

    What, one might ask, does it mean to make the collection of all human minds “friendly”? Given that I am now a convert to moral anti-realism, I am not entirely sure that this question has a well-defined answer.

    And I think that this is basically the core problem: there is no clean definition of what the “right” thing to do is, so “friendly X” doesn’t actually make much sense in general. Humanity has a collection of drives and desires, as does each one of us, but these desires conflict, and there is no objective way of arbitrating.

  • frelkins

    @Roko

    “Humanity has a collection of drives and desires, as does each one of us, but these desires conflict, and there is no objective way of arbitrating.”

    Oh this is such an important idea, Roko, and one most memorably and beautifully stated by its father, Sir Isaiah Berlin, in his book Liberty. He called it “value pluralism.” I highly recommend you look at Liberty – one of the greatest and most gorgeously written books of philosophy in the 20th century – and also his further thoughts on the idea in his Crooked Timber of Humanity.

  • homunq

    “No objective way of arbitrating” might be true, because “objective” is an unattainable ideal, and, given human biases, pretending to optimize on that ideal is often counterproductive (i.e., the most objective humans are the ones who acknowledge their subjectivity).

    But there is a non-objective way of arbitrating. It’s called history. Get out of your ivory tower, people: if you believe in the battle for friendliness, it is being fought on the street right outside right now.

  • http://brokensymmetry.typepad.com Michael F. Martin

    Nothing. Literally. History is littered with examples of better mousetraps that were never widely adopted because the worse mousetrap made it to consumers that much sooner. Think of the Dvorak keyboard.

  • http://lessertruth.wordpress.com Marcio Baraco Rocha Pereira

    To expect a business-like proposition out of ÜberTool in order to put your money there is, sincerely, to completely miss the point of the questioning. I think the only one remotely grasping what we are discussing here is the guy who said “if this works out you’ll have to redefine shareholder” 🙂

    He is right: if this works out, maybe putting your money there would be a bad idea, for it would be so dramatic a change that keeping your distance might be more palatable. But that is also not the question; the question is “What could make you believe it would work?” My own opinion follows.

    First, I would require the enterprise NOT to frame its aims in recognizable terms. That means I would not believe anyone proposing to make a “new computer” or a “new telescope” or even a “new industry” – or, in short, a NEW anything. Any advancement that really did merit the “über” in the title would be so new that it would be ridiculous to explain it in terms of previous data.

    Also, I guess I would expect them to have huge communication problems. Anyone who had a reasonable plan to get there would be so hopelessly beyond current trends as to be difficult to understand, so to speak. (This is actually a side effect of my first proposition.)

    I would require the plan to incorporate present, common-sense, familiar, reliable ideas in unfamiliar, unreliable, and unpredictable ways. That is to say, I would expect the plan to begin with present techniques, without dependence upon big tech or big science, so that the elements could be expected to mix and match in creative ways without big costs at each interaction.

    Finally, I think the general realm where the advancements would be expected to apply must be very mundane and “day-to-day”; that is, I would expect the proposed “final stage” to be spoken of not in terms of some magical-sounding arcane “super-power”, but instead in terms of modifications in the experience of concrete human beings. I think the enterprise would have to be aiming not at “being the next technological wave”, but at “changing our very way of life”, if you can see the difference.

    But obviously, the very issue of “why don’t we simply do that?” is noise in the chain. The meta-advancement does not happen simply because it is difficult to get rid of the small stupid problems; it is difficult to diminish the inertia. If they could tell me how they expect to do that, I would not only already be putting my money there, I would be applying for a job!
