21 Comments

To expect a business-like proposition from ÜberTool before putting your money there is, frankly, to miss the point of the question entirely. I think the only one who remotely grasps what we are discussing here is the guy who said "if this works out you'll have to redefine shareholder" :-)

He is right: if the project worked, putting your money there might even be a bad idea, for the change would be so dramatic that keeping your distance might be more palatable. But this is also not the question. The question is "What could make you believe it would work?" My own opinion follows.

First, I would require the enterprise NOT to frame its aims in recognizable terms. That means I would not believe anyone proposing to make a "new computer" or a "new telescope" or even a "new industry", or, in short, a NEW anything. Any advancement that really merited the "über" in the title would be so new that it would be ridiculous to explain it in terms of previous data.

Also, I guess I would expect them to have huge communication problems. Anyone who had a reasonable plan to get there would be so hopelessly beyond current trends as to be difficult to understand. (This is actually a side effect of my first proposition.)

I would require the plan to incorporate present, common-sense, familiar, reliable ideas in unfamiliar, unreliable, and unpredictable ways. That is to say, I would expect the plan to begin with present techniques, without dependence on big tech or big science, so that the elements could be mixed and matched in creative ways without big costs at each interaction.

Finally, I think the general realm where the advancements would apply must be very mundane and "day-to-day". That is, I would expect the proposed "final stage" to be spoken of not in terms of some magical-sounding, arcane "super-power", but in terms of changes in the experience of concrete human beings. The enterprise would have to aim not at "being the next technological wave" but at "changing our very way of life", if you can see the difference.

But obviously, the very issue of "why don't we simply do that" is noise in the chain. The meta-advancement does not happen simply because it is difficult to get rid of the small stupid problems; it is difficult to diminish the inertia. If they could tell me how they expect to do that, I would not only already be putting my money there, I would be applying for a job!

Nothing. Literally. History is littered with examples of better mousetraps that were never widely adopted because the worse mousetrap made it to consumers that much sooner. Think of the Dvorak keyboard.

"No objective way of arbitrating" might be true, because "objective" is an unattainable ideal, and, given human biases, pretending to optimize on that ideal is often counterproductive (i.e., the most objective humans are the ones who acknowledge their subjectivity).

But there is a non-objective way of arbitrating. It's called history. Get out of your ivory tower, people: if you believe in the battle for friendliness, it is being fought on the street right outside right now.

@Roko

"Humanity has a collection of drives and desires, as does each one of us, but these desires conflict, and there is no objective way of arbitrating."

Oh this is such an important idea, Roko, and one most memorably and beautifully stated by its father, Sir Isaiah Berlin, in his book Liberty. He called it "value pluralism." I highly recommend you look at Liberty - one of the greatest and most gorgeously written books of philosophy in the 20th century - and also his further thoughts on the idea in his Crooked Timber of Humanity.

@ homunq: "tame-to-friendliness the other advancements which are seen as analogous to AI - that is, human minds and society,"

Indeed.

What, one might ask, does it mean to make the collection of all human minds "friendly"? Given that I am now a convert to moral anti-realism, I am not entirely sure that this question has a well-defined answer.

And I think that this is basically the core problem: there is no clean definition of what the "right" thing to do is, so "friendly X" doesn't actually make much sense in general. Humanity has a collection of drives and desires, as does each one of us, but these desires conflict, and there is no objective way of arbitrating.

There are three futures worth discussing: continued human existence over a reasonably long scale (thousands of years) without clearly transhuman developments; friendly transhumanism; or unfriendly transhumanism. Treating each as independently possible or not, that's 2^3 = 8 configurations our universe may have. The question of "do you want to invest in UberTool" only makes any difference in the 3 of them where non-transhumanism and transhumanism are both possible; and it only potentially deeply matters in the one where all three possibilities hold.
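For concreteness, the counting in the comment above can be checked with a toy enumeration (this is purely illustrative and not part of the original comment):

```python
from itertools import product

# Each of the three candidate futures is either possible or not in our
# universe, giving 2**3 = 8 configurations.
# Index 0: long non-transhuman existence, 1: friendly transhumanism,
# 2: unfriendly transhumanism.
configs = list(product([False, True], repeat=3))

# Investing in UberTool only makes a difference when both a non-transhuman
# future and at least one transhuman future are possible.
relevant = [c for c in configs if c[0] and (c[1] or c[2])]

# ...and it only deeply matters when all three possibilities hold.
deep = [c for c in configs if all(c)]

print(len(configs), len(relevant), len(deep))  # 8 3 1
```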

(OT) A big topic of this blog is taming AI to be friendly. The assumption is that thinking philosophically about how that might be possible is the most promising path for achieving FAI. I think that it would be just as promising, and far more reliably productive, to try to tame-to-friendliness the other advancements which are seen as analogous to AI - that is, human minds and society, agriculture, and industrial capitalism. All three are showing increasing signs of leading towards decidedly unfriendly outcomes, especially (but not exclusively) when considered in light of their predecessors' value systems. If the AI outcome is path-dependent, I find it stretches plausibility that the friendliness or otherwise of these larger systems will not play a corresponding role in determining that path.

Robin, to be fair, I suppose I could mention that a more clever hopeful might try to pitch an open-source play on a services and white-label model.

If UberTool = capitalism, then that is not looking quite as foolproof over the course of this century so far. If UberTool = human brains, agriculture, or industry, there were definitely millennia, centuries, or decades when things didn't go very far (and for agriculture, you can definitely argue that there was not any real upward trend from Tenochtitlan to just before the Industrial Revolution, which is about 7 centuries running).

What I'm saying is that extrapolating a double-exponential, which may be purely an anthropic artifact, into the future is very shaky ground without some strong supporting evidence.
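To illustrate how shaky that extrapolation is, here is a sketch with entirely made-up numbers (the parameters `a` and `b` are invented for intuition, not fitted to any real data): two double-exponential trends that are nearly indistinguishable over a "historical" window diverge by orders of magnitude a few centuries out.

```python
import math

# A double-exponential trend: x(t) = exp(a * exp(b * t)).
def double_exp(t, a=0.01, b=0.02):
    return math.exp(a * math.exp(b * t))

# Over the "historical" window, two slightly different growth rates are
# nearly indistinguishable (under 1% apart at t = 100)...
ratio_hist = double_exp(100, b=0.021) / double_exp(100, b=0.020)
print(ratio_hist)  # ~1.008

# ...but extrapolated a few centuries further, they disagree by a factor
# of over a million.
ratio_far = double_exp(400, b=0.021) / double_exp(400, b=0.020)
print(ratio_far)  # > 1e6
```

The point being that tiny uncertainty in the fitted rate explodes under the double exponential, so the far-future prediction carries almost no information without strong supporting evidence.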

I love this!

Allow me to speculate: Robin is about to post that, because of FHI's brain emulation roadmap and the comments such as

"I want to see the working prototype or proof-of-concept and testimony from outside experts on the tech"

"So the venture capitalist's criteria should include a credible road map of useful intermediate stages."

and, of course, the complete "shot in the dark" that AGI represents, that no-one is going to invest in AGI when brain emulation is there as an alternative.

And I quite agree; the only thing that could upset this prediction is if brain emulation turns out to be moderately harder than we expect - and according to the Road Map there are ways that this could happen - whilst someone comes up with a great innovation in AI, for example a really good way of representing knowledge, or theoretical progress on self-improving AI. Given the large number of research years spent on AGI, this is only moderately likely by, say, 2040.

So somebody needs to start thinking very hard about the social and ethical consequences of an upload singularity. How does this change things?

A great set of responses so far. Keep them in mind for my post tomorrow.

I think historical cases probably are the strongest way of demonstrating a small possibility of something big happening.

It would have to convince me that it had first developed a viable method for finding sets of mutually aiding tools, rather than having stumbled upon it. In other terminology: They would have to show that they had applied a lot of optimization pressure to get the bits that they had.

I think most of the people here think that some such set of tools exist, and that they will come together and change the world in the relatively near future. Plucking them out of the potential tool-space is a different matter.

We think the singularity will happen because some new advancements (say, computers) can be applied to improving the entire rest of the tool set (agriculture, medicine, materials science, psychology, culture, particle physics, clean energy, and millions of particulars). If any of those other technologies are improved in ways that feed back to the original technology, or to another broadly applicable tool-aiding tool, then the cycle can repeat. The widely useful tools give many opportunities for creating more widely useful tools.

We're pretty confident that the technology tree is up there, but we currently have no way to predict which branches go highest.
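The feedback cycle described above can be sketched as a toy model (the rates here are invented purely for illustration): a "broad" tool raises the improvement rate of every other tool, and some of those gains feed back into the broad tool itself.

```python
# Toy model of the tool-feedback cycle: "broad" stands for a widely
# applicable tool (say, computers), "others" for the rest of the tool set.
def simulate(steps, feedback=True):
    broad, others = 1.0, 1.0
    for _ in range(steps):
        others += 0.1 * broad      # the broad tool speeds up everything else
        if feedback:
            broad += 0.1 * others  # ...and some of those gains feed back
    return others

print(simulate(50, feedback=False))  # no feedback: roughly linear growth
print(simulate(50, feedback=True))   # with feedback: compounding, far faster
```

With the feedback loop closed, growth compounds each step; without it, the other tools improve only at a constant rate. Which is exactly why "which branches feed back" matters more than "how high is the tree".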

If UberTool has a methodology, we can evaluate that. If they just come up and say "this is how we can make the singularity", then no. If there are trillions of paths, and millions of them would probably work, then billions of them could be made to sound amazing. They have to start with a filter, and an explanation of why the filter will work.

Now, each step of this might require massive investments, which would justify funding UberTool. (We can't assume that because they know the path, they can make it up the mountain.) In the public-technology singularity model, this investment is made by many different investors on the expectation of selling new technology. UberTool has to go all the way up the tech tree with little outside input of resources.

Also, the investment has got to really reframe what you're buying. When UberTool is more powerful than all the governments, 'shareholder' is not going to be an enforceable notion. You've got to have some guarantee that the whole operation is going to be Friendly, or at least Friendly to You.

No, the deal as described is unfundable and violates the now-standard VC or even Angel process.

As an Angel or Incubator, I want to see the working prototype or proof-of-concept and testimony from outside experts on the tech. I also want to see the team's personal balance sheet - I want to ensure all principals have themselves invested at least 20% of their own net worth in the project and have a time-horizon plan for moving it through the "Valley of Death" to a real VC. Angels may consider themselves value-add folks, but if I can't move the start-up on to VC, it's not an investment, it's a charity. Pass.

As a real VC, I want to see an advanced prototype, with testimony and possibly a patent study to ensure we can actually secure the IP. I also expect a real, grown-up business plan: the specific industry you first intend to target, trends in that industry, market size, unique value proposition, barriers to entry and how to overcome them, switching issues and competitive threats in that industry, a marketing plan to reach that industry, and a time horizon for ROI.

Further, I want to see reasonable projections of revenue and profit, an expected market cap, and the proposed ownership stake division. You get my money for a limited time, kiddies, and then I'm out at a profit. And if I don't like the rate at which you're progressing, I will replace the management.

I see the deal as proposed - selling only minor patents for little revenue and apparently a plan to emerge as a cost leader at some indefinite date - as highly unlikely. With no advanced prototype, it's a joke. Wouldn't even screen it. Next.

"UberTool will keep its main improvements to itself and focus on developing tools that improve the productivity of its team of tool developers."

That's not how tool storms work. In every case, including (especially) the big ones, the process depends on interaction with (embedding in) the outside world, not only for funding but also for feedback. There's no such thing as progress in a vacuum.

So the venture capitalist's criteria should include a credible road map of useful intermediate stages.

Tim, Robin and I are leading up to our Singularity disagreement.

I'll offer my own intuitive answer to the above question: You've got to be doing something that's the same order of Cool as the invention of "animal brains, human brains, farming, and industry". I think this is the wrong list, really; "farming" sets too low a standard. And certainly venture capitalists have a tendency and a motive to exaggerate how neat their projects are.

But if, without exaggeration, you find yourself saying "Well that looks like a much larger innovation than farming" - so as to leave some safety margin - then why shouldn't it have at least that large an impact?

However, I would be highly skeptical of an UberTool Corp that talked about discounted future cash flows and return on investment. I would be suspicious that they weren't acting the way I would expect someone to act if they really believed in their UberTool.

Seconding burger flipper. If they could do what they claim, they wouldn't need me.

My response to UberToolCorp is essentially the same as my responses to the television ads promising people they can make great money by working at home - "If it's such a great opportunity, why are you recruiting me instead of taking advantage of it yourself?"

I can accept that someone might develop a useful technique or understanding and still accept dubious economic or political beliefs. But the technique couldn't be part of the economic or political spheres.
