Wednesday I described UberTool, an imaginary firm planning to push a set of tools through a rapid mutual-improvement burst until they were in a position to basically “take over the world.” I asked when such a plan could be reasonable.
Thursday I noted that Doug Engelbart understood in ’62 that computers were the most powerful invention of his century, and could enable especially-mutually-improving tools. He understood lots of detail about what those tools would look like long before others, and oversaw a skilled team focused on his tools-improving-tools plan. That team pioneered graphical user interfaces and networked computers, and in ’68 introduced the world to the mouse, videoconferencing, email, and hypertext, a precursor of the web.
I asked if this wasn’t ideal for an UberTool scenario, where a small part of an old growth mode “takes over” most of the world by having a head start on a new faster growth mode. Just as humans displaced chimps, farmers displaced hunters, and industry displaced farming, would a group with such a head start on such generally better tech have a decent shot at displacing industry folks? And if so, shouldn’t the rest of the world have worried about how “friendly” they were?
In fact, while Engelbart’s ideas had important legacies, his team didn’t come remotely close to displacing much of anything. He lost most of his funding in the early 1970s, and his team dispersed. Even though Engelbart understood key elements of tools that today greatly improve team productivity, his team’s tools did not seem to enable them to be radically productive, even at the task of improving their tools.
It is not so much that Engelbart missed a few key insights about what computer productivity tools would look like. I doubt it would have made much difference had he traveled in time to see a demo of modern tools. The point is that most tools require lots more than a few key insights to be effective – they also require thousands of small insights that usually accumulate from a large community of tool builders and users.
Small teams have at times suddenly acquired disproportionate power, and I’m sure their associates who anticipated this possibility used the usual human ways to consider that team’s “friendliness.” But I can’t recall a time when such sudden small team power came from an UberTool scenario of rapidly mutually improving tools.
Some say we should worry that a small team of AI minds, or even a single mind, will find a way to rapidly improve themselves and take over the world. But what makes that scenario reasonable if the UberTool scenario is not?
Depends on the test.
Wechsler-style tests – vocab, simple math, that sort of thing? Sure, I could buy that.
A Raven's-style matrices test? Many of those aren't online with answers, so I would be very surprised if an Indian sweatshop (or Google) could tell you the answer with better odds than guessing – unless there's a high-IQ individual there who can solve the matrix, in which case all that's been done is to test a different person. A smart individual may be able to figure out the answer and explain it to the rest, but the rest won't figure it out even as a group.
What, in your view, distinguishes Doug Engelbart from the two previous occasions in history when a world takeover successfully occurred? I'm not thinking of farming or industry, of course.
For me, it's the sheer amount of testing required to adapt to solving a new problem. Animal brains sped up the rate of testing, and human brains could test testing methodologies themselves, expand upon them, and pass them on. But still, a huge amount of testing was done.