Wednesday I described UberTool, an imaginary firm planning to push a set of tools through a rapid mutual-improvement burst until they were in a position to basically “take over the world.” I asked when such a plan could be reasonable.
What, in your perspective, distinguishes Doug Engelbart from the two previous occasions in history where a world takeover successfully occurred? I'm not thinking of farming or industry, of course.
Depends on the test.
Wechsler-style tests - vocab, simple math, that sort of thing? Sure. I could buy that.
A Raven's-style matrices test? Many of them aren't online with answers; I would be very surprised if an Indian sweatshop (or Google) could tell you the answer with better odds than guessing, unless there's a high IQ individual there who can solve the matrix, in which case all that's been done is test a different person. A smart individual may be able to figure out the answer and explain it to the rest, but the rest won't figure it out even as a group.
What, in your perspective, distinguishes Doug Engelbart from the two previous occasions in history where a world takeover successfully occurred? I'm not thinking of farming or industry, of course.
For me, it is the sheer amount of testing needed to adapt to solving a new problem. Animal brains sped up the rate of testing, and human brains could test testing methodologies themselves, expand on them, and pass them on. But still, a huge amount of testing was done.
That is a fundamental part of the power of intelligence augmentation. Now, each individual human has the collective knowledge and intelligence of the planet at his fingertips - due to improvements in networking technologies.
The test is not biased - the rules of the test for the caveman are exactly the same: he just has to complete the test within an hour and hand it back in.
Tim, that's a silly example. You're comparing one human to a team of humans.
An individual with a concealed mobile phone and a link to an Indian test-solving sweatshop would probably achieve an impressive IQ test score - especially compared against the score of a caveman.
What are the grounds for asserting that intelligence augmentation has not already enormously improved human intelligence?
I've wondered the same thing in different terms. Most of our advancement stems from increasing specialization. Is it rational to think AI will be different?
If self-improving AGI is created, will that be different? Humans haven't developed many tools to make us smarter in general. Sure, we have calculators, event-planners, mathematics, and all sorts of things that help us compensate for our serious cognitive faults (such as the inability to do complex arithmetic in our heads), but we don't seem to make many serious advances in our general intelligence. We need to increase our numbers and our specialization to solve more complex problems. If AGI can be self-improving, maybe it can do more with fewer minds?
Computers haven't "taken over the world" yet. Weigh the world's silicon chips and they are a tiny fraction of the weight of all the human brains in the world - let alone all the animal brains. It's the same story with memory, sensors and actuators. Machine civilisation is still at the "birth trauma" developmental stage.
James, Nazi propaganda tools were useful, at least for making relative gains, but I don't see that they improved each other much.
Yvain, but why would a self-improving AI be so much more autonomous than a self-improving tool team?
Wasn't Engelbart right? Computers were a set of mutually self-enhancing tools that became very powerful in a very short amount of time, and they have taken over the world. It's just that most of the innovation came from people other than Doug Engelbart, which, considering the non-Engelbart:Engelbart ratio among computer scientists, is statistically plausible. The creation of computing technology was a society-wide effort, and considering the resources necessary, it couldn't have been otherwise.
To create Industry Tech N+1, you need Industry Tech N, but you also need coal, iron, water-power, workers, food for the workers, land on which to build factories, engineers, and inventors. You don't just create industrialism in your basement. UberTool can't spend a century developing industry and then march out of its office to take over the world, because it needs to have the world or a large chunk thereof just to start industrializing.
Computing Tech N makes Computing Tech N+1 easier to develop, but it's not sufficient to create Computing Tech N+1. That takes high-tech factories, thousands of hours of skilled labor, money, and sometimes genius. The invention of the mouse speeds up all future computer tasks, but you can't leave a mouse in a room overnight and expect it to have written Windows XP when you get up. To go from the mouse to Windows XP still requires tens of thousands of hours of skilled labor, a bunch of money, high-tech factories, and a few genius-level insights. That's why Doug Engelbart working alone couldn't conquer the world: he had the mouse, but not any of that other stuff.
Artificial intelligence is different from either of these because once you have a self-improving AI, AI Tech N is both necessary and sufficient to develop AI Tech N+1. You can just leave a self-improving AI in a room overnight and expect it to be a Power when you wake up.
...if you wake up.
The small team was the Nazi leadership and the mutually improving tools were their propaganda instruments.
To recap, one of the last world takeovers took place when DNA replaced RNA as the primary heritable medium in biology. DNA most likely got started in a single organism - and then, after a while, all the other organisms on the planet found themselves with no surviving descendants - a genetic takeover.
Today, new heritable media have arisen - the new replicators. These are responsible for an enormous mass-extinction - and the extinction may well go all the way - until none of the primitive, DNA-protein based organisms which evolution clumsily tinkered together remain - a memetic takeover.
I would tend to predict that there is always enough diffusion from the concentrated optimizing kernel out towards everyone else that everyone else retains sufficient capability to predict, outflank, and/or otherwise contain the threat of a runaway monopoly, single-point strong AI, nation-state, etc.
A strong AI organization would have to pass through a series of breakthrough steps before the ideal humanity-crushing paperclip maximizer could be built, and it seems dramatically improbable that all of those breakthroughs could come from a single organization without leakage to the rest of the world. So long as the rest of the world remains within shooting range of the top, its collective ability/power should remain superior.
Hopefully.
Eliezer, I discussed what influences transition inequality here.
Derekz, I doubt if any of us know what it would be like to reason without analogy.
I'm not particularly concerned about the "hard takeoff" scenario as a near-term threat, but I'd say that the answer to your question is that analogy is a poor reasoning method. An AGI is not Doug Engelbart; treating them as being so similar that the failure of one implies the failure of another seems unjustified.
More generally, I wonder if Rationalists should be forbidden to use analogy at all in serious analysis. As a source of inspiration, ideas, possibilities to explore -- analogy is great. But it is not a valid inference method.
However, the fact that an AGI is not Engelbart also does *not* imply in any way that it would be successful at doing things Engelbart could not do. That type of reasoning (which I see a lot) is even worse than misused analogy.
I don't think there have been any "world takeovers" in human history - unless you count takeovers by individual genes or memes, or by other species.
If we are considering the entire history of life, however, there have probably been many more than two "world takeovers".