We all want the things around us to be better. Yet today billions struggle year after year to make just a few things a bit better. But what if our meagre success was because we just didn’t have the right grand unified theory of betterness? What if someone someday discovered the basics of such a theory? Well then this person might use his basic betterness theory to make himself better in health, wealth, sexiness, organization, work ethic, etc. More important, that might help him make his betterness theory even better.
After several iterations this better person might have a much better betterness theory. Then he might quickly make everything around him much better. Not just better looking hair, better jokes, or better sleep. He might start a better business, and get better at getting investors to invest, customers to buy, and employees to work. Or he might focus on making better investments. Or he might run for office and get better at getting elected, and then make his city or nation run better. Or he might create a better weapon, revolution, or army, to conquer any who oppose him.
Via such a “betterness explosion,” one way or another this better person might, if so inclined, soon own, rule, or conquer the world. Which seems to make it very important that the first person who discovers the first good theory of betterness be a very nice generous person who will treat the rest of us well. Right?
OK, this might sound silly. After all, we seem to have little reason to expect there is a useful grand unified theory of betterness to discover, beyond what we already know. “Betterness” seems mostly a concept about us and what we want – why should it correspond to something out there about which we can make powerful discoveries?
But a bunch of smart well-meaning folks actually do worry about a scenario that seems pretty close to this one. Except they talk about “intelligence” instead of “betterness.” They imagine an “intelligence explosion,” by which they don’t just mean that eventually the future world and many of its creatures will be more mentally capable than us in many ways, or even that the rate at which the world makes itself more mentally capable will speed up, similar to how growth rates have sped up over the long sweep of history. No, these smart well-meaning folks instead imagine that once someone has a powerful theory of “intelligence,” that person could create a particular “intelligent” creature which is good at making itself more “intelligent,” which then lets that creature get more “intelligent” about making itself “intelligent.” Within a few days or weeks, the story goes, this one creature could get so “intelligent” that it could do pretty much anything, including taking over the world.
I put the word “intelligence” in quotes to emphasize that the way these folks use this concept, it pretty much just means “betterness.” (Well, mental betterness, but most of the betterness we care about is mental.) And this fits well with common usage of the term “intelligence.” When we talk about machines or people or companies or even nations being “intelligent,” we mainly mean that such things are broadly mentally or computationally capable, in ways that are important for their tasks and goals. That is, an “intelligent” thing has a great many useful capabilities, not some particular specific capability called “intelligence.” To make something broadly smarter, you have to improve a wide range of its capabilities. And there is generally no easy or fast way to do that.
Now if you artificially hobble something so as to simultaneously reduce many of its capacities, then when you take away that limitation you may simultaneously improve a great many of its capabilities. For example, if you drug a person so that they can hardly think, then getting rid of that drug can suddenly improve a great many of their mental abilities. But beyond removing artificial restrictions, it is very hard to simultaneously improve many diverse capacities. Theories that help you improve capabilities are usually focused on a relatively narrow range of abilities – very general and useful theories are quite rare.
All of which is to say that fearing that a new grand unified theory of intelligence will let one machine suddenly take over the world isn’t that different from fearing that a grand unified theory of betterness will let one better person suddenly take over the world. This isn’t to say that such a thing is impossible, but rather that we’d sure want some clearer indications that such a theory even exists before taking such a fear especially seriously.