We all want the things around us to be better. Yet today billions struggle year after year to make just a few things a bit better. But what if our meagre success was because we just didn’t have the right grand unified theory of betterness? What if someone someday discovered the basics of such a theory? Well then this person might use his basic betterness theory to make himself better in health, wealth, sexiness, organization, work ethic, etc. More important, that might help him make his betterness theory even better.
After several iterations this better person might have a much better betterness theory. Then he might quickly make everything around him much better. Not just better looking hair, better jokes, or better sleep. He might start a better business, and get better at getting investors to invest, customers to buy, and employees to work. Or he might focus on making better investments. Or he might run for office and get better at getting elected, and then make his city or nation run better. Or he might create a better weapon, revolution, or army, to conquer any who oppose him.
Via such a “betterness explosion,” one way or another this better person might, if so inclined, soon own, rule, or conquer the world. Which seems to make it very important that the first person who discovers the first good theory of betterness be a very nice generous person who will treat the rest of us well. Right?
OK, this might sound silly. After all, we seem to have little reason to expect there is a useful grand unified theory of betterness to discover, beyond what we already know. “Betterness” seems mostly a concept about us and what we want – why should it correspond to something out there about which we can make powerful discoveries?
But a bunch of smart well-meaning folks actually do worry about a scenario that seems pretty close to this one. Except they talk about “intelligence” instead of “betterness.” They imagine an “intelligence explosion,” by which they don’t just mean that eventually the future world and many of its creatures will be more mentally capable than us in many ways, or even that the rate at which the world makes itself more mentally capable will speed up, similar to how growth rates have sped up over the long sweep of history. No, these smart well-meaning folks instead imagine that once someone has a powerful theory of “intelligence,” that person could create a particular “intelligent” creature which is good at making itself more “intelligent,” which then lets that creature get more “intelligent” about making itself “intelligent.” Within a few days or weeks, the story goes, this one creature could get so “intelligent” that it could do pretty much anything, including taking over the world.
I put the word “intelligence” in quotes to emphasize that the way these folks use this concept, it pretty much just means “betterness.” (Well, mental betterness, but most of the betterness we care about is mental.) And this fits well with common usage of the term “intelligence.” When we talk about machines or people or companies or even nations being “intelligent,” we mainly mean that such things are broadly mentally or computationally capable, in ways that are important for their tasks and goals. That is, an “intelligent” thing has a great many useful capabilities, not some particular specific capability called “intelligence.” To make something broadly smarter, you have to improve a wide range of its capabilities. And there is generally no easy or fast way to do that.
Now if you artificially hobble something so as to simultaneously reduce many of its capacities, then when you take away that limitation you may simultaneously improve a great many of its capabilities. For example, if you drug a person so that they can hardly think, then getting rid of that drug can suddenly improve a great many of their mental abilities. But beyond removing artificial restrictions, it is very hard to simultaneously improve many diverse capacities. Theories that help you improve capabilities are usually focused on a relatively narrow range of abilities – very general and useful theories are quite rare.
All of which is to say that fearing that a new grand unified theory of intelligence will let one machine suddenly take over the world isn’t that different from fearing that a grand unified theory of betterness will let one better person suddenly take over the world. This isn’t to say that such a thing is impossible, but rather that we’d sure want some clearer indications that such a theory even exists before taking such a fear especially seriously.
See, the problem here is you think you're rational.
As a wise man once said, if you think you're free, you'll never escape.
The truth is that this is all beside the point, because you're assuming the wrong limiting factor. The reality is that it is very hard to improve something that is already really good or really complicated.
Consider, for instance, a computer program. It would probably be possible to make, say, Starcraft 2 run 20% more efficiently. But how HARD would it be to actually do that?
Making things work vastly better is frequently quite difficult, and the more complicated a thing is, the harder it is to do that.
In other words, increasing intelligence is likely to actually suffer diminishing returns rather than accelerating returns, because every iteration is that much harder than the last one.
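To make that intuition concrete, here is a minimal toy sketch (my own illustration, not anything from the post or the thread): the same self-improvement loop run under two assumptions, one where each round of improvement costs the same effort, and one where the effort needed for the next gain grows with how capable, and how complicated, the system already is. The quadratic cost and the 10% step size are arbitrary choices made only to show the shape of the two curves: the first run compounds into an explosion, the second slows to a crawl.

    def simulate(rounds, effort_per_round, rising_difficulty):
        capability = 1.0
        for _ in range(rounds):
            # Effort needed for the next 10% gain: constant, or growing with
            # how capable (and complicated) the system already is.
            cost = capability ** 2 if rising_difficulty else 1.0
            # A fixed effort budget buys only part of that gain once the cost
            # exceeds the budget.
            capability *= 1.0 + 0.10 * min(1.0, effort_per_round / cost)
        return capability

    print("constant difficulty:", round(simulate(100, 1.0, False), 1))  # explodes (~13780x)
    print("rising difficulty:  ", round(simulate(100, 1.0, True), 1))   # crawls (~4.6x)

Nothing here proves which assumption describes real minds or software; it only shows that whether you get an explosion or a plateau depends entirely on how fast the difficulty rises with each iteration.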
What if there were a drug, or some other external method, that could strengthen the plasticity of the brain?
My guess is the brain evolved to be only as plastic as it is because that creates more stability in people, which helps (or helped) them survive. For most people, taking a drug that increased their plasticity would be a tossup: it could lead to disaster (impulsive, irrational, angry behavior could take over) or to a great "betterness" (increased rationality, compassion, etc.).

I would imagine that if such a drug or external method were created, its creators would be very careful about who they gave it to. Let's look at what might happen if a benevolent government controlled this drug. Perhaps they would look for the following traits (the ones I believe are imperative for a happy, meaningful life): compassion, self-discipline, the ability to face fear, a healthy response to failure, and a mind that uses logic and evidence to reach its conclusions. They would probably first send this person through rigorous training to emphasize these positive traits: I am imagining a completely personalized education designed by the greatest educators, scientists, and spiritual teachers. If they then gave this person the drug, I would expect him or her to be able to quickly strengthen these traits, which would then allow the traits to be strengthened further still, and the cycle would go on. This person could become incredibly powerful in a positive way, and could also help us quickly reach a better understanding of how to improve the process for the next person, maybe eventually arriving at a true theory of betterness.
This could all go badly as well: if the drug got into the wrong hands, it could produce a manipulative, scary person.