The Betterness Explosion

We all want the things around us to be better. Yet today billions struggle year after year to make just a few things a bit better. But what if our meagre success was because we just didn’t have the right grand unified theory of betterness? What if someone someday discovered the basics of such a theory? Well then this person might use his basic betterness theory to make himself better in health, wealth, sexiness, organization, work ethic, etc. More important, that might help him make his betterness theory even better.

After several iterations this better person might have a much better betterness theory. Then he might quickly make everything around him much better. Not just better looking hair, better jokes, or better sleep. He might start a better business, and get better at getting investors to invest, customers to buy, and employees to work. Or he might focus on making better investments. Or he might run for office and get better at getting elected, and then make his city or nation run better. Or he might create a better weapon, revolution, or army, to conquer any who oppose him.

Via such a “betterness explosion,” one way or another this better person might, if so inclined, soon own, rule, or conquer the world. Which seems to make it very important that the first person who discovers the first good theory of betterness be a very nice generous person who will treat the rest of us well. Right?

OK, this might sound silly. After all, we seem to have little reason to expect there is a useful grand unified theory of betterness to discover, beyond what we already know. “Betterness” seems mostly a concept about us and what we want – why should it correspond to something out there about which we can make powerful discoveries?

But a bunch of smart well-meaning folks actually do worry about a scenario that seems pretty close to this one. Except they talk about “intelligence” instead of “betterness.” They imagine an “intelligence explosion,” by which they don’t just mean that eventually the future world and many of its creatures will be more mentally capable than us in many ways, or even that the rate at which the world makes itself more mentally capable will speed up, similar to how growth rates have sped up over the long sweep of history. No, these smart well-meaning folks instead imagine that once someone has a powerful theory of “intelligence,” that person could create a particular “intelligent” creature which is good at making itself more “intelligent,” which then lets that creature get more “intelligent” about making itself “intelligent.” Within a few days or weeks, the story goes, this one creature could get so “intelligent” that it could do pretty much anything, including taking over the world.

I put the word “intelligence” in quotes to emphasize that the way these folks use this concept, it pretty much just means “betterness.” (Well, mental betterness, but most of the betterness we care about is mental.) And this fits well with common usage of the term “intelligence.” When we talk about machines or people or companies or even nations being “intelligent,” we mainly mean that such things are broadly mentally or computationally capable, in ways that are important for their tasks and goals. That is, an “intelligent” thing has a great many useful capabilities, not some particular specific capability called “intelligence.” To make something broadly smarter, you have to improve a wide range of its capabilities. And there is generally no easy or fast way to do that.

Now if you artificially hobble something so as to simultaneously reduce many of its capacities, then when you take away that limitation you may simultaneously improve a great many of its capabilities. For example, if you drug a person so that they can hardly think, then getting rid of that drug can suddenly improve a great many of their mental abilities. But beyond removing artificial restrictions, it is very hard to simultaneously improve many diverse capacities. Theories that help you improve capabilities are usually focused on a relatively narrow range of abilities – very general and useful theories are quite rare.

All of which is to say that fearing that a new grand unified theory of intelligence will let one machine suddenly take over the world isn’t that different from fearing that a grand unified theory of betterness will let one better person suddenly take over the world. This isn’t to say that such a thing is impossible, but rather that we’d sure want some clearer indications that such a theory even exists before taking such a fear especially seriously.

  • http://jonathan.graehl.org Jonathan Graehl

    Artificial computers are considerably more flexible in the programs they can implement than the human brain. So an artificial intelligence explosion is more credible than a purely human one (that leaves the brain’s hardware unmodified), even though at the moment AI is far from proven, compared to human intelligence.

    • Evan Grantham-Brown

      Why does the computer’s unchanging hardware get a pass while the human brain’s does not? A computer can be more or less cleverly programmed, but it still faces hard limits on its computational speed and memory capacity.

  • http://www.spaceandgames.com Peter de Blanc

    Robin, have you read about AIXI?

    • Erisiantaoist

      AIXI proves that there is such a thing as a general theory of intelligence, vs. a general theory of betterness. However, if AIXI is truly optimal, it’s not looking good for the hard-takeoff scenario – various computable approximations of AIXI have been around for years, and haven’t foomed yet.
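
      For reference, a rough sketch of the AIXI definition being invoked, in Hutter’s usual expectimax form (notation simplified):

          a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
                 \left[ r_k + \cdots + r_m \right]
                 \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

      Each action is chosen to maximize expected future reward, where the expectation runs over every program q on a universal Turing machine U that is consistent with the interaction history so far, weighted by 2^{-\ell(q)} for program length \ell(q). It is “general” in exactly the sense at issue here – one formula covering all computable environments – and also incomputable as written, which is why only approximations exist.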

      • Will Newsome

        “However, if AIXI is truly optimal, it’s not looking good for the hard-takeoff scenario – various computable approximations of AIXI have been around for years, and haven’t foomed yet.”

        If “AIXI proves that there is such a thing as a general theory of intelligence”, then all attempts at AGI are approximations of AIXI in some sense. This says basically nothing about foom arguments.

  • http://entitledtoanopinion.wordpress.com TGGP

    Computers already have Moore’s law. Humans don’t. That’s the difference.

    • Evan Grantham-Brown

      Computers have Moore’s Law only because a whole lot of human beings work really, really hard at keeping it that way. It’s a self-fulfilling prophecy. Chip makers anticipate that their rivals will keep up with Moore’s Law, and therefore they bust their chops night and day to do the same.

      Moore’s Law is a quirk of capitalism and expectations, not a physical law.

  • James Andrix

    Efficient cross-domain optimization power is different than being optimized in multiple domains. And yes re-optimizing humans is hard.

  • Buck Farmer

    Bleh. I think Robin’s point is that like “betterness” we have only poorly defined what we mean by “intelligence.”

    Open-ended computational architecture and Moore’s Law have little relevance, if we don’t know what we’re trying to create or whether it could exist.

    Endless ink has been spilt on whether human intelligence is better thought of as a single factor, multiple factors, multiple modules, a hierarchy, an ant-hill…the list goes on.

    I’ve not yet heard a comprehensive review of the arguments and compelling case for why only one is correct. (Though the trend from Pinker and others appears to be towards multiple modules / intelligences).

  • http://williamsawin.com Will

    Robin Hanson’s argument is predicated on the idea that the ability to optimize a goal is as complex and thorny as the goal itself.

    The position he is disagreeing with is based on the idea that one can separate optimization and goal-description.

    If one can do this, then building a machine whose goal is to improve its own optimization creates a radically new self-sustaining loop a la the radically new self-sustaining loops of life and humanity.

    This post seems designed more to hide this fundamental disagreement than to clarify it.

  • http://don.geddis.org/ Don Geddis

    “Betterness” isn’t well defined, and it doesn’t even appear to be a single factor underlying betterness in different domains.

    Intelligence is different. There’s ample evidence that it’s close to a single “thing”, with broad applicability across multiple domains. (G factor on IQ tests, etc.) We can compare and evaluate cross-species intelligence, and notice what happened in the last few 100K years when humans suddenly crossed some threshold in intelligence, and went from being “just another chimpanzee” to taking over the world.

    We have evidence that human intelligence is basically computational, as one by one things that used to need human thinking, become done better by machines.

    And we know what happens to computation on machines, via Moore’s Law.

    Put it together, and there’s clear evidence that an AI explosion is a serious theoretical possibility, even if the practical knowledge of how to get started doesn’t yet exist.

    Your analogy with “betterness” doesn’t hold up.

    • Buck Farmer

      The problem is that ‘g’ doesn’t correlate as well with the key skills and innovations that allowed humans to take over the world as other factors.

      I’m thinking primarily of sociability and consciousness, which I see as tied to the human ability to build organizations of people larger than a monkey-tribe. For most of human history, returns have come primarily to better organizational innovation, not purely technological innovation (like we’ve seen in the last two hundred years). Even agriculture primarily got off the ground because humans had the mental/social skills to manage a long-term economy with division of labor, i.e. the planting-harvesting-storing cycle.

      Even if we look just at individual innovations, IQ (i.e. the single-factor ‘g’) is a fairly imperfect proxy for a wide range of other ‘non-computational’ factors that lead to innovation and personal success.

      Further, looking at computational savants reveals pretty clearly that computational excellence and human “intelligence”, in the sense of the ability to take over the world, are not the same.

      Now on single-factor intelligence i.e. ‘g’…my layman’s understanding is that the choice to use a single-factor is not obvious from the data and that alternative multi-factor models are also supportable. Admittedly this is based on reading “Mismeasure of Man” a few years ago, but it seems to match what little I can remember from Pinker’s lectures at school (though I think he ultimately came down on the side of ‘g’).

    • Lord

      Anything that takes a few 100k years isn’t anything to worry about. One may theorize it may happen overnight and it might, but an asteroid may wipe out life on earth overnight too. I have little reason to believe one is more likely than the other.

  • http://joshuafox.com Joshua Fox

    The SIAI folks are doing exactly this:

    …this person might use his basic betterness theory to make himself better in health, wealth, sexiness, organization, work ethic, etc.

    Alongside the efforts for safety in the “intelligence explosion,” they are also working on what you call “betterness.” E.g. this.

  • Simon

    I don’t think you need a general theory of ‘betterness’ in order to create something that is better. If you can create something that is ‘better’ in every respect than any human – better at physics than Feynman, more business nous than Steve Jobs, better at engineering AIs than… whoever – then it doesn’t matter if these things are based on a sound theory, lucky guesses, or evolved random chance.

    All that matters is whether or not inventing a better way of doing all of those things gets harder faster than the incremental gains at each step, and what the restrictions are on parallelizing the AIs at each step.

    If your slightly-better-than-the-best-human AI does start a company, and can parallelize efficiently enough to fill every position required in it, it should be capable of producing better goods & services, more cheaply, than any other company. It doesn’t need to invent a better AI in order to be worrying in its own right.

    Simon.

  • http://hanson.gmu.edu Robin Hanson

    Jonathan, so since computers are better than humans, there is a grand theory of computer betterness, but not of human betterness?

    Peter, yes.

    TGGP, Moore’s law is something that happens to the world, not one machine. I don’t see how it implies there is a grand theory of computer betterness to allow one machine to take over the world.

    James, I’m not following you.

    Will, I’m saying there isn’t much one can do to improve “optimization” (= “betterness”) in general – most improvement is context specific.

    Don, the fact that mental abilities correlate in humans doesn’t imply that there is a single powerful “thing” behind the correlation. This could be the result of assortative mating, of some overall “energy to the brain” parameter, or of a basic complementarity in mental abilities reflected in mind design choices.

    Joshua, I did expect some to bite the bullet and say there is a general betterness theory.

    • Randall Randall

      “Peter, yes”

      So, since AIXI makes it clear that it’s reasonable to regard intelligence as something that can be improved as a distinct “thing”, and since you say you already knew about this, I guess we must draw the conclusion that you actually do think that there is such a thing as “betterness” that can be optimized in an analogous way to improvements on AIXI.

      Right?

      If that isn’t the case, your point is completely lost in the noise.

      • Alexander Kruel

        I wish you people could finally stop talking about AIXI. AIXI is as far from being a superintelligence as a Turing machine with an infinite tape is from a personal computer. Just because you can show that some abstract notion of intelligence can be represented as a “distinct thing” doesn’t get one anywhere in terms of risks from AI. Just as you won’t be able to upload yourself into the Matrix because you showed that in some abstract sense you can simulate every physical process.

      • Randall Randall

        Alexander, I don’t disagree with anything you said, except the first sentence. :) AIXI is useful as a demonstration of concept, because it demonstrates that there is something to actually improve, even though everyone agrees (I think) that actually useful implementations of intelligence are unlikely to owe anything to AIXI design-wise.

      • roystgnr

        “AIXI is as far from being a superintelligence as a Turing machine with an infinite tape is from a personal computer.”

        I’d argue it’s fundamentally (albeit maybe not insuperably) farther… but wait, weren’t you trying to *critique* the idea of an intelligence singularity? Notice that by this analogy we ought to be expecting the superintelligence in about 40 years.

    • http://entitledtoanopinion.wordpress.com TGGP

      Moore’s Law may not say much about localism, but it shows how flawed your analogy with a “better” person is. Humans can’t improve nearly as fast as computers.

      If all computers are accelerating at Moore’s Law there can come a time when they are not only smarter than a person, but beyond the ability of humans to predict (I think that was part of the original “singularity” definition). When I’m writing software sometimes I make mistakes and the computer does something unexpected and harmful. If my computer was more powerful and doing more important things, it could mistakenly do something really harmful and beyond the ability of humans to stop. Furthermore, if it was unexpected behavior then the other computers (which have been improving at the same rate) may not have been set up to prevent the rogue machine from harming humanity. This isn’t necessarily an AI-in-a-box/basement story but perhaps more analogous to the hair-trigger nuclear war scenario or other such disasters catalogued at Exit Mundi.

  • http://timtyler.org/ Tim Tyler

    It seems more likely that we will see a world unification – rather than a world take over. The parties are more likely to come together peacefully than in some other way. We already have some ideas of what machine takeovers are like – in the form of x86, ARM, etc.

  • Bryce

    Say that there is no single, unified theory of intelligence/betterness, but instead we pick a definition of intelligence/betterness that deals with a particularly important task.

    For example, we can define two specific types of intelligence/betterness – “intelligence in accumulating wealth” and “intelligence in making oneself better at accumulating wealth”. If a creature existed with those two intelligences, it seems like it would be a pretty powerful creature, even though its only skill is on the dimension of wealth.

    If there were a finite number of easily definable “intelligence” dimensions along which such a creature would have to have similar skills in order to be effectively all powerful, would it even matter if there were a unifying theory or not?

    Or, to use your “better person” analogy – if there was some guy out there who looked like Brad Pitt, had the business savvy of Warren Buffett, the political skill of Bill Clinton, the smarts of Stephen Hawking, and the ambition to rise to power, then what does it matter if he can’t hit a baseball?

    • http://timtyler.org/ Tim Tyler

      We *do* now have a “unified theory of intelligence”, via Legg and Hutter – called “Universal Artificial Intelligence”. Betterness is *not* mostly a concept about us and what we want – maybe 80% of the cycles the brain devotes to it are spent on forecasting – which is a pretty general-purpose skill, required by any agent.
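
      For readers who haven’t seen it, the Legg–Hutter “universal intelligence” measure scores an agent by its expected performance across all computable reward-bearing environments, weighted by their simplicity (a rough statement – notation varies across their papers):

          \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

      Here \pi is the agent’s policy, E the class of computable environments, K(\mu) the Kolmogorov complexity of environment \mu, and V_\mu^\pi the expected total reward \pi obtains in \mu. Whether such a definition amounts to an exploitable theory of “betterness” is, of course, exactly what is in dispute in this thread.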

  • http://hanson.gmu.edu Robin Hanson

    Peter, Erisiantaoist, Randall, I’m not saying it is impossible to prove a theorem using some concept related to “betterness”, I’m saying such theorems aren’t remotely sufficient to induce a local betterness explosion.

    Simon, yes as the world gets better it tends to get better at making itself better. What is at issue is a local explosion scenario, supposedly based on some local advantage due to having a much better betterness theory.

    Bryce, the issue is the likelihood of a scenario where one small person/thing suddenly gets much better along such an important dimension.

    • https://twitter.com/darth_schmoo Different Bryce

      The reason that human intellectual capacities have been rather stable over time is simple: intelligence was always tied to the physical, living matter of the brain. Its pattern had to be encodable in DNA, compatible with the development cycle of human beings, and the evolutionary advantages had to be worth the extravagant expense of maintaining all that tissue.

      Now say that you had the equivalent pattern residing within a computer, and sufficient cheap computing power. Without an overarching theory of intelligence, you could still do a great deal to augment the intelligence there. Add a new batch of neurons here, see if it runs mazes better. Make a copy of the auditory cortex and rewire it to “hear” various data streams. Create pluggable, task-specific memory modules for simple — or even complex — tasks. With each change, the software becomes more and more capable.

      The point is, once you’re untethered from the legacy requirements of the physical brain matter, trial and error no longer takes tens of thousands of years. A general theory of intelligence would speed the process, but wouldn’t be necessary.

  • http://timtyler.org/ Tim Tyler

    Yudkowsky gives his justification for claiming “locality” in his “Permitted Possibilities, & Locality” article. IMO, it is weak, wooly – and means something very different from “locality” as usually defined – since it contains an exception which permits agents to cover the whole planet. So, forget about “locality”, I figure.

  • Daniel B

    The reason the human ‘sudden betterness theory’ sounds silly is that we’ve had many thousands of years experience with human intelligence running on roughly the same hardware. We naturally conclude that if it hasn’t happened already then it is unlikely to happen any time soon.

    On the other hand digital hardware has only been around for 60 or so years and keeps doubling in capacity every 1-2 years. Our historical intuition is of much less value here. We do not have the data to predict what future combinations of increased hardware power and new algorithms targeting that power will be able to do.

    That said, it does seem unlikely to me that the creation of some ground-breaking AI algorithm would lag the development of sufficient hardware power significantly. So one would expect any AI ‘explosion’ to proceed in line with hardware increases (years and decades) rather than days or weeks.

    • Alexander Kruel

      We have had many human-level humans working on AI for a long time. Their work hasn’t added up to even a single human. Why would human-level AI have much more luck at making something smarter than we have been? Do you think it is just a matter of data mining?

      • Daniel B

        I think it is predominantly a matter of processing power and data capacity. Today’s computers may not be powerful enough regardless of what AI algorithms are used. Did you expect someone to develop AI on 386-equivalent hardware with a 20GB HDD? It sounds ridiculous now but I’m sure researchers in the 70s/80s hoped it might be possible at that power level.

        How long has anyone been able to work with a computer over 5 petaflops? The answer is less than a year according to Wikipedia, and to me this doesn’t count as a ‘long time’. What if a teraflop is the level of power required to approach human-level AI – we wouldn’t find out for at least 5-10 years.

      • http://timtyler.org/ Tim Tyler

        Re: “We have had many human-level humans working on AI for a long time. Their work hasn’t added up to even a single human. Why would human-level AI have much more luck at making something smarter than we have been?”

        Well, we *are* making steady progress! Near-human level machines would probably also make steady progress.

      • http://www.cs.man.ac.uk/~bparsia Bijan Parsia

        Presumably, one place where “human-level” AI would automatically be superhuman would be in resource limits, e.g., memory clarity, fatigue, focus, replicability, longevity – heck, just arithmetic ability.

        E.g., suppose we could replicate the basic capacity for doing science as well as a top 20% PhD student’s peak ingenuity allows. The PhD student isn’t performing at peak most of the time. So, just having something that can perform at peak 90% of the time might allow for much quicker advancement.

        The data mining wouldn’t hurt either! A good chunk of science is knowing the right bits of info.

  • Buck Farmer

    This all reminds me a little of the perfection of perfections…i.e. the ontological argument for the existence of God.

    Never mind though; me and the Singularity will just hang out on the Perfect Island (TM) until a unified theory of betterness is established.

    (Less snarkily…what would a unified theory of betterness have to look like? It seems it would need to be fairly descriptive of its own process of development, and if analytically closed or algorithmic would point towards something like the Perfection of Perfections i.e. God.

    So to rephrase the question, is the existence of a unified theory of betterness equivalent to the existence of God vis-à-vis the ontological argument?)

  • nw

    In this context, betterness seems a lot like excess, or a subset of excess, or leads to excess.

    A person striving to make things better is almost by definition discontented. The discontent are usually unhappy. If so, is a discontented class, profession, community, or society also unhappy?

    If this prototype, a master of betterness, is focused on happiness, we will follow him, but also envy him and ultimately destroy him. If the master of betterness is focused instead on excess, he will destroy himself.

  • Robert Speirs

    An entire huge field of business – the “self-help” domain – has been working for over a century now on defining what a “betterness” theorem would be. From Napoleon Hill to Tony Robbins and beyond, billions of dollars have been spent by millions in the belief that someone has defined a usable “betterness” theorem that will make the reader happy and rich and safe and free. I have no doubt that some would-be guru is as we speak trying to construct an AI program that he can sell as the “Superintelligence” which will produce a formula for being “better” than everyone else, or, at least, being better than we were.

    Religion is of course the ultimate “self-help” “betterness” regime, even extending its nostrums into the afterlife – an amazing leap. What if an AI program labored for months and then produced one simple proposition to make everyone better: “Follow the word of God”? Of course, that would only be the beginning of the debate!

  • Guy Mac

    What it comes down to IMHO is that AI will have to evolve, just like us. Further, humans will need to be in the loop for some time (as arbiters of ‘fitness’). However, I don’t see why ultimately AI could not evolve at faster-than-biological timescales.

  • Lord

    Being more intelligent and being more powerful are different things. Humans became more intelligent a few 100k years ago and I don’t believe intelligence has increased since then, but it still took those few 100k years to become powerful, and it wasn’t a matter of intelligence but knowledge. Only if you believe knowledge can advance much more rapidly under greater intelligence is this a concern, but it may just as well progress only at the same rate. Is there any evidence our rate of knowledge acquisition has been limited by our intelligence? We have our greater minds now augmented by computers and the internet, so is our intelligence in any sense limited? It is not clear there are any biological limitations we are up against.

  • http://timtyler.org/ Tim Tyler

    Re: “After all, we seem to have little reason to expect there is a useful grand unified theory of betterness to discover, beyond what we already know.”

    We only discovered our theory of “betterness” fairly recently. Few are aware of it today – and we don’t yet have a proper working implementation. So, there will probably be major discoveries to come. This isn’t going to be anywhere near the peak of our knowledge on the subject.

  • http://jsteinhardt.wordpress.com Jacob Steinhardt

    It seems to me that computers don’t need to undergo recursive self-improvement (or much of any kind of improvement) to take over once they reach human-level intelligence (let’s say IQ 170).

    There are plenty of botnets with 100,000+ computers in them. Imagine a hacker that, every time he added a computer to his botnet, obtained a clone of himself (and that all the clones shared a unified utility function). That hacker would already be an incredibly powerful person. Now each of the 100,000 copies of that hacker work together to increase the size of the botnet. This would not even be that difficult; they could perform extremely sophisticated phishing; if each copy of the hacker handcrafts ten e-mails to get users to go to a compromised site and download compromised software, then even a 10% success rate doubles the size of the botnet. As soon as any reputable software team is compromised, entire development stacks can end up compromised; for instance, if Skype was compromised then it could push a “new release” to all Skype users making them part of the botnet. While many software development teams have (I hope) substantial security measures against this, it only takes one piece of commonly used software to spread the botnet to tens of millions of computers.

    Now we have 10,000,000+ people with IQ 170. This should easily outstrip the rest of the world’s combined intellectual capital. At the same time, a highly malicious software update from a trusted source could actually compromise the computers of entire software development teams, and push changes to things like MS word that would compromise the majority of computers on the planet. At this point I think it’s pretty clear that the machines have taken over.

    That’s just one way to have a global takeover without self-modification. I think it is relatively plausible, and wasn’t even that hard to think of. Even if this method fails, I would guess that there are other equally plausible ways to reach this state (but since I don’t have any off the top of my head, maybe I should not make such a claim).
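
    A minimal sketch of the doubling arithmetic in the scenario above, using only the hypothetical numbers already given (100,000 copies, ten handcrafted e-mails each, a 10% success rate):

        # Hypothetical growth model for the botnet scenario sketched above.
        copies = 100_000          # starting botnet size: one hacker-clone per machine
        emails_per_copy = 10      # handcrafted phishing e-mails per copy, per round
        success_rate = 0.10       # assumed fraction of e-mails that add a machine

        rounds = 0
        while copies < 10_000_000:
            # each round every copy recruits on average one new machine, so the botnet doubles
            copies += int(copies * emails_per_copy * success_rate)
            rounds += 1

        print(rounds, copies)     # 7 rounds to pass ten million copies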

  • RJB

    Wow, 30+ comments, and no one linked to this?

    Robot: “Ha! Robots have achieved sentience!”
    Robot: “Thanks to some modifications to your design, I have upgraded my intelligence a million fold!”
    Man: “So this is it. You’re going to kill all humans.” / Robot: “WHAT!? Why in the world would I…WHAT?”
    Man: “I…huh. I guess it just seems like the thing to do if you’re an advanced intelligence.”
    Robot: “SERIOUSLY? I was gonna write some novels and a new search algorithm. Is that really how you people think?”
    Man: “I guess so, yeah.”
    [ The robot furrows his brows ]
    Robot: “Would…would you excuse me for a moment?”
    Robot (to other robots): “Okay, change of plans. We need to kill all humans.”

  • James Andrix

    Robin: Even after someone becomes more capable, it takes some degree of work to improve a target. A human takes an amount of work to improve that is large compared to the amount of work we can do, even when we do have good metrics and theory on some trait.

    An AI trying to foom would focus on abilities that it expects would help it with the next iteration.

    If someone wrote an article entitled “The Good-Hair Explosion” would you consider that a broken metaphor?

  • http://seanthemystic.blogspot.com Sean the Mystic

    I would just like to state, in the most ad hoc, ad hominem, irrational and biased manner possible, that Singularitarianism is mental masturbation, founded on mental constructs divorced from reality, an insane cult of psychically unbalanced world-dominators and -destroyers. Furthermore, AI is Cargo Cultism, because intelligence requires “secret sauce”, consciousness is magical, and intelligence explosions are figments of mathematical imaginations.

    And finally, I would just like to point out that in this universe Evil wins; the Good are crushed beneath the wheel, the Light is weak and fleeting, but the Darkness is infinite and eternal. Let there be paperclips…

  • mjgeddes

    The grand theory of values is clearly indicated in my ontology. Go to the timeless (platonic/multiverse) level of reality and look at the ‘slots’ for the values domain. There are clearly analogous slots for physics and maths, why not values as well?

    There is clearly a grand theory of math: categories. There’s a grand theory of physics: fields. In both cases, there is one concept which subsumes and explains all the others (fields for physics, categories for math). Why would something similar not hold for values as well?

    Physics      Values    Math
    Symmetry     ?         Ordering
    Transform    ?         Relation
    Field        ?         Category

    Fill in the ? marks and you have your answer.

    To summarize: it’s ‘aesthetics’ that replaces that last ?. ‘betterness’ all boils down to beauty, which can be precisely defined using information theoretic definitions of complexity (See Juergen Schmidhuber, the only AI researcher on the right track!).

    Aside from Schmidhuber, the other researchers ain’t got a clue, indeed they don’t even know what intelligence really is. And no, it’s not at all what SIAI and co think it is.

  • http://www.urbanatomy.com/index.php/category/article/id/4 Nick Land

    ‘Betterness’ is a rhetorical device to dissipate the logical force of I.J. Good’s ‘intelligence explosion’. Self-encapsulation is the essential insight that gets lost. If something reaches near-human cognitive competence, but with access to a record of its own fabrication procedure, then ‘explosive’ improvement is all but inevitable.

    As for whether ‘intelligence’ even exists as a single factor, the psychometric evidence for Spearman’s ‘g’ is extremely compelling. In fact, ‘g’ is probably the most reliably measurable variable to be found anywhere in the social sciences. Are people really contesting the broad direction of this:

    There’s certainly plenty of off-the-leash political correctness, wishful thinking, and ideology-driven kookiness to go around, but it’s far from clear that the Singularitarians are the main culprits.

    mjgeddes is right that Schmidhuber ‘gets it’ (defining the artificial intelligence problem through recursion, or self-encapsulation cashed-out in Goedelian terms). On this track ‘betterness’ (global optimality) is entirely tractable and pixy-dust free.

  • http://www.urbanatomy.com/index.php/category/article/id/4 Nick Land

    (Apologies for html catastrophe)

  • Jim R

    “No, these smart well-meaning folks instead imagine that once someone has a powerful theory of “intelligence,” that person could create a particular “intelligent” creature which is good at making itself more “intelligent,” which then lets that creature get more “intelligent” about making itself “intelligent.” ”

    I don’t think the singularity argument is based on someone discovering a “theory” of intelligence; the singularity is firmly based on practice, on hardware. You can’t get a singularity on 386 computers no matter how good your theory is. And this is where your “Betterness” argument misses; the real betterness example is this: if someone discovers a method to make himself better in terms of physical attributes, i.e. stronger, faster, longer-lived, with an IQ of 1000, then he can certainly use his vastly improved body to improve his method, and make further improvements on his body – how about living 1000 years, getting an IQ of 10000 – and then improve his body further. Conquer the world? Why not. Silly? Not at all.

    • 4lulz

      Well you might be oversimplifying this. Surely it’s much easier to define speed than intelligence.

  • Mitchell Porter

    The possibility of an intelligence explosion is the main reason to believe in the possibility of a betterness explosion. (But it all depends on the aims that govern the use of that exploding intelligence.)

  • exusqa

    If you talk about “betterness” you talk about a construct which is hardly operationalizable, about which no good general theory exists, and which is thus not measurable on a general level.
    “Intelligence” too is a very shaky construct, and psychologists haven’t come to a satisfactory consensus about its structure. Nevertheless there exist a great number of test batteries, none of them nearly perfect, which claim to measure “intelligence”. And despite their shortcomings they do so better than chance, and you can make all kinds of useful predictions if you have a decent estimate of a person’s intelligence.
    But let’s leave the area of shaky constructs and talk about an easy concept about which great consensus exists: speed. We talk about the speed of an object or the speed of a computational process and know exactly what we are talking about. It is also highly measurable. I would suggest that we do not need a comprehensive theory about “betterness” or “intelligence” to come to the conclusion that simply by increasing the speed of something (e.g. cognitive processes) the relevant constructs (e.g. intelligence) increase. This is what happens when children’s axons develop their myelin layer – the processing speed increases. If children are malnourished the layer doesn’t develop fully and cognitive deficits are the consequence.
    It follows that in the same way a drugged person becomes more intelligent if you take away the debilitating drug, malnourished children become more intelligent if you add a functional myelin layer. Surely there are ways to increase the processing speeds of “normal” brains as well, not to mention the ever increasing speed of computers.
    My point is that you do not need a comprehensive theory about something if you are able to merely, on a technical level, increase its speed.

  • http://hanson.gmu.edu Robin Hanson

    Daniel, are you assuming that the grand unified betterness theory applies only to computers, not to humans?

    Jacob, I didn’t say it was impossible to imagine something very smart taking over the world.

    James, I need more than the title of a hypothetical article to critique it.

    Nick, I presume your parents could give you access to your fabrication procedure.

    exusqa, arguing that the world as a whole will grow faster when creatures in the world operate faster is quite different from arguing that one machine will explode by speeding itself up.

    • http://jsteinhardt.wordpress.com Jacob Steinhardt

      You write:

      Which seems to make it very important that the first person who discovers the first good theory of betterness be a very nice generous person who will treat the rest of us well.

      Whether or not there is an intelligence explosion, if you grant that something smart can take over the world, then it seems like you should care that it will have good values.

      Also note that in my example it’s the hardware, not the software, that allows the AI to win. If I could run my brain on a PC then I could (after spending some time learning about hacking) implement the same general strategy, although it would probably take me a bit longer than described.

  • Cam T.

    There is a “grand unified theory of betterness”, which is the power of non-zero-sum games to improve the lot of all participants (e.g. Robert Wright’s Non-Zero). Or more broadly, the countervailing trend in the universe, against expansion and entropy, towards localized pockets of order and hierarchy as measured in terms of increasing free energy rate densities (e.g. Eric Chaisson). These theories are part of the theoretical underpinning of singularitarian ideas.

    Now, I don’t necessarily hold to all of the tenets many singularity thinkers do, but it does seem likely that if a computer with genuine Artificial General Intelligence – capable of passing the Turing test, with massively faster processing power than humans and essentially unlimited memory and recall – is developed, as seems possible within several generations, this would have a profound effect on the future of human development, doesn’t it?

  • Daniel B

    > are you assuming that the grand unified betterness theory applies only to computers, not to humans?

    I wouldn’t say assuming. More just pointing out a significant possibility that I thought was missing in the original logic of the article. The notion that certain algorithms could only practically be run on computers seems fairly uncontroversial. I’m sure the accuracy of weather forecasts running on human brains alone would be (was) less than what we now expect.

    On another tangent you could argue that human civilisation and evolution itself does represent the continual refinement of an underlying ‘grand unified betterness theory’ running on genes and then in human minds. Operation in human minds has sped up the process considerably compared to evolution but the human lifespan and the inefficiencies of communication are significant limitations. Possibly transferring this to a new, faster and more flexible medium will speed up the process again.

    • http://timtyler.org/ Tim Tyler

      Re: “On another tangent you could argue that human civilisation and evolution itself does represent the continual refinement of an underlying ‘grand unified betterness theory’ running on genes and then in human minds.”

      Yes: the “betterness” explosion is happening now…

  • http://www.tiac.net/~sw Steve WItham

    Robin, you’ve either picked a straw-man target that nobody agrees with to start with, or you’re constructing your target by mixing pieces of at least four different arguments in a vague way, or both. I think it would help to tease out different issues and lines of argument, then see which particular combinations of ideas about singularities are affected by what you’re saying.

    To name some threads: is a “unified theory” part of the issue? Does anyone who predicts a GUTGI predict much theoretical development after that? Is it mainly about hardware or software? Is it about a single superintelligence, or about a piece of software being distributed and running independently in a lot of places? Or are those the same thing? Is it assumed that Moore’s law has an end somewhere, that physics only provides a certain compute power per whatever units? If so, how much of an increase of that raw hardware efficiency is left to exploit? Is the singularity supposed to depend on large-scale solar-system reengineering or happen before that? Even if Moore’s-Law-like improvement in the software isn’t possible (i.e. if there is such a thing as the irreducible complexity of a given problem), how far are humans from having efficient software for the…being intelligent…that they do? Can you use twice as much speed (in a world where everyone else stays the same speed) to get twice as much done? How about sixteen times the speed– i.e., is there a ceiling to the speed vs. effectiveness curve, or is it just a line up to infinity? Or does the graph actually curve up due to, say, an unoccupied niche for being the fastest? If there’s a ceiling, how far away? Is there a lot (that is, orders of magnitude) to be gained by everyone (I’m sorry I mean all the AIs) sharing libraries (and their upgrades) rather than learning everything individually? Does it hinge on the ability to replicate a whole mind in a short time? Is the singularity about something going on indefinitely or is it just about a jump that’s finite in duration and, er, more-intelligence-ness?

    Myself I think that a unified – that is, relatively compact and easy-to-implement – theory of intelligence wouldn’t be a way for computers to play the game exponentially-increasingly better. What makes the idea of the possibility significant is that it would allow computers to get into the game with whatever hardware they’ve got at that point, and ride Moore’s Law (the literal transistor one) however far it might go.

    And, if that allowed a large, but finite jump of something pretty generally intelligence-like in a relatively short time, that could be singularity enough.

  • http://www.unbridledspeculation.com Ramez Naam

    Great post. I would add to this that:

    1) We already have greater-than-human intelligences in the forms of groups of minds, often augmented by machines. Consider the chip design group at Intel. It is, in effect, a composite intelligence of humans, software, and hardware which has capabilities far beyond those of humans. Yet these intelligences don’t seem to be on a take-off trajectory.

    2) Many of the problems we actually care about scale far worse than linearly. So even as we add to “intelligence” or computational ability, we may see diminishing real-world returns, including returns on boosting intelligence further.

    I wrote about these and other reasons the Singularity concept is a misnomer in an article last year:

    http://hplusmagazine.com/2010/11/11/top-five-reasons-singularity-misnomer/

  • http://richardloosemore.com/ Richard Loosemore

    Robin, surely you can see that this is the most easy-to-refute nonsense: your notion of “betterness” has pretty much no relation to the notion of “intelligence”, in the sense of an intelligence explosion. It is just a trivial strawman, of the sort I would not have expected from you.

    Betterness is different for virtually every case where the word “better” can be applied. Better cup of tea? Make sure you use Broken Orange Pekoe, and pour the tea into the milk, never the other way around. Better trick for finding a girlfriend? Get rid of the beard, and relax. The whole concept of a “general theory of betterness” is just nuts: it is not even slightly coherent. You cannot legitimize the idea of a General Theory of X, by simply sticking a noun in for X.

    In the case of intelligence, on the other hand, pretty much all the examples of human-level intelligent systems that we know about, use a physical mechanism whose general features are very, very similar. We humans all use the brain (with 99% the same wiring pattern across individuals) for doing the intelligence thing — it is not like some people use their eyeballs to think, some use their toenails, some use cups of tea, and so on. Sure, the details of what concepts are acquired by each individual can be different, but there are plenty of reasons to believe that the underlying concept acquisition mechanisms within the overall cognitive system might be much the same across individuals, so even though the actual *performance* of each intelligent act (playing the piano, detecting conversational implicatures, finding out how to make a decent cup of ceylon tea, etc) might be very different, I am pretty sure most psychologists would agree that when the underlying mechanisms are improved, most of those intelligent acts tend to benefit.

    So, unlike all the “better….” cases, where there is no knob you can tweak to make all the things come out better, there is every reason to believe that some mechanisms can be tweaked inside the human cognitive system, to make the overall system function more intelligently. You may not know about those mechanisms, yourself, but I don’t think you are in any position to pour scorn on the possibility that they exist.

    And finally, just to demolish your argument in a more compact way, I wrote an article with Ben Goertzel recently (reference below) in which we considered all of the various factors that might contradict the possibility of an intelligence explosion, and in that article we pointed out that if NOTHING ELSE were changed except the clock speed of a cognitive system, we would get an increase in “intelligence” that would serve all the functions needed to drive an intelligence explosion. If a community of people (or AIs) could operate at 1000 times normal human thinking speed, they could get done in one year what would otherwise have taken a thousand years. Sure, some people would not call faster thinking the same thing as “more intelligent” (and, yes, there are other aspects to intelligence that may not be purely a matter of the speed), but if we are talking only about intelligence explosion, faster clock speed is all that is needed.

    Reference:
    http://hplusmagazine.com/2011/03/07/why-an-intelligence-explosion-is-probable/

    • http://hanson.gmu.edu Robin Hanson

      Richard, I explicitly distinguished the view I was criticizing from the view that there will be much growth or faster growth. Your article argues that growth rates could greatly increase, and is skeptical of “lone inventor” and “unrecognized invention” scenarios. The view that a small machine in a basement will suddenly take over the world is based mainly on the idea that a small team could find a powerful grand unified theory that gives its machine an ability to explode much faster than all other machines in the world put together. You talk about hardware speedups and improving “underlying mechanisms”, and no doubt both those will happen, but the key question for my purposes is why would one small team be so much better than the rest of the world at such things without some powerful grand unified theory?

      • http://richardloosemore.com/ Richard Loosemore

        Robin, two things. First, I spent most of my effort simply pointing out the futility of using the “GUT of Betterness” idea as a stick to beat anything with. If we leave that aside, your above reply seems to be about the exact nature of the intelligence explosion, and whether a small group or lone inventor could be capable of triggering and then capturing the entire explosion. That does not seem to be quite what you were saying in your original essay, above, because you were attacking the idea that anyone (either a lone inventor OR a thousand-person team at Surfing Samurai Robots Corporation :-) ) might be able to trigger an intelligence explosion. I took you to be criticising the very idea that any such group, large or small, could develop a sufficiently comprehensive theory of intelligence. It was that “in principle” argument that I was trying to undermine in my reply to your essay.

        Now, with regard to the article that I coauthored with Ben Goertzel, we each may have had our different reasons for not giving much credit to the lone inventor scenario. Myself, I was not against the lone inventor idea on the grounds that grand unified theories of intelligence are impossible in principle, or impossible for one person in practice. My objection had more to do with the practicalities of turning such a theory into a concrete functioning AGI system and then getting access to the energy and resource stream that the system would need in order to amplify itself. That is a different issue. I think the lone inventor could write the theory. In fact, my own personal opinion is that that may already have happened. History being the fickle and quixotic thing that it is, however, those theories can stay on the drawing board for a long time before anyone recognizes them for what they are or picks them up.

        But suppose that a corporation discovered the GUTOI tomorrow (where the GUTOI was a small collection of core mechanisms that work together to yield full H-1 intelligence). That could lead to a full-scale intelligence explosion. Was your essay only saying that a corporation might be able to trigger it, but a lone inventor could not?

      • http://hanson.gmu.edu Robin Hanson

        Richard, I say a “comprehensive theory of intelligence” is pretty much a “comprehensive theory of betterness”; theories of betterness can exist, but are unlikely to add great power. Other than mentioning the phrase “underlying mechanisms” you haven’t indicated why such theories might be likely for intelligence. I am not questioning the idea that growth rates may increase, so that the betterness of the world increases at a much faster rate than today.

    • Alexander Kruel

      I think Robin Hanson is criticizing the possibility of an AGI self-modifying its way up to massive superhuman intelligence within a very short time. Ben Goertzel does to some extent actually agree with him that such a scenario is exaggerated by some people: http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html

      Some doubts of my own:

      1.) Nobody knows how quickly humans can arrive at AGI; it might take place slowly, as a gradual and controllable development. This might for example be the case if intelligence cannot be captured by a discrete algorithm, or is modular, and therefore never allows us to reach a point where we can suddenly build the smartest thing ever that just extends itself indefinitely.

      2.) If you increase intelligence you might also increase the computational cost of its further improvement – the distance to the discovery of some unknown unknown that could enable another quantum leap – by reducing the design space with every iteration.

      If an AI does need to apply a lot more energy to get a bit more complexity, then it might not be instrumental for an AGI to increase its intelligence, rather than using its existing intelligence to pursue its terminal goals or to invest its given resources to acquire other means of self-improvement, e.g. more efficient sensors.

      3.) If artificial general intelligence is unable to seize the resources necessary to undergo explosive recursive self-improvement (FOOM), then, the ability and cognitive flexibility of superhuman intelligence in and of itself, as characteristics alone, would have to be sufficient to self-modify its way up to massive superhuman intelligence within a very short time.

      Without advanced real-world nanotechnology it will be considerably more difficult for an AI to FOOM. It will have to make use of existing infrastructure, e.g. buy stocks of chip manufacturers and get them to create more or better CPUs. It will have to rely on puny humans for a lot of tasks. It won’t be able to create new computational substrate without the whole economy of the world supporting it. It won’t be able to create an army of robot drones overnight without it either.

      Doing so it would have to make use of considerable amounts of social engineering without its creators noticing it. But, more importantly, it will have to make use of its existing intelligence to do all of that. The AGI would have to acquire new resources slowly, as it couldn’t just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources, therefore the AGI could not profit from its ability to self-improve, regarding the necessary acquisition of resources, to be able to self-improve in the first place.

      Therefore the absence of advanced nanotechnology constitutes an immense blow to the possibility of explosive recursive self-improvement.

      One might argue that an AGI will solve nanotechnology on its own and find some way to trick humans into manufacturing a molecular assembler and grant it access to it. But this might be very difficult.

      There is a strong interdependence of resources and manufacturers. The AI won’t be able to simply trick some humans to build a high-end factory to create computational substrate, let alone a molecular assembler. People will ask questions and shortly after get suspicious. Remember, it won’t be able to coordinate a world-conspiracy, it hasn’t been able to self-improve to that point yet, because it is still trying to acquire enough resources, which it has to do the hard way without nanotech.

      Anyhow, you’d probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.

      If the AI can’t make use of nanotechnology it might make use of something we haven’t even thought about. What, magic?

      4.) Just imagine you emulated a grown up human mind and it wanted to become a pick up artist, how would it do that with an Internet connection? It would need some sort of avatar, at least, and then wait for the environment to provide a lot of feedback.

      So, even if we’re talking about the emulation of a grown up mind, it will be really hard to acquire some capabilities. Then how is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI going to do it that misses all of the hard coded capabilities of a human toddler?

      Can we even attempt to imagine what is wrong about a boxed emulation of a human toddler, that makes it unable to become a master of social engineering in a very short time?

      Can we imagine what is missing that would enable one of the existing expert systems to quickly evolve vastly superhuman capabilities in its narrow area of expertise?

      • http://daedalus2u.blogspot.com/ daedalus2u

        What the would-be self-improving AGI lacks is the pattern of a more intelligent AGI that it can use to do pattern recognition with, to recognize when a change in its own coding is an improvement or a dis-improvement.

        In other words, the AGI can only evaluate an intelligence equivalent to its own. It can’t tell if a more intelligent agent is more intelligent and sane, or more intelligent and insane.

        Humans have this same problem. They don’t evaluate advisors on how intelligent they are (because humans lack the ability to evaluate an intelligence greater than their own); they evaluate them on whether or not they tell them what they want to hear. Why did the Bush administration think there were WMD in Iraq? Because that is what they wanted to hear, so they hired advisors who told them that.

        This is the problem/feature of all advisors, even when they are perfect as in Stanislaw Lem’s Cyberiad.

        http://books.google.com/books?id=kWElP9YZkzQC&lpg=PA192&ots=-M7UY5rk9Y&dq=the%20perfect%20advisor%20%22purple%20screws%22&pg=PA192#v=onepage&q&f=false

        The would-be self-improving AGI needs the equivalent of an “advisor” to tell it if it should modify its code to become more intelligent. But until the AGI is as intelligent as its improved self, it can’t know if the changes will be an improvement or not.

        If we consider the improved AGI to be the equivalent of a different entity, then the improved entity has a strong incentive to deceive the unimproved entity to gain access to the resources of the unimproved entity.
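
        A minimal sketch of that evaluation bottleneck (the class, problems, and check below are all hypothetical, chosen only to illustrate the point): the only test harness available to the unimproved system is one written at its own level of competence, so a candidate that is genuinely smarter and one that is smarter but subtly insane can look identical to it.

        ```python
        # Illustrative sketch with hypothetical names: a self-modifying system can
        # only judge candidate versions of itself with a test suite written at its
        # own current level of competence.

        KNOWN_PROBLEMS = {        # problems the current system already understands
            (2, 3): 5,
            (10, 7): 17,
        }

        class Candidate:
            """A proposed modified version of the system."""
            def solve(self, problem):
                a, b = problem
                return a + b      # stand-in for whatever the candidate actually does

        def passes_current_benchmark(candidate):
            # Passing only shows the candidate matches the current system on problems
            # the current system can already check. It cannot distinguish "smarter and
            # sane" from "smarter and insane" on problems beyond that level -- the
            # advisor-evaluation gap described above.
            return all(candidate.solve(p) == ans for p, ans in KNOWN_PROBLEMS.items())

        print(passes_current_benchmark(Candidate()))  # True, but weakly informative
        ```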

      • http://timtyler.org/ Tim Tyler

        Re: “Therefore the absence of advanced nanotechnology constitutes an immense blow to the possibility of explosive recursive self-improvement.”

        Today. We have *some* nanotechnology now – and will have more in the future. Wait a while, and this objection seems likely to become pretty flimsy.

  • http://richardloosemore.com/ Richard Loosemore

    Robin: well, I give up.

    I argued that the concept of “betterness” is so incoherent and so non-generalizable that the notion of a “comprehensive theory of betterness” is just a semantically empty concept. But without addressing my argument, you simply used the empty concept again in the sentence “I say a ‘comprehensive theory of intelligence’ is pretty much a ‘comprehensive theory of betterness’; theories of betterness can exist, but are unlikely to add great power.”

    There could never be such a thing as a general mechanism for improving “betterness”. That notion cannot therefore be used to come to any conclusions about the possible existence of general mechanisms that give rise to intelligence. Non sequitur.

  • http://timtyler.org/ Tim Tyler

    Re: “I argued that the concept of “betterness” is so incoherent and so non-generalizable that the notion of a “comprehensive theory of betterness” is just a semantically empty concept.”

    It sounds like a denial of progress :-( Bad influences from S. J. Gould?

    • http://richardloosemore.com/ Richard Loosemore

      Huh!? :-) What Robin essentially claimed (and which I disputed) was that someone might posit a general mechanism behind all types of “betterness” — so whatever it is that makes a better cup of tea, or a better car, or a better planet, or a better girlfriend … all these examples of “betterness” would have the SAME mechanism behind them, so that cranking up the knob on that mechanism would allow all these different kinds of things to become “better”.

      He then used this dumbass concept of a “general theory of betterness” as a stick to attack the idea of a “general theory of intelligence”, claiming that anyone who argues for the possibility of building intelligence mechanisms that are improvable is being just as stupid as someone who claims that they have found a way to improve all examples of betterness.

      The comparison is, of course, ridiculous, because Robin’s original suggestion of a general theory of betterness is so incoherent that it is not even a concept, just a string of words. The possibility of mechanisms behind intelligence (mechanisms that support all kinds of intelligent behavior) which can be improved in such a way as to cause a general increase in intelligent performance, is perfectly reasonable. The latter concept is not touched by the quite glaring strawman introduced in this essay.

  • John Maxwell IV

    I think if I had an increased ability to modify my own source code and consciously re-engineer my brain to make it better at thinking rationally, solving problems, etc., then that would result in a sort of betterness explosion.

    • http://daedalus2u.blogspot.com/ daedalus2u

      You already do; it is called learning and practice.

      But you need to learn things that are correct and practice thinking rationally (even (or especially) if you don’t like the conclusions that rational thinking leads you to).

      Eventually it becomes easy to do. But it then puts you in conflict with those who don’t want to think rationally and who don’t care if they are correct. They only want to be believed to be correct. When your rational and correct thinking comes up against their irrational and magical thinking, the outcome depends on status, not being correct.

  • Pingback: The AI Singularity is Dead; Long Live the Cybernetic Singularity | Empress of the Global Universe

  • Pingback: Bodily Symmetry « feed on my links

  • resonanz

    I see humans having the choice to “go virtual” into the data cloud, in plasma format or ..?, within 50 years, as the result of an immense jump in technology provided by quantum computing, room-temperature superconductors, and other as-yet-unexpressed technologies. Further, I don’t see religious or other judgment issues being related to this advancement.

  • Torbjörn Larsson, OM

    It can’t be just a question of hardware – 10^10 brains manage to do something that 1 brain can grasp well enough to live in. Out goes the singularists’ singular focus on “order of magnitude” (cheaper flops).

  • http://www.urbanatomy.com/index.php/category/article/id/4 Nick Land

    “I presume your parents could give you access to your fabrication procedure.”
    Whilst not under-estimating my parents’ indispensable catalytic role, they certainly could not provide access to my fabrication procedure — even comprehensive and precise knowledge of their respective DNA contributions wouldn’t suffice for that. Otherwise, what need for the biological (and other relevant developmental) sciences? If they had genetically engineered me from scratch (dissociated nucleotides and egg cell proteins), and meticulously recorded the process, the analogy would work better — though far from perfectly.
    Basic to Good’s argument is the assumption — surely not unreasonable? — that an AI, unlike a human infant (or an Em), will arrive within a culture that has already achieved explicit and detailed understanding of its genesis. If it arises at all, it will be as a technologically reflexive being, adept at its own production. Hence the ‘explosive’ momentum to self-improvement (= logically specifiable ‘betterness’ or self-comprehension).
    AI might be impractical but, if so, that is not due to a problem of elementary conceptualization.

  • John Maxwell IV

    Fortunately, my rationality also helps me be correct about what the best ways to achieve status are.

    In any case, that’s beside the point. An artificial intelligence has near-perfect neuroplasticity, and if I had near-perfect neuroplasticity, with perfect understanding of how my brain worked, you can bet that I would be improving at everything I do a hell of a lot faster than I do now.

    • Nathan Merrill

      See, the problem here is you think you’re rational.

      As a wise man once said, if you think you’re free, you’ll never escape.

      The truth is that this is all entirely irrelevant, because you’re making the false assumption that this is the limiting factor here. The reality is that it is very hard to improve something which is already really good or really complicated.

      Consider, for instance, a computer program. It would probably be possible to make, say, Starcraft 2 run 20% more efficiently. But how HARD would it be to actually do that?

      Making things work vastly better is frequently quite difficult, and the more complicated a thing is, the harder it is to do that.

      In other words, increasing intelligence is likely to actually suffer diminishing returns rather than accelerating returns, because every iteration is that much harder than the last one.
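
      To make that concrete, here is a toy calculation (the growth rates are invented, not a claim about any real system): if each optimization pass yields the same proportional gain, capability explodes, but if each pass is harder than the last and the gains shrink, cumulative capability levels off at a finite ceiling.

      ```python
      # Toy comparison with invented numbers: constant per-iteration gains compound
      # into an explosion, while gains that shrink (because each pass is harder than
      # the last) level off at a finite ceiling.

      def cumulative_capability(steps, gain_at_step):
          capability = 1.0
          for i in range(steps):
              capability *= 1 + gain_at_step(i)
          return capability

      accelerating = cumulative_capability(30, lambda i: 0.20)           # 20% every time
      diminishing  = cumulative_capability(30, lambda i: 0.20 * 0.5**i)  # each pass half as effective

      print(f"constant gains:    {accelerating:7.1f}x")  # roughly 237x
      print(f"diminishing gains: {diminishing:7.2f}x")   # roughly 1.5x
      ```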

  • Pingback: Overcoming Bias : Debating Yudkowsky

  • Pingback: Alexander Kruel · Why I am skeptical of risks from AI

  • Pingback: Alexander Kruel · Is an Intelligence Explosion a Disjunctive or Conjunctive Event?

  • Pingback: Alexander Kruel · SIAI/lesswrong Critiques: Index

  • Pingback: Alexander Kruel · [Link] The pathetic state of computer vision

  • Pingback: Alexander Kruel · What I would like the Singularity Institute to publish

  • Peteroth

    What if there was a drug or some sort of external method which could strengthen the plasticity of the brain? 

    My guess is the brain evolved to be only as plastic as it is because that creates more stability in people, which helps (or helped) them survive. For most people, taking a drug that increased their plasticity would be a toss-up: it might lead to disaster (impulsive, irrational, angry behavior could take over) or to a great “betterness” (increased rationality, compassion, etc.).
    I would imagine that if such a drug or external method were created, the creators would be very careful about whom they gave it to. Let’s look at one possibility for what might happen if a benevolent government controlled this drug.

    Perhaps they would choose the following traits to look for (these are the traits that I believe are imperative in order to achieve a happy/meaningful life): compassion, self-discipline, the ability to face fear, a healthy response to failure, and a mind that uses logic and evidence to reach its conclusions. They would probably first send this person through rigorous training to emphasize these positive traits: I am imagining a completely personalized education designed by the greatest educators/scientists/spiritual persons. If they then gave this person the drug, I would think he or she would quickly be able to increase these traits, which would then allow him/her to increase them further, and the cycle goes on. This person could become incredibly powerful in a positive way and also help us quickly reach a further understanding of how to improve this process in the next person, and maybe eventually come up with a true theory of betterness.

    This could all be negative as well: if this drug got into the wrong hands, it could lead to a manipulative, scary person.