Em- vs Non-Em- AGI Bet

Joshua Fox and I have agreed to a bet:

We, Robin Hanson and Joshua Fox, agree to bet on which kind of artificial general intelligence (AGI) will dominate first, once some kind of AGI dominates humans. If the AGI are closely based on or derived from emulations of human brains, Robin wins, otherwise Joshua wins. To be precise, we focus on the first point in time when more computing power (gate-operations-per-second) is (routinely, typically) controlled relatively-directly by non-biological human-level-or-higher general intelligence than by ordinary biological humans. (Human brains have gate-operation equivalents.)

If at that time more of that computing power is controlled by emulation-based AGI, Joshua owes Robin whatever $3000 invested today in S&P500-like funds is worth then. If more is controlled by AGI not closely based on emulations, Robin owes Joshua that amount. The bet is void if the terms of this bet make little sense then, such as if it becomes too hard to say if capable non-biological intelligence is general or human-level, if AGI is emulation-based, what devices contain computing power, or what devices control what other devices. But we intend to tolerate modest levels of ambiguity in such things.

[Added 16Aug:] To judge if “AGI are closely based on or derived from emulations of human brains,” judge which end of the following spectrum is closer to the actual outcome. The two ends are 1) an emulation of the specific cell connections in a particular human brain, and 2) general algorithms of the sort that typically appear in AI journals today.
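
For concreteness only (the bet text above is authoritative), here is a minimal sketch of how the index-linked stake would scale, assuming the stake simply tracks the value of an S&P-500-like fund between now and resolution:

```python
# Illustrative sketch only; the bet text above is authoritative.
def stake_at_resolution(initial_stake, fund_value_now, fund_value_then):
    """Amount the loser owes: the initial stake scaled by the fund's growth."""
    return initial_stake * (fund_value_then / fund_value_now)

# Hypothetical example: if an S&P-500-like fund triples before the bet resolves,
# the loser owes $9,000 on the $3,000 stake.
print(stake_at_resolution(3000, 100, 300))  # -> 9000.0
```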

We bet at even odds, but of course the main benefit of having more folks bet on such things is to discover market odds to match the willingness to bet on the two sides. Toward that end, who else will declare a willingness to take a side of this bet? At what odds and amount?

My reasoning is based mainly on the huge cost of creating new complex adapted systems from scratch when existing systems already embody great intricately-coordinated and adapted detail. In such cases there are huge gains from instead adapting existing systems, or from creating new frameworks that allow the transfer of most detail from old systems.

Consider, for example, complex adapted systems like bacteria, cities, languages, and legal codes. The more that such systems have accumulated detailed adaptations to the detail of other complex systems and environments, the less it makes sense to redesign them from scratch. The human mind is one of the most complex and intricately adapted systems we know, and our rich and powerful world economy is adapted in great detail to many details of those human minds. I thus expect a strong competitive advantage from new mind systems which can inherit most of that detail wholesale, instead of forcing the wholesale reinvention of substitutes.

Added 16Aug: Note that Joshua and I have agreed on a clarifying paragraph.

  • arch1

    1) I think that when you say “controlled..by” you *don’t* mean to include the sense in which I control my PC’s compute cycles. If so, you might want to say something like “is used to realize” instead.

    2) I’m still not clear on your definition of “dominates”. Are you saying that 1 brain ~ x GOPS by assumption, and AGI is considered to dominate when AGI GOPS > p(t)*x, where p= current human population? If so, then a) you should state x, b) regardless of x, this allows a hugely inefficient AGI to be considered dominant solely by virtue of squandering a prodigious number of GOPS, even if it only realizes one brain-equivalent of intelligence. Is that your intent?

    • Joshua Fox

      As Robin points out above, he and I have already pinned down our wording, but any clarifications or improved bets that you can propose could be interesting.

      • arch1

        Thanks Joshua. I suspect the rewording in 1) isn’t needed anyway, because you and Robin have the same interpretation.

        Re: 2), I’m not suggesting a rewording, just a clarification of the meaning of the current wording.

        In case the question was unclear, I’ll try again using a specific value of x. Let’s say you assume that the gate-operation equivalent of a human brain is 10^14 GOPS, and let’s KISS and assume a fixed human population of 10^10. Will you and Robin deem AGI to have become dominant when it first consumes 10^24 GOPS, even if that processing power only realizes the equivalent of a single human brain? Or will AGI only be deemed dominant when it has realized the equivalent of 10^10 human brains?

        Given the factor of 10 billion difference between these two criteria, it seems relevant to understanding the bet.
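
        To make the two readings concrete, here is a minimal sketch using my hypothetical figures above (these numbers are mine, not values from the bet text):

        ```python
        # Hypothetical figures for illustration only, not part of the bet text.
        BRAIN_GOPS = 1e14        # assumed gate-operations/sec equivalent of one human brain
        HUMAN_POPULATION = 1e10  # assumed fixed number of biological humans

        # Reading 1: AGI "dominates" once its raw computing power exceeds the
        # gate-operation equivalent of all biological human brains, however
        # inefficiently that power is used.
        raw_gops_threshold = BRAIN_GOPS * HUMAN_POPULATION  # 1e24 GOPS

        # Reading 2: AGI "dominates" only once it realizes the intelligence of
        # the whole human population, i.e. 1e10 brain-equivalents, regardless
        # of how many GOPS that takes.
        brain_equivalents_threshold = HUMAN_POPULATION  # 1e10 brain-equivalents

        print(f"Reading 1: {raw_gops_threshold:.0e} GOPS consumed by AGI")
        print(f"Reading 2: {brain_equivalents_threshold:.0e} human-brain-equivalents realized")
        ```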

  • Joe Teicher

    Good Bet Robin! It seems to me that your scenario has a much higher probability of resulting in a continuation of human institutions like the corporations in the S&P 500 than Joshua’s scenario. Therefore, the bet will be called off (or we will all be paperclips) more often when Joshua wins than when you win. The real odds could be in your favor even if Joshua’s scenario is more likely. That’s what I call edge!

  • Joe Teicher

    I don’t have a good sense of what the odds should be for this bet but I would love to bet either scenario occurring. Specifically, I would like to write an insurance contract against computers dominating humans. For as long as humans remain in control I want a monthly payment and then if computers ever take over I’ll pay out a lump sum. Anyone interested in taking the other side of that?

    • arch1

      …and for anyone concerned about money becoming worthless once computers take over, not to worry – I suspect Joe would be willing to insure you against that as well:-)

  • kevinsdick

    On general principles, it seems like the most likely outcome is that the intelligence “winner” will be a hybrid of some kind, and so the bet will be difficult to decide. There are far more combinations of emulated and algorithmic AI than either pure strategy. Moreover, it seems like the rapid self-improvement aspect will make it hard to disentangle what the “decisive” or “dominant” element was.

    Though perhaps you could get around this by making the bet dependent on the then-dominant intelligence deciding the question with its superior intellect. But then I think trying to predict the answer given by an entity much smarter than you should make your priors pretty diffuse – so not much help in departing from even odds.

  • http://CommonSenseAtheism.com lukeprog

    Congrats to both of you on working out terms for this bet; it’s no easy task, I know!

    The big problem, of course, is that it may be difficult to collect, or not very meaningful to collect when the time comes. Also, there’s a good chance you’ll both be dead before we hit WBE or de novo AGI, with cryonics having a small chance of working even if we get a relatively good outcome from WBE/AGI.

    So perhaps an apocalypse bet would be an improvement? Or, you could bet on something nearer, with a defined date, e.g. “In 2040, we will ask the following list of 20 (younger) people, whose rationality we currently respect fairly highly, whether they think WBE or de novo AGI is more likely to come first.” (Presumably, the situation will be significantly clearer in 2040 than it is now, you’ll both plausibly still be alive, and the normal economic order will probably still be in place.)

  • http://don.geddis.org/ Don Geddis

    I love the idea of bold public bets like this, and working out the exact terms is always difficult, so kudos to both Joshua and Robin for making the bet.

    My own personal view is that a designed AI will be much more important for future history (because of its ability to understand itself and thus self-improve) than ems. But Robin, over years of careful thought and blog posts, has convinced me that his scenario is probably more likely to happen first, even if it later gets overwhelmed by a designed AI.

    All that said, this bet is probably less interesting than hoped, because (1) it’s unlikely to be resolved during either of your lifetimes, and (2) when those conditions occur, your stakes will likely have long become essentially meaningless.

    Despite those problems, having a formal bet is better than not. So, well done!

  • Alexander Gabriel

    I’ve been saying AI, since my first intuition was the planes and birds analogy.

    It’s true things might change from “abstracting is easier” if scanning technology gets good enough.

    I also thought Steve Hsu shared a neat intuition on why AI might take a long time:

    http://infoproc.blogspot.co.uk/2008/06/singularity-ai-and-ieee.html

    Between these, I’d now allow at least a 5% chance of WBE winning. Maybe more.

    That said, some neuroscientists claim that simulation successes have been overblown and the models are inaccurate.

    http://www.nytimes.com/2013/03/19/science/bringing-a-virtual-brain-to-life.html?pagewanted=all&_r=0

    Plus the progress with C. elegans doesn’t seem very inspiring.

    http://www.minduploadingproject.org/blog/2012/10/the-connectome-debate-is-mapping-the-mind-of-a-worm-worth-it.html

    We apparently don’t have a functional simulation of anything, despite oodles of computer power.

    So I mostly do fall back to the planes and birds analogy.

    My guess is also that Henry Markram at the Human Brain Project does not qualify as heading a $1 billion project with a 1% chance of success, which was a price point you mentioned before. Actually, I would be interested if someone wants to give their odds on this project working in ten years as Markram aims. 0.1%?

    The big question for me though is why the “WBE v. AI” dispute would actually matter to any action we could take. There are so many unknowable details of each scenario that I don’t see what this bit of information helps with.

  • http://www.gwern.net/ gwern

    I think this needs a lot more definitions and examples. If SPAUN takes off, does Robin win? If a deep learning architecture is successful, does Fox win?

    • http://overcomingbias.com RobinHanson

      It would be more useful to propose specific alternative bet wordings, rather than to just declare this proposed wording insufficient.

      • http://www.gwern.net/ gwern

        Yeah, it’s too bad I didn’t give any examples or anything.

  • Rafal Smigrodzki

    Just to be clear, by emulations do you mean uploads of specific persons or devices inspired by the structure of human brains in general?

    I would bet against the former and for the latter as early post-human dominants.

    • http://overcomingbias.com RobinHanson

      I am not free to clarify the meaning of the betting text by myself, as Joshua and I negotiated it. But you are free to propose clarifications that you would bet on (with what odds and amount).

      • Rafal Smigrodzki

        I think I would be willing to bet $500 against Robin, if the text of the bet was modified as follows:

        “If the AGI are closely based on or derived from emulations of INDIVIDUAL human brains, Robin wins, otherwise Joshua wins.”

        Other conditions/odds unchanged.

  • IMASBA

    Generally, emulating nature is a good idea. However, the nature of consciousness is so elusive that we’ll probably only develop true AI once we really understand consciousness scientifically. I doubt that just programming a computer to emulate large groups of neurons, using patterns that resemble those of the human brain, will result in a self-aware machine (and it may be difficult to prove self-awareness in a machine), but knowledge of the human mind will still be instrumental. In the end I expect the bet to become void, and the figuring out of the nature of consciousness (the answers may lie in fundamental physics rather than in biology) to be a feat of an incredibly advanced civilization that will hopefully be more enlightened than past and present civilizations.

    • http://don.geddis.org/ Don Geddis

      There are reasonable theories which suggest that “just” emulating scanned neurons would indeed result in consciousness, even without fully understanding it. The idea that consciousness depends on quantum physics (e.g. Penrose) is mostly silly. It’s hardly more sophisticated than “we don’t fully understand X, and we don’t understand Y, so maybe X and Y are related.”

      • IMASBA

        There is evidence that single cells can hold memory, at least for short periods of time. That alone should be reason to suspect that the inner structures of neurons, and their interaction with a real environment (which includes truly random processes that cannot be reproduced by software alone, though they may of course be reproduced by artificial neurons in the far future; even exploiting the Brownian motion of molecules within neurons to introduce randomness into the mind makes the mind “quantum” and impossible to simulate using software alone), cannot be neglected. Although Penrose’s theory is likely wrong, the general idea behind such theories isn’t that far-fetched: self-awareness is a marvelous feature of information being aware that it is information and is processing information, something that even our fastest computers have not produced (they should have produced some traces of it by now if it were really as simple as just telling computers they are a unique object that thinks), and it just so happens that in physics information seems very fundamental (one of the most fundamental things there is, perhaps the most fundamental).

      • http://don.geddis.org/ Don Geddis

        We live in a quantum universe, so of course there exists quantum noise in everything. But there’s no evidence that the noise is at all useful in brains; sophisticated devices generally work by carefully separating the valuable signal from the messy background noise, and discarding the noise.

        As to consciousness, there are in fact already simple traces of it in simple computers, exactly as you suggest. It’s much like “alive”. When all you have are cows and rocks, it’s easy to believe that “life” is a huge separate thing. But eventually you come to understand ants, and then bacteria, and then you find things right on the border between life and non-life (crystals, viruses), and you realize that “life” is more of a convenient fuzzy description, with entities on a spectrum, than it is a binary property of physics.

        “Consciousness” is the same way. Our computers are “not very” conscious, in the way that a crystal or virus is “not very” alive. But it’s false to say that they have “no trace” of consciousness.

      • IMASBA

        “sophisticated devices generally work by carefully separating the valuable signal from the messy background noise, and discarding the noise.”

        Not brains; they thrive on “noise” (for example, Hebbian learning), and actually so do many computer programs (any program that uses RNGs, really). In any case, I was just illustrating the point (aside from the fact that single neurons have proven memory) that simulating a brain will certainly go deeper than just programming virtual neurons.

        “As to consciousness, there are in fact already simple traces of it in simple computers, exactly as you suggest.”

        I’ve never heard of this, ever (“ghosts in the machine” are just chaos theory at work, not experience or self-awareness). I don’t doubt there will one day be artificial machines on the threshold of consciousness, just as there are animals on that threshold, but we have not reached that day yet and I don’t think we will until after some fundamental breakthroughs in science.

        Also, obviously we can identify some kind of (fuzzy) threshold between living and inanimate things because we understand what life is and how it works. We simply do not have that level of understanding yet when it comes to consciousness; we don’t even have a mathematical model.

      • http://don.geddis.org/ Don Geddis

        Is a rock conscious? How about a tree? An ant? A cat? A chimp? What objective test are you suggesting, to resolve the question?

        You are making very confident statements about what you think doesn’t have self-awareness, but I can’t see much justification for your opinions.

        Given that you admit to not understanding consciousness, I’d suggest being a little more humble about claiming computers can’t have any.

      • EPH

        Actually Don, IMASBA really knows what he’s talking about, and isn’t the one who has spoken with misplaced confidence – despite having no current theory of consciousness in science, you are acting like there is one, and that your intuitions will be the correct ones.

        Meanwhile, IMASBA is taking the more humble approach by pointing out that these big questions are unsolvable without at least the sketches of a theory of consciousness (something we do not have).

        Your main point has relied on a conflation – “life” is fuzzy, so therefore consciousness is fuzzy, so therefore computer programs must have a form of “proto-consciousness.” Life may be fuzzy but we have some wonderful theories about it – replicator theory, code/phenotype, and evolutionary theory all firmly provide guidelines to what should be considered life or not. Meanwhile, there is nothing like that for consciousness, other than “brains like ours probably have it.” Your argument doesn’t follow.

        Most importantly:
        His (or her) general point, that the bet will probably be void once we understand more about consciousness, was missed entirely.

      • http://don.geddis.org/ Don Geddis

        EPH: No conflation; that was an analogy, not a proof. We do, actually, have a sketch theory of consciousness: a planning process that has models and sensors on the external world, develops a self-model and internal sensors as well. So it can think about its own future possible actions as part of its planning process, and it can sense recent internal thoughts.

        And BTW: there’s a big difference between having an understanding of what consciousness is, vs. having a detailed theory of how it works or how to implement it. The two of you are using a word, but I wonder if you’re referring to a real-world concept with your word.

        IMASBA: You asserted that current computers don’t have consciousness. I dispute that; I think some of them already do. Of course, not anywhere near as rich as human conscious experiences, but easily sufficient to cross the threshold from “none” to “a little bit”.

      • IMASBA

        “We do, actually, have a sketch theory of consciousness: a planning process that has models and sensors on the external world, develops a self-model and internal sensors as well. So it can think about its own future possible actions as part of its planning process, and it can sense recent internal thoughts.”

        That’s like saying light is something that allows eyes to see things: too general to be a real model or, in fact, to be of any use. We really have no idea how, in a complex enough system, zombie-like information processing can evolve into self-awareness and feelings. Even if aliens gave us incredible supercomputing technology tomorrow we wouldn’t know where to even begin to make that technology conscious; we do not even understand the most basic principles of consciousness, like cavemen wondering what fire is and how it works.

        “IMASBA: You asserted that current computers don’t have consciousness. I dispute that; I think some of them already do.”

        Even if they do, we lack the means to show it.

      • http://don.geddis.org/ Don Geddis

        You have a theory that it would be possible to have a “zombie-like information processing” system that doesn’t have consciousness, and then you pose yourself a hard problem of what “additional” step is needed to make conscious experience.

        The resolution is simple. Your zombie idea is simply incoherent. Consciousness isn’t something “added” to complex information processing with a self-model; it’s what such processing actually is.

        (And BTW: adding “feelings” is more confusion. That’s a completely separate topic, essentially orthogonal to consciousness.)

      • IMASBA

        I know zombie-like information processing exists: it is proven to me every time my brain regulates my breathing while I’m in dreamless sleep and every time I perform an action on auto-pilot, really every time my body does something I did not consciously tell it to. And I also happen to believe my mobile phone is not self-aware.

        If I was unclear, I meant experience, not emotions when I used the word “feelings”. Experience is part of consciousness (the very act of being self-aware is an experience).

      • AnotherScaryRobot

        “I know zombie-like information processing exists: it is proven to me every time my brain regulates my breathing while I’m in dreamless sleep and every time I perform an action on auto-pilot, really every time my body does something I did not consciously tell it to.”

        You seem to be assuming that if this information processing produced some level of consciousness you would experience this. But why would that necessarily be true? I assume you’ll grant that whatever is going on in *my* brain produces consciousness, but you have no experience of that.

        Why can’t a similar phenomenon operate within a single brain? Perhaps complex ‘unconscious’ information processing taking place in your brain *does* produce consciousness, and you just don’t experience it because it (like the consciousness my brain produces) is not ‘your’ consciousness. I see no present basis for belief in a principle like ‘all awareness produced within a single computing device will merge into a single stream of subjective experience.’

      • IMASBA

        “Perhaps complex ‘unconscious’ information processing taking place in your brain *does* produce consciousness, and you just don’t experience it because it (like the consciousness my brain produces) is not ‘your’ consciousness.”

        I actually thought about that. I suppose it’s possible, but I don’t think it’s very likely (I mean, why wouldn’t they leave memories for me to find, since “we” share the same brain?); I think it’s more likely the conscious self is connected to (and influenced by) “zombie” functions of the mind. But self-aware pocket calculators are of course a whole different story: supposing that pocket calculators are conscious (as in self-aware) in order to avoid having to deal with the hard problem reeks of desperation to me, but of course I could be proven wrong one day.

      • http://don.geddis.org/ Don Geddis

        Most people accept that human minds have both conscious and sub-conscious processing (although I love ASR’s hypothetical!). The idea is not that all information processing is necessarily conscious. The idea is the other way around: that conscious activity is nothing more than (a special kind of) information processing. (Go back to your own original comment here at the top, where, even if you emulate the entire network of neurons from a human brain, you doubt that the resulting computer would be self-aware.)

        Your example of “sleeping” kind of misses the point. Obviously, in that moment, you aren’t an entity that exhibits the behavior we want from consciousness: self-awareness, reflection, etc. The whole question is, if you exhibit the right behaviors, is there still something else required to be conscious? The (mistaken) idea of a philosophical zombie is that you could act exactly like a normal human, only somehow still be missing conscious experience. Your “sleeping” example isn’t like that, so it’s kind of beside the point.

        The resolution of the actual zombie problem is: no, you only need to be concerned about implementing the behavior. If you can reproduce the right self-aware behavior, there’s nothing “else” you need to add, to get consciousness.

      • IMASBA

        “The idea is not that all information processing is necessarily conscious.”

        But that’s basically what you were arguing, at least it seemed that way to me. If you think only “complex” systems are conscious then you too are saying there is some secret ingredient that determines how complex is complex enough.

        “The resolution of the actual zombie problem is: no, you only need to be concerned about implementing the behavior. If you can reproduce the right self-aware behavior, there’s nothing “else” you need to add, to get consciousness.”

        I don’t understand why people feel so uneasy at the thought that a zombie could replicate human behavior well enough to fool an outside observer; after all, we do most of the things we do on auto-pilot anyway. I get the impression people think that people like myself are saying the secret ingredient is of divine origin or something like that, but that’s not what we’re saying. It may just be a matter of complexity, with the human brain having some added complexity on top of what’s necessary to have a human-like zombie. Personally I’m thinking about something deeper (hence the remark about fundamental physics), but that something would be a part of nature and behave according to set laws without the aid of god(s). And it is my belief that consciousness is about more than just the firing of neurons, so that a simulation would have to also simulate the inner structures of neurons and their interactions with the world to create a real conscious mind. That would make it far easier, once we understand consciousness, to work from the ground up and build the simplest and most robust machinery that captures the spark of consciousness: some sort of artificial brain (hardware) instead of a simulation of human neuron firing.

      • http://don.geddis.org/ Don Geddis

        No secret ingredient. Not mere “complexity”. I already answered above: a planning system with a self-model and internal sensors. Most computer systems don’t match that description, but a few do.

        Your search for a “secret ingredient” isn’t driven by real-world data. You’ve invented a fantasy problem, and then you’re trying to hypothesize imaginary solutions to it. But none of it is grounded in any actual observations. It’s fire’s phlogiston or light’s ether.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Y’all sure are attached to your qualia illusions, aren’t you? ( http://tinyurl.com/c3zq8ht ) What would it be like to be a zombie? Is the thought of being one horrific? Or does it just seem so obvious?

      • IMASBA

        “Is a rock conscious? How about a tree? An ant? A cat? A chimp? What objective test are you suggesting, to resolve the question?”

        A chimp is conscious, a cat feels things but may not be self-aware, an ant may feel things but is most likely not self-aware. I already said proving consciousness is difficult (it’s an unsolved problem for now), disproving it is possible in select cases (a rock lacks information processing capabilities). All of this just goes to show how little we understand of consciousness, as opposed to, for example, life.

        “I’d suggest: given that you self-admit to not understanding consciousness, I’d suggest being a little more humble about claiming computers can’t have any.”

        I didn’t say computers cannot have consciousness, I just said it’s going to be a lot harder than Hanson thinks and that it’s probably easier to construct an AI that’s not entirely virtual (as in having artificial neuron-like hardware for example). Of course those are my beliefs, they may eventually be proven wrong, but they are not ridiculous at this stage.

  • AJean

    Have you read Kevin Kelly’s What Technology Wants?

  • Tim Tyler

    Birds had a lot of adapted complexity too. It mostly got trashed in our designs for flying machines – which weren’t really very much like scanned birds.

    • http://overcomingbias.com RobinHanson

      Human brains have vastly more adapted complexity to the task of being smart in our world than do birds have to the task of flying. A handful of bits can describe the features birds need to have to fly, while vastly more bits are needed to describe how to make minds smart like humans.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        A handful of bits can describe the features birds need to have to fly, while vastly more bits are needed to describe how to make minds smart like humans.

        This derives from what theorem?

      • Tim Tyler

        We scan pictures, movies and music – but rarely complex machinery. Pumps aren’t scanned hearts, cameras aren’t scanned eyes – and so on. Bioinspiration seems most effective in practice if used sparingly. An ‘em-first’ world thus seems tremendously unlikely.

  • Joshua Fox

    The title of this post is correct, but if there is any confusion, we should note that “Em- vs Non- AGI Bet” means “Em-AGI vs non-Em AGI Bet.”

    (In other words, this title should not be parsed as a bet on Emulations vs non-AGI, whatever that would mean.)

    • http://overcomingbias.com RobinHanson

      Because you thought it might be unclear, I’ve changed the title to be more explicit.

  • Robert Koslover

    Hmm. Sounds good. But I wonder if this bet is even legal. You’re in Virginia, right? Here’s some info I found on the web, but I’m unsure how, or if, it applies to your private bet: http://www.gambling-law-us.com/State-Laws/Virginia/

  • Samuel Hammond

    One way to break down ‘intelligence’ is perception => interpretation => execution. The biggest breakthroughs in AI, in my opinion, have followed this path, as the modules for a given type of interpretation are likely bound up with the perceptual faculties and the pathways for execution.

    So, for example, we have made huge progress in visual systems with A. cameras that can ‘perceive’, B. software that can interpret visual data into things like depth and even facial recognition, and C. systems that can execute certain tasks like red-eye correction, or translate interpreted sense data into behaviors, as when self-driving cars stay in their lane and brake before impact.

    I’m no kind of expert on how this technology came about, but it seems to have been built up from scratch. To the extent we’ve emulated, it’s been indirect, as when we draw lessons from human perception and the algorithms that must be working on the data to find edges and so on. This approach has been wildly successful. Yes, the human brain is hugely complex due to the incremental adaptive process of evolution. But that’s a fact about how human perception originated — not a fact about the intrinsic difficulty of designing artificial perception faculties and the software to interpret and execute sense data.

    So I see a lot of progress in cognitive sciences around perception and interpretation, from visual processing to natural language processing. It seems very plausible that this research can carry on in a modular way until one day we’ll look at what we have and find, putting it all in one box, we have an AGI.

  • Nancy Lebovitz

    I’m predicting (I’m not sure how much money I want to throw at this) that the AGI winner will be computer-augmented embodied humans.

    Part of this is an assumption that ems are going to be a lot harder to do than they sound, and the other part is that people will be doing augmentation asap.

    The best argument I can see against this and in favor of from-scratch AGI is that seamless augmentation (powerful augmentation which feels as natural as unaided thinking) might be up against the same difficulties as ems.

    • Tim Tyler

      Of course computers will augment humans initially. They have been doing so for decades. In that respect there’s not really any such thing as “from-scratch AGI” – since computers were born in symbiosis with humans.

  • Alexander Gabriel

    Here’s an article elaborating on the question of where the proof is for the accuracy of these brain simulations.

    http://www.scientificamerican.com/article.cfm?id=massive-brain-simulators-seung-conntectome

    The idea of copying the complexity inscribed by evolution into our brain in theory sounds good. But that these models lack incremental feedback seems to me a red flag. With no feedback, there’s no profit. With no profit, no investment. There may be this needle floating around in a haystack of possible futures but that doesn’t mean anybody is going to find it.

  • http://overcomingbias.com RobinHanson

    I just added to the post.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    This bet depends utterly on mutual good faith: you each believe the other will apply the agreed upon criteria honestly. This degree of trust would not be shared by people occupying different ideological camps on fundamental questions.

    What is the main signaling function of announcing this bet? Hanson and Fox vouch for each other as the kind of honorable person the other can rely on.

    Per discussion on Katja’s site, I think this kind of signaling is generally conscious rather than subconscious. I’d be interested in opinions.