Idea Talkers Clump

I keep encountering people who are mad at me, indignant even, for studying the wrong scenario. While my book assumes that brain emulations are the first kind of broad human-level AI, they expect more familiar AI, based on explicitly-coded algorithms, to be first.

Now the prospect of human-level ordinary AI is definitely what more people are talking about today – the topic is in fashion. There are AI companies, demos, conferences, media articles, and more serious intellectual discussion. In fact, I’d estimate that there is now at least one hundred times as much attention given to the scenario of human-level AI based on explicit coding (including machine learning code) as to brain emulations.

But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first. In fact, I’ll happily take bets at a factor of ten. You pay me $1000 if em-AI comes first, and I pay you $100 if other AI comes first.
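
For those checking the arithmetic, here is the break-even point those stakes imply. Letting $p$ be the probability that em-AI comes first, my side of the bet has non-negative expected value exactly when

$$p \cdot \$1000 \;\ge\; (1 - p) \cdot \$100, \qquad \text{i.e.} \qquad p \;\ge\; \tfrac{1}{11} \approx 9\%.$$

So anyone who puts the chance of em-AI first below roughly nine percent should be eager to take the other side.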

In addition, due to diminishing returns, intellectual attention to future scenarios should probably be spread out more evenly than the probabilities themselves. The first efforts to study each scenario can pick the low-hanging fruit and make faster progress. In contrast, after many have worked on a scenario for a while, there is less value to be gained from the next marginal effort on that scenario.

Yes, sometimes there can be scale economies in work on a topic; enough people need to do enough work to pass a critical threshold of productivity. But I see little evidence of that here, and much evidence to the contrary. Even within the scope of working on my book I saw sharply diminishing returns to continued effort. So even if em-based AI had only 1% of the chance of the other scenario, we’d want much more than 1% of thinkers to study it. At least we would if our goal were better understanding.

But of course that is not usually the main goal of individual thinkers. We are more eager to jump on bandwagons than to follow roads less traveled. All those fellow travelers validate us and our judgement. We prefer to join and defend a big tribe against outsiders, especially smaller weaker outsiders.

So instead of praising my attention to a neglected if less likely topic, those who think em-AI less likely mostly criticize me for studying the wrong scenario. And they continue to define the topics of articles, conferences, special journal issues, etc., so as to exclude em-AI scenarios.

And this is how it tends to work in general in the world of ideas. Idea talkers tend to clump onto the topics that others have discussed lately, leaving topics outside the fashionable clumps with less attention relative to their importance. So if you are a thinker with the slack and independence to choose your own topics, an easy way to make disproportionate intellectual progress is to focus on neglected topics.

Of course most intellectuals already know this, and choose otherwise.

Added: Never mind my suggestion above that effort should be less than proportional to chances; Owen Cotton-Barratt reminded me that if value diminishes with the log of effort, optimal scenario effort is proportional to probability.

Added 11Oct: Anders Sandberg weighs in.

  • J Storrs Hall

    That’s only half the problem, and the lesser half at that. I started writing about machine ethics 15 years ago, and published the first book in the field 10 years ago. Not many people listened. Now “AI safety” is the big intellectual fad. There is such a thing as being ahead of your time. Drexler published about nanotechnology in the 80s and 90s and people who tried to follow up on his work found themselves cut off from funding and blackballed. There were similar backlashes against cybernetics in the 60s and symbolic AI in the 80s. I’ve become convinced that this is the rule, not the exception, in science. You are very lucky in that most people consider your book a whimsical exercise, and not a serious new endeavor that could draw resources from their slice of the zero-sum funding pie.

  • http://www.jessriedel.com Jess Riedel

    Besides object-level assessment, are there any good indicators for distinguishing whether a researcher is avoiding fashions because (a) they’re following the most neglected/useful path or (b) they can’t compete with the best researchers on the most promising path?

    • http://overcomingbias.com RobinHanson

      Dunno, but given diminishing returns more social value may come from a worse quality researcher on a neglected topic.

  • Lord

    It is by no means clear to me that either scenario is likely; emulation may require faster processing and more detail than are possible in this universe, and we may be the most efficient incarnations of such engines, at least for a considerable time, say an order of magnitude or two longer. I can enjoy both while being wedded to neither. I think ordinary AI may be more believable due to our gullibility in confusing intelligence with consciousness. We will have increasingly intelligent passive systems augmenting our capabilities that people will take for and treat as intelligent, though they are little more than automatons. We may not even be able to distinguish consciousness, and may attribute it where there is none. It may even take em tech before we can definitively identify it.

    • Joe

      Seems to me the natural assumption should be that we will have to emulate pretty much all the functionality of our brains, probably including consciousness. We’re ‘as if designed’ for success by evolution; if consciousness didn’t serve an important function, we wouldn’t have evolved to have it. Perhaps its function can be performed by some other mechanism which couldn’t meaningfully be identified as conscious. Or perhaps not.

      But the assumption that we will just be able to ignore most of the features our brains have when building powerful AI seems to me quite likely to be mistaken. In fact this is a general kind of mistake that I see made VERY frequently. Someone will look at a complex system and say, pfft, what’s all that complexity for? All of this stuff is junk, all you need is this core functionality here…. And sometimes the system they’re referring to is indeed bloated and overcomplicated. But even in those cases, there’s usually much more complexity that needs dealing with, once you look into the problem in depth, than there appears to be at first glance. And the rest of the time it turns out the system is complex because it actually really does need to be.

      I would be interested to know what investigation has been done on this issue, e.g. how much of past automation has actually consisted of cutting out features entirely, versus reemphasizing and shifting things around. Maybe there’s lots of research on this – I haven’t actually looked very hard for it – but I do notice that claims that we can replace humans with very narrow AI that is right around the corner don’t tend to come with good empirical evidence that this is likely. Usually what’s presented is intuition, but I’ve come to think that intuitive arguments are given far, far more weight than they deserve.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      It is by no means clear to me that either scenario is likely; emulation may require faster processing and more detail than are possible in this universe

      I’ve become convinced that ems are theoretically possible. My doubts have retreated to something like that line: actual feasibility, given the enormous amount of detail. [These are very nonexpert intuitions. But has this been seriously addressed? I’m having incentive trouble accumulating the wealth to buy Robin’s book, because a low-probability outcome doesn’t interest me enough.]

  • http://www.gwern.net/ gwern

    A lot of this is that deep nets have made *so* much progress in the past 10 years, while em approaches have made little apparent progress: the EU’s Human Brain Project has fallen apart in a scandal of mismanagement and overpromising, the worm emulation project hasn’t gotten much of anywhere, Spaun is only vaguely an ’em’ rather than a deep net, and I haven’t seen much progress on scanning since Hayworth 2012.

    > But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first. In fact, I’ll happily take bets at a factor of ten. You pay me $1000 if em-AI comes first, and I pay you $100 if other AI comes first.

    Correct me if I’m wrong, but weren’t you earlier saying your em scenario had more like a 0-5% chance of happening? Why have you revised to >10%?

    • http://overcomingbias.com RobinHanson

      I don’t recall saying 0-5%. And progress in the last 10 yrs is only a small % of total progress, either so far or still required in future. As we’ve seen bursts of similar magnitude in the past, why think anything fundamental has changed?

      • http://www.gwern.net/ gwern

        > As we’ve seen bursts of similar magnitude in the past, why think anything fundamental has changed?

        If lack of progress in one approach would be evidence against future success, progress must be evidence for it. If emulations have been seeing little success lately, and NN approaches have been seeing much success, why would we interpret this as evidence for emulations succeeding before NN approaches?

      • http://overcomingbias.com RobinHanson

        One should focus on long term trends, not on short term fluctuations that are within the usual range of past fluctuations. One needs to see an especially big recent change to conclude long term trends have changed.

        For example, right after 9/11 many concluded we were in a new regime, but in fact terror attacks since then seem drawn from a distribution similar to that before 9/11. So it seems 9/11 was just an unusually big outlier.

      • http://www.gwern.net/ gwern

        What objective dataset or criteria are you using to classify all recent work as merely fluctuations? For terrorism, you have the RAND dataset which one can fit a lognormal or power law distribution to and see that 9/11 is expected, but I’m not sure what AI equivalent there is.
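
        (For concreteness, a minimal sketch of that kind of tail fit; the event sizes below are synthetic stand-ins, not the actual RAND data:)

        ```python
        import numpy as np
        from scipy import stats

        # Synthetic stand-in for an event-size series (e.g. deaths per incident).
        rng = np.random.default_rng(0)
        sizes = rng.lognormal(mean=1.0, sigma=2.0, size=5000)

        # Fit a lognormal law with its location pinned at zero.
        shape, loc, scale = stats.lognorm.fit(sizes, floc=0)

        # Tail (survival) probability of an outlier-sized event: if this is not
        # vanishingly small, an event that big is "expected" under the fitted law.
        p_tail = stats.lognorm.sf(3000.0, shape, loc=loc, scale=scale)
        print(f"P(event size >= 3000) = {p_tail:.2e}")
        ```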

      • http://overcomingbias.com RobinHanson

        It is common to take data on a single parameter over time and fit that to a model composed of a long-term trend plus random fluctuations around that trend. One need not classify the times or data points.
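
        A minimal sketch of that procedure, on synthetic data (any real series measuring AI progress over time is an assumption left to the reader):

        ```python
        import numpy as np

        # Synthetic stand-in for a yearly progress metric: linear trend plus noise.
        rng = np.random.default_rng(1)
        years = np.arange(1960, 2017)
        progress = 0.5 * (years - 1960) + rng.normal(0.0, 3.0, size=years.size)

        # Least-squares fit of the long-term trend.
        slope, intercept = np.polyfit(years, progress, 1)
        residuals = progress - (slope * years + intercept)
        sigma = residuals.std()

        # A recent burst signals a changed regime only if it falls well outside
        # the usual fluctuation band (here, three standard deviations).
        recent_outliers = np.abs(residuals[-10:]) > 3 * sigma
        print("unusually large recent deviations:", recent_outliers.any())
        ```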

      • http://www.gwern.net/ gwern

        So you’re arguing for imagining a trend in the absence of any actual data, and then handwaving away any inconvenient data as a fluctuation consistent with whatever distribution you hypothesize your hypothetical dataset would yield? That’s pretty bizarre reasoning.

        Let me try it a different way. You say we should do this purely based on Outside View reference class tennis reasoning. OK. What trend do you have in mind which yields a >10% chance of ems being accomplished before other approaches to AI, particularly the deep learning approaches which are having such huge success right now?

        As far as I am aware, there are no practical or commercial applications of ems: there is nothing useful which has ever been accomplished based on scanning the connectome of a biological organism or based on its learned synaptic weights. Nothing, zilch, zero, nada. Not one useful application in the approximately 70 years of AI research. All successful AI applications are based on artificial neural networks at best loosely inspired by biological neurons, which are not ems, or on other methods entirely. The state of the art is a few simulations whose value is that they might recapitulate some already well-studied biological phenomena, but might not. Would this not yield a strong Outside View prediction that the trend is non-existent, and thus that the probability of an em being the first AI is very far from >10% and closer to around 0? (In contrast to the enormous and increasing value of deep learning, which entities like Google and Facebook have been applying in hundreds of places throughout their businesses, which for data center optimization alone is making hundreds of millions of dollars in profit, and whose trend is undoubtedly positive.)

      • http://overcomingbias.com RobinHanson

        We wouldn’t be having this conversation if we didn’t have data – we’ve been talking about models to account for the recent burst of progress in some AI subfields.

        I think you are well aware that the argument is that one shouldn’t expect any commercial value from ems until three key technologies reach sufficient levels. So an outside view should be tracking those three techs, not looking at the record of what has happened in the past when they reached sufficient levels.

      • http://www.gwern.net/ gwern

        I am not sure I am. But it seems like you’re rather quickly flipping into an inside view argument when it supports ems. One could just as easily say that deep learning should not be judged on its lack of commercial results because all such networks are equivalent, in terms of FLOPS and parameters, to fly brains or less, and are hilariously abstracted from biology; yet, they still work amazingly well for many applications, showing that tiny little crude nets can be useful. Whereas there are no tiny crude ems, such as OpenWorm, which have shown any use.

      • Joe

        I don’t think Robin or anyone else has ever suggested that animal brains are generically economically useful. Rather, the claim seems to be that a very small number of animal brains are cooperative and submissive enough that we have been able to domesticate those animals in particular.

        We have not been able to domesticate flies. So there is surely no reason at all to expect that an emulation of a fly will be of any practical use.

      • http://www.gwern.net/ gwern

        A domesticated fly is just that: one fly. An emulated fly is an AI which can be used to pilot vehicles, recognize objects, and do the other things any fly can do but we struggle to get computers and drones to achieve; very different. After all, the whole point of emulating a human mind would be that it’s not just a human – if you want one of *those*, we’ve got billions to spare… (Deep learning systems are, if you compare with estimates of animal/insect FLOPs, hardly on the fly level at the moment, yet they are extremely commercially valuable. So clearly fly-level intelligence has a lot of commercial potential. Of course, you could argue that this comparison is misleading, as deep learning systems are much better for implementing intelligence on a FLOPs-for-FLOPs basis, but that doesn’t exactly augur well for larger brain emulations.)

      • Joe

        My point is that finding an economic use for an animal brain doesn’t just require an animal with some useful capability, but one that you can control, to get it to use that capability to perform the tasks you want done. An emulated fly could possibly pilot a vehicle, but it isn’t going to pilot it to where you want it to go, it’s going to take it wherever it feels like.

        A fly-like creature that has been specifically designed to take commands would indeed be useful, but that isn’t how flies actually work. And if Robin’s argument that em minds will be sufficiently convoluted in design that it will be implausible to make big changes to them is correct, we shouldn’t expect emulating flies and then modifying them to take orders to be a viable route to creating economically useful AI systems.

        (And in fact looking at whether ems of simple animal minds, when/if they are created, can be repurposed this way, should give us more confidence one way or the other regarding Robin’s claim that human em minds won’t be able to be significantly redesigned.)

      • http://www.gwern.net/ gwern

        I already addressed that. An emulated fly should be able to recognize objects, navigate, and *fly*. That is commercially valuable for drones, micro-drones, and computer vision even if you presume almost total non-modification capabilities far below the transfer learning & flexibility of deep learning approaches, and further ignore all the existing training ability of animals and insects.

      • Joe

        I’m not sure you did address it. You’re reiterating that flies have certain capabilities that would be economically useful, assuming we can control those abilities such that they are used to achieve the purposes we want them for. But that assumption seems extremely doubtful to me.

        Say you have a fly em that you have hooked up to a drone. Assume for the moment that there are no interface mismatches: the way the fly brain is built to control flight concords sufficiently well with how a drone flies that the fly is actually able to control the drone. This itself seems a dubious claim, but the more fundamental issue is surely – how do you get the fly em to pilot the drone where you want it to go, rather than where the fly wants it to go?

        This is why it’s so important for animals you want to domesticate to be pliable. It’s also why information hiding is so important in software systems. If you wanted to use a fly brain to control a drone, you would need to separate out the part that actually controls the flight from the parts that decide where and when and how fast the fly flies. But if brains are like Robin expects – a mess of spaghetti code that’s almost impossible to usefully modify – then this just won’t be feasible, and the only fly em you will be able to create is one that does what a fly wants to do. And there just isn’t much economic use for flies, even (especially?) when they are in control of drones.

    • TFL

      the worm emulation project hasn’t gotten much of anywhere,

      That could be simply a failure of nerve, pardon the pun. The two projects both seem indifferent-verging-on-hostile to creating a working model.

      Si Elegans appears to be broken by design (it uses an interconnect that’s only appropriate to spiking neurons; C. elegans interneurons do not spike). The site for their simulation platform still consists entirely of placeholders.

      OpenWorm is an odd case in that it has been right on the verge of having a working model for nearly a year, but seems determined to focus on developing a testing suite for a model it doesn’t have yet. The only part missing is touch feedback from the hydrodynamics model to the neural net, which is trivial compared to the work already done. Nonetheless they seem to be taking a very lackadaisical approach to this, to put it mildly.

      It’s infuriating, as Tim Busbice, one of the members, managed to get interesting results with a very crude model, yet this does not seem to have spurred them to complete the full model; indeed, Busbice seems to have been blackballed from the group.

  • http://confusionist.tumblr.com/ Ivo Wever

    An idea about a middle road to AI: code up learning algorithms, instantiate a bunch of them (initially as prokaryote-equivalent or something earlier?) and find a way to put them through something equivalent to a billion years of natural selection.

    Since the experimenters will be inspired by our natural history, the capabilities of our ancestors at every point in time and our current capabilities, the process will select for human traits and will somewhat replay the evolutionary road that resulted in them (I don’t see anyone capable of not doing that, deliberately or inadvertently).

    At some point during the process the intermediate forms should be allowed/enabled to obtain ways to perceive and interact with the real world and with each other via the physical world, in ways other animals interact.

    The end result could be human-type intelligence with whatever ‘wiring’ results in that. Of course this supposes there’s some way to speed up (significant parts of) the evolutionary process.
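
    A toy sketch of the kind of selection loop described above (the genome encoding and fitness function are hypothetical illustrations, not a concrete proposal):

    ```python
    import random

    GENOME_LEN = 16
    POP_SIZE = 50
    GENERATIONS = 200
    MUTATION_RATE = 0.05

    def random_genome():
        return [random.random() for _ in range(GENOME_LEN)]

    def fitness(genome):
        # Hypothetical stand-in for "success in an environment inspired by our
        # natural history": here, just closeness to an arbitrary target value.
        return -sum((g - 0.5) ** 2 for g in genome)

    def mutate(genome):
        return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
                for g in genome]

    def step(population):
        # Keep the fitter half, then refill the population with mutated copies.
        survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        children = [mutate(random.choice(survivors))
                    for _ in range(POP_SIZE - len(survivors))]
        return survivors + children

    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population = step(population)
    print("best fitness:", fitness(max(population, key=fitness)))
    ```

    The hard part is of course not this loop but an environment rich enough that fitness tracks anything like intelligence, plus the enormous speed-up the proposal ends by asking for.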

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    I’m impressed by sociologist Randall Collins’s explanation for intellectual clumping: there’s only room in the attention space for at most six different major positions. [Unless you can elbow out another school of thought, you’ll be in the periphery if you don’t embrace one of them.]

  • https://entirelyuseless.wordpress.com/ entirelyuseless

    Robin, I would be happy to accept your bet, either with the $1,000 vs $100 value or anything up to $3,000 vs $300.

    • http://overcomingbias.com RobinHanson

      Do you want to try to obligate our descendants to the bet, or just bet conditional on either event happening before one of us dies?

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        Conditional on one of the events happening before one of us dies. Of course I realize that most likely this won’t happen, but currently I have no descendants and probably never will, and on the other hand I have no reason to punish your descendants for your personal mistakes.

      • http://overcomingbias.com RobinHanson

        Can we clarify that we mean AI sufficiently strong to replace pretty much all humans on jobs? That would seem a clear enough event we don’t need to haggle about exact wording or judges. If the final AI is some mixture, shall we call off the bet if it isn’t clear enough which kind of AI contributed the most to the resulting AI?

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        Those conditions are fine. Of course you realize that these conditions work in your favor, because current jobs are related to current human abilities, so em AI would be especially fit for those jobs. But I think the difficulty of producing ems is so great that the bet will still be strongly in my favor overall.

      • http://overcomingbias.com RobinHanson

        OK, why don’t you privately send me email so I have a way to contact you if I win the bet.

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        Ok, I’ve emailed you at the address indicated under contact. Please confirm you have received it. Are you making this $100 vs $1,000 or $300 vs $3,000?

      • http://overcomingbias.com RobinHanson

        I still haven’t received your email.

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        I tried again. But I think the problem may have been that the other email got saved but not sent, so you may receive both.

  • Owen Cotton-Barratt

    If there were no diminishing returns we’d not allocate resources in proportion with probabilities, just put all of the resources into the most likely scenario.

    In fact if returns diminish approx logarithmically (as I’ve argued is a good baseline assumption), then marginal returns go like one over the total investment, so the appropriate investment is exactly in proportion with probabilities.

    Spillover benefits from analysing scenarios which won’t occur push back on this, but that feels second order to me.
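
    A sketch of the calculation behind that claim: allocate total effort $X$ across scenarios with probabilities $p_i$ to maximize $\sum_i p_i \log x_i$ subject to $\sum_i x_i = X$. The first-order (Lagrange) condition is

    $$\frac{p_i}{x_i} = \lambda \ \text{ for all } i \qquad \Longrightarrow \qquad x_i = \frac{p_i}{\sum_j p_j}\, X = p_i X,$$

    since the $p_i$ sum to one: effort exactly proportional to probability.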

    • http://overcomingbias.com RobinHanson

      Yes, you are right given a function that produces good outcomes in a situation given effort toward that situation.

    • Philip Goetz

      Are you familiar with Nicholas Rescher’s work indicating that returns diminish logarithmically? He wrote a book on it: /Scientific Progress/, 1978.

  • zarzuelazen27

    What Robin hasn’t seriously considered is the development of BF-ASS-RT (‘Big Fucking Ace Super-Secret Rationality Technique’), a rationality so powerful that all ordinary models are blown away and you get a full theory of general intelligence in a very short time-frame.

    Will emulations beat BF-ASS-RT?

  • IMASBA

    “I’d estimate that there is now at least one hundred times as much attention given to the scenario of human-level AI based on explicit coding (including machine learning code) as to brain emulations.

    But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first.”

    Yeah, can’t really argue with that… I think brain emulation is going to be a lot harder than you make it out to be, especially the brain-mapping and scanning parts, but I wouldn’t say it’s a hundred times more likely for coded AI to arrive first (and people would likely acknowledge a brain emulation as “true” AI more easily), and the buzz ratio seems disproportionate.

  • FuturePundit

    In the computer industry a lot of progress is driven by the desire to make incremental gains in performance. The goal is to boost sales and earnings in a time frame that mostly doesn’t extend out more than 2-3 years and often is focused on the next quarter or next year. So engineering grad students wanting to get jobs are going to tend to focus on skills that industry will want when they graduate, not skills industry will want decades from now.

    This can preclude finding a much more optimal solution. But if the rate of progress on current approaches slows a lot then I can imagine where a wider range of approaches get tried out in hopes of finding a way out of the local minimum back into a more rapid rate of progress.

  • Simon Riddell

    I sometimes wonder if there is a meaningful difference between the two. The two categories seem to boil down to an ‘anthropomorphic AI’ vs a ‘ “Robot” AI’. But what if what we consider to be an anthropomorphic AI is just the logical conclusion of a sufficiently advanced Robot AI? Obviously not completely the same, due to our biological programming for sex/food etc.

    Still, it could just be that there is a certain class of algorithm to ‘solve’ AI, which is some sort of advanced neural network that can classify problems and then build new models to solve each class of problems. And the only difference between anthro and ‘robot’ is relatively trivial.

    • IMASBA

      In the TV series Caprica (a prequel of BSG) AI was finally achieved by sort of merging an emulation of a human with a coded AI robot’s software and hardware.

    • Christian Kleineidam

      No, ems might be just as robot-like as an AGI.

      But “robot” is a bad model in the first place, because neither ems nor AGIs are physical; they are digital and can be copied like computer files.

      Reading Hanson’s book might give you a better idea of what ems would be like. He actually did research on it.

  • Joe

    While much of the current talk about AI is specifically about hand-coded AI, I think a significant proportion isn’t, but rather is about the very long-run future of whatever form of AI is created first.

    The folks worried about an intelligence explosion seem to at least consider ems-first as a possibility, but discount it as a bad outcome for the same reason as their generic worries about a multipolar future: when there are many competing AIs, not only is it infeasible for any of them to care about preserving human values, it will not even be feasible for them to care about maximizing paperclips. The successful AIs in such a scenario will be those whose goal most closely matches ‘maximize copies of myself’.

    You might argue that in fact these are the conditions under which all life evolved, including humans, and so we shouldn’t expect such a scenario to result in a total loss of value. But then you’d just be getting back to what I see as your real disagreement with these folks, regarding the value of slowly accumulated content and tools of varying scope and power versus universal-scope raw problem-solving ability.

  • Mark Bahner

    I’m willing to give you more generous odds than you ask, provided we can agree on “em” versus “non-em”.

    If an “em” is indeed an exact copy of a human brain, warts and all, I don’t think they have a prayer of being economically significant. (If we’re talking about neural nets and machine learning, that’s a completely different matter.)

    I’m willing to give you 100-to-1 odds on a $10 bet that complete human brain emulations will never be economically significant.

    • http://overcomingbias.com RobinHanson

      I have in mind something that starts as an exact copy, but which may have small modifications. If any tiny modification makes something no longer count for you, that is a problem for a bet.

      • Mark Bahner

        OK, let’s say we want a firefighter. Suppose we make an em of the brain of the world’s best firefighter. But then we realize that brain has the flaws that it both feels and fears pain (e.g. burns), and it fears death. So if we kill all the circuits involved in feeling and fearing pain, and fearing death, do you consider those “small modifications”?

        Similarly, there isn’t a human brain alive (for very long) that doesn’t sleep. It’s crucial for memory, cognition, mood, and health. If the “em” is modified so that it doesn’t sleep, do you consider that a “small modification?”

      • http://overcomingbias.com RobinHanson

        I mean small in terms of the effort involved, not the consequences.

      • Mark Bahner

        So…an “em” firefighter that doesn’t feel or fear pain or death is still an “em”?

        And an “em” that doesn’t sleep is still an “em”?

        What about an “em” that happens to have a perfect memory by virtue of a petabyte of flash memory, versus a human’s highly flawed memory?

        Or an “em” that doesn’t have any visual cortex or auditory cortex, because it doesn’t need one?

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        But isn’t a crucial part of your claim that small efforts will fail to produce big changes: the brain isn’t easy to modify? Seems like you should be willing to deny big consequences as well as big efforts.

      • http://overcomingbias.com RobinHanson

        We are talking about the language of a bet – judging effort size seems to me much easier than judging consequence size. I see most consequences as small, but I don’t want to bet that everyone will see all consequences as small.

      • Mark Bahner

        “We are talking about the language of a bet – judging effort size seems to me much easier than judging consequence size.”

        But suppose we have a firefighting entity that doesn’t sleep, doesn’t feel or fear pain, doesn’t fear death, isn’t interested in sex, music…anything. Just fires. It fights fires. 24/7/365 (except when in the shop). If it’s easy to create an entity that has all of those characteristics, is that an “em”?

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Aren’t you essentially saying that an important part of the question is unbettable?

  • zarzuelazen27

    What is time? Why does time seem to flow? If you look at the laws of physics, time is nowhere to be found – these laws are entirely time symmetric, and the reductionist picture is of a static block-universe.

    Yet look around you. Sights, sounds, movement! Time seems to reside at the exact loci of your conscious perception!

    From simple things, more complex things are built. For the universe as a whole, the simpler states always reside in the past. And extrapolating to the limit of this process, the beginning of time had to be the simplest possible state. In the other direction, the forward direction (future) always appears to have more complexity on a cosmological scale.

    The pure mathematics of physics is static and timeless. Yet a spark was added (‘time’) and it lit a fuse (‘the arrow of time’) that is leading to the intelligence explosion.

    Let us trace this spark across the fuse we call the arrow of time in the direction of increasing complexity, and you will see that there is no gap or separation between the physical and the mental – the ‘is’ will pass smoothly and naturally over to the ‘ought’, without any break or logical gap:

    Particle Physics > Mechanics > Thermodynamic Arrow of Time > Time Perception (NOW) > Working Memory > Decisions > Values

    Our awareness resides at the point in the complexity chain we call ‘now’. Looking in the direction of decreasing complexity, we call this ‘past’, and we see a horizon we interpret as ‘physics’. Looking in the direction of increasing complexity, we call this ‘future’, and we see a horizon ahead we interpret as ‘values’.

    But what is right here now? Three clocks and three selves – one for the past, one for the future, and a clock right in the middle where the time is always NOW. And it is at this exact fleeting point in the chain of complexity that we can catch the arrow of time!

    https://uploads.disquscdn.com/images/3c2fbb0ed31ace919c0ec9b4eddec19b6d88c4255dd4e6fc9a87ea780a46a2a7.jpg

    The time for the intelligence explosion … is NOW!

  • http://www.gwern.net/ gwern

    Robin, I’m not seeing any Outside View arguments here for why we should expect a 10%+ chance that 1 specific AI technique with no discernible progress and zero commercial applications over the past 70 years should not just be feasible but also be the very first AGI to be created, beating out everything else. So I’m interested in taking your $1k vs $100 bet. What specific terms are you offering about payment, end-date, AGI definition, arbitrators, and definition of em vs anything else?

    • http://overcomingbias.com RobinHanson

      If you want a lot of details, please suggest them.

      • http://www.gwern.net/ gwern

        Well, is that money in inflation-adjusted dollars for whatever year AGI is achieved? What is your rigorous specific definition of an ’em’ vs other things in the same generic space, like deep nets (e.g. CNNs are loosely inspired by the visual cortex, actually turn out to organize themselves similarly, and can predict neural activations in primates despite not being ems)? If we disagree about an AGI being an em or not, who will arbitrate? Someone like Eliezer would work for me but maybe not for you. (For someone who thinks betting makes everyone clarify things, you’re being pretty cavalier about the conditions of this bet.)

      • http://overcomingbias.com RobinHanson

        I didn’t ask for questions, I asked for suggestions.

      • http://www.gwern.net/ gwern

        2 of my 3 questions were suggestions, and the third was something I can’t write for you, since you’re the one who is pro-em while I’m taking the ‘everything else’ position.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        who thinks betting makes everyone clarify things

        Seems more like a lot of time is spent trying to find partial claims that are bettable.

      • Mark Bahner

        “(For someone who thinks betting makes everyone clarify things, you’re being pretty cavalier about the conditions of this bet.)”

        About 6 days ago, I offered to bet Robin $10 at 100-to-1 odds (if I win, he gives me $10; if he wins, I give him $1,000) that other-AI would come before em-AI, conditional on our agreement on what an “em” is.

        I asked him whether, if we took the em of the brain of a human firefighter, but removed the ability to feel pain or to fear pain or death, removed any interest in anything (e.g. sex, music) other than firefighting, and made the entity need no sleep, but just think about and fight fires 24/7, that would still be an “em.”

        Robin never answered. If he answered me, “No, that’s obviously not an ’em’,” I’d be happy to give him 100-to-1 odds on a $10 bet.

  • cchdisc

    Robin, I too will accept your wager. I think the odds are fair – even in light of what I point out at the end. Having done my fair share of both slicing and modeling brains, and of building “neural networks”, I feel I am as well positioned as most to say that reverse-engineering human sentience will be MUCH harder than the alternative machine-evolution approach to creating artificial sentience.

    A billion years of evolution did indeed create an amazing machine in the form of the human brain. But it would take much hubris for us to say that we must make God in our image. We can assume that there are other sentient beings in our galaxy, and certainly in the universe. Ours is therefore not the only template, and most probably not the best. We don’t have the luxury of a billion years to create AI, so we have, as you point out, approximately two approaches to accelerate the task. One is to serialize one representation, the biological brain organ, into an artificial/machine simulacrum. The other is to model the end result – intelligence – and encode those rules into the machine. Certainly combinations thereof are also possible, but let’s grant that even a combination weighs very heavily in one direction or the other.

    I’d like to interject a story of my own first confrontation with this question. While still in my teens, I landed square in the middle of this debate, and in ’84 had a conversation that settled my position. The conversation was with Jerome Feldman, an early proponent of what came to be referred to as Neural Networks [http://onlinelibrary.wiley.com/doi/10.1207/s15516709cog0603_1/pdf]. I was at the time pursuing independent research in artificial intelligence at CMU. In my discussion with Dr. Feldman, I stated my belief that we can only understand intelligence in the context of our only known examples – biological brains. He made the stronger case that intelligence is not rooted in nor dependent upon our primal beings. It is instead an emergent phenomenon that can be both studied and probably realized in complex mathematical expressions. I was won over to that position. However, at the end of my CS degree I was frustrated enough with the state of the art of neural networks to pursue graduate studies in neuroscience, so that I could drill down into this mysterious thing called the synapse. I studied crustacean neurophysiology under Dr. Harold Atwood.

    But back to my main thread. There is too much morphological detail in a human brain to model. And even if we did evolve machines powerful enough, and with sufficient memory, to model such a thing, there is still no technology known to slice a human brain at the detail level of the synapse – which would be necessary to create any useful morphological model. And finally, even if you could accomplish both of the above, as Dr. Feldman explained to me 32 years ago, this information will still tell you nothing about intelligence!

    A final thought, which I hinted at at the start. I think that perhaps you have settled on the wrong approach to em-AI. Rather than puzzle over the zettabytes of data from a brain sectioning, why not start with the 200 megabytes of the human genome? Clearly, the model for intelligence must be therein encoded. Even in light of this order-10^15 reduction in raw data size that I have bestowed upon you, I’ll still accept your wager.

    • http://overcomingbias.com RobinHanson

      Care to suggest more details for a wager? Conditional on our both staying alive? Must it be clear which side contributed more to the human-level AI? Need we pick an independent judge?

      • cchdisc

        I don’t feel that either outcome is likely in our lifetimes. I will ask my daughter if she would be willing to “inherit” the bet (is that even a thing?). I don’t believe that a 3rd party judge is necessary. I’ll grant that you win if human brain wiring from slicing or scanning makes any contribution.

      • cchdisc

        I think it unfortunate that there is not more such wagering, since it can be a very powerful source of information in the world of competing ideas.

      • cchdisc

        “outrageous claims require outrageous wagers”

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    Are the advantages of investigating new areas, due to diminishing returns in old areas, greater than the advantages from agglomeration’s analog, achieved by entering well-populated areas?