I keep encountering people who are mad at me, indignant even, for studying the wrong scenario. While my book assumes that brain emulations are the first kind of broad human-level AI, they expect more familiar AI, based on explicitly-coded algorithms, to be first.
If it started from a working simulation of the details of a particular human brain, then even if it was substantially modified, it's still an em.
I'm okay with inflation-adjusting, or with most any other way to track long term value. I don't have a jury of arbitrators to offer, but I'm willing to consider your suggestions. I think the difference between an em and a brain-inspired design should be plenty clear enough for us not to need to put a lot of work into a formal definition of the difference. I'm also willing to consider larger bets. I'm willing to make the bet end when one of us dies, or to commit our descendants to the bet.
Ping. Still interested in taking this bet.
Are you familiar with Nicholas Rescher's work indicating that returns diminish logarithmically? He wrote a book on it: /Scientific Progress/, 1978.
Are the advantages of investigating new areas, due to diminishing returns on old areas, greater than the agglomeration-style advantages of entering well-populated areas?
"(For someone who thinks betting makes everyone clarify things, you're being pretty cavalier about the conditions of this bet.)"
About 6 days ago, I offered to bet Robin $10 at 100-1 odds (I win, he gives me $10, he wins, I give him $1000) that other-AI would come before em-AI, conditional on our agreement of what an "em" is.
I asked him: if we took the em of the brain of a human firefighter, but removed the ability to feel pain, or to fear pain or death, removed any interest in anything (e.g. sex, music) other than firefighting, and made the entity need no sleep, so that it just thinks about and fights fires 24/7, would that still be an "em"?
Robin never answered. If he answered me, "No, that's obviously not an 'em'" I'd be happy to give him 100-1 odds on a $10 bet.
who thinks betting makes everyone clarify things
Seems more like a lot of time is spent trying to find partial claims that are bettable.
"outrageous claims require outrageous wagers"
I think it unfortunate that there is not more such wagering, since it can be a very powerful source of information in the world of competing ideas.
I don't feel that either outcome is likely in our lifetimes. I will ask my daughter if she would be willing to "inherit" the bet (is that even a thing?). I don't believe that a 3rd party judge is necessary. I'll grant that you win if human brain wiring from slicing or scanning makes any contribution.
2 of my 3 questions were suggestions, and the third was something I can't write for you, since you're the one who is pro-em while I'm taking the 'everything else' position.
I didn't ask for questions, I asked for suggestions.
Well, is that money in inflation-adjusted dollars to whatever year AGI is achieved? What is your rigorous specific definition of an 'em' vs other things in the same generic space like deep nets (eg CNNs are loosely inspired by the visual cortex and actually turn out to organize themselves similarly & can predict neural activations in primates despite not being ems)? If we disagree about an AGI being an em or not, who will arbitrate? Someone like Eliezer would work for me but maybe not for you. (For someone who thinks betting makes everyone clarify things, you're being pretty cavalier about the conditions of this bet.)
Care to suggest more details for a wager? Conditional on our both staying alive? Must it be clear which side contributed more to the human level AI? Need we pick an independent judge?
If you want a lot of details, please suggest them.
Robin, I too will accept your wager. I think the odds are fair - even in light of what I point out at the end. Having done my fair share of both slicing and modeling brains, and of building "neural networks", I feel I am as well positioned as most to say that reverse-engineering human sentience will be MUCH harder than the alternative machine-evolution approach to creating artificial sentience.
A billion years of evolution did indeed create an amazing machine in the form of the human brain. But it would take much hubris for us to say that we must make God in our image. We can assume that there are other sentient beings in our galaxy, and certainly in the universe. Ours is therefore not the only template, and most probably not the best. We don't have the luxury of a billion years to create AI, so we have, as you point out, approximately two approaches to accelerate the task. One is to serialize one representation, the biological brain organ, into an artificial/machine simulacrum. The other is to model the end result – intelligence – and encode those rules into the machine. Certainly combinations thereof are also possible, but let's grant that even a combination weighs very heavily in one direction or the other.
I'd like to interject a story of my own first confrontation with this question. While still in my teens, I landed square in the middle of this debate, and in '84 had a conversation that settled my position. The conversation was with Jerome Feldman, an early proponent of what came to be referred to as Neural Networks [http://onlinelibrary.wiley....]. I was at the time pursuing independent research in artificial intelligence at CMU. In my discussion with Dr. Feldman, I stated my belief that we can only understand intelligence in the context of our only known examples – biological brains. He made the stronger case that intelligence is neither rooted in nor dependent upon our primal beings. It is instead an emergent phenomenon that can be both studied and probably realized in complex mathematical expressions. I was won over to that position. However, at the end of my CS degree I was frustrated enough with the state of the art of neural networks to pursue graduate studies in neuroscience so that I could drill down into this mysterious thing called the synapse. I studied crustacean neurophysiology under Dr. Harold Atwood.
But back to my main thread. There is too much morphological detail in a human brain to model. And even if we did evolve machines powerful enough and with sufficient memory to model such a thing, there is still no technology known to slice a human brain at the detail level of the synapse – which would be necessary to create any useful morphological model. And finally, even if you could accomplish both of the above, as Dr. Feldman explained to me 32 years ago, this information will still tell you nothing about intelligence!
A final thought, which I hinted at from the start. I think that perhaps you have settled on the wrong approach to em-AI. Rather than puzzle over the zettabytes of data from a brain sectioning, why not start with the 200 megabytes of the human genome? Clearly, the model for intelligence must be therein encoded. Even in light of this roughly 10^13-fold reduction in raw data size that I have bestowed upon you, I'll still accept your wager.