This Friendly AI discussion has taken more time than I planned or have. So let me start to wrap up. On small scales we humans evolved to cooperate via various pair and group bonding mechanisms. But these mechanisms aren't of much use on today's evolutionarily-unprecedented large scales. Yet we do in fact cooperate on the largest scales. We do this because we are risk averse, because our values conflict mainly over resource use, which conflicts destroy, and because we have the intelligence and institutions to enforce win-win deals via property rights, etc.
We have an ontology that can represent anything: it's called a language that is Turing complete...
We even have lots of people encoding knowledge in it; they are called programmers.
Integration is another problem, but the brain is not completely integrated. We need to develop programs that can understand the knowledge encoded in other programs, and programs to maintain the knowledge. Some way of constraining the system, while changing the internal programs, so that it has some purpose would probably also be useful.
I don't see why you think ems would be so aggressive.
Who said anything about aggression? Like I said earlier: Warfare is different from winning.
You do not necessarily have to fight to prevail - all that is needed is for your competitors to not have as many kids as you do. Of course, there might be fights - but they do not seem like a critical element.
M Geddes: "What I love about this idea is that with an effective ontology (a 'universal parser'), all the world's other IT researchers would in effect be working for me... with the right ontology I can simply plagiarize all their insights (how nice of academia to publish everything in open source journals for me!)"
Unfortunately, this task is much harder than it seems. Creating ontologies that actually have the flexibility, accuracy and coverage required is an open problem that has foxed the Cyc project for 25 years. There are entire communities of researchers working on problems of (a) creation of upper ontologies, (b) learning ontologies from text, (c) mapping between ontologies and (d) actually doing inference over ontologies. The biggest problem (as I see it) is that there is a bad mismatch between the world of formal logic which allows one to give meaning to terms, and the world of statistics and probability which allows you to approximate things. If you have no notion of approximation, you can't leverage the powerful computers and large amounts of data we have on the internet, and you will be reduced to writing ontologies by hand. If you have no notion of meaning or semantics, you will end up creating a meaningless resource which can't perform even the most basic inferences, or you will end up with a probability distribution over a narrowly defined set of outcomes that don't even come close to providing the generality required to understand an "arbitrary" situation.
Basically, building an "ontology" which can represent "anything" is a very hard problem in itself.
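A toy sketch may make the mismatch concrete. Everything below is hypothetical illustration, not any real ontology system: a hand-written "is-a" fragment gives exact, chainable inference but covers only the facts someone bothered to encode, while a simple statistical similarity score covers arbitrary text but carries no logical semantics you could do inference over.

```python
from collections import Counter

# Hand-written ontology fragment: exact "is-a" inference, zero coverage
# beyond what was manually encoded.
IS_A = {"dog": "mammal", "mammal": "animal", "car": "vehicle"}

def is_a(term, category):
    """Follow the is-a chain upward; precise, but silent on anything unlisted."""
    while term in IS_A:
        term = IS_A[term]
        if term == category:
            return True
    return False

# Statistical fragment: word-overlap similarity works on any input text,
# but the resulting score has no formal meaning to chain inferences on.
def bag_of_words(text):
    return Counter(text.lower().split())

def similarity(a, b):
    """Jaccard overlap of word multisets: coverage without semantics."""
    wa, wb = bag_of_words(a), bag_of_words(b)
    shared = sum((wa & wb).values())
    total = sum((wa | wb).values())
    return shared / total if total else 0.0

print(is_a("dog", "animal"))   # True: hand-encoded, so exactly answerable
print(is_a("wolf", "animal"))  # False: never encoded, so no answer at all
print(similarity("the dog barked", "the wolf howled"))
```

The logical side fails closed on "wolf" even though the statistical side can happily score any pair of strings; neither half alone gives both meaning and coverage, which is the tension described above.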
If you have any ideas about how to make progress in this area, then do get in touch. I've spent a couple of months researching this, and I am becoming increasingly distressed by how hard it is to get anywhere.
Robin: I might do a post on this issue: "Advice for Wannabe Einsteins"
Douglas, property rights institutions were pretty primitive during the farming transition. In an ideal peaceful transition farmers would have bought land from hunters, or sold farming techniques to hunters. As it was though, info leak of farming technique meant it wasn't just farmers wiping out hunters - hunters also copied farming.
The big weak point is definitely the 'locality' idea Robin, not the 'hard take off' itself.
The idea that one or a few people on their own can somehow develop an entirely new 'localized' complex thing that is largely independent of the wider community seems wildly improbable (no one is smart enough).
That's why the universal parser/ontological approach is definitely a major alternative strategy, because it can draw on the insights of everyone in the wider IT community for sharing all the individual ideas in an integrated framework.
Roko, here's the way to do it:
Instead of trying to develop all the 'machinery' for AGI from scratch, don't work on the 'machinery' at all. Instead, develop a specialized language (a parser/ontology) enabling sharing and integration of all the various narrow IT domains - this way, you are in effect only designing the levers, whilst borrowing all the underlying machinery from everyone else.
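A minimal sketch of this 'levers, not machinery' idea, with all names hypothetical: two narrow tools keep their own local vocabularies, and a thin shared mapping layer lets results be compared across them without reimplementing either tool.

```python
# Two hypothetical narrow domains, each with its own local vocabulary.
VISION_LABELS = {"img_cat": "small furry animal", "img_car": "road vehicle"}
NLP_LABELS = {"tok_feline": "small furry animal", "tok_auto": "road vehicle"}

# The shared ontology: domain-local terms mapped onto common concepts.
# This mapping is the only thing the integrator writes - the "levers".
SHARED = {
    "img_cat": "Cat",
    "tok_feline": "Cat",
    "img_car": "Car",
    "tok_auto": "Car",
}

def same_concept(term_a, term_b):
    """True when two domain-local terms resolve to one shared concept."""
    concept = SHARED.get(term_a)
    return concept is not None and concept == SHARED.get(term_b)

print(same_concept("img_cat", "tok_feline"))  # cross-domain match via "Cat"
print(same_concept("img_cat", "tok_auto"))    # different shared concepts
```

Of course, as Roko's comment above argues, writing `SHARED` for real domains at real scale is precisely the open problem; this sketch only shows where the proposed leverage would sit if the mapping could be built.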
What I love about this idea is that with an effective ontology (a 'universal parser'), all the world's other IT researchers would in effect be working for me... with the right ontology I can simply plagiarize all their insights (how nice of academia to publish everything in open source journals for me!), and in effect, use their brains as 'botnets', which are linked via my effective ontology for sharing of cognitive content (I am simply pushing the levers, they're supplying all the underlying machinery). LOL It's the ultimate hack.
Thank you all for the comments on whole-body emulation. I will look again at the WBE roadmap. I would have thought at least simulated blood would be required but perhaps not.
I think by HG, Douglas meant hunter-gatherers.
Ian: In our experience with alterations made to the body, either through accident or design, changes to the brain have the most direct and detailed impact on thought.
People with artificial hearts can think, and so on for just about every organ. It is likely that it would be trivial to make very computationally efficient virtual organs that work as well as or better (probably by just mimicking their final effects).
Roko, that is the big question here.
James, stupid idle rich humans are pretty safe now. Not perfectly safe, of course. I don't consider the em world I describe to be a hell-hole, but I don't want to get distracted on that topic at the moment.
Tim, I don't see why you think ems would be so aggressive.
Douglas, what is HG?
Ian, no, whole body emulation seems unnecessary.
"we would probably have to emulate the whole body?"
The Whole Brain Emulation roadmap discusses this in its own section, p. 74
"Simulating a realistic human body is kinematically possible today, requiring computational power ranging between workstations and mainframes. For simpler organisms such as nematodes or insects correspondingly simpler models could (and have) been used. Since the need for early WBE is merely adequate body simulation, the body does not appear to pose a major bottleneck."
The roadmap also notes that an environment may be required:
"Convincing environments might be necessary only if the long‐term mental well‐being of emulated humans (or other mammals) is at stake. While it is possible that a human could adapt to a merely adequate environment, it seems likely that it would experience such an environment as confining or lacking in sensory stimulation. Note that even in a convincing environment simulation not all details have to fit physical reality perfectly (Bostrom, 2003). Plausible simulation is more important than accurate simulation in this domain and may actually improve the perceived realism (Barzel, Hughes et al., 1996)."
For those who have not read the WBE roadmap in detail, I strongly urge doing so. It is technical and takes work. There you will see what the real issues are.
I find much of the discussion a bit frustrating as it doesn't seem to address the force of the WBE roadmap at all. Since this is what Robin bases his thinking on, it seems crucial to me to engage with it.
Regarding ems, isn't it likely that, without some special insight, we would probably have to emulate the whole body?
Reductionism tells us we can emulate a thing by emulating its constituent atoms. Yes, but if an atom has multiple possible behaviors, and its behavior is "selected" through interaction (cause and effect) with the atoms around it, then wouldn't we have to emulate them too? And so on with the atoms around those.
Where does it stop? Where the effects on the brain of actions at that distance become negligible. But surely that is not likely to be the edge of the brain. The blood, for example, flows through the brain at a great rate, having been through the rest of the body in a cycle time of minutes, "collecting" effects all along the way.
Robin, you like to compare to the transitions to industry and farming. How do you compare conflict between farmers and hunter-gatherers with conflict between polities of farmers?
It seems to me farmers were easily able to mark HG as unworthy of contract, but it may be that I'm looking too late, and it was really imbalance of power and not lifestyle that is relevant.
Ems would not exterminate humans mainly because that would threaten the institutions ems use to keep peace with each other.
I hate to get into the (silly IMO) "ems" topic - but this is only true if humans are still running those institutions. If "ems" get rights that situation might last for five, ten years, maybe. More perhaps - but probably not for very long on a historical scale.
With property rights enforced, both sides would expect to benefit more when copying was allowed. Ems would not exterminate humans mainly because that would threaten the institutions ems use to keep peace with each other.
Backing up a bit: What does 'peace' mean? We don't have institutions that keep the peace NOW. We have massive power inequalities now, and if I understand your general model, you think that the singularity will expand those differences, but less than previous major changes.
I just don't understand why you think you're painting a picture of a world that isn't a hellhole for a lot of people, if not most.
Robin: "This scenario seems quite flattering to Einstein-wannabes, making deep-insight-producing Einsteins vastly more valuable than they have ever been, even in percentage terms"
- this is a source of possible bias for people like me (or Eli, or indeed anyone who thinks they are clever and are aware of the problem) which worries me a lot. In general, people want to think of themselves as being important, having some kind of significance, etc. Under the "architecture heavy" AGI scenario, people like us would be very important. Under the "general economic progress and vast content" scenario, people like us would not be particularly important, there would be billions of small contributions from hundreds of millions of individuals in academia, in the corporate sector and in government which would collectively add up to a benign singularity, without any central plan or organization.
We are therefore prone to overestimate the probability that the first scenario is the case.
How can I compensate for such a bias?
Don Geddis's comment on another post provoked some reflections on "insight" that I'll reproduce here with a few changes.
In summary, I think we've had some major insights, and will need more. They don't typically come from a brilliant mind working alone; sometimes there is no single mind to credit, and the minds involved are never working alone.
Instead the pattern has two parts: how the insight is produced, and what it contributes:<ul><li>Insights are produced by crystallizing a pattern from existing positive and negative experiments. Typically an insight requires decades of prior experiments by a large, diverse group.</li><li>An insight rarely leads to radical improvement in any system. Instead it enables researchers to communicate better, avoid useless experiments, design more informative experiments, etc. in cases where the insight is relevant. So it increases the productivity of investigations in a specific respect.</li></ul>
Most of the insights I'll list are quite directly traceable to researchers working on a large set of related problems for decades, and sometimes beating their heads against a wall that the insight finally made visible. Note that many of the insights are negative or have major negative aspects -- essentially understanding the nature of the wall, the way the crystallization of thermodynamics made a lot of experiments obviously useless.
Here's a quick list of major insights I'm pretty sure are relevant. I've probably missed some, but I doubt the list could be twice as long.<ul><li>Information is a measurable quantity, the inverse of entropy.</li><li>Turing's ideas of abstract machines and emulation, and the later generalization to multiple realizability.</li><li>Turing's incomputability results.</li><li>Formal language hierarchy and related results.</li><li>The computational complexity hierarchy, and resulting intractability proofs for various flavors of reasoning and search.</li><li>Search and optimization as basic elements of AI systems.</li><li>Kolmogorov entropy / maximum entropy / minimum description length.</li><li>Switch from logic to statistical modeling as the conceptual language of AI.</li><li>Use of population / evolutionary methods and analysis (currently only partially worked out).</li></ul>
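The first insight on the list, that information is a measurable quantity, can be made concrete with a few lines. This is a standard Shannon-entropy calculation, not anything specific to the discussion above: the entropy of a symbol sequence's empirical distribution, in bits.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Entropy in bits of the empirical distribution over a symbol sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    # sum of p * log2(1/p) over observed symbols, with p = c/n
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy("aabb"))  # 1.0 bit: two equally likely symbols
print(shannon_entropy("aaaa"))  # 0.0 bits: no uncertainty at all
```

The point of the insight was exactly this: "how much information" stopped being a metaphor and became a number one could budget, bound, and trade off, which reshaped what experiments were worth running.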
So I agree that insight is required. If we had tried to just "muddle through" without these insights we'd be progressing very slowly, if at all.
Conversely however I think that we can't get these insights without the prior accumulated engineering efforts / experiments (successful and unsuccessful) that outline the issue to be understood.
And the insight only helps us work more effectively at the engineering / experiment level.