Wrapping Up

This Friendly AI discussion has taken more time than I planned, or have to spare.  So let me start to wrap up.

On small scales we humans evolved to cooperate via various pair and group bonding mechanisms.  But these mechanisms aren’t of much use on today’s evolutionarily-unprecedented large scales.  Yet we do in fact cooperate on the largest scales.  We do this because we are risk averse, because our values mainly conflict on resource use which conflicts destroy, and because we have the intelligence and institutions to enforce win-win deals via property rights, etc.

I raise my kids because they share my values.  I teach other kids because I’m paid to.  Folks raise horses because others pay them for horses, expecting horses to cooperate as slaves.  You might expect your pit bulls to cooperate, but we should only let you raise pit bulls if you can pay enough damages if they hurt your neighbors.

In my preferred em (whole brain emulation) scenario, people would only authorize making em copies using borrowed or rented brains/bodies when they expected those copies to have lives worth living.  With property rights enforced, both sides would expect to benefit more when copying was allowed.  Ems would not exterminate humans mainly because that would threaten the institutions ems use to keep peace with each other.

Similarly, we expect AI developers to plan to benefit from AI cooperation, via either direct control, indirect control such as via property rights institutions, or such creatures having cooperative values.  As with pit bulls, developers should have to show an ability, perhaps via insurance, to pay plausible damages if their creations hurt others.  To the extent they or their insurers fear such harm, they would test for various harm scenarios, slowing development as needed to support such testing.  To the extent they feared inequality from some developers succeeding first, they could exchange shares, or share certain kinds of info.  Naturally-occurring info-leaks, and shared sources, both encouraged by shared standards, would limit this inequality.

In this context, I read Eliezer as fearing that developers, insurers, regulators, and judges will vastly underestimate how dangerous newly developed AIs are.  Eliezer guesses that within a few weeks a single AI could grow via largely internal means from weak and unnoticed to so strong it takes over the world, with no intermediate moment when it is visible but still weak enough that others might just nuke it.  Since its growth needs little from the rest of the world, and since its resulting power is so vast, only its values would make it treat others as much more than raw materials.  But its values as seen when weak say little about its values when strong.  Thus Eliezer sees little choice but to try to design a theoretically-clean AI architecture allowing near-provably predictable values when strong, to in addition design a set of robust good values, and then to get AI developers to adopt this architecture/values combination.

This is not a choice to make lightly; declaring your plan to build an AI to take over the world would surely be seen as an act of war by most who thought you could succeed, no matter how benevolent you said its values would be.  (But yes, if Eliezer were sure, he should push ahead anyway.)  And note that most of the urgency of Eliezer’s claim comes from the fact that most of the world, including most AI researchers, disagree with Eliezer; if they agreed, AI development would likely be severely regulated, like nukes today.

On the margin this scenario seems less a concern when manufacturing is less local, when tech surveillance is stronger, and when intelligence is multi-dimensional.  It also seems less of a concern with ems, as AIs would have less of a hardware advantage over ems, and modeling AI architectures on em architectures would allow more reliable value matches.

While historical trends do suggest we watch for a several-year-long transition, sometime in the next century, to a global growth rate two or three orders of magnitude faster, Eliezer’s postulated local growth rate seems much faster.  I also find Eliezer’s growth math unpersuasive.  Usually dozens of relevant factors are co-evolving, with several loops of the form “all else equal, X growth speeds Y growth, which speeds Z growth,” and so on.  Yet usually it all adds up to exponential growth, with rare jumps to faster growth rates.  Sure, if you pick two things that plausibly speed each other and leave everything else out, including diminishing returns, your math can suggest accelerating growth to infinity; but for a real foom that loop needs to be really strong, much stronger than contrary muting effects.
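
To make the point concrete with a toy sketch (my illustration only, not Eliezer’s actual model): suppose a system’s capability \(I\) feeds back on its own growth,

\[
\frac{dI}{dt} = k\, I^{1+\epsilon} .
\]

For \(\epsilon > 0\) the solution reaches infinity in finite time, which is the foom shape; but once diminishing returns and other muting effects push \(\epsilon\) to zero or below, the very same loop yields only exponential or slower growth.  So the conclusion turns entirely on whether the reinforcing term really dominates the muting ones.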

But the real sticking point seems to be locality.  The “content” of a system is its small modular features, while its “architecture” is its most important, least modular features.  Imagine that a large community of AI developers, with real customers, mostly adheres to common architectural standards and shares common content; imagine developers try to gain more market share, that AIs mostly get better by accumulating more and better content, and that this rate of accumulation mostly depends on previous content; imagine architecture is a minor influence.  In this case the whole AI sector of the economy might grow very quickly, but it gets pretty hard to imagine one AI project zooming vastly ahead of the others.

So I suspect this all comes down to: how powerful is architecture in AI, and how many architectural insights can be found how quickly?  If there were, say, a series of twenty deep powerful insights, each of which made a system twice as effective, each giving just enough extra oomph to let the project and system find the next insight, it would add up to a factor of a million.  That would still be nowhere near enough, so imagine a lot more of them, or insights that are lots more powerful.
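
As a quick check of that compounding: twenty successive doublings multiply out to

\[
2^{20} = 1{,}048{,}576 \approx 10^{6} ,
\]

i.e. roughly the factor of a million above.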

This scenario seems quite flattering to Einstein-wannabes, making deep-insight-producing Einsteins vastly more valuable than they have ever been, even in percentage terms.  But when I’ve looked at AI research I just haven’t seen it.  I’ve seen innumerable permutations on a few recycled architectural concepts, and way too much energy wasted on architectures in systems starved for content, content that academic researchers have little incentive to pursue.  So we have come to:  What evidence is there for a dense sequence of powerful architectural AI insights?  Is there any evidence that natural selection stumbled across such things?

And if Eliezer is the outlier he seems on the priority of friendly AI, what does Eliezer know that the rest of us don’t?  If he has such revolutionary clues, why can’t he tell us?  What else could explain his confidence and passion here if not such clues?

  • http://reflectivedisequilibria.blogspot.com/ Maimonides

    “I raise my kids because they share my values.”

    Do your kids really best share your values, out of all the kids in the world?

    “Ems would not exterminate humans mainly because that would threaten the institutions ems use to keep peace with each other.”

    So biological human survival depends on ems’ inability to solve the coordination problems involved in redistributing away from vastly less capable, non-productive, vastly wealthy per capita entities whose values (e.g. opposition to personal death) are bizarre and pointless from the em perspective.

    “And note that most of the urgency of Eliezer’s claim comes from the fact that most of the world, including most AI researchers, disagree with Eliezer; if they agreed, AI development would likely be severely regulated, like nukes today.”

    This is an essential point, although the state of AI today is such that one might rate ‘AGI projects’ as having near-zero chance of success in the near term, and intervention needless or counterproductive.

  • Cameron Taylor

    When you ask what Eliezer knows that the rest of ‘us’ don’t, to which ‘us’ do you refer exactly? I confess the final paragraph tripped my bullshit detectors. I got the impression that you were slipping in some dubious implied premises to reach a conclusion that was also not fit to be presented explicitly.

    The remainder of the summary impressed me. It presented the key issues, even if I don’t necessarily agree with your conclusions.

    The early comments on how humans manage to cooperate on a far broader level than the one our instincts were created for were painfully close to insight. I followed the reasoning, found the pit bull analogy perfect in the context, and yet right near the natural conclusion you fumble the ball!

    We can handle the pit bull scenario, as suggested, with damages and insurance. It is natural to consider using the same economic leash on AI developers. Yet it is here that we run into difficulty. As Carl has similarly observed, I have no interest in being entitled to money in a situation in which all parties are no longer present. This kind of deterrent becomes implausible in the case where a single instance of the disaster breaks the whole game. What point is there in pit bull insurance when a single escaped pit bull will reliably tear the insurer limb from limb?

    “On small scales we humans evolved to cooperate via various pair and group bonding mechanisms. But these mechanisms aren’t of much use on today’s evolutionarily-unprecedented large scales. Yet we do in fact cooperate on the largest scales. We do this because we are risk averse, because our values mainly conflict on resource use which conflicts destroy, and because we have the intelligence and institutions to enforce win-win deals via property rights, etc.”

    That, right there, sums up the situation today perfectly. Alas, this situation of healthy, relatively stable competition isn’t an inevitable equilibrium toward which all optimisation processes converge. With the potential for different forms of conflict, utility functions less constrained by our tribal heritage, and human weaknesses no longer universal, the stability we see now is unlikely to persist.

    We can trust analogy from our experiences in human economies about as much as we can trust our intuitions of group selection. Neither evolution nor economics will hold our hand here. This is an adult problem. If we want some semblance of our values to remain in the distant future we need to find a way to make it so. Eliezer has taken the approach of ‘shut up and do the impossible’. I really hope that works out for him. I haven’t got a better idea.

  • http://hanson.gmu.edu Robin Hanson

    Maimonides, I value my genes, and my kids share those. Yes, the idle rich have always relied on inabilities of the poor to coordinate to exterminate them.

    Cameron, “us” refers to non-outliers. Do let us know if you identify the dubious premises. I’m not following you on “different forms of conflict” and “human weakness no longer universal.”

  • Daniel Burfoot

    I think your estimate of twenty deep powerful insights is way too high. In my view about five such deep insights are required, and three of them have already been made:

    1. Capacity control/structural risk minimization/MDL, by Vapnik, Rissanen, et al.
    2. Bayesian belief propagation, by Pearl et al.
    3. The view of the brain as a generative model, by Hinton, Friston, et al.

    There are a few others waiting right over the horizon.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Robin, since you raise the question of the value of Friendly AI research, depending on whether the whole scenario is possible: given the stakes, how sure are you that hard takeoff, or any other form of development where technology takes over, making its values the main factor, is impossible? Is economic understanding of this situation that relevant, if it still allows a 1% chance of error, the scenario that leads to the loss of the future? Dismissing this approach as a policy requires pretty high certainty in the impossibility of the scenario, certainty which I don’t see as possible without much stronger scientific understanding of the whole problem than we have now. Even if you rationally expect non-humane takeover to be unlikely, the stakes should turn the resulting policy upside down, unless you are damn sure it’s Pascal’s wager. All it takes is a weak feasibility argument.

  • http://reflectivedisequilibria.blogspot.com/ Maimonides

    Vladimir,

    I believe Robin has said less than 1%, and that that is enough for people like Eliezer to spend time thinking about it.

  • http://causalityrelay.wordpress.com Vladimir Nesov

    Yes, I remember that. But how much is it enough for? <1% isn’t nearly that low. With just <1% humanity should be much more enthusiastic about trying to understand intelligence than it is now. Should we discuss 1% vs. 70%, or how rational we are about handling even that 1%? It sounds like these discussions get intuitively conflated.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    On small scales we humans evolved to cooperate via various pair and group bonding mechanisms. But these mechanisms aren’t of much use on today’s evolutionarily-unprecedented large scales. Yet we do in fact cooperate on the largest scales. We do this because we are risk averse, because our values mainly conflict on resource use which conflicts destroy, and because we have the intelligence and institutions to enforce win-win deals via property rights, etc.

    Individual organisms are adaptation-executers, not fitness-maximizers. We seem to have a disagreement-of-fact here; I think that our senses of honor and of internalized group morality are operating to make us honor our agreements with trade partners and internalize certain capitalist values. If human beings were really genuinely selfish, the economy would fall apart or at least have to spend vastly greater resources policing itself – think Zimbabwe and other failed states where police routinely stop buses to collect bribes from all passengers, but without the sense of restraint: the police just shoot you and loot your corpse unless they expect to be able to extract further bribes from you in particular.

    I think the group coordination mechanisms, executing as adaptations, are critical to the survival of a global economy between imperfect minds of our level, that cannot simultaneously pay attention to everyone who might betray us.

    In this case the whole AI sector of the economy might grow very quickly, but it gets pretty hard to imagine one AI project zooming vastly ahead of others.

    Robin, you would seem to be leaving out a key weak point here. It’s much easier to argue that AIs don’t zoom ahead of each other, than to argue that the AIs as a collective don’t zoom ahead of the humans. To the extent where, if AIs lack innate drives to treasure sentient life and humane values, it would be a trivial coordination problem and a huge net benefit to all AIs to simply write the statue-slow, defenseless, noncontributing humans out of the system.

  • Cameron Taylor

    (Robin) Do let us know if you identify the dubious premises.

    This was in reference to the final paragraph, as shown below:
    (Robin) And if Eliezer is the outlier he seems on the priority of friendly AI, what does Eliezer know that the rest of us don’t? If he has such revolutionary clues, why can’t he tell us? What else could explain his confidence and passion here if not such clues?

    As I understand it, that argument assumes or implies each of the following:
    – Eliezer’s claims in this argument are more outlying than Robin’s claims have been. (In the context of the entire uninformed human species I would grant this. In the context of the OB commenters my observations would lead me to the reverse conclusion.)
    – ‘We’re’ normal. Eliezer is weird. Eliezer must justify himself to us, rather than the reverse.
    – Eliezer has not explained or justified his claims here. “Why can’t he tell us?”, etc. (This does not fit my observations in the slightest. For example, I see you rejecting offhand the quite clear and very nearly obvious explanation of recursion. I see no non-status-based reason for this dismissal.)
    – Eliezer is more biased than me, more biased than *us*; we should ally socially to find excuses to discredit him.

    (Robin)I’m not following you on “different forms of conflict” and “human weakness no longer universal.”

    Ems don’t have human wars. They don’t have human penalties for conflict. “cp -r /soldier_em102 em_slate_20000” is cheaper than replacing a human casualty with education and training. There is no problem of controlling ‘conquered’ nations. You can just EMP the place and build new ems. AIs need not suffer human productivity penalties for enslavement. Losing every allied em except one in a battle for the world would be a fantastic outcome, not a Pyrrhic victory of cataclysmic proportions.

    As Eliezer said, we’re adaptation executers, not fitness maximisers. For this reason, and others, we come with a whole bunch of penalties for conflict built in. Our economic stability relies on these weaknesses. It would be crazy to assume that a late-generation em or a direct AGI would have competitive behavior remotely as stable as that which we observe in humans.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer: If human beings were really genuinely selfish, the economy would fall apart or at least have to spend vastly greater resources policing itself … group coordination mechanisms, executing as adaptations, are critical to the survival of a global economy … it would be a trivial coordination problem and a huge net benefit to all AIs to simply write the statue-slow, defenseless, noncontributing humans out of the system

    Here you disagree with most economists, including myself, about the sources and solutions of coordination problems. Yes genuinely selfish humans would have to spend more resources to coordinate at the local level, because this is where adapted coordinations now help. But larger scale coordination would be just as easy. Since coordination depends crucially on institutions, AIs would need to preserve those institutions as well. So AIs would not want to threaten the institutions they use to keep the peace among themselves. It is far from easy to coordinate to exterminate humans while preserving such institutions. Also, why assume AIs not explicitly designed to be friendly are in fact “really genuinely selfish”?

  • James Andrix

    I hope that ‘wrapping up’ doesn’t mean you’re going to end the conversation without resolving the disagreement.

    How do institutions of slow stupid people control AIs or ems? They would need their own institutions. People try to start cyber nations NOW, when they’re just logging in with a keyboard, monitor, and mouse. But now they don’t have the teeth or the strong incentive to protect their cyber interests.

    Yes, the idle rich have always relied on inabilities of the poor to coordinate to exterminate them.

    That has been known to fail. Sometimes in very little subjective time.

  • Ian C.

    If a human being was an organism with a 5 minute lifespan, it might be in your interest to rob people like in Zimbabwe. But since we live for 80 odd years, it might just be in our “genuine selfish interest” to try and create a prosperous country instead.

  • Unknown

    In regard to Cameron’s comments:

    In relation to the human race, and in relation to both informed and uninformed people in the world at large, Eliezer’s opinions are outlying in comparison to Robin’s.

    However, it is true that most of the commenters on this blog are supporting Eliezer: but this is because they are cult members accepting every word that comes from the great Master Eliezer.

  • luzr

    Eliezer:

    “To the extent where, if AIs lack innate drives to treasure sentient life and humane values, it would be a trivial coordination problem and a huge net benefit to all AIs to simply write the statue-slow, defenseless, noncontributing humans out of the system.”

    I fail to see the “huge net benefit”. Can you elaborate?

  • http://jed.jive.com/ Jed Harris

    Don Geddis’s comment on another post provoked some reflections on “insight” that I’ll reproduce here with a few changes.

    In summary, I think we’ve had some major insights, and will need more. They don’t typically come from a brilliant mind working alone; sometimes there is no single mind to credit, and the minds involved are never working alone.

    Instead the pattern has two parts: how the insight is produced, and what it contributes:

    • Insights are produced by crystallizing a pattern from existing positive and negative experiments. Typically an insight requires decades of prior experiments by a large, diverse group.
    • An insight rarely leads to radical improvement in any system. Instead it enables researchers to communicate better, avoid useless experiments, design more informative experiments, etc. in cases where the insight is relevant. So it increases the productivity of investigations in a specific respect.

    Most of the insights I’ll list are quite directly traceable to researchers working on a large set of related problems for decades, and sometimes beating their heads against a wall that the insight finally made visible. Note that many of the insights are negative or have major negative aspects — essentially understanding the nature of the wall, the way the crystallization of thermodynamics made a lot of experiments obviously useless.

    Here’s a quick list of major insights I’m pretty sure are relevant. I’ve probably missed some, but I doubt the list could be twice as long.

    • Information is a measurable quantity, the inverse of entropy.
    • Turing’s ideas of abstract machines and emulation, and the later generalization to multiple realizability.
    • Turing’s incomputability results.
    • Formal language hierarchy and related results.
    • The computational complexity hierarchy, and resulting intractability proofs for various flavors of reasoning and search.
    • Search and optimization as basic elements of AI systems.
    • Kolmogorov complexity / maximum entropy / minimum description length.
    • Switch from logic to statistical modeling as the conceptual language of AI.
    • Use of population / evolutionary methods and analysis (currently only partially worked out).

    So I agree that insight is required. If we had tried to just “muddle through” without these insights we’d be progressing very slowly, if at all.

    Conversely however I think that we can’t get these insights without the prior accumulated engineering efforts / experiments (successful and unsuccessful) that outline the issue to be understood.

    And the insight only helps us work more effectively at the engineering / experiment level.

  • http://www.transhumangoodness.blogspot.com Roko

    Robin: “This scenario seems quite flattering to Einstein-wannabes, making deep-insight-producing Einsteins vastly more valuable than they have ever been, even in percentage terms”

    – this is a source of possible bias for people like me (or Eli, or indeed anyone who thinks they are clever and is aware of the problem) which worries me a lot. In general, people want to think of themselves as being important, having some kind of significance, etc. Under the “architecture heavy” AGI scenario, people like us would be very important. Under the “general economic progress and vast content” scenario, people like us would not be particularly important; there would be billions of small contributions from hundreds of millions of individuals in academia, in the corporate sector, and in government, which would collectively add up to a benign singularity, without any central plan or organization.

    We are therefore prone to overestimate the probability that the first scenario is the case.

    How can I compensate for such a bias?

  • James Andrix

    With property rights enforced, both sides would expect to benefit more when copying was allowed. Ems would not exterminate humans mainly because that would threaten the institutions ems use to keep peace with each other.

    Backing up a bit: What does ‘peace’ mean? We don’t have institutions that keep the peace NOW. We have massive power inequalities now, and if I understand your general model, you think that the singularity will expand those differences, but less than previous major changes.

    I just don’t understand why you think you’re painting a picture of a world that isn’t a hellhole for a lot of people, if not most.

  • Tim Tyler

    Ems would not exterminate humans mainly because that would threaten the institutions ems use to keep peace with each other.

    I hate to get into the (silly IMO) “ems” topic – but this is only true if humans are still running those institutions. If “ems” get rights that situation might last for five, ten years, maybe. More perhaps – but probably not for very long on a historical scale.

  • Douglas Knight

    Robin,
    you like to compare to the transitions to industry and farming. How do you compare conflict between farmers and hunter-gatherers with conflict between polities of farmers?

    It seems to me farmers were easily able to mark HG as unworthy of contract, but it may be that I’m looking too late, and it was really imbalance of power and not lifestyle that is relevant.

  • Ian C.

    Regarding ems, isn’t it likely that, without some special insight, we would probably have to emulate the whole body?

    Reductionism tells us we can emulate a thing by emulating its constituent atoms. Yes, but if an atom has multiple possible behaviors, and its behavior is “selected” through interaction (cause and effect) with the atoms around it, then wouldn’t we have to emulate them too? And so on with the atoms around those.

    Where does it stop? Where the effects on the brain of actions at this distance become negligible. But surely that is not likely to be the edge of the brain. The blood for example, flows through the brain at a great rate, having been through the rest of the body in a cycle time of minutes, “collecting” effects all along the way.

  • frelkins

    @Ian C

    we would probably have to emulate the whole body?

    The Whole Brain Emulation roadmap discusses this in its own section, p. 74

    “Simulating a realistic human body is kinematically possible today, requiring computational power ranging between workstations and mainframes. For simpler organisms such as nematodes or insects correspondingly simpler models could (and have) been used. Since the need for early WBE is merely adequate body simulation, the body does not appear to pose a major bottleneck.”

    The roadmap also notes that an environment may be required:

    “Convincing environments might be necessary only if the long‐term mental well‐being of emulated humans (or other mammals) is at stake. While it is possible that a human could adapt to a merely adequate environment, it seems likely that it would experience such an environment as confining or lacking in sensory stimulation. Note that even in a convincing environment simulation not all details have to fit physical reality perfectly (Bostrom, 2003). Plausible simulation is more important than accurate simulation in this domain and may actually improve the perceived realism (Barzel, Hughes et al., 1996).”

    For those who have not read the WBE roadmap in detail, I urge it strongly. It is technical and takes work. There you will see what the real issues are.

    I find much of the discussion a bit frustrating as it doesn’t seem to address the force of the WBE roadmap at all. Since this is what Robin bases his thinking on, it seems crucial to me to engage with it.

  • http://hanson.gmu.edu Robin Hanson

    Roko, that is the big question here.

    James, stupid idle rich humans are pretty safe now. Not perfectly safe, of course. I don’t consider the em world I describe to be a hell-hole, but don’t want to get distracted on that topic at the moment.

    Tim, I don’t see why you think ems would be so aggressive.

    Douglas, what is HG?

    Ian, no, whole body emulation seems unnecessary.

  • James Andrix

    Ian:
    In our experience with alterations made to the body, either through accident or design, changes to the brain have the most direct and detailed impact on thought.

    People with artificial hearts can think, and so on for just about every organ. It is likely that it would be trivial to make very computationally efficient virtual organs that work as well as or better than the real ones (probably by just mimicking their final effects).

  • TGGP

    I think by HG Douglas meant hunter-gatherers.

  • Ian C.

    Thank you all for the comments on whole-body emulation. I will look again at the WBE roadmap. I would have thought at least simulated blood would be required but perhaps not.

  • mjgeddes

    The big weak point is definitely the ‘locality’ idea Robin, not the ‘hard take off’ itself.

    The idea that one person or a few people on their own can somehow develop an entirely new ‘localized’ complex thing that is largely independent of the wider community seems wildly improbable (no one is smart enough).

    That’s why the universal parser/ontological approach is definitely a major alternative strategy, because it can draw on the insights of everyone in the wider IT community for sharing all the individual ideas in an integrated framework.

    Roko, here’s the way to do it:

    Instead of trying to develop all the ‘machinery’ for AGI from scratch, don’t work on the ‘machinery’ at all. Instead, develop a specialized language (a parser/ontology) enabling sharing and integration of all the various narrow IT domains – this way, you are in effect only designing the levers, whilst borrowing all the underlying machinery from everyone else.

    What I love about this idea is that with an effective ontology (a ‘universal parser’), all the world’s other IT researchers would in effect be working for me… with the right ontology I can simply plagiarize all their insights (how nice of academia to publish everything in open source journals for me!), and in effect, use their brains as ‘botnets’, which are linked via my effective ontology for sharing of cognitive content (I am simply pushing the levers, they’re supplying all the underlying machinery). LOL It’s the ultimate hack.

  • http://hanson.gmu.edu Robin Hanson

    Douglas, property rights institutions were pretty primitive during the farming transition. In an ideal peaceful transition farmers would have bought land from hunters, or sold farming techniques to hunters. As it was though, info leak of farming technique meant it wasn’t just farmers wiping out hunters – hunters also copied farming.

  • http://www.transhumangoodness.blogspot.com Roko

    M Geddes: “What I love about this idea is that with an effective ontology (a ‘universal parser’), all the world’s other IT researchers would in effect be working for me… with the right ontology I can simply plagiarize all their insights (how nice of academia to publish everything in open source journals for me!)”

    Unfortunately, this task is much harder than it seems. Creating ontologies that actually have the flexibility, accuracy and coverage required is an open problem that has foxed the Cyc project for 25 years. There are entire communities of researchers working on problems of (a) creation of upper ontologies, (b) learning ontologies from text, (c) mapping between ontologies and (d) actually doing inference over ontologies. The biggest problem (as I see it) is that there is a bad mismatch between the world of formal logic which allows one to give meaning to terms, and the world of statistics and probability which allows you to approximate things. If you have no notion of approximation, you can’t leverage the powerful computers and large amounts of data we have on the internet, and you will be reduced to writing ontologies by hand. If you have no notion of meaning or semantics, you will end up creating a meaningless resource which can’t perform even the most basic inferences, or you will end up with a probability distribution over a narrowly defined set of outcomes that don’t even come close to providing the generality required to understand an “arbitrary” situation.

    Basically, building an “ontology” which can represent “anything” is a very hard problem in itself.

    If you have any ideas about how to make progress in this area, then do get in touch. I’ve spent a couple of months researching this, and I am becoming increasingly distressed by how hard it is to get anywhere.

    Robin: I might do a post on this issue: “Advice for Wannabe Einsteins”

  • Tim Tyler

    I don’t see why you think ems would be so aggressive.

    Who said anything about aggression? Like I said earlier: Warfare is different from winning.

    You do not necessarily have to fight to prevail – all that is needed is for your competitors to not have as many kids as you do. Of course, there might be fights – but they do not seem like a critical element.

  • Will Pearson

    We have an ontology that can represent anything: it is called a Turing-complete language…

    We even have lots of people encoding knowledge in it; they are called programmers.

    Integration is another problem, but the brain is not completely integrated either. We need to develop programs that can understand the knowledge encoded in other programs, and programs to maintain that knowledge. Some way of constraining the system, while changing the internal programs, so that it has some purpose, would probably also be useful.
