Are AIs Homo Economicus?

Eliezer yesterday:

If I had to pinpoint a single thing that strikes me as “disagree-able” about the way Robin frames his analyses, it’s that there are a lot of opaque agents running around, little black boxes assumed to be similar to humans, but there are more of them and they’re less expensive to build/teach/run.  … The core of my argument has to do with what happens when you pry open the black boxes that are your economic agents, and start fiddling with their brain designs, and leave the tiny human dot in mind design space.

Lots of folks complain about economists; believers in peak oil, the gold standard, recycling, electric cars, rent control, minimum wages, tariffs, and bans on all sorts of things complain about contrary economic analyses.  Since, compared to most social scientists, economists use relatively stark mathy models, the usual complaint is that our models neglect relevant factors and make false assumptions.

But of course we must neglect most everything, and make false assumptions, to have tractable models; the question in each context is what neglected factors and false assumptions would most mislead us.

It is odd to hear complaints that economic models assume too much humanity; the usual complaint is the opposite.  Unless physicists have reasons to assume otherwise, they usually assume masses are at points, structures are rigid, surfaces are frictionless, and densities are uniform.  Similarly, unless economists have reasons to be more realistic in a context, they usually assume people are identical, risk-neutral, live forever, have selfish material stable desires, know everything, make no mental mistakes, and perfectly enforce every deal.  Products usually last one period or forever, are identical or infinitely varied, etc.

Of course we often do have reasons to be more realistic, considering deals that may not be enforced, people who die, people with diverse desires, info, abilities, and endowments, people who are risk-averse, altruistic, or spiteful, people who make mental mistakes, and people who follow “behavioral” strategies.  But the point isn’t just to add as much realism as possible; it is to be clever about knowing which sorts of detail are most relevant in what context.

So to a first approximation, economists can’t usually tell if the agents in their models are AIs or human!  But we can still wonder: how could economic models better capture AIs?  In common with ems, AIs could make copies of themselves, save backups, and run at varied speeds.  Beyond ems, AIs might buy or sell mind parts, and reveal mind internals, to show commitment to actions or honesty of stated beliefs.  Of course:

That might just push our self-deception back to the process that produced those current beliefs.  To deal with self-deception in belief production, we might want to provide audit trails, giving more transparency about the origins of our beliefs.

Since economists feel they understand the broad outlines of cooperation and conflict pretty well using simple stark models, I am puzzled to hear Eliezer say:

If human beings were really genuinely selfish, the economy would fall apart or at least have to spend vastly greater resources policing itself … group coordination mechanisms, executing as adaptations, are critical to the survival of a global economy.

We think we understand just fine how genuinely selfish creatures can cooperate.  Sure, they might have to spend somewhat more on policing, but not vastly more, and a global economy could survive just fine.  This seems an important point, as it seems to be why Eliezer fears even non-local AI fooms.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    The main part you’re leaving out of your models (on my view) is the part where AIs can scale on hardware by expanding their brains, and scale on software by redesigning themselves, and these scaling curves are much sharper than “faster” let alone “more populous”. Aside from that, of course, AIs are more like economic agents than humans are.

    My statement about “truly selfish humans” isn’t meant to be about truly selfish AIs, but rather, truly selfish entities with limited human attention spans, who have much worse agency problems than an AI that can monitor all its investments simultaneously and inspect the source code of its advisers. The reason I fear non-local AI fooms is precisely that they would have no trouble coordinating to cut the legacy humans out of their legal systems.

  • http://occludedsun.wordpress.com Caledonian

    ‘Are’? I’d think ‘will be’ would be a better verb choice, since no AIs currently exist.

    Likewise, it is difficult to determine what AIs might or might not be, since we know so little about what would be necessary to create them and what limits exist on their properties.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, economists assume that every kind of product can be improved, in terms of cost and performance, and we have many detailed models of product innovation and improvement. The hardware expansion and software redesign that you say I leave out seem to me included in the mind parts that can be bought or sold. How easy it is to improve such parts, and how much better parts add to mind productivity, is exactly the debate we’ve been having.

  • http://reflectivedisequilibria.blogspot.com/ Assumption

    The assumption of exogenous secure property rights is a problem in this discussion, especially in light of various economic literatures that treat government policy and property rights as endogenous.

  • James Andrix

    There should be a fair bit of historical data on the kinds of innovations we expect AIs to make to improve themselves (faster chips, better algorithms), and how much those innovations cost in equipment, time, researchers, education levels, etc.

    We talk about technological hurdles and steep payoffs abstractly. Maybe we should just pretend that an AGI was developed decades ago, and figure out how long it would take it to get to where we are, if it took roughly the same path.

  • pookleblinky

    “unless economists have reasons to be more realistic in a context, they usually assume people are identical, risk-neutral, live forever, have selfish material stable desires, know everything, make no mental mistakes, and perfectly enforce every deal. Products usually last one period or forever, are identical or infinitely varied, etc.”

    I know some Austrians who would disagree with almost every word of this.

  • Tiiba

    Suppose a spherical cow…

  • http://t-a-w.blogspot.com/ Tomasz Wegrzanowski

    The usual complaint is that your models neglect relevant factors, make false assumptions, turn out to be empirically wrong, and that you keep following the models instead of reality.

    For example, the Ricardian comparative-advantage theory of trade, which you have praised many times on this blog, clearly predicts that most trade will happen between countries with significantly different economies. In reality most trade happens between virtually identical developed economies, which makes no sense whatsoever under Ricardian analysis.
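    For concreteness, here is a minimal sketch of the Ricardian logic being criticized, using made-up productivity numbers (the country names, figures, and code are purely illustrative, not data):

```python
# Illustrative (made-up) numbers: hours of labor needed per unit of each good.
hours = {
    "Home":    {"wine": 1.0, "cloth": 2.0},
    "Foreign": {"wine": 6.0, "cloth": 3.0},
}

def opportunity_cost(country, good, other):
    """Units of `other` forgone to produce one unit of `good`."""
    return hours[country][good] / hours[country][other]

for country in hours:
    print(f"{country}: one unit of wine costs "
          f"{opportunity_cost(country, 'wine', 'cloth')} units of cloth")

# Home: 0.5 cloth per wine; Foreign: 2.0 cloth per wine.
# Because the opportunity costs differ, each country gains by specializing
# (Home in wine, Foreign in cloth) and trading at any ratio in between.
# With identical ratios the model predicts no Ricardian gains at all,
# which is exactly the puzzle about rich-country trade raised above.
```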

    So how can economists base their advice (in this case – more free trade always) on models like that which fail all empirical tests? If you start including relevant factors, and fix false assumptions, so that models finally make correct predictions, how do you know they will still predict that free trade is good for everyone in every situation?

    Ricardian theories of trade, and free trade advice are just one example. It’s infuriating how economists keep doing stuff like that all the time, making up theories that have simple math but match reality very badly, and then using them to advocate policies.

  • Tim Tyler

    Re: why fear?  Viruses can wipe out whole species – it does not seem intrinsically silly to consider the possibility of something like that happening to us at the hands of a malevolent superintelligent agent. Of course, it does seem very likely that humans would successfully act to prevent such an event – but that doesn’t mean it isn’t worth considering.

    The other main associated problematic scenario involves success at enslaving the superintelligences – but then failing to build a migration path for surviving humans. As the superintelligences ascend they would inevitably compete with humans for resources. The humans would have to become superintelligences themselves if they wanted to retain their role as the dominant organisms.

    Genetic engineering and uploading appear to be the paths – but genetic engineering of humans builds on an appalling foundation, and doesn’t make much sense – while uploading may arrive on the scene late – and uploads would have a hard time competing without very major reconstructive brain surgery.

    If no path is successfully built for the humans, most of them will probably have a hard time economically – and will probably be pushed into the fringes of society. Something very similar seems likely – even if the machines are built in such a way that they love us, honour our every request, and do us no harm. In that case, we would eventually become like parasites on the machines – agents that suck resources, while providing little benefit to the hosts. That situation would probably lack long-term stability.

  • luzr

    “As the superintelligences ascend they would inevitably compete with humans for resources.”

    This is elementary anthropomorphic bias. Are we speaking about superintelligence here, or about Hitler’s WWII Germany?

  • http://ynglingasaga.wordpress.com Rolf Andreassen

    Would you like to point to an intelligent entity which does not compete for resources?

  • luzr

    Rolf:

    1) There is no other known intelligent entity than humans. If humans compete for resources, does it imply that any intelligent entity does?

    2) If anything can be learned from history and economics, the real price of resources (as compared to the price of human work) has been going down for at least the last 200 years. The reason is that smarter technology brings new ways to obtain more resources at lower prices. It is not too hard to extrapolate this trend accelerating if there is an AI capable of devising even smarter ways to gather them.

  • http://shagbark.livejournal.com Phil Goetz

    Tim’s post touches on probably my biggest disagreement with Eliezer, which is about what is worth saving.

    I would have expected anyone who thinks a great deal about AI to agree with me, that what is worth saving is not our bodies, or our genes, but our values and aesthetics. That we should be at least as happy to transfer our memes to the next generation and die, as all previous human generations have been to transfer their genes to the next generation and die. But I would have been wrong.

    (To the people who have protested in the past that Eliezer isn’t talking about saving fleshly humans, I quote Eliezer from a recent post: “It’s much easier to argue that AIs don’t zoom ahead of each other, than to argue that the AIs as a collective don’t zoom ahead of the humans. To the extent where, if AIs lack innate drives to treasure sentient life and humane values, it would be a trivial coordination problem and a huge net benefit to all AIs to simply write the statue-slow, defenseless, noncontributing humans out of the system.”)

  • Tim Tyler

    Regarding competing with humans for resources: in practice all organisms compete for resources – or else die out. Machines share our ecosystem. Molecules cannot be part of both a human and a machine – so there’s a natural conflict of interests over who gets what between the gene-based entities and those based around the new replicators. I don’t think this is anthropomorphism – rather it’s based on the Malthusian idea of resource limitation.

  • luzr

    “rather it’s based on the Malthusian idea of resource limitation”

    Actually, the Malthusian catastrophe that never happened, and the accepted explanation of that phenomenon, are the basis of my claim that for a superintelligence, resources are next to irrelevant.

    “Machines share our ecosystem.”

    The relevant question is: how big is our ecosystem? If you count only the areas actually populated by people and the resources currently economically exploited, then yes, there might be a problem. But what is the point for an AI, which does not depend on gravity, food, air, etc., of competing for resources in the same area?

  • Nick Tarleton

    luzr, please read The Basic AI Drives.

  • Tim Tyler

    Self-reproducing systems grow exponentially. Resources grow at best at t^3 (with the light cone). To escape Malthusian resource limitation, you need to limit growth – which has much the same effect as resource limitation (which also acts to limit growth).
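    A toy calculation makes the point concrete (the doubling time and scale factor below are arbitrary assumptions, not estimates); whatever constants you choose, a doubling process eventually overtakes any cubic bound:

```python
# Toy comparison with assumed constants: replicators doubling once per time
# unit versus a resource pool growing as the cube of time (roughly, the
# volume of the reachable light cone).
def replicators(t, doubling_time=1.0):
    return 2.0 ** (t / doubling_time)

def reachable_resources(t, scale=1e12):
    return scale * t ** 3

t = 1.0
while replicators(t) < reachable_resources(t):
    t += 1.0
print(f"With these constants, the replicators outgrow the t^3 bound after ~{t:.0f} doublings.")
```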

    Civilization has been resource limited from the beginning. For example, if you were not resource limited, winning a billion dollar lottery would have no effect on your actions.

    In my view, machines are currently effectively competing with humans for energy, space, and many chemical elements – and have been doing so for at least 200 years.

  • michael vassar

    Phil: Eliezer has been explicit on this on MANY occasions, to the point of claiming that uploads are a type of human, not a type of AGI for instance. I don’t know why you seem stuck on misreading him.

  • Cameron Taylor

    Consider a counterfactual question: All humans suddenly have no intrinsic desire for status. Luxury becomes meaningless. Lust is replaced with an explicit desire to reproduce. Would current economic systems remain even remotely secure?

    My suspicion would be no. Our obsession with social games is a massive distraction. It also focuses our competition in a relatively ‘safe’ arena. Without these training wheels our political and economic assumptions would be irrelevant. AIs, or even later-generation ems, would have these differences. They would also have the self-modification differences that Eliezer mentioned above.

    Assuming that humans would be remotely safe in such an environment is reckless. We have no reason to assume we’d even be kept around for our historical significance. Aesthetic attachment to historical creatures is another human quirk that AIs need not be assumed to have.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Cameron, don’t you think economists might know something about how behavior would change without status or luxury desires? And how exactly do you know ems or AIs would not have these things?

    Assumption, no one made that assumption.

  • luzr

    “Self-reproducing systems grow exponentially.”

    Not all.

    “Resources grow at best at t^3 (with the light cone). To escape Malthusian resource limitation, you need to limit growth – which has much the same effect as resource limitation (which also acts to limit growth).”

    Pushed to limits, this of course is true. But, pushed to limits, there is also a “final limit” to any growth, be it exponential or linear (the size of the reachable universe).

    Any superintelligence worthy of the name should know that. So, when going through FOOM, any AGI, if rational and with a sense of self-preservation, would be very careful not to kill its goal system by imposing exponential growth.

    I still stand behind my point that resource conflicts would only be possible if our wannabe strong AGI is, uhm, kind of stupid….

    “In my view, machines are currently effectively competing with humans for energy, space, and many chemical elements – and have been doing so for at least 200 years.”

    That is definitely true (although one would comment that machine desires are a bit different; humans do not need as much iron, copper, and silicon). Anyway, the net effect of this development is that there are far MORE resources available, especially those relevant for human bodies and useless for machines.

    BTW: Do you think that winning a “billion dollar lottery” makes you consume a billion / your_current_income times more raw resources?

  • Cameron Taylor

    “Cameron, don’t you think economists might know something about how behavior would change without status or luxury desires?”

    Robin, I expect there is work at the fringes of economics that would give valuable insight into that situation. Could you point me at a significant paper on that explicit topic that you consider worthwhile and makes the kind of assumptions and reasoning that I may benefit from?

    Unfortunately, I also know that the disadvantage of expertise is that it tends to make people overconfident in their understanding of things outside their field. When it comes to commenting outside the bounds of their professional knowledge, I expect experts in economics to overrate the importance of their field. It’s what humans do.

    Economic research and understanding are incredibly biased towards actual human behavior. Even work that deals with societies of specific counterfactual entities will be biased. People are less likely to publish conclusions that would be considered ‘silly’ and are more likely to publish theories that validate the core dogmas of the field. What incentive does an economics researcher have to publish a paper that concludes “almost all of our core political values as a profession wouldn’t apply in this situation”? That’s the sort of naivety that leaves someone either burnt out or ostracized soon enough.

  • Cameron Taylor

    PS: I agree the irony here is huge! It would be extremely frustrating to be constantly bombarded with claims that your ‘homo economicus’ assumption makes you irrelevant in the ‘real world’. Then, to hear almost the reverse claim would be infuriating!

    Nevertheless, I just don’t feel comfortable with how casually you account for the change from biological humans to self-modifying AIs that keep even the legal system of property rights of their human creators. For my part, I would need some extremely strong arguments to convince me that humans can comfortably rely on legacy property rights to ensure their long-term survival. Given the scope of possible actions that superintelligent entities of unknown motives could take, assuming that property rights for humans remain in the long term seems like science fiction.