19 Comments

You have the causality backwards. We have only one type of mind; therefore we get the simple economy, and the simple economic theory, that are possible with homogeneous agents. Our economy is the degenerate case.

My guess is that you think cooperation and specialization are at odds because you're thinking in terms of game theory, which assumes that agents are indivisible and interchangeable. In the em world, we don't even know how to assign responsibility for an action to an agent, or how to compute the change in social utility when an agent copies itself or gives itself an upgrade, or how to define death or identity.

A single human can be regarded as a collection of AIs. The hand is specialized, yet cooperates with the eye.

I notice you did not elaborate on the question, even though at least three of the commenters asked it.

I suppose you mean that since much of economics follows from simple rationality assumptions, the results don't depend on the specific characteristics of the human mind: insofar as those characteristics deviate from rationality, they can only count as irrationalities.

But this assumes that AI will only subtract irrationalities and not add any. It ignores the likelihood that overcoming the irrationalities in human behavior will merely substitute different irrationalities, or that, for other reasons, AI will need to implement irrationalities of its own.

Perhaps this is partly based on a latent assumption that perfect rationality is a physical possibility. (For rebuttal, see "Buridan’s ass and the psychological origins of objective probability" — http://tinyurl.com/bqrjjxh )

It is the ordinary presumption among economists, not at all extraordinary there.

Robin, you are making a highly counterintuitive, even extraordinary claim. Would you care to elaborate on the particulars?

Not obviously, no.

But the tendency toward nepotism and especially the crowding out of distant cooperation by local cooperation--these things *do* strongly depend on the particulars of human minds, right?

AI minds might carve at different joints than humans do. Remember that a human consists of both circuitry needed to calculate goals, including goals evolved in clan-based societies, and circuitry for solving various technical tasks. An AI, even one inspired by a human upload, might be dissected to separate these parts, and such parts could be traded between larger entities. In this way goal systems could remain stable within a group of related AI minds, even as trade with other such groups provides the flexibility and diversity needed to solve various technical problems, thus enabling both close cooperation and specialization/flexibility. The relevant analogy might be bacteria that are clonal yet capable of exchanging task-relevant information (e.g. antibiotic resistance genes) with unrelated bacteria.

It is too much of a simplification to expect that the uploaded world will consist of human-like minds still under evolutionary pressures similar to those that shaped us. Everything could change, because the nitty-gritty technical details that affect fitness in the new substrate are likely to differ from the physical and chemical constraints we evolved under.

"I agree that giant em clans come with big costs, though I think you understate the gains. I expect that e.g. better reputation systems won't come with significant costs in terms of flexibility or specialization, because I don't see any good reason why they would."

Without clans you need another mechanism to distribute society's gains back to its members (like social security and organized charity). If this doesn't exist, then most individuals would be worse off than in a clan society, because most of the gains would go to a handful of people at the top.

Coordination is hard. Cooperating in prisoner's-dilemma-type situations is only a small part of coordination.

What ever happened to "cooperation is hard"? I mean, looking at humans, in practice even the minimally difficult cooperation with one's near-future self seems to be hard enough to be one of the main determinants of people's outcomes.

It would be interesting for you to catalog all the "impossibility theorems" that limit the amount of "easy" cooperation: the Prisoner's Dilemma, Arrow, Myerson-Satterthwaite, Gibbard-Satterthwaite, Holmström's theorem, etc. I'm sure there must be others.
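
For concreteness, here is a minimal sketch (in Python) of the simplest item on that list, the one-shot Prisoner's Dilemma. The payoff numbers are illustrative choices, not anything from the thread; the point is just that defection is a best reply to either move, so the unique equilibrium leaves both players worse off than mutual cooperation.

```python
from itertools import product

# payoffs[(my_move, their_move)] = my payoff, with the usual PD ordering
# T (temptation) > R (reward) > P (punishment) > S (sucker's payoff)
payoffs = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: I cooperate, they defect
    ("D", "C"): 5,  # T: I defect, they cooperate
    ("D", "D"): 1,  # P: mutual defection
}

def best_reply(their_move):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda my_move: payoffs[(my_move, their_move)])

# Defection is a best reply to both C and D, so (D, D) is the unique
# Nash equilibrium, even though (C, C) pays each player more.
equilibria = [
    (a, b)
    for a, b in product("CD", repeat=2)
    if best_reply(b) == a and best_reply(a) == b
]
print(equilibria)  # [('D', 'D')]
```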

Yes, the success of our methods of cooperating probably depends on many details, but that is likely to be true for all the other possible ways to cooperate as well. I doubt there is a single general solution for all contexts. Glad your theory suggests reputation can be taken a long way, though I suspect that result also depends on some details.

I don't think that cooperation necessarily comes with costs, but for now and the foreseeable future I guess it will in fact come with costs, because all the ways I know to do it come with such costs. Yes, capturing more gains in a modestly broader range of contexts is a more reasonable goal and expectation.

Talking and reputation seem like good solutions, but how well do such solutions work in the long run? Many people have the intuition that the success of those systems to date depends on altruism, humans' inability to lie, humans' insufficient cutthroatness, the absence of sophisticated manipulators or large-scale Sybil attacks, benevolent regulators, etc. I think libertarians tend to think that everything will just work out, but I think the average American intellectual disagrees.

The general question is "how much can you achieve with reputation systems?" and it seems like many different answers are a priori conceivable, including "much less than modern society achieves." This has been the subject of my recent research, and I now think that as a theoretical question the answer is "everything you could hope for."

It seems obvious that some forms of cooperation come with costs, such as increased rigidity. But I don't see why you would infer that cooperation itself comes with costs. This seems like a particularly extreme and indefensible form of the reasoning "Nature trades off X for Y, so probably it is impossible to obtain X without trading off Y." I agree that giant em clans come with big costs, though I think you understate the gains. I expect that e.g. better reputation systems won't come with significant costs in terms of flexibility or specialization, because I don't see any good reason why they would. Of course I wouldn't expect such schemes to "change everything"; I would mostly hope for them to let us capture familiar gains from cooperation in a modestly broader range of contexts (which still may be a big gain).
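
To make the reputation question concrete, here is a minimal sketch of the textbook repeated-game calculation by which a reputation-like mechanism (grim trigger: cooperate until anyone defects, then punish forever) can sustain cooperation. The payoff numbers and the grim-trigger strategy are illustrative assumptions, not a description of the actual research; the standard result is that cooperation is stable whenever the discount factor delta is at least (T - R) / (T - P).

```python
T, R, P, S = 5, 3, 1, 0  # illustrative temptation/reward/punishment/sucker payoffs

def discounted(stream, delta, horizon=10_000):
    """Approximate the discounted sum of a payoff stream over a long horizon."""
    return sum(stream(t) * delta**t for t in range(horizon))

def cooperate_forever(t):
    return R  # both players keep cooperating every period

def deviate_once(t):
    return T if t == 0 else P  # grab T once, then grim trigger punishes with P forever

threshold = (T - R) / (T - P)  # 0.5 with these payoffs
for delta in (0.3, 0.5, 0.7):
    coop = discounted(cooperate_forever, delta)
    dev = discounted(deviate_once, delta)
    print(f"delta={delta}: cooperate={coop:.2f}, deviate={dev:.2f}, "
          f"stable={coop >= dev} (threshold={threshold})")
```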

How does the strong local cooperation of clans interfere with specialization?

I think you have it backwards. It is cooperation outside the clan that allows specialization. Clans are a tradeoff between strong local cooperation and weak distant cooperation. New cooperation technology could increase both.
