Author Archives: Robin Hanson

Neglecting Win-Win Help

Consider three kinds of acts:

  • S. Selfish – helps you, and no one else.
  • A. Altruistic – helps others, at a cost to you.
  • M. Mixed – helps others, and helps you.

To someone who is honestly and simply selfish, acts of type A would be by far the least attractive. All else equal, such people would do fewer acts of type A, relative to the other types, because they don’t care about helping others.

To someone who is honestly and simply altruistic, in contrast, acts of type M should be the most attractive. All else equal, such a person should more often do acts of type M, relative to the other types. A simply altruistic person is happy to help others while helping themselves.

Now consider someone who wants to show others that they are altruistic and not selfish. To such a person, type M acts have a serious problem: since both selfish and altruistic people often do type M acts, observers may plausibly attribute their behavior to selfishness. Compared to a simply altruistic person, a person of this type finds type A acts more attractive, and type M acts less attractive. They want everyone to see them suffering, to show they are not selfish.
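
To see this signaling logic in a minimal way, here is a toy Bayes calculation (a sketch with made-up numbers, not from the post) showing why a costly type A act is a strong signal of altruism while a win-win type M act is not:

```python
# Toy Bayes calculation (hypothetical numbers, for illustration only):
# an observer infers "altruist vs. selfish" from one observed act.

prior_altruist = 0.5  # assumed share of genuinely altruistic people

# Assumed chances that each type does a given kind of act:
p_act_given_altruist = {"A": 0.4, "M": 0.6}  # altruists do both costly and win-win acts
p_act_given_selfish  = {"A": 0.0, "M": 0.6}  # selfish people skip costly type A acts

def posterior_altruist(act):
    """P(altruist | observed act), by Bayes' rule."""
    p_alt = p_act_given_altruist[act] * prior_altruist
    p_sel = p_act_given_selfish[act] * (1 - prior_altruist)
    return p_alt / (p_alt + p_sel)

print(posterior_altruist("A"))  # 1.0: only altruists pay the cost, so A signals strongly
print(posterior_altruist("M"))  # 0.5: both types do M, so the observer learns nothing
```

With numbers like these, a type M act leaves the observer at their prior, which is why someone who mainly wants to look altruistic tilts toward type A acts.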

In fact, most people do seem to care just as much about seeming altruistic as about being altruistic. I thus predict a neglect of acts of type M, relative to acts of type A. For example:

  • Having kids. Observers often don’t credit parents for being altruistic toward their kids. They instead describe parents as selfishly wanting to enjoy their kids’ attention and devotion.
  • Having lovers. In a world of monogamous romantic pairs, someone who chooses not to pair up can force someone else to also go without a partner. So choosing to be part of a pair helps others. But observers often don’t credit romantic partners for altruism toward partners. They instead say lovers selfishly seek pleasure and flattery.
  • Inventing. While people in some kinds of professions are credited with choosing them in part to help others, people in other professions are not so credited, even when they give a lot of help. For example, nurses are often credited with altruism, but inventors usually are not, even though inventors often give far more help to the world, perhaps because inventing seems more fun than nursing.
  • Marginal charity. Adjusting private optima a bit in the direction of social good helps others at almost no cost to yourself, but is hard for observers to distinguish from not doing so.
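
One way to make the marginal-charity point concrete (my notation, not from the post): let $u(x)$ be your private payoff from choice $x$, let $b(x)$ be the benefit to others, and let $x^*$ maximize $u$, so $u'(x^*) = 0$. For a small tilt $\varepsilon$ toward the social good,

$$ u(x^* + \varepsilon) \;\approx\; u(x^*) + \tfrac{1}{2}\,u''(x^*)\,\varepsilon^2, \qquad b(x^* + \varepsilon) \;\approx\; b(x^*) + b'(x^*)\,\varepsilon, $$

so your private loss is second order in $\varepsilon$ while the help to others is first order. That is both why the tilt is nearly free to you and why observers can barely tell that you made it.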

In sum, the more eager we are to show others that we care, the less eager we are to do things that both help us and help others. We instead do more things that help others while hurting us, so that we can distinguish ourselves from selfish people. Because of this we neglect win-win acts like having kids, being in love, and inventing. Which seems a shame.

Added 8a: Seems I’ve said something like this before, as did Katja Grace even earlier. Seems I’ve written more than I can keep track of.


Why Do We So Seek Synch?

We economists are known for being “imperial” in trying to apply economics to almost everything. And that’s a goal I can get behind, in the sense of trying to find an integrated view of the social world, where all social phenomena have a place and some candidate explanations within a common framework. Of course many parts of this integrated view may start first in fields outside economics.

In pursuit of such an integrated view, I’ve been making a special effort to learn more about social phenomena that economists don’t talk much about. And since a lot of these phenomena are often associated with the words “play” and “ritual”, and it is sociologists who most seem to write about these things, I’ve been reading a lot of sociology.

Sixteen months ago I posted about an intriguing summary of Randall Collins’ book Interaction Ritual Chains:

Any physical gathering … turns into a ritual when those physically present focus their attention on specific people, objects, or symbols, and are thereby constituted as a distinct group with more or less clear boundaries. …

A ritual, for Collins, is basically an amplifier of emotion. … A successful ritual generates and amplifies motivating emotions. … Perhaps Collins’ most controversial claim is the idea that we are basically emotional energy “seekers”: much of our social activity can be understood as a largely unconscious “flow” along the gradient of maximal emotional energy charge for us, given our particular material resources and positions within the … set of ritual situations available to us. Our primary “motivation” is the search for motivation. … Motivation is simply a result of emotional amplification in ritual situations. …

Emotional charge or motivational energy is built up from entrainment: the micro-coordination of gesture, voice, and attention in rhythmic activity, down to tiny fractions of a second. Think of how in an engrossing conversation the partners are wholly attuned to one another, laughing and exhibiting emotional reactions simultaneously, keeping eye contact, taking turns at precisely the right moments, mirroring each other’s reactions. … Or consider sexual acts, to which Collins devotes a long and very interesting chapter. (more)

I’ve now read this book carefully, twice. Here is my report. Continue reading "Why Do We So Seek Synch?" »


Automation vs. Innovation

We don’t yet know how to make computer software that is as flexibly smart as human brains. So when we automate tasks, replacing human workers with computer-guided machines, we usually pay large costs in flexibility and innovation. The new automated processes are harder to change to adapt to new circumstances. Software is harder to change than mental habits; it takes longer to conceive and implement software changes; and such changes require the coordination of larger organizations. The people who write software are further from the task, and so are less likely than human workers to notice opportunities for improvement.

This is a big reason why it will take automation a lot longer to replace human workers than many recent pundits seem to think. And this isn’t just abstract theory. For example, some of the most efficient auto plants are the least automated. Read more about Honda auto plants:

[Honda] is one of the few multinational companies that has succeeded at globalization. Their profit margins are high in the auto industry. Almost everywhere they go — over 5 percent profit margins. In most markets, they consistently are in the top 10 of specific models that sell. They’ve never lost money. They’ve been profitable every year. And they’ve been around since 1949. …

Soichiro Honda, the founder of the company … was one of the world’s greatest engineers. And yet he never graduated college. He believed that hands-on work as an engineer is what it takes to be a great manufacturer. … Continue reading "Automation vs. Innovation" »


Part Of Something Big

A hero is someone who has given his or her life to something bigger than oneself. Joseph Campbell

Most Twitter talk reminds me of the movie Ridicule, wherein courtiers compete to show cruel wit and cynicism. This makes me crave a simple direct conversation on something that matters.

So I pick this: being part of something larger than yourself. This is a commonly expressed wish. But what does it mean?

Here are some clues: Judging from Google-found quotes, common satisfactory “things” include religions, militaries, political parties, and charities. For most people “the universe” seems too big and “my immediate family” seems too small. And neither seems idealistic enough. “All utilitarians” is idealistic enough, but seems insufficiently coherent as a group. The words “part” and “thing” are suspiciously vague, suggesting that several elements are in play, some of which people are more willing to admit than others.

Here’s my interpretation: We want to be part of a strong group that has our back, and we want to support and promote ideals. But these preferences aren’t independent, to be satisfied separately. We especially want to combine them, and be a valued part of a group that supports good ideals.

So we simultaneously want all these things:

  1. We are associated with an actual group of people.
  2. These people concretely relate to each other.
  3. This group is credibly seen as really supporting some ideals.
  4. We embrace those ideals, and find them worth our sacrifice.
  5. Our help with this group’s ideals would be noticed and appreciated.
  6. If outsiders resist our help, the group will have our back.
  7. The group is strong enough to have substantial help to give.
  8. The group doesn’t do wrongs that outweigh its support of its ideals.
  9. Both the group and its ideals are big in the scheme of things.

Since this is a lot of constraints, the actual groups that exist are unlikely to satisfy them all. So we compromise. Some people see almost all big coherent groups as easily corrupted, and so only accept small groups. For some, group bonding is so important that they’ll compromise on the ideals, or accept weak evidence that the group actually supports its ideals. If group strength is important enough to them, they may not require any ideals. For others, the ideal is everything, and they’ll accept a weak group defined abstractly as “everyone who truly supports this ideal.” Finally, for some, being appreciated is so important that they’ll take the thing the world seems to most appreciate about them and accept a group and ideal defined around that.

If this is right, then just talking about which ideals are best and how to achieve them somewhat misses the point. Also somewhat missing the point is talk about how to make strong well-bonded groups. If people typically want these two things together, then the actual design problem is how to achieve good ideals via a strong well-bonded group.

Which isn’t a design problem I hear people talk about much. Some presume that if they can design a good enough ideal, a good group will naturally collect around it. Others presume that if they can design a good enough way for groups to coordinate, groups will naturally coordinate to achieve good ideals. But how reasonable are these assumptions?

If we focus on explaining this preference instead of satisfying it, a homo hypocritus framework fits reasonably well. Coalition politics is central to what we really want, but when it is cheap we’d rather appear to focus on supporting ideals, and only incidentally pick groups to help us in that quest.


Open Thread

This is our monthly place to discuss relevant topics that have not appeared in recent posts.


Irreducible Detail

Our best theories vary in generality. Some theories are very general, but most are more context specific. Putting all of our best theories together usually doesn’t let us make exact predictions about most variables of interest. We often express this fact formally in our models via “noise,” which represents other factors that we can’t yet predict.

For each of our theories there was a point in time when we didn’t have it yet. Thus we expect to continue to learn more theories, which will let us make more precise predictions. And so it might seem that there is no limit on our eventual powers of prediction; maybe we will someday have theories powerful enough to predict everything exactly.

But that doesn’t seem right either. Our best theories in many areas tell us about fundamental limits on our prediction abilities, and thus limits on how powerful future simple general theories could be. For example:

  • Thermodynamics – We can predict some gross features of future physical states, but the entropy of a system sets a very high (negentropy) cost on learning precise info about the state of that system (see the note after this list). If thermodynamics is right, there will never be a general theory that lets one predict future states more cheaply than this.
  • Finance – Finance theory has identified many relevant parameters for predicting the overall distribution of future asset returns. However, finance theory strongly suggests that it is usually very hard to predict the details of the specific future returns of specific assets. The ability to do so would be worth such a huge amount that there just can’t be many who often have such an ability. The cost to gain such an ability must usually exceed the gains from trading on it.
  • Cryptography – A well-devised code looks random to an untrained eye. As there are a great many possible codes, and a great many ways to find weaknesses in them, it doesn’t seem like there could be any general way to break all codes. Instead code breaking is a matter of knowing lots of specific things about codes and the ways they might be broken. People use codes when they expect the cost of breaking them to be prohibitive, and such expectations are usually right.
  • Innovation – Economic theory can predict many features of economies, and of how economies change and grow. And innovation contributes greatly to growth. But economists also strongly expect that the details of particular future innovations cannot be predicted except at a prohibitive cost. Since knowing of innovations ahead of time can often be used for great private profit, and would speed up the introduction of those innovations, it seems that no cheap-to-apply simple general theories can exist which predict the details of most innovations well ahead of time.
  • Ecosystems – We understand some ways in which parameters of ecosystems correlate with their environments. Most of these make sense in terms of general theories of natural selection and genetics. However, most ecologists strongly suspect that the vast majority of the details of particular ecosystems and the species that inhabit them are not easily predictable by simple general theories. Evolution says that many details will be well matched to other details, but to predict them you must know much about the other details to which they match.
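
A note on the thermodynamics entry above: one standard way to quantify its “negentropy cost” (a textbook bound, not something from this post) is Landauer’s principle, which says that acquiring and then erasing one bit of information about a system at temperature $T$ costs at least

$$ E \;\ge\; k_B T \ln 2 \;\approx\; 3 \times 10^{-21} \text{ J per bit at room temperature,} $$

so learning a large system’s exact microstate carries an energy cost that no cleverer general theory can eliminate.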

In thermodynamics, finance, cryptography, innovations, and ecosystems, we have learned that while there are many useful generalities, the universe is also chock full of important, irreducible, incompressible detail. As this is true at many levels of abstraction, I would add this entry to the above list:

  • Intelligence – General theories tell us what intelligence means, and how it can generalize across tasks and contexts. But most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. Ordinary software also gets smart by containing many powerful modules. While the architecture that organizes those modules can make some difference, that difference is mostly small compared to having more and better modules. In a world of competing software firms, most ways to improve modules or find new ones cost more than the profits they’d induce.

If most value in intelligence comes from the accumulation of many expensive parts, there may well be no powerful general theories to be discovered to revolutionize future AI, and give an overwhelming advantage to the first project to discover them. Which is the main reason that I’m skeptical about AI foom, the scenario where an initially small project quickly grows to take over the world.

Added 7p: Peter McCluskey has thoughtful commentary here.


Adam Ford & I on Great Filter

Adam Ford interviewed me again, this time on the Great Filter:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically, most xrisk efforts (of which I’m aware) focus on AI risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.


Lost For Words, On Purpose

When we use words to say how we feel, knowing more relevant concepts and distinctions lets us express our feelings more precisely. So you might think that the number of relevant distinctions we can express on a topic rises with that topic’s importance. That is, the more we care about something, the more distinctions we can make about it.

But consider the two cases of food and love/sex (which I’m lumping together here). It seems to me that while these topics are of comparable importance, we have a lot more ways to clearly express distinctions on foods than on love/sex. So when people want to express feelings on love/sex, they often retreat to awkward analogies and suggestive poetry. Two different categories of explanations stand out here:

1) Love/sex is low dimensional. While we care a lot about love/sex, there are only a few things we care about. Consider money as an analogy. While money is important, and finance experts know a great many distinctions, for most people the key relevant distinction is usually more vs. less money; the rest is detail. Similarly, evolution theory suggests that only a small number of dimensions about love/sex matter much to us.

2) Clear love/sex talk looks bad.  Love/sex is supposed to involve lots of non-verbal talk, so a verbal focus can detract from that. We have a norm that love/sex is to be personal and private, a norm you might seem to violate via comfortable impersonal talk that could easily be understood if quoted. And if you only talk in private, you learn fewer words, and need them less. Also, a precise vocabulary used clearly could make it seem like what you wanted from love/sex was fungible – you aren’t so much attached to particular people as to the bundle of features they provide. Precise talk could make it easier for us to consciously know what we want when, which makes it harder to self-deceive about what we want. And having more precise words available for our love/sex relations could force us to acknowledge smaller changes in relation status — if “love” is all there is, you can keep “loving” someone even as many things change.

It seems to me that both kinds of things must be going on. Even when we care greatly about a topic, we may not care about many dimensions, and we may be better off not being able to express ourselves clearly.


Conflicting Abstractions

My last post seems an example of an interesting general situation: abstractions from different fields conflicting on a particular topic. In that case the topic was the relative growth rate feasible for a small project hoping to create superintelligence, and the abstractions that seem to conflict are the ones I use, drawn mostly from economics, and the abstractions drawn from computer practice and elsewhere used by Bostrom, Yudkowsky, and many other futurists.

What typically happens when it seems that abstractions from field A suggest X, while abstractions from field B suggest not X? Well first, since X and not X can’t both be true, each field would likely see this as a threat to its good reputation. If they were forced to accept the existence of the conflict, then they’d likely try to denigrate the other field. If one field is higher status, the other field would expect to lose a reputation fight, and so would be especially eager to reject the claim that a conflict exists.

And in fact, it should usually be possible to reject a claim that a conflict exists. The judgement that a conflict exists would come from specific individuals studying the questions of whether A suggests X and whether B suggests not X. One could just suggest that some of those people were incompetent at analyzing the implications of the abstractions of particular fields. Or that they were talking past each other and misunderstanding what X and not X mean to the other. So one would need especially impeccable credentials to publicly make these claims and make them stick.

The ideal package of expertise for investigating such an issue would be expertise in both fields A and B. This would position one well to notice that a conflict exists, and to minimize the chance of problems arising from misunderstandings about what X means. Unfortunately, our institutions for crediting expertise don’t do well at encouraging combined expertise. For example, patrons are often interested in the intersection between fields A and B, and sponsor conferences, journal issues, etc. on this intersection. However, seeking maximal prestige, they usually prefer people with the most prestige in each field over people who actually know both fields simultaneously. Anticipating this, people usually choose to stay within a single field.

Anticipating this whole scenario, people will usually avoid seeking out or calling attention to such conflicts. To seek out or pursue a conflict, you’d have to be especially confident that your field would back you up in a fight, because your credentials are impeccable and the field thinks it could win a status contest with the other field. And even then you’d have to waste some time studying a field that your field doesn’t respect. Even if you win the fight, you might lose prestige in your field.

This is unfortunate, because such conflicts seem especially useful clues to help us refine our important abstractions. By definition, abstractions draw inferences from reduced descriptions, descriptions which ignore relevant details. Usually that is useful, but sometimes that leads to errors when the dropped details are especially relevant. Intellectual progress would probably be promoted if we could somehow induce more people to pursue apparent conflicts between the abstractions from different fields.


I Still Don’t Get Foom

Back in 2008 my ex-co-blogger Eliezer Yudkowsky and I discussed his “AI foom” concept, a discussion that we recently spun off into a book. I’ve heard for a while that Nick Bostrom was working on a book elaborating related ideas, and this week his Superintelligence was finally available to me to read, via Kindle. I’ve read it now, along with a few dozen reviews I’ve found online. Alas, only the two reviews on GoodReads even mention the big problem I have with one of his main premises, the same problem I’ve had with Yudkowsky’s views. Bostrom hardly mentions the issue in his 300 pages (he’s focused on control issues).

All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to being so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain. Continue reading "I Still Don’t Get Foom" »
