Regulating Infinity

As a professor of economics in the GMU Center for the Study of Public Choice, I am, like my colleagues, well aware of the many long, detailed disputes over the proper scope of regulation.

On the one hand, the last few centuries have seen increasing demands for, and expectations of, government regulation. A wider range of things that might happen without regulation are seen as intolerable, and our increasing ability to manage large organizations and systems of surveillance is seen as making us increasingly capable of discerning relevant problems and managing regulatory solutions.

On the other hand, some don’t see many of the “problems” regulations are set up to address as legitimate ones for governments to tackle. And others see and fear regulatory overreach, wherein perhaps well-intentioned regulatory systems actually make most of us worse off, via capture, corruption, added costs, and slowed innovation.

The poster-children of regulatory overreach are 20th century totalitarian nations. Around 1900, many were told that the efficient scale of organization, coordination, and control was rapidly increasing, and nations who did not follow suit would be left behind. Many were also told that regulatory solutions were finally available for key problems of inequality and inefficient resource allocation. So many accepted and even encouraged their nations to create vast intrusive organizations and regulatory systems. These are now largely seen to have gone too far.

Of course there have no doubt been other cases of regulatory under-reach; I don’t presume to settle this debate here. In this post I instead want to introduce jaded students of regulatory debates to something a bit new under the sun, namely a newly-prominent rationale and goal for regulation that has recently arisen in a part of the futurist community: stopping preference change.

In history we have seen change not only in technology and environments, but also in habits, cultures, attitudes, and preferences. New generations often act not just like the same people thrust into new situations, but like new kinds of people with new attitudes and preferences. This has often intensified intergenerational conflicts; generations have argued not only about who should consume and control what, but also about which generational values should dominate.

So far, this sort of intergenerational value conflict has been limited due to the relatively mild value changes that have so far appeared within individual lifetimes. But at least two robust trends, and perhaps a third, suggest the future will have more value change, and thus more conflict:

  1. Longer lifespans – Holding other things constant, the longer people live the more generations will overlap at any one time, and the more different will be their values.
  2. Faster change – Holding other things constant, a faster rate of economic and social change will likely induce values to change faster as people adapt to these social changes.
  3. Value plasticity – It may become easier for our descendants to change their values, all else equal. This might be via stronger ads and schools, or direct brain rewiring. (This trend seems less robust.)

These trends robustly suggest that toward the end of their lives future folk will more often look with disapproval at the attitudes and behaviors of younger generations, even as these older generations have a smaller proportional influence on the world. There will be more “Get off my lawn! Damn kids got no respect.”
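
To make the first two trends concrete, here is a minimal toy calculation, with made-up numbers of my own choosing: if values drift at a roughly constant rate per year, then the value gap between the oldest and youngest living generations scales with lifespan times drift rate, so longer lives and faster change multiply together.

```python
# Toy illustration: how lifespan and the rate of value change combine.
# All numbers here are made-up assumptions, for illustration only.

def value_gap(lifespan_years, drift_per_year):
    """Rough value gap between the oldest and youngest living cohorts,
    assuming values drift at a constant rate and generations overlap
    for roughly one lifespan."""
    return lifespan_years * drift_per_year

baseline = value_gap(lifespan_years=80, drift_per_year=0.005)   # today-ish lifespan, slow drift
future = value_gap(lifespan_years=150, drift_per_year=0.02)     # longer lives, faster change

print(f"baseline gap: {baseline:.2f}")  # 0.40, in arbitrary 'value distance' units
print(f"future gap:   {future:.2f}")    # 3.00, several times more room for conflict
```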

The futurists who most worry about this problem tend to assume a worst possible case. (Supporting quotes below.) That is, without a regulatory solution we face the prospect of quickly sharing the world with daemon spawn of titanic power who share almost none of our values. Not only might they not like our kind of music, they might not like music. They might not even be conscious. One standard example is that they might want only to fill the universe with paperclips, and rip us apart to make more paperclip materials. Futurists’ key argument: the space of possible values is vast, with most points far from us.

This increased intergenerational conflict is the new problem that tempts some futurists today to consider a new regulatory solution. And their preferred solution: a complete totalitarian takeover of the world, and maybe the universe, by a new super-intelligent computer.

You heard that right. Now to most of my social scientist colleagues, this will sound bonkers. But like totalitarian advocates of a century ago, these new futurists have a two-pronged argument. In addition to suggesting we’d be better off ruled by a super-intelligence, they say that a sudden takeover by such a computer will probably happen no matter what. So as long as we have to figure out how to control it, we might as well use it to solve the intergenerational conflict problem.

Now I’ve already discussed at some length why I don’t think a sudden (“foom”) takeover by a super intelligent computer is likely (see here, here, here). Nor do I think it obvious that value change will generically put us face-to-face with worst case daemon spawn. But I do grant that increasing lifespans and faster change are likely to result in more intergenerational conflict. And I can also believe that as we continue to learn just how strange the future could be, many will be disturbed enough to seek regulation to prevent value change.

Thus I accept that our literatures on regulation should be expanded to add one more entry, on the problem of intergenerational value conflict and related regulatory solutions. Some will want to regulate infinity, to prevent the values of our descendants from eventually drifting away from our values to parts unknown.

I’m much more interested here in identifying this issue than in solving it. But if you want my current opinion, it is that today we are just not up to the level of coordination required to usefully control value changes across generations. And even if we were up to the task, I’m not at all sure the gains would be worth the quite substantial costs.

Added 8a: Some think I’m unfair to the fear-AI position in calling AIs our descendants and describing them in terms of lifespan, growth rates, and value plasticity. But surely the fact that AIs are made of metal, or made in factories, isn’t directly what causes concern. I’ve tried to identify the relevant factors, but if you think I’ve missed the key ones, do tell me what I’ve missed.

Added 4p: To try to be even clearer, the standard worrisome foom scenario has a single AI that grows in power very rapidly and whose effective values drift rapidly away from ones that initially seemed friendly to humans. I see this as a combination of such AI descendants having faster growth rates and more value plasticity, which are two of the three key features I listed.

Added 15Sep: A version of this post appeared as:

Robin Hanson, Regulating Infinity, Global Government Venturing, pp.30-31, September 2014.

The promised supporting quotes appear in the continuation of this post.

Neglecting Win-Win Help

Consider three kinds of acts:

  • S. Selfish – helps you, and no one else.
  • A. Altruistic – helps others, at a cost to you.
  • M. Mixed – helps others, and helps you.

To someone who is honestly and simply selfish, acts of type A would be by far the least attractive. All else equal, such people would do fewer acts of type A, relative to the other types, because they don’t care about helping others.

To someone who is honestly and simply altruistic, in contrast, acts of type M should be the most attractive. All else equal, such a person should more often do acts of type M, relative to the other types. A simply altruistic person is happy to help others while helping themself.

Now consider someone who wants to show others that they are altruistic and not selfish. To such a person, type M acts have a serious problem: since both selfish and altruistic people often do type M acts, observers may plausibly attribute their behavior to selfishness. Compared to a simply altruistic person, a person of this type finds type A acts more attractive, and type M acts less attractive. They want everyone to see them suffering, to show they are not selfish.
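
Here is a minimal sketch of these three preference orderings, with made-up payoff numbers and an assumed reputation discount; it illustrates the ranking logic above, not actual magnitudes.

```python
# Toy model of how three agent types rank the act types S, A, and M.
# Payoff numbers and the reputation penalty are illustrative assumptions.

acts = {
    "S (selfish)":    {"self": 1.0, "others": 0.0},
    "A (altruistic)": {"self": -0.5, "others": 1.0},
    "M (mixed)":      {"self": 1.0, "others": 1.0},
}

def selfish(a):
    return a["self"]

def altruist(a):
    return a["self"] + a["others"]

def signaler(a):
    # Values helping others, but an act that also pays the actor
    # can be attributed to selfishness, so it earns little credit.
    credit = a["others"] if a["self"] <= 0 else 0.2 * a["others"]
    return a["self"] + 2.0 * credit  # cares a lot about seeming altruistic

for name, score in [("simply selfish", selfish),
                    ("simply altruistic", altruist),
                    ("wants to seem altruistic", signaler)]:
    ranking = sorted(acts, key=lambda k: score(acts[k]), reverse=True)
    print(f"{name:25s} prefers: {ranking}")
# The simply altruistic agent ranks M first; the reputation-minded
# agent demotes M below A, as described above.
```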

In fact, most people do seem to care just as much about seeming altruistic as about being altruistic. I thus predict a neglect of acts of type M, relative to acts of type A. For example:

  • Having kids. Observers often don’t credit parents for being altruistic toward their kids. They instead describe parents as selfishly wanting to enjoy the kids’ attention and devotion.
  • Having lovers. In a world of monogamous romantic pairs, someone who chooses not to pair up can force someone else to also go without a partner. So choosing to be part of a pair helps others. But observers often don’t credit romantic partners for altruism toward partners. They instead say lovers selfishly seek pleasure and flattery.
  • Inventing. While people in some kinds of professions are credited with choosing them in part to help others, people in other professions are not so credited, even when they give a lot of help. For example, nurses are often credited with altruism, but inventors are usually not so credited, even though inventors often give a lot more help to the world. Perhaps this is because inventing seems more fun than nursing.
  • Marginal charity. Adjusting private optima a bit in the direction of social good helps others at almost no cost to yourself, but is hard for observers to distinguish from not doing so.

In sum, the more eager we are to show others that we care, the less eager we are to do things that both help us and help others. We instead do more things that help others while hurting us, so that we can distinguish ourselves from selfish people. Because of this we neglect win-win acts like having kids, being in love, and inventing. Which seems a shame.

Added 8a: Seems I’ve said something like this before, as did Katja Grace even earlier. Seems I’ve written more than I can keep track of.

Why Do We So Seek Synch?

We economists are known for being “imperial” in trying to apply economics to almost everything. And that’s a goal I can get behind, in the sense of trying to find an integrated view of the social world, where all social phenomena have a place and some candidate explanations within a common framework. Of course many parts of this integrated view may start first in fields outside economics.

In pursuit of such an integrated view, I’ve been making a special effort to learn more about social phenomena that economists don’t talk much about. And since a lot of these phenomena are often associated with the words “play” and “ritual”, and it is sociologists who most seem to write about these things, I’ve been reading a lot of sociology.

Sixteen months ago I posted about an intriguing summary of Randall Collins’ book Interaction Ritual Chains:

Any physical gathering … turns into a ritual when those physically present focus their attention on specific people, objects, or symbols, and are thereby constituted as a distinct group with more or less clear boundaries. …

A ritual, for Collins, is basically an amplifier of emotion. … A successful ritual generates and amplifies motivating emotions. … Perhaps Collins’ most controversial claim is the idea that we are basically emotional energy “seekers”: much of our social activity can be understood as a largely unconscious “flow” along the gradient of maximal emotional energy charge for us, given our particular material resources and positions within the … set of ritual situations available to us. Our primary “motivation” is the search for motivation. … Motivation is simply a result of emotional amplification in ritual situations. …

Emotional charge or motivational energy is built up from entrainment: the micro-coordination of gesture, voice, and attention in rhythmic activity, down to tiny fractions of a second. Think of how in an engrossing conversation the partners are wholly attuned to one another, laughing and exhibiting emotional reactions simultaneously, keeping eye contact, taking turns at precisely the right moments, mirroring each other’s reactions. … Or consider sexual acts, to which Collins devotes a long and very interesting chapter. (more)

I’ve now read this book carefully, twice. My report appears in the continuation of this post.

Automation vs. Innovation

We don’t yet know how to make computer software that is as flexibly smart as human brains. So when we automate tasks, replacing human workers with computer-guided machines, we usually pay large costs in flexibility and innovation. The new automated processes are harder to change to adapt to new circumstances. Software is harder to change than mental habits: it takes longer to conceive and implement software changes, and such changes require the coordination of larger organizations. The people who write software are further from the task, and so are less likely than human workers to notice opportunities for improvement.

This is a big reason why it will take automation a lot longer to replace human workers than many recent pundits seem to think. And this isn’t just abstract theory. For example, some of the most efficient auto plants are the least automated. Read more about Honda auto plants:

[Honda] is one of the few multinational companies that has succeeded at globalization. Their profit margins are high in the auto industry. Almost everywhere they go — over 5 percent profit margins. In most markets, they consistently are in the top 10 of specific models that sell. They’ve never lost money. They’ve been profitable every year. And they’ve been around since 1949. …

Soichiro Honda, the founder of the company … was one of the world’s greatest engineers. And yet he never graduated college. He believed that hands-on work as an engineer is what it takes to be a great manufacturer. …

Part Of Something Big

“A hero is someone who has given his or her life to something bigger than oneself.” (Joseph Campbell)

Most Twitter talk reminds me of the movie Ridicule, wherein courtiers compete to show cruel wit and cynicism. This makes me crave a simple direct conversation on something that matters.

So I pick this: being part of something larger than yourself. This is a commonly expressed wish. But what does it mean?

Here are some clues: Judging from Google-found quotes, common satisfactory “things” include religions, militaries, political parties, and charities. For most people “the universe” seems too big and “my immediate family” seems too small. And neither seem idealistic enough. “All utilitarians” is idealistic enough, but seems insufficiently coherent as a group. The words “part” and “thing” here are suspiciously vague, suggesting that there are several elements here, some of which people are more willing to admit than others.

Here’s my interpretation: We want to be part of a strong group that has our back, and we want to support and promote ideals. But these preferences aren’t independent, to be satisfied separately. We especially want to combine them, and be a valued part of a group that supports good ideals.

So we simultaneously want all these things:

  1. We are associated with an actual group of people.
  2. These people concretely relate to each other.
  3. This group is credibly seen as really supporting some ideals.
  4. We embrace those ideals, and find them worth our sacrifice.
  5. Our help to this group’s ideals would be noticed, appreciated.
  6. If outsiders resist our help, the group will have our back.
  7. The group is strong enough to have substantial help to give.
  8. The group doesn’t do wrongs that outweigh its support of those ideals.
  9. Both the group and its ideals are big in the scheme of things.

Since this is a lot of constraints, the actual groups that exist are unlikely to satisfy them all. So we compromise. Some people see most all big coherent groups as easily corrupted, and so only accept small groups. For some, group bonding is so important they’ll compromise on the ideals, or accept weak evidence that the group actually supports its ideals. If group strength is important enough to them, they may not require any ideals. For others, the ideal is everything, and they’ll accept a weak group defined abstractly as “everyone who truly supports this ideal.” Finally, for some being appreciated is so important that they’ll take the thing the world seems to most appreciate about them and accept a group and ideal defined around that.

If this is right then just talking about what are the best ideals and how to achieve them somewhat misses the point. Also somewhat missing the point is talk about how to make strong well-bonded groups. If people typically want the two of these things together, then the actual design problem is how to achieve good ideals via a strong well-bonded group.

Which isn’t a design problem I hear people talk about much. Some presume that if they can design a good enough ideal, a good group will naturally collect around it. Others presume that if they can design a good enough way for groups to coordinate, groups will naturally coordinate to achieve good ideals. But how reasonable are these assumptions?

If we focus on explaining this preference instead of satisfying it, a homo hypocritus framework fits reasonably well. Coalition politics is central to what we really want, but if cheap we’d rather appear to focus on supporting ideals, and only incidentally pick groups to help us in that quest.

Open Thread

This is our monthly place to discuss relevant topics that have not appeared in recent posts.

Irreducible Detail

Our best theories vary in generality. Some theories are very general, but most are more context specific. Putting all of our best theories together usually doesn’t let us make exact predictions on most variables of interest. We often express this fact formally in our models via “noise,” which represents other factors that we can’t yet predict.

For each of our theories there was a point in time when we didn’t have it yet. Thus we expect to continue to learn more theories, which will let us make more precise predictions. And so it might seem like we can’t constrain our eventual power of prediction; maybe we will have powerful enough theories to predict everything exactly.

But that doesn’t seem right either. Our best theories in many areas tell us about fundamental limits on our prediction abilities, and thus limits on how powerful future simple general theories could be. For example:

  • Thermodynamics – We can predict some gross features of future physical states, but the entropy of a system sets a very high (negentropy) cost to learn precise info about the state of that system. If thermodynamics is right, there will never be a general theory to let one predict future states more cheaply than this.
  • Finance – Finance theory has identified many relevant parameters to predict the overall distribution of future asset returns. However, finance theory strongly suggests that it is usually very hard to predict details of the specific future returns of specific assets. The ability to do so would be worth such a huge amount that there just can’t be many who often have such an ability. The cost to gain such an ability must usually be more than the gains from trading it. (A toy simulation after this list illustrates the point.)
  • Cryptography – A well devised code looks random to an untrained eye. As there are a great many possible codes, and a great many ways to find weaknesses in them, it doesn’t seem like there could be any general way to break all codes. Instead code breaking is a matter of knowing lots of specific things about codes and ways they might be broken. People use codes when they expect the cost of breaking them to be prohibitive, and such expectations are usually right.
  • Innovation – Economic theory can predict many features of economies, and of how economies change and grow. And innovation contributes greatly to growth. But economists also strongly expect that the details of particular future innovations cannot be predicted except at a prohibitive cost. Since knowing of innovations ahead of time can often be used for great private profit, and would speed up the introduction of those innovations, it seems that no cheap-to-apply simple general theories can exist which predict the details of most innovations well ahead of time.
  • Ecosystems – We understand some ways in which parameters of ecosystems correlate with their environments. Most of these make sense in terms of general theories of natural selection and genetics. However, most ecologists strongly suspect that the vast majority of the details of particular ecosystems and the species that inhabit them are not easily predictable by simple general theories. Evolution says that many details will be well matched to other details, but to predict them you must know much about the other details to which they match.
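
Here is the toy simulation promised above, with made-up numbers: even when the distribution of returns is known almost exactly, the best available prediction for any single return leaves essentially all of its variance unexplained.

```python
# Toy illustration of the finance point: the distribution of returns is
# predictable while individual returns are not. Numbers are made up.
import random
import statistics

random.seed(0)
true_mean, true_vol = 0.05, 0.20   # assumed annual mean return and volatility

returns = [random.gauss(true_mean, true_vol) for _ in range(10_000)]

# The "general theory" level: distribution parameters are easy to recover.
print(f"estimated mean: {statistics.mean(returns):.3f}  (true {true_mean})")
print(f"estimated vol:  {statistics.pstdev(returns):.3f}  (true {true_vol})")

# The "specific detail" level: the best prediction for any single return
# is just the mean, which leaves nearly all of its variance unexplained.
# This prints roughly 1.00: knowing the distribution barely helps with specifics.
errors = [(r - true_mean) ** 2 for r in returns]
print(f"share of variance left unexplained: "
      f"{statistics.mean(errors) / statistics.pvariance(returns):.2f}")
```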

In thermodynamics, finance, cryptography, innovations, and ecosystems, we have learned that while there are many useful generalities, the universe is also chock full of important irreducible incompressible detail. As this is true at many levels of abstraction, I would add this entry to the above list:

  • Intelligence – General theories tell us what intelligence means, and how it can generalize across tasks and contexts. But most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. Ordinary software also gets smart by containing many powerful modules. While the architecture that organizes those modules can make some difference, that difference is mostly small compared to having more, better modules. In a world of competing software firms, most ways to improve modules or find new ones cost more than the profits they’d induce.

If most value in intelligence comes from the accumulation of many expensive parts, there may well be no powerful general theories to be discovered to revolutionize future AI, and give an overwhelming advantage to the first project to discover them. Which is the main reason that I’m skeptical about AI foom, the scenario where an initially small project quickly grows to take over the world.

Added 7p: Peter McCluskey has thoughtful commentary here.

Adam Ford & I on Great Filter

Adam Ford interviewed me again, this time on the Great Filter:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.

Lost For Words, On Purpose

When we use words to say how we feel, the more relevant concepts and distinctions that we know, the more precisely we can express our feelings. So you might think that the number of relevant distinctions we can express on a topic rises with a topic’s importance. That is, the more we care about something, the more distinctions we can make about it.

But consider the two cases of food and love/sex (which I’m lumping together here). It seems to me that while these topics are of comparable importance, we have a lot more ways to clearly express distinctions on foods than on love/sex. So when people want to express feelings on love/sex, they often retreat to awkward analogies and suggestive poetry. Two different categories of explanations stand out here:

1) Love/sex is low dimensional. While we care a lot about love/sex, there are only a few things we care about. Consider money as an analogy. While money is important, and finance experts know a great many distinctions, for most people the key relevant distinction is usually more vs. less money; the rest is detail. Similarly, evolution theory suggests that only a small number of dimensions about love/sex matter much to us.

2) Clear love/sex talk looks bad. Love/sex is supposed to involve lots of non-verbal talk, so a verbal focus can detract from that. We have a norm that love/sex is to be personal and private, a norm you might seem to violate via comfortable impersonal talk that could easily be understood if quoted. And if you only talk in private, you learn fewer words, and need them less. Also, a precise vocabulary used clearly could make it seem like what you wanted from love/sex was fungible – you aren’t so much attached to particular people as to the bundle of features they provide. Precise talk could make it easier for us to consciously know what we want when, which makes it harder to self-deceive about what we want. And having more precise words available about our love/sex relations could force us to acknowledge smaller changes in relation status — if “love” is all there is, you can keep “loving” someone even as many things change.

It seems to me that both kinds of things must be going on. Even when we care greatly about a topic, we may not care about many dimensions, and we may be better off not being able to express ourselves clearly.

Conflicting Abstractions

My last post seems an example of an interesting general situation: when abstractions from different fields conflict on certain topics. In the case of my last post, the topic was the relative growth rate feasible for a small project hoping to create superintelligence, and the abstractions that seem to conflict are the ones I use, mostly from economics, and abstractions drawn from computer practice and elsewhere used by Bostrom, Yudkowsky, and many other futurists.

What typically happens when it seems that abstractions from field A suggest X, while abstractions from field B suggest not X? Well first, since X and not X can’t both be true, each field would likely see this as a threat to its good reputation. If they were forced to accept the existence of the conflict, then they’d likely try to denigrate the other field. If one field is higher status, the other field would expect to lose a reputation fight, and so it would be especially eager to reject the claim that a conflict exists.

And in fact, it should usually be possible to reject a claim that a conflict exists. The judgement that a conflict exists would come from specific individuals studying the questions of whether A suggests X and whether B suggests not X. One could just suggest that some of those people were incompetent at analyzing the implications of the abstractions of particular fields. Or that they were talking past each other and misunderstanding what X and not X mean to the other. So one would need especially impeccable credentials to publicly make these claims and make them stick.

The ideal package of expertise for investigating such an issue would be expertise in both fields A and B. This would position one well to notice that a conflict exists, and to minimize the chance of problems arising from misunderstandings on what X means. Unfortunately, our institutions for crediting expertise don’t do well at encouraging combined expertise. For example, often patrons are interested in the intersection between fields A and B, and sponsor conferences, journal issues, etc. on this intersection. However, seeking maximal prestige they usually prefer people with the most prestige in each field, over people who actually know both fields simultaneously. Anticipating this, people usually choose to stay within each field.

Anticipating this whole scenario, people are likely to usually avoid seeking out or calling attention to such conflicts. To seek out or pursue a conflict, you’d have to be especially confident that your field would back you up in a fight, because your credentials are impeccable and the field thinks it could win a status conflict with the other field. And even then you’d have to waste some time studying a field that your field doesn’t respect. Even if you win the fight you might lose prestige in your field.

This is unfortunate, because such conflicts seem especially useful clues to help us refine our important abstractions. By definition, abstractions draw inferences from reduced descriptions, descriptions which ignore relevant details. Usually that is useful, but sometimes that leads to errors when the dropped details are especially relevant. Intellectual progress would probably be promoted if we could somehow induce more people to pursue apparent conflicts between the abstractions from different fields.
