Fundamentalists Are Not Traditionalists

In my last two years of college I rebelled against the system. I stopped doing homework and instead studied physics by playing with equations (and acing exams). In this I was a “school fundamentalist.” I wanted to cut out what I saw as irrelevant and insincere ritual, so that school could better serve what I saw as its fundamental purpose, which was to help curious people learn. I contrasted myself with “traditionalists” who just unthinkingly continued with previous habits and customs.

One of the big social trends over the last few centuries has been a move toward reforming previous rituals and institutions to become more “sincere,” i.e., to more closely align with stated purposes, especially purposes related to internal feelings. For example, the Protestant revolution tried to reform religious rituals and institutions toward a stated purpose of improving personal relations with God. (Christian and Islamic “fundamentalists” continue in this vein today.) The romantic revolution in marriage aimed to move marriage toward a stated purpose of promoting loving romantic relations. And various revolutions in government have been justified as moving government toward stated purposes of legitimacy, representation, and accountability.

In all of these cases advocates for reform have complained about insincerity and hypocrisy in prior practices and institutions. Similar sincerity concerns can be raised about birthday presents or dinner table manners. Kids sometimes ask why, if gifts are to show feelings, people shouldn’t wait to give gifts until they are most in the mood. Or wait for when the receiver would most like the gift. Kids also sometimes ask why they must lie and say “thank you” when that is not how they feel. Here kids are being fundamentalists, while parents are traditionalists who mostly just want the kids to do the usual thing, without too much reflection on exactly why.

We economists are deep into this sincerity trend, in that we often analyze institutions according to stated purposes, and propose institutional reforms that seem to better achieve stated purposes. For example, in law & economics, the class I’m teaching this semester, we analyze which legal rules best achieve the stated purpose of creating incentives to increase economic welfare.

I’ve been made aware of this basic sincerity vs. tradition conflict by the sociology book Ritual and Its Consequences: An Essay on the Limits of Sincerity. While its sociology theory can make for hard reading at times, I was persuaded by its basic claim that modern intellectuals are too quick to favor the sincerity side of this conflict. For example, even if dinner manners and birthday present rituals don’t most directly express the sincerest feelings of those involved, they can create an “as if” appearance of good feelings, and this appearance can make people nicer and feel better about each other. We’d get a lot fewer presents if people only gave them when in the mood.

Similarly, while for some kids it seems enough to just support their curiosity, most kids are probably better off in a school system that forces them to act as if they are curious, even when they are not. Also, my wife, who works in hospice, tells me that people today often reject traditional bereavement rituals which don’t seem to reflect their momentary sincere feelings. But such people often then feel adrift, not knowing what to do, and their bereavement process goes worse.

Of course I’m not saying we should always unthinkingly follow tradition. But I do think our efforts to reform often go badly because we focus on the most noble and flattering functions and situations, and neglect many other important ones.

From Ritual and Its Consequences I also got some useful distinctions. In addition to sincerity vs. tradition, there is also play vs. ritual. This is the distinction among less-practical “as-if” behaviors between those (play) that spin out into higher variance and those (ritual) that spin in to high predictability. Ritual in this sense can help one to feel safe when threatened, while play can bring joy when one doesn’t feel threatened. One can also distinguish between kinds of play and ritual where people’s usual roles are preserved vs. reversed, and distinguish between kinds where people are in control vs. out of control of events.

Security Has Costs

Technical systems are often insecure, in that they allow unauthorized access and control. While strong security is usually feasible if designed in carefully from the start, such systems are usually built fast on the cheap. So they tend to ignore security at first, and then address it later as an afterthought, in a crude ongoing struggle to patch holes as fast as they are made or discovered.
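As a minimal sketch of this afterthought pattern (the function names are hypothetical, and no particular real system is assumed):

```python
# Version 1 shipped fast and cheap, with no security design at all:
def get_account_v1(account_id, accounts):
    # Anyone who can reach this function can read any account.
    return accounts[account_id]

# Version 2 patches the one hole that got noticed, but only that one:
def get_account_v2(account_id, requester_id, accounts):
    # Bolted-on check blocks the reported attack (reading others'
    # accounts); transfers, deletes, etc. may still be unguarded.
    if requester_id != account_id:
        raise PermissionError("not your account")
    return accounts[account_id]
```

Security designed in from the start would instead route every operation through a single authorization layer; the patch style only ever guards the holes someone has already found.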

The more complex a system is, the more different other systems it is adapted to, the more different organizations that share a system, and the more that such systems are pushed to the edge of technical or financial feasibility, the more likely that related security is full of holes.

A dramatic example of this is cell phone security. Almost anyone in the world can find out where your cell phone is, and hence where you are. And there’s not much anyone is going to do about this anytime soon. From today’s Post: …

Liked For Being You

What do people want to be liked for? You are advised to tell a pretty woman she is smart and a smart woman she is pretty. But people don’t seem that happy with being liked for features like wealth, fame, beauty, strength, talent, smarts, or charisma. People do seem to prefer being liked for more stable features that they are less likely to lose with time. But they still often aren’t that happy with being liked for easily visible and hence “shallow” features, relative to “deep” features that take time and attention to discover. And they sometimes say “I want to be liked just for me, not for my features.”

I’ve often puzzled over what people could mean by this; surely everything you could like about someone is a feature of some sort. And why does a feature being harder to see make it better? But I recently realized the answer is simple and even obvious: we want people to become attached to us. Attachment is a well known psychological process wherein people become bonded to particular others:

Bowlby referred to attachment bonds as a specific type of “affectional” bond. … He established five criteria for affectional bonds between individuals, and a sixth criterion for attachment bonds:

    • An affectional bond is persistent, not transitory.
    • An affectional bond involves a particular person who is not interchangeable with anyone else.
    • An affectional bond involves a relationship that is emotionally significant.
    • The individual wishes to maintain proximity or contact with the person with whom he or she has an affectional tie.
    • The individual feels sadness or distress at involuntary separation from the person.

An attachment bond has an additional criterion: the person seeks security and comfort in the relationship. (more)

Other people don’t start out with a deep preference for the exact combination of features that we embody. But if they like our shallow features they may expose themselves to us enough to see deeper features, and in the process become attached to our particular combination of all features. And it is that attachment that we really want when we say we want to be liked “for being me.”

Regulating Infinity

As a professor of economics in the GMU Center for the Study of Public Choice, I and my colleagues are well aware of the many long detailed disputes on the proper scope of regulation.

On the one hand, the last few centuries have seen increasing demands for and expectations of government regulation. A wider range of things that might happen without regulation are seen as intolerable, and our increasing ability to manage large organizations and systems of surveillance is seen as making us increasingly capable of discerning relevant problems and managing regulatory solutions.

On the other hand, some don’t see many of the “problems” regulations are set up to address as legitimate ones for governments to tackle. And others see and fear regulatory overreach, wherein perhaps well-intentioned regulatory systems actually make most of us worse off, via capture, corruption, added costs, and slowed innovation.

The poster-children of regulatory overreach are 20th century totalitarian nations. Around 1900, many were told that the efficient scale of organization, coordination, and control was rapidly increasing, and nations who did not follow suit would be left behind. Many were also told that regulatory solutions were finally available for key problems of inequality and inefficient resource allocation. So many accepted and even encouraged their nations to create vast intrusive organizations and regulatory systems. These are now largely seen to have gone too far.

Of course there have no doubt been other cases of regulatory under-reach; I don’t presume to settle this debate here. In this post I instead want to introduce jaded students of regulatory debates to something a bit new under the sun, namely a newly-prominent rationale and goal for regulation that has recently arisen in a part of the futurist community: stopping preference change.

In history we have seen change not only in technology and environments, but also in habits, cultures, attitudes, and preferences. New generations often act not just like the same people thrust into new situations, but like new kinds of people with new attitudes and preferences. This has often intensified intergenerational conflicts; generations have argued not only about who should consume and control what, but also about which generational values should dominate.

So far, this sort of intergenerational value conflict has been limited due to the relatively mild value changes that have so far appeared within individual lifetimes. But at least two robust trends suggest the future will have more value change, and thus more conflict:

  1. Longer lifespans – Holding other things constant, the longer people live the more generations will overlap at any one time, and the more different will be their values.
  2. Faster change – Holding other things constant, a faster rate of economic and social change will likely induce values to change faster as people adapt to these social changes.
  3. Value plasticity – It may become easier for our descendants to change their values, all else equal. This might be via stronger ads and schools, or direct brain rewiring. (This trend seems less robust.)

These trends robustly suggest that toward the end of their lives future folk will more often look with disapproval at the attitudes and behaviors of younger generations, even as these older generations have a smaller proportional influence on the world. There will be more “Get off my lawn! Damn kids got no respect.”

The futurists who most worry about this problem tend to assume a worst possible case. (Supporting quotes below.) That is, without a regulatory solution we face the prospect of quickly sharing the world with daemon spawn of titanic power who share almost none of our values. Not only might they not like our kind of music, they might not like music. They might not even be conscious. One standard example is that they might want only to fill the universe with paperclips, and rip us apart to make more paperclip materials. Futurists’ key argument: the space of possible values is vast, with most points far from us.

This increased intergenerational conflict is the new problem that tempts some futurists today to consider a new regulatory solution. And their preferred solution: a complete totalitarian takeover of the world, and maybe the universe, by a new super-intelligent computer.

You heard that right. Now to most of my social scientist colleagues, this will sound bonkers. But like totalitarian advocates of a century ago, these new futurists have a two-pronged argument. In addition to suggesting we’d be better off ruled by a super-intelligence, they say that a sudden takeover by such a computer will probably happen no matter what. So as long as we have to figure out how to control it, we might as well use it to solve the intergenerational conflict problem.

Now I’ve already discussed at some length why I don’t think a sudden (“foom”) takeover by a super intelligent computer is likely (see here, here, here). Nor do I think it obvious that value change will generically put us face-to-face with worst case daemon spawn. But I do grant that increasing lifespans and faster change are likely to result in more intergenerational conflict. And I can also believe that as we continue to learn just how strange the future could be, many will be disturbed enough to seek regulation to prevent value change.

Thus I accept that our literatures on regulation should be expanded to add one more entry, on the problem of intergenerational value conflict and related regulatory solutions. Some will want to regulate infinity, to prevent the values of our descendants from eventually drifting away from our values to parts unknown.

I’m much more interested here in identifying this issue than in solving it. But if you want my current opinion it is that today we are just not up to the level of coordination required to usefully control value changes across generations. And even if we were up to the task I’m not at all sure gains would be worth the quite substantial costs.

Added 8a: Some think I’m unfair to the fear-AI position to call AIs our descendants and to describe them in terms of lifespan, growth rates and value plasticity. But surely AIs being made of metal or made in factories aren’t directly what causes concern. I’ve tried to identify the relevant factors but if you think I’ve missed the key factors do tell me what I’ve missed.

Added 4p: To try to be even clearer, the standard worrisome foom scenario has a single AI that grows in power very rapidly and whose effective values drift rapidly away from ones that initially seemed friendly to humans. I see this as a combination of such AI descendants having faster growth rates and more value plasticity, which are two of the three key features I listed.

Those promised supporting quotes: …

Neglecting Win-Win Help

Consider three kinds of acts:

  • S. Selfish – helps you, and no one else.
  • A. Altruistic – helps others, at a cost to you.
  • M. Mixed – helps others, and helps you.

To someone who is honestly and simply selfish, acts of type A would be by far the least attractive. All else equal, such people would do fewer acts of type A, relative to other types, because they don’t care about helping others.

To someone who is honestly and simply altruistic, in contrast, acts of type M should be the most attractive. All else equal, such a person should more often do acts of type M, relative to the other types. A simply altruistic person is happy to help others while helping themselves.

Now consider someone who wants to show others that they are altruistic and not selfish. To such a person, type M acts have a serious problem: since both selfish and altruistic people often do type M acts, observers may plausibly attribute their behavior to selfishness. Compared to a simply altruistic person, a person of this type finds type A acts more attractive, and type M acts less attractive. They want everyone to see them suffering, to show they are not selfish.
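A toy Bayesian calculation shows the logic (the numbers are made up, purely for illustration):

```python
# Made-up numbers: how an observer updates beliefs from an act's type.
p_altruist = 0.5                 # prior: half of people are altruistic
p_act = {                        # chance each type does a given act
    "A": {"altruist": 0.30, "selfish": 0.02},
    "M": {"altruist": 0.60, "selfish": 0.50},
}

def posterior_altruist(act):
    # Bayes rule: P(altruist | act) is proportional to
    # P(act | altruist) * P(altruist).
    pa = p_act[act]["altruist"] * p_altruist
    ps = p_act[act]["selfish"] * (1 - p_altruist)
    return pa / (pa + ps)

print(round(posterior_altruist("A"), 2))  # ~0.94: type A acts signal altruism
print(round(posterior_altruist("M"), 2))  # ~0.55: type M acts say little
```

So someone who cares about seeming altruistic gets far more reputational credit per type A act, even when type M acts help others just as much.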

In fact, most people do seem to care just as much about seeming altruistic as about being altruistic. I thus predict a neglect of acts of type M, relative to acts of type A. For example:

  • Having kids. Observers often don’t credit parents for being altruistic toward their kids. They instead describe parents as selfishly wanting to enjoy the kids’ attention and devotion.
  • Having lovers. In a world of monogamous romantic pairs, someone who chooses not to pair up can force someone else to also go without a partner. So choosing to be part of a pair helps others. But observers often don’t credit romantic partners for altruism toward partners. They instead say lovers selfishly seek pleasure and flattery.
  • Inventing. While people in some kinds of professions are credited with choosing them in part to help others, people in other professions are not so credited, even when they give a lot of help. For example, nurses are often credited with altruism, but inventors are usually not so credited. Even though inventors often give a lot more help to the world. Perhaps because inventing seems more fun than nursing.
  • Marginal charity. Adjusting private optima a bit in the direction of social good helps others at almost no cost to yourself, but is hard for observers to distinguish from not doing so.

In sum, the more eager we are to show others that we care, the less eager we are to do things that both help us and help others. We instead do more things that help others while hurting us, so that we can distinguish ourselves from selfish people. Because of this we neglect win-win acts like having kids, being in love, and inventing. Which seems a shame.

Added 8a: Seems I’ve said something like this before, as did Katja Grace even earlier. Seems I’ve written more than I can keep track of.

Why Do We So Seek Synch?

We economists are known for being “imperial” in trying to apply economics to almost everything. And that’s a goal I can get behind, in the sense of trying to find an integrated view of the social world, where all social phenomena have a place and some candidate explanations within a common framework. Of course many parts of this integrated view may start first in fields outside economics.

In pursuit of such an integrated view, I’ve been making a special effort to learn more about social phenomena that economists don’t talk much about. And since a lot of these phenomena are often associated with the words “play” and “ritual”, and it is sociologists who most seem to write about these things, I’ve been reading a lot of sociology.

Sixteen months ago I posted about an intriguing summary of Randall Collins’ book Interaction Ritual Chains:

Any physical gathering … turns into a ritual when those physically present focus their attention on specific people, objects, or symbols, and are thereby constituted as a distinct group with more or less clear boundaries. …

A ritual, for Collins, is basically an amplifier of emotion. … A successful ritual generates and amplifies motivating emotions. … Perhaps Collins’ most controversial claim is the idea that we are basically emotional energy “seekers”: much of our social activity can be understood as a largely unconscious “flow” along the gradient of maximal emotional energy charge for us, given our particular material resources and positions within the … set of ritual situations available to us. Our primary “motivation” is the search for motivation. … Motivation is simply a result of emotional amplification in ritual situations. …

Emotional charge or motivational energy is built up from entrainment: the micro-coordination of gesture, voice, and attention in rhythmic activity, down to tiny fractions of a second. Think of how in an engrossing conversation the partners are wholly attuned to one another, laughing and exhibiting emotional reactions simultaneously, keeping eye contact, taking turns at precisely the right moments, mirroring each other’s reactions. … Or consider sexual acts, to which Collins devotes a long and very interesting chapter. (more)

I’ve now read this book carefully, twice. Here is my report. …

Automation vs. Innovation

We don’t yet know how to make computer software that is as flexibly smart as human brains. So when we automate tasks, replacing human workers with computer-guided machines, we usually pay large costs in flexibility and innovation. The new automated processes are harder to change to adapt to new circumstances. Software is harder to change than mental habits, it takes longer to conceive and implement software changes, and such changes require the coordination of larger organizations. The people who write software are further from the task, and so are less likely than human workers to notice opportunities for improvement.

This is a big reason why it will take automation a lot longer to replace human workers than many recent pundits seem to think. And this isn’t just abstract theory. For example, some of the most efficient auto plants are the least automated. Read more about Honda auto plants:

[Honda] is one of the few multinational companies that has succeeded at globalization. Their profit margins are high in the auto industry. Almost everywhere they go — over 5 percent profit margins. In most markets, they consistently are in the top 10 of specific models that sell. They’ve never lost money. They’ve been profitable every year. And they’ve been around since 1949. …

Soichiro Honda, the founder of the company … was one of the world’s greatest engineers. And yet he never graduated college. He believed that hands-on work as an engineer is what it takes to be a great manufacturer. …

Part Of Something Big

A hero is someone who has given his or her life to something bigger than oneself. – Joseph Campbell

Most Twitter talk reminds me of the movie Ridicule, wherein courtiers compete to show cruel wit and cynicism. This makes me crave a simple direct conversation on something that matters.

So I pick this: being part of something larger than yourself. This is a commonly expressed wish. But what does it mean?

Here are some clues: Judging from Google-found quotes, common satisfactory “things” include religions, militaries, political parties, and charities. For most people “the universe” seems too big and “my immediate family” seems too small. And neither seem idealistic enough. “All utilitarians” is idealistic enough, but seems insufficiently coherent as a group. The words “part” and “thing” here are suspiciously vague, suggesting that there are several elements here, some of which people are more willing to admit than others.

Here’s my interpretation: We want to be part of a strong group that has our back, and we want to support and promote ideals. But these preferences aren’t independent, to be satisfied separately. We especially want to combine them, and be a valued part of a group that supports good ideals.

So we simultaneously want all these things:

  1. We are associated with an actual group of people.
  2. These people concretely relate to each other.
  3. This group is credibly seen as really supporting some ideals.
  4. We embrace those ideals, and find them worth our sacrifice.
  5. Our help to this group’s ideals would be noticed, appreciated.
  6. If outsiders resist our help, the group will have our back.
  7. The group is strong enough to have substantial help to give.
  8. The group doesn’t do wrongs that outweigh its support of ideals.
  9. Both the group and its ideals are big in the scheme of things.

Since this is a lot of constraints, the actual groups that exist are unlikely to satisfy them all. So we compromise. Some people see most all big coherent groups as easily corrupted, and so only accept small groups. For some, group bonding is so important they’ll compromise on the ideals, or accept weak evidence that the group actually supports its ideals. If group strength is important enough to them, they may not require any ideals. For others, the ideal is everything, and they’ll accept a weak group defined abstractly as “everyone who truly supports this ideal.” Finally, for some being appreciated is so important that they’ll take the thing the world seems to most appreciate about them and accept a group and ideal defined around that.

If this is right then just talking about what are the best ideals and how to achieve them somewhat misses the point. Also somewhat missing the point is talk about how to make strong well-bonded groups. If people typically want the two of these things together, then the actual design problem is how to achieve good ideals via a strong well-bonded group.

Which isn’t a design problem I hear people talk about much. Some presume that if they can design a good enough ideal, a good group will naturally collect around it. Others presume that if they can design a good enough way for groups to coordinate, groups will naturally coordinate to achieve good ideals. But how reasonable are these assumptions?

If we focus on explaining this preference instead of satisfying it, a homo hypocritus framework fits reasonably well. Coalition politics is central to what we really want, but if cheap we’d rather appear to focus on supporting ideals, and only incidentally pick groups to help us in that quest.

Open Thread

This is our monthly place to discuss relevant topics that have not appeared in recent posts.

Irreducible Detail

Our best theories vary in generality. Some theories are very general, but most are more context specific. Putting all of our best theories together usually doesn’t let us make exact predictions on most variables of interest. We often express this fact formally in our models via “noise,” which represents other factors that we can’t yet predict.
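As a minimal sketch of what “noise” stands for (a toy model, assuming numpy is available): even knowing the true relationship exactly leaves residual variance from the factors we can’t yet predict.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
noise = rng.normal(scale=0.5, size=x.size)  # stand-in for unmodeled factors
y = 2.0 * x + noise                         # true relationship plus noise

# The best possible linear fit recovers the true coefficient...
slope, intercept = np.polyfit(x, y, 1)
residual_var = np.var(y - (slope * x + intercept))

print(round(slope, 2))         # ~2.0: the theory part, fully learnable
print(round(residual_var, 2))  # ~0.25: the noise part, irreducible here
```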

For each of our theories there was a point in time when we didn’t have it yet. Thus we expect to continue to learn more theories, which will let us make more precise predictions. And so it might seem like we can’t constrain our eventual power of prediction; maybe we will have powerful enough theories to predict everything exactly.

But that doesn’t seem right either. Our best theories in many areas tell us about fundamental limits on our prediction abilities, and thus limits on how powerful future simple general theories could be. For example:

  • Thermodynamics – We can predict some gross features of future physical states, but the entropy of a system sets a very high (negentropy) cost to learn precise info about the state of that system. If thermodynamics is right, there will never be a general theory to let one predict future states more cheaply than this.
  • Finance – Finance theory has identified many relevant parameters to predict the overall distribution of future asset returns. However, finance theory strongly suggests that it is usually very hard to predict the details of the specific future returns of specific assets. The ability to do so would be worth such a huge amount that there just can’t be many who often have such an ability. The cost to gain such an ability must usually be more than the gains from trading on it.
  • Cryptography – A well devised code looks random to an untrained eye (the sketch after this list makes this concrete). As there are a great many possible codes, and a great many ways to find weaknesses in them, it doesn’t seem like there could be any general way to break all codes. Instead code breaking is a matter of knowing lots of specific things about codes and ways they might be broken. People use codes when they expect the cost of breaking them to be prohibitive, and such expectations are usually right.
  • Innovation – Economic theory can predict many features of economies, and of how economies change and grow. And innovation contributes greatly to growth. But economists also strongly expect that the details of particular future innovations cannot be predicted except at a prohibitive cost. Since knowing of innovations ahead of time can often be used for great private profit, and would speed up the introduction of those innovations, it seems that no cheap-to-apply simple general theories can exist which predict the details of most innovations well ahead of time.
  • Ecosystems – We understand some ways in which parameters of ecosystems correlate with their environments. Most of these make sense in terms of general theories of natural selection and genetics. However, most ecologists strongly suspect that the vast majority of the details of particular ecosystems and the species that inhabit them are not easily predictable by simple general theories. Evolution says that many details will be well matched to other details, but to predict them you must know much about the other details to which they match.
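To make the “looks random” point concrete, here is a minimal sketch (a one-time pad stands in for a well devised code; any modern cipher gives the same picture):

```python
import os
from collections import Counter
from math import log2

def byte_entropy(data: bytes) -> float:
    # Shannon entropy in bits per byte (8.0 would be uniformly random).
    n = len(data)
    return -sum(c / n * log2(c / n) for c in Counter(data).values())

plaintext = b"the quick brown fox jumps over the lazy dog. " * 200
key = os.urandom(len(plaintext))                  # one-time pad key
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))

print(round(byte_entropy(plaintext), 2))   # ~4.4: English-like structure
print(round(byte_entropy(ciphertext), 2))  # ~8.0: looks like random noise
```

Actual code breaking exploits specific slips, like key reuse, bad randomness, or implementation bugs, rather than any general-purpose shortcut.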

In thermodynamics, finance, cryptography, innovations, and ecosystems, we have learned that while there are many useful generalities, the universe is also chock full of important irreducible incompressible detail. As this is true at many levels of abstraction, I would add this entry to the above list:

  • Intelligence – General theories tell us what intelligence means, and how it can generalize across tasks and contexts. But most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. Ordinary software also gets smart by containing many powerful modules. While the architecture that organizes those modules can make some difference, that difference is mostly small compared to having more and better modules. In a world of competing software firms, most ways to improve modules or find new ones cost more than the profits they’d induce.

If most value in intelligence comes from the accumulation of many expensive parts, there may well be no powerful general theories to be discovered to revolutionize future AI, and give an overwhelming advantage to the first project to discover them. Which is the main reason that I’m skeptical about AI foom, the scenario where an initially small project quickly grows to take over the world.

Added 7p: Peter McCluskey has thoughtful commentary here.
