Tag Archives: Values

Earth: A Status Report

In a universe that is (so far) almost entirely dead, we find ourselves to be on a rare planet full not only of life, but now also of human-level intelligent self-aware creatures. This makes our planet roughly a once-per-million-galaxy rarity, and if we ever get grabby we can expect to meet other grabby aliens in roughly a billion years.

We see that our world, our minds, and our preferences have been shaped by at least four billion years of natural selection. And we see evolution going especially fast lately, as we humans pioneer many powerful new innovations. Our latest big thing: larger scale organizations, which have induced our current brief dreamtime, wherein we are unusually rich.

For preferences, evolution has given us humans a mix of (a) some robust general preferences, like wanting to be respected and rich, (b) some less robust but deeply embedded preferences, like preferring certain human body shapes, and (c) some less robust but culturally plastic preferences, such as which particular things each culture finds more impressive.

My main reaction to all this is to feel grateful to be a living intelligent creature, who is compatible enough with his world to often get what he wants. Especially to be living in such a rich era. I accept that I and my descendants will long continue to compete (in part by cooperating of course), and that as the world changes evolution will continue to change my descendants, including as needed their values.

Many see this situation quite differently from me, however. For example, “anti-natalists” see life as a terrible crime, as the badness of our pains outweighs the goodness of our pleasures, resulting in net negative value lives. They thus want life on Earth to go extinct. Maybe, they say, it would be okay to only create really-rich better-emotionally-adjusted creatures. But not the humans we have now.

Many kinds of “conservatives” are proud to note that their ancestors changed in order to win prior evolutionary competitions. But they are generally opposed to future such changes. They want only limited changes to our tech, culture, lives, and values; bigger changes seem like abominations to them.

Many “socialists” are furious that some of us are richer and more influential than others. Furious enough to burn down everything if we don’t switch soon to more egalitarian systems of distribution and control. The fact that our existing social systems won difficult prior contests does not carry much weight with them. They insist on big radical changes now, and disavow any failures associated with prior attempts made under their banner. None of that was “real” socialism, you see.

Due to continued global competition, local adoption of anti-natalist, conservative, or socialist agendas seems insufficient to ensure these as global outcomes. Now most fans of these things don’t care much about long term outcomes. But some do. Some of those hope that global social pressures, via global social norms, may be sufficient. And others suggest using stronger global governance.

In fact, our scales of governance, and level of global governance, have been increasing over centuries. Furthermore, over the last half century we have created a world community of elites, wherein global social norms and pressures have strong power.

However, competition at the largest scales has so far been our only robust solution to system rot and suicide, problems that may well apply to systems of global governance or norms. Furthermore, centralized rulers may be reluctant to allow civilization to expand to distant places which they would find it harder to control.

This post resulted from Agnes Callard asking me to comment on Scott Alexander’s essay Meditations On Moloch, wherein he takes similarly stark positions on these grand issues. Alexander is irate that the world is not adopting various utopian solutions to common problems, such as ending corporate welfare, shrinking militaries, and adopting common hospital medical record systems. He seems to blame all of that, and pretty much anything else that has ever gone wrong, on something he personalizes into a monster “Moloch.” And while Alexander isn’t very clear on what exactly that is, my best read is that it is the general phenomenon of competition (at least the bad sort); that at least seems central to most of the examples he gives.

Furthermore, Alexander fears that, in the long run, competition will force our descendants to give up absolutely everything that they value, just to exist. Now he has no empirical or theoretical proof that this will happen; his post is instead mostly a long passionate primal scream expressing his terror at this possibility.

(Yes, he and I are aware that cooperation and competition systems are often nested within each other. The issue here is about the largest outer-most active system.)

Alexander’s solution is:

Elua. He is the god of flowers and free love and all soft and fragile things. Of art and science and philosophy and love. Of niceness, community, and civilization. He is a god of humans. … Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.

By which Alexander means: start with a tiny weak AI, induce it to “foom” (sudden growth from tiny to huge), resulting in a single “super-intelligent” AI who rules our galaxy with an iron fist, but wrapped in the velvet glove of being “friendly” = “aligned”. By definition, such a creature makes the best possible utopia for us all. Sure, Alexander has no idea how to reliably induce a foom or to create an aligned-through-foom AI, but there are some people pondering these questions (who are generally not very optimistic).

My response: yes of course if we could easily and reliably create a god to manage a utopia where nothing ever goes wrong, maybe we should do so. But I see enormous risks in trying to induce a single AI to grow crazy fast and then conquer everything, and also in trying to control that thing later via pre-foom design. I also fear many other risks of a single global system, including rot, suicide, and preventing expansion.

Yes, we might take this chance if we were quite sure that in the long term all other alternatives result in near zero value, while this remained the only scenario that could result in substantial value. But that just doesn’t seem remotely like our actual situation to me.

Because: competition just isn’t as bad as Alexander fears. And it certainly shouldn’t be blamed for everything that has ever gone wrong. More like: it should be credited for everything that has ever gone right among life and humans.

First, we don’t have good reasons to expect competition, compared to an AI god, to lead more reliably to the extinction either of life or of creatures who value their experiences. Yes, you can fear those outcomes, but I can as easily fear your AI god.

Second, competition has so far reigned over four billion years of Earth life, and at least a half billion years of Earth brains, and on average those seem to have been brain lives worth living. As have been the hundred billion human brain lives so far. So empirically, so far, given pretty long time periods, competition has just not remotely destroyed all value.

Now I suspect that Alexander might respond here thus:

The way that evolution has so far managed to let competing creatures typically achieve their values is by having those values change over time as their worlds change. But I want descendants to continue to achieve their values without having to change those values across generations.

However, I’ve predicted that, relatively soon on evolutionary timescales, given further competition, our descendants will come to just directly and abstractly value reproduction. And after that, no descendant need ever change their values. But I think even that situation isn’t good enough for Alexander; he wants our (his?) current human values to be the ones that continue and never change.

Now taken very concretely, this seems to require that our descendants never change their tastes in music, movies, or clothes. But I think Alexander has in mind only keeping values the same at some intermediate level of abstraction. Above the level of specific music styles, but below the level of just wanting to reproduce. However, not only has Alexander not been very clear regarding which exact value abstraction level he cares about, I’m not clear on why the rest of us should agree with him about this level, or care as much as he does about it.

For example, what if most of our descendants get so used to communicating via text that they drop talking via sound, and thus also get less interested in music? Oh they like artistic expressions using other mediums, such as text, but music becomes much more of a niche taste, mainly of interest to that fraction of our descendants who still attend a lot to sound.

This doesn’t seem like such a terrible future to me. Certainly not so terrible that we should risk everything to prevent it by trying to appoint an AI god. But if this scenario does actually seem that terrible to you, I guess maybe you should join Alexander’s camp. Unless all changes seem terrible to you, in which case you might join the conservative camp. Or maybe all life seems terrible to you, in which case you might join the anti-natalists.

Me, I accept the likelihood and good-enough-ness of modest “value drift” due to future competition. I’m not saying I have no preferences whatsoever about my descendants’ values. But relative to the plausible range I envision, I don’t feel greatly at risk. And definitely not so much at risk as to make desperate gambles that could go very wrong.

You might ask: if I don’t think making an AI god is the best way to get out of bad equilibria, what do I suggest instead? I’ll give the usual answer: innovation. For most problems, people have thought of plausible candidate solutions. What is usually needed is for people to test those solutions in smaller scale trials. With smaller successes, it gets easier to entice people to coordinate to adopt them.

And how do you get people to try smaller versions? Dare them, inspire them, lead them, whatever works; this isn’t something I’m good at. In the long run, such trials tend to happen anyway, by accident, even when no one is inspired to do them on purpose. But the goal is to speed up that future, via smaller trials of promising innovation concepts.

Added 5Jan: While I was presuming that Alexander had intended substantial content to his claims about Moloch, many are saying no, he really just meant to say “bad equilibria are bad”. Which is just a mood well-expressed, but doesn’t remotely support the AI god strategy.


Why We Don’t Know What We Want

Moons and Junes and Ferris wheels
The dizzy dancing way that you feel
As every fairy tale comes real
I’ve looked at love that way

But now it’s just another show
And you leave ’em laughing when you go
And if you care, don’t let them know
Don’t give yourself away

I’ve looked at love from both sides now
From give and take and still somehow
It’s love’s illusions that I recall
I really don’t know love
Really don’t know love at all

Both Sides Now, Joni Mitchell 1966.

If you look at two things up close, it is usually pretty easy to tell which one is closest. And also to tell their relative sizes, e.g., which one might fit inside the other. But if you look far in the distance, such as toward the sky or the horizon, it gets much harder to tell relative sizes or distances. While you might notice that one thing occludes another, when considering unknown things in different directions it is harder to tell relative sizes or distances.

I see similar effects also for things that are more “distant” in other ways, such as in time, social distance, or hypothetically; it also seems harder to judge relative distance when things are further away in these ways. Furthermore, it seems harder to tell of two abstract descriptions which is more abstract, but easier to tell which of two detailed things has more detail. Thus in the sense of near-far (or construal-level) theory, it seems that we generally find it harder to compare relative distances when things are further away.

According to near-far theory, we also frame our more stable, general, and fundamental goals as more far and abstract, compared to the more near local considerations that constrain our plans. Thus this theory seems to predict that we will have more trouble comparing the relative value of our more abstract values. That is, when comparing two general persistent values, we will find it hard to say which one we value more. Thus near-far theory predicts a big puzzling human feature: we know surprisingly little about what we want. For example, we find it very hard to imagine concrete, coherent, and attractive utopias.

When we see an object from up close, and then we later see it from afar, we often remember its details from when we saw it up close. So similarly, we might learn to compare our general values by remembering examples of concrete decisions where such values were in conflict. And we do often have concrete situations where we are aware that our general values apply to those concrete cases. Such as when we are very hungry, horny, injured, or socially embarrassed. Why don’t we learn our values from those?

Here I will invoke my theory of the sacred: for some key values and things, we set our minds to try to always see them in a rather far mode, no matter how close we are to them. This enables different people in a community to bond together by seeing those sacred things in the same way, even when some of them are much closer to them than others. And this also enables a single person to better maintain a unified identity and commitments over time, even when that person sees concrete examples from different distances at different times in their life. (I thank Arnold Brooks for pointing this out in an upcoming MAM podcast.)

For example, most of us have felt strong feelings of lust, limerence, and attachment to other people at many times during our lives. So we should have plenty of data on which to base rough estimates of what exactly is “love”, and how much we value it compared to other things. But our treating love as sacred makes it harder to use that data to construct such a detailed and unified account. Even when we think about concrete examples up close, it seems hard to use those to update our general views on “love”. We still “really don’t know love at all.”

Because we really can’t see love up close and in detail. Because we treat love as sacred. And sacred things we see from afar, so we can see them together.


Exploring Value Space

If you have enough of a following, Twitter polls are a great resource for exploring how people think. I’ve just finished asking 8 polls each regarding 12 different questions that make people choose between the following 16 features, either in themselves or in others:

attractiveness, confidence, empathy, excitement, general respect, grandchildren, happiness, improve world, income, intelligence, lifespan, pleasure, productive hrs/day, professional success, serenity, wit.

The questions were, in the order they were asked (links give more detail):

  1. UpSelf: Which feature of you would you most like to increase by 1%?
  2. Advice: For which feature do you most want a respected advisor’s advice?
  3. ToMind: Which feature of yourself came to your mind most recently?
  4. WorkedOn: Which feature did you most try to improve in the last year?
  5. UpOthers: Which feature of your associates would you most like to increase by 1%?
  6. City: To which city would you move, options labeled by the feature that people there are on average better on?
  7. KeepSelf: If all your features are to decline a lot, which feature would you save from declining?
  8. Aliens: What feature would you use to decide which civilization survives?
  9. Voucher: On which feature would you spend $10K to improve?
  10. World: Which feature of yours would you most like to improve to become world class?
  11. Obit: Which feature would you feel proudest to have mentioned in your obituary?
  12. KeepOthers: If all of your closest associates’ features will decline a lot, which feature would you save from declining?

Each poll gives four options, and for each poll I fit the response % to a simple model where each feature has a positive priority, and each feature is chosen in proportion to its priority. The max priority feature is set to have priority 100. And here are the results:

This shows, for each question, the average number who responded to each poll, the RMS error of the model fit, in percentage points, and then the priorities of each feature for each question. Notice how much variation there is in priorities for different questions. Overall, intelligence is the clear top priority, while grandkids is near the bottom. What would Darwin say?
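
To make that fitting procedure concrete, here is a minimal sketch in Python, using invented poll numbers and a toy subset of the features; the scipy optimizer, the log parameterization, and all numbers here are my own illustrative assumptions, not the actual analysis.

```python
# A minimal sketch of the fit described above, on invented data: each poll
# offers 4 features, and each feature is assumed to be chosen in proportion
# to its (positive) priority. We fit priorities to the observed response
# percentages by minimizing RMS error, then rescale so the top priority is 100.
import numpy as np
from scipy.optimize import minimize

features = ["income", "intelligence", "happiness", "lifespan", "serenity", "wit"]

# Hypothetical polls: (the 4 features offered, observed response percentages).
polls = [
    (["income", "intelligence", "happiness", "lifespan"], [20, 45, 25, 10]),
    (["intelligence", "serenity", "wit", "income"],        [50, 15, 10, 25]),
    (["happiness", "lifespan", "serenity", "wit"],         [40, 20, 25, 15]),
]

def rms_error(log_priorities):
    p = dict(zip(features, np.exp(log_priorities)))  # exp keeps priorities positive
    errors = []
    for options, observed in polls:
        total = sum(p[f] for f in options)
        predicted = [100 * p[f] / total for f in options]  # choice share proportional to priority
        errors.extend(np.subtract(predicted, observed))
    return float(np.sqrt(np.mean(np.square(errors))))

fit = minimize(rms_error, x0=np.zeros(len(features)), method="Nelder-Mead")
priorities = np.exp(fit.x)
priorities *= 100 / priorities.max()  # max priority feature set to 100
for name, value in sorted(zip(features, priorities), key=lambda t: -t[1]):
    print(f"{name:14s} {value:6.1f}")
print("RMS error (percentage points):", round(rms_error(fit.x), 2))
```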

Here are correlations between these priorities, both for features and for questions:

Darker colors show higher correlations. Credit to Daniel Martin for making these diagrams, and to Anders Sandberg for the idea. We have ordered these by hand to try to put the stronger correlations closer to the diagonal.
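
For those curious about the correlation step, here is a minimal sketch, again on invented numbers: assume a table of fitted priorities with one row per question and one column per feature. Correlating the columns compares features, while correlating the transpose compares questions.

```python
# A minimal sketch of the correlation diagrams, on a hypothetical priorities
# table (rows = questions, columns = features, values = fitted priorities).
import pandas as pd

priorities = pd.DataFrame(
    {   # invented numbers, for illustration only
        "intelligence":  [100, 90, 80, 100],
        "income":        [40, 70, 20, 30],
        "happiness":     [60, 20, 70, 50],
        "grandchildren": [5, 2, 8, 4],
    },
    index=["UpSelf", "Advice", "UpOthers", "KeepSelf"],  # questions
)

feature_corr = priorities.corr()       # feature-by-feature correlations
question_corr = priorities.T.corr()    # question-by-question correlations
print(feature_corr.round(2))
print(question_corr.round(2))
```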

Notice that both features and questions divide neatly into self-oriented and other-oriented versions. That seems to be the main way our values vary: we want different internal versus external features, and different features in ourselves versus others.

Added 20Jan: Some observations:

There are three packages of features, Impressive, Feelings, and Miscellaneous, plus two pretty disconnected features, intelligence and grandkids. It is striking that grandkids is so weak a priority, and negatively correlated with everything else; grandkids neither make us feel better, nor look impressive.

The Impressive package includes: attractiveness, professional success, income, confidence, and lifespan. The inclusion of lifespan in that package is surprising; do we mainly want to live longer to be impressive, not to enjoy the extra years? Also note that intelligence is only weakly connected with Impressive, and negatively with Feelings.

The Feelings package includes: serenity, pleasure, happiness, and excitement. These all make sense together. The Miscellaneous set is more weakly connected internally, and includes wit, respect, empathy, and improve world, which is the most weakly connected of the set. Empathy and respect are strongly connected, as are wit and excitement. Do we want to be respected because we can imagine how others feel about us, or are we empathetic because that is a “good look”?

There are two main packages of questions: Self and Other. The Other package is UpOthers, City, Aliens, and KeepOther, about what we want in associates. The Self package is Voucher, World, ToMind, WorkedOn, and Advice, about how we choose to improve ourself. UpSelf and KeepSelf are connected but less so, which I interpret as being more influenced by what we’d like others to think we care about.

KeepSelf and KeepOther are an intermediate package, influenced both by what we want in ourselves and what we’d like others to think we care about. Thus what we want in others is close to what we’d like others to think we want in ourselves. It seems that we are more successfully empathetic when we think about the losses of others, rather than their gains. We can more easily feel their pain than their joy.

Obit is more connected to the Other than the Self package, suggesting we more want our Obits to contain the sorts of things we want in others, rather than what we want in ourself. 

Note that while with features the Impressive and Feelings packages are positively correlated, for Questions the Self and Other questions are negatively correlated. Not sure why.

 


Much Talk Is Sales Patter

The world is complex and high dimensional. Even so, it sometimes helps to try to identify key axes of variation and their key correlates. This is harder when one cannot precisely define an axis, but merely gesture toward its correlates. Even so, that’s what I’m going to try to do in this post, regarding a key kind of difference in talk. Here are seven axes of talk:

1. The ability to motivate. Some kinds of talk can more move people to action, and fill people with meaning, in ways that other kinds of talk do not. In other kinds of talk, people are already sufficiently moved to act, and so less seek such added motivation.

2. The importance of subtext and non-literal elements of the talk, relative to the literal surface meanings. Particular words used, rhythms, sentence length, images evoked, tone of voice, background music, etc. Who says it, who listens, who overhears. Things not directly or logically connected to the literal claims being made, but that matter nonetheless for that talk.

3. Discussion of, reliance on, or connection to, values. While values are always relevant to any discussion, for some topics and contexts there are stable and well accepted values at issue, so that value discussions are just not very relevant. For other topics value discussion is more relevant, though we only rarely ever discuss them directly. We are quite bad at talking directly about values, and are reluctant to do so. This is a puzzle worth explaining.

4. Subjective versus objective view. Some talk can be seen as making sense from a neutral outside point of view, while other talk mainly makes sense from the view of a particular person with a particular history, feelings, connections, and concerns. They say that much is lost in trying to translate from a subjective view to an objective view, though not in the other direction.

5. Precision of language, and ease of abstraction. On some topics we can speak relatively precisely in ways that make it easy for others to understand us very clearly. Because of this, we can reliably build and share precise abstractions of such concepts. We can learn things, and then teach others by telling them what we’ve learned. Our most celebrated peaks of academic understanding are mostly toward this end of this axis.

6. Some talk is riddled with errors, lies, and self-deceptions. If you go through it sentence by sentence, you find a large fraction of misleading or wrong claims. In other kinds of talk, you’d have to look a long time before you found such errors.

7. Talk in the context of a well accepted system of thought. Like physics, game theory, etc. Where concepts are well defined relative to each other, and with standard methods of analysis. As opposed to talk wherein the concept meanings are still up for grabs and there are few accepted ways to combine and work with them.

It seems to me that these seven axes are all correlated with each other. I want to postulate a single underlying axis as causing a substantial fraction of that shared correlation. And I offer a prototype category to flag one end of this axis: sales patter.

The world is full of people buying and selling, and a big fraction of the cost of many products and services goes to pay for sales patter. Not just documents and analyses that you could read or access to help you figure out which versions are better quality or better suited to your needs. No, an actual person standing next to you being friendly and chatting with you about the product or whatever else you feel like.

You can’t at all trust this person to be giving you neutral advice. Even if you do come to “trust” them. And their sales patter isn’t usually very precise, integrated into systems of analysis, or well documented with supporting evidence. It is chock full of extra padding, subtext, and context that influences without being directly informative. It is even full of lies and invitations to self-deception. Even so, it actually motivates people to buy. And thus it must, and usually does, connect substantially to values. And it is typically oriented to the subjective view of its target.

At the opposite end of the spectrum from sales patter is practical talk in well defined areas where people know well why they are talking about it. And already have accepted systems of analysis. Consider as a prototypical example talk about how to travel from A to B under constraints of cost, time, reliability, and comfort. Or talk about the financial budget of some organization. Or engineering talk about how to make a building, rebuild a car engine, or write software.

In these areas our purposes and meanings are the simplest and clearest, and we can usefully abstract the most. And yet people tend to pick from areas like these when they offer examples of a “meaningless” existence or soul-crushing jobs. Such talk is the most easily painted by non-participants as failing to motivate, and being inhuman, the result of our having been turned into mindless robots by mean capitalists or some other evil force.

The worlds of such talk are said to be “dead”, “empty”, “colorless”, and in need of art. In fact people often justify art as offering a fix for such evils. Art talk, and art itself, is in fact much more like sales patter, being vague, context dependent, value-laden, and yet somehow motivating.

There’s an awful lot of sales talk in the world, and a huge investment goes into creating it. Yet there are very few collected works of the best sales patter ever. Op-eds are a form of sales talk, as is romantic seduction talk, but we don’t try to save the best of those. That’s in part because sales patter tends to be quite context dependent. It also doesn’t generalize very well, and so there are few systems of thought built up around it.

So why does sales patter differ in these ways from practical systematic talk? My best guess is that this is mostly about hidden motives. People don’t just want to buy stuff, they also like to have a relation with a particular human seller. They want sellers to impress them, to connect to them, and to affirm key cherished identities. All from their personal subjective point of view. They also want similar connections to artists.

But these are all hidden motives, not to be explicitly acknowledged. Thus the emphasis on subtext, context, and subjectivity, which make such talk poor candidates for precision and abstraction. And the tolerance for lies and self-deception in the surface text; the subtext matters more. Our being often driven by hidden motives makes it hard for us to talk about values, since we aren’t willing to acknowledge our true motives, even to ourselves. To claim to have some motives while actually acting on others, we can’t allow talk about our decisions to get too precise or clear, especially about key values.

We keep clear precise abstract-able talk limited to areas where we agree enough on, and can be honest enough about, some key relevant values. Such as in traveling plans or financial accounting. But these aren’t usually our main ultimate values. They are instead “values” derived from constraints that our world imposes on us; we can’t spend more money than we have, and we can’t jump from one place to another instantly. Constraints only motivate us when we have other more meaningful goals that they constrain. But goals we can’t acknowledge or look at directly.

If, as I’ve predicted, our descendants will have a simple, conscious, and abstract key value, for reproduction, they will be very different creatures from us.


On What Is Advice Useful?

In what areas of our life do we think advisors can usefully advise? Roughly those where some combination holds: they actually know stuff, we can evaluate and incentivize their advice enough to get them to tell us what they know, and it is actually possible to change the feature in question.

Yesterday I had an idea for how to find this out via polls. Ask people which feature of them they’d most like to get advice on how to improve it from a respected advisor, and also ask them on these same features which ones they’d most like to increase by 1%. The ratio of their priorities to get advice, relative to just increasing the feature, should say how effective they think advice is regarding each feature.
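
As a toy illustration of that ratio interpretation, here is a minimal sketch with invented priority numbers (not the actual poll results):

```python
# Invented priorities for a few features, on the "increase by 1%" question
# and the "want advice" question; the advice/increase ratio is read as how
# effective people think advice is for that feature.
increase_priority = {"income": 40, "professional success": 35, "happiness": 80, "grandchildren": 5}
advice_priority   = {"income": 90, "professional success": 85, "happiness": 30, "grandchildren": 1}

for feature in increase_priority:
    ratio = advice_priority[feature] / increase_priority[feature]
    print(f"{feature:22s} advice/increase ratio = {ratio:.2f}")
```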

So I picked these 16 features: attractiveness, confidence, empathy, excitement, general respect, grandchildren, happiness, improve world, income, intelligence, lifespan, pleasure, productive hrs/day, professional success, serenity, wit.

Then on Twitter I did two sets of eight (four answer) polls, one asking “Which feature of you would you most like to increase by 1%?”, and the other asking “For which feature do you most want a respected advisor’s advice?” I fit the responses to estimate relative priorities for each feature on each kind of question. And here are the answers (max priority = 100):

According to the interpretation I had in mind in creating these polls, advisors are very effective on income and professional success, pretty good at general respect and time productivity, terrible at grandchildren, and relatively bad at happiness, wit, pleasure, intelligence, and excitement.

However, staring at the result I suspect people are being less honest on what they want to increase than on what they want advice. Getting advice is a more practical choice which puts them in more of a near mode, where they are less focused on what choice makes them look good.

However, I don’t believe people really care zero about grandchildren either. So, alas, these results are a messy mix of these effects. But interesting, nonetheless.

Added 11am: The advice results might be summarized by my grand narrative that industry moved us toward more forager-like attitudes in general, but to hyper-farmer attitudes regarding work, where we accept more domination and conformity pressures.

Added 24Jan: I continued with more related questions until I had a set of 12, then did this deeper analysis of them all.


On Evolved Values

Biological evolution selects roughly for creatures that do whatever it takes to have more descendants in the long run. When such creatures have brains, those brains are selected for having supporting habits. And to the extent that such brains can be described as having beliefs and values that combine into actions via expected utility theory, then these beliefs and values should be ones which are roughly behaviorally-equivalent to the package of having accurate beliefs, and having values to produce many descendants (relative to rivals). Equivalent at least within the actual environments in which those creatures were selected.

Humans have unusually general brains, with which we can think unusually abstractly about our beliefs and values. But so far, we haven’t actually abstracted our values very far. We instead have a big mess of opaque habits and desires that implicitly define our values for us, in ways that we poorly understand. Even though what evolution has been selecting for in us can in fact be described concisely and effectively in an abstract way.

Which leads to one of the most disturbing theoretical predictions I know: with sufficient further evolution, our descendants are likely to directly and abstractly know that they simply value more descendants. In diverse and varying environments, such a simpler more abstract representation seems likely to be more effective at helping them figure out which actions would best achieve that value. And while I’ve personally long gotten used to the idea that our distant descendants will be weird, to (the admittedly few) others who care about the distant future, this vision must seem pretty disturbing.

Oh there are some subtleties regarding whether all kinds of long-term descendants get the same weight, to what degree such preferences are non-monotonic in time and number of descendants, and whether we care the same about risks that are correlated or not across descendants. But those are details: evolved descendants should more simply and abstractly value more descendants.

This applies whether our descendants are biological or artificial. And it applies regardless of the kind of environments our descendants face, as long as those environments allow for sufficient selection. For example, if our descendants live among big mobs, who punish them for deviations from mob-enforced norms, then our descendants will be selected for pleasing their mobs. But as an instrumental strategy for producing more descendants. If our descendants have a strong democratic world government that enforces rules about who can reproduce how, then they will be selected for gaining influence over that government in order to gain its favors. And for an autocratic government, they’d be selected for gaining its favors.

Nor does this conclusion change greatly if the units of future selection are larger than individual organisms. Even if entire communities or work teams reproduce together as single units, they’d still be selected for valuing reproduction, both of those entire units and of component parts. And if physical units are co-selected with supporting cultural features, those total physical-plus-cultural packages must still tend to favor the reproduction of all parts of those packages.

Many people seem to be confused about cultural selection, thinking that they are favored by selection if any part of their habits or behaviors is now growing due to their actions. But if, for example, your actions are now contributing to a growing use of the color purple in the world, that doesn’t at all mean that you are winning the evolutionary game. If wider use of purple is not in fact substantially favoring the reproduction of the other elements of the package by which you are now promoting purple’s growth, and if those other elements are in fact reproducing less than their rivals, then you are likely losing, not winning, the evolutionary game. Purple will stop growing and likely decline after those other elements sufficiently decline.

Yes of course, you might decide that you don’t care that much to win this evolutionary game, and are instead content to achieve the values that you now have, with the resources that you can now muster. But you must then accept that tendencies like yours will become a declining fraction of future behavior. You are putting less weight on the future compared to others who focus more on reproduction. The future won’t act like you, or be as much influenced by acts like yours.

For example, there are “altruistic” actions that you might take now to help out civilization overall. You might build a useful bridge, or find some useful invention. But if by such actions you hurt the relative long-term reproduction of many or most of the elements that contributed to your actions, then you must know you are reducing the tendency of descendants to do such actions. Ask: is civilization really better off with more such acts today, but fewer such acts in the future?

Yes, we can likely identify some parts of our current packages which are hurting, not helping, our reproduction. Such as genetic diseases. Or destructive cultural elements. It makes sense to dump such parts of our reproduction “teams” when we can identify them. But that fact doesn’t negate the basic story here: we will mainly value reproduction.

The only way out I see is: stop evolution. Stop, or slow to a crawl, the changes that induce selection of features that influence reproduction. This would require a strong civilization-wide government, and it only works until we meet the other grabby aliens. Worse, in an actually changing universe, such stasis seems to me to seriously risk rot. Leading to a slowly rotting civilization, clinging on to its legacy values but declining in influence, at least relative to its potential. This approach doesn’t at all seem worth the cost to me.

But besides that, have a great day.

Added 7p: There may be many possible equilibria, in which case it may be possible to find an equilibrium in which maximizing reproduction also happens to maximize some other desired set of values. But it may be hard to maintain the context that allows that equilibrium over long time periods. And even if so, the equilibrium might itself drift away to support other values.

Added 8Dec: This basic idea expressed 14 years ago.


The Master and His Emissary

I had many reasons to want to read Iain McGilchrist’s 2009 book The Master and His Emissary.

  1. It’s an ambitious big-picture book, by a smart knowledgeable polymath. I love that sort of book.
  2. I’ve been meaning to learn more about brain structure, and this book talks a lot about that.
  3. I’ve been wanting to read more literary-based critics of economics, and of sci/tech more generally.
  4. I’m interested in critiques of civilization suggesting that people were better off in less modern worlds.

This video gives an easy to watch book summary:

McGilchrist has many strong opinions on what is good and bad in the world, and on where civilization has gone wrong in history. What he mainly does in his book is to organize these opinions around a core distinction: the left vs right split in our brains. In sum: while we need both left and right brain style thinking, civilization today has gone way too far in emphasizing left styles, and that’s the main thing that’s wrong with the world today.

McGilchrist maps this core left-right brain distinction onto many dozens of other distinctions, and in each case he says we need more of the right version and less of the left. He doesn’t really argue much for why right versions are better (on the margin); he mostly sees that as obvious. So what his book mainly does is help people who agree with his values organize their thinking around a single key idea: right brains are better than left.

Here is McGilchrist’s key concept of what distinguishes left from right brain reasoning: Continue reading "The Master and His Emissary" »


On Value Drift

The outcomes within any space-time region can be seen as resulting from 1) preferences of various actors able to influence the universe in that region, 2) absolute and relative power and influence of those actors, and 3) constraints imposed by the universe. Changes in outcomes across regions result from changes in these factors.

While you might mostly approve of changes resulting from changing constraints, you might worry more about changes due to changing values and influence. That is, you likely prefer to see more influence by values closer to yours. Unfortunately, the consistent historical trend has been for values to drift over time, increasing the distance between random future and current values. As this trend looks like a random walk, we see no obvious limit to how far values can drift. So if the value you place on the values of others falls rapidly enough with the distance between values, you should expect long term future values to be very wrong.
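
As a minimal sketch of that random-walk intuition (with a dimension and step size of my own arbitrary choosing), the expected distance of a drifting value vector from its starting point just keeps growing:

```python
# Values drift a small random amount each generation; distance from today's
# values grows roughly like the square root of elapsed time, with no obvious bound.
import numpy as np

rng = np.random.default_rng(0)
dims, steps, step_size = 10, 1000, 0.01   # dimension of "value space", generations, drift per step
values = np.zeros(dims)                   # start at current values
distances = []
for _ in range(steps):
    values += rng.normal(scale=step_size, size=dims)   # small random drift
    distances.append(np.linalg.norm(values))           # distance from current values

print("distance after 10 generations:  ", round(distances[9], 3))
print("distance after 1000 generations:", round(distances[-1], 3))
```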

What influences value change?
Inertia – The more existing values are tied to important entrenched systems, the less they change.
Growth – On average, over time civilization collects more total influence over most everything.
Competition – If some values consistently win key competitive contests, those values become more common.
Influence Drift – Many processes that change the world produce random drift in agent influence.
Internal Drift – Some creatures, e.g., humans, have values that drift internally in complex ways.
Culture Drift – Some creatures, e.g., humans, have values that change together in complex ways.
Context – Many of the above processes depend on other factors, such as technology, wealth, a stable sun, etc.

For many of the above processes, rates of change are roughly proportional to overall social rates of change. As these rates of change have increased over time, we should expect faster future change. Thus you should expect values to drift faster in the future than they did in the past, leading faster to wrong values. Also, people are living longer now than they did in the past. So even if past people didn’t live long enough to see changes big enough to greatly bother them, future people may live to see much more change.

Most increases in the rates of change have been concentrated in a few sudden large jumps (associated with the culture, farmer, and industry transitions). As a result, you should expect that rates of change may soon increase greatly. Value drift may continue at past rates until it suddenly goes much faster.

Perhaps you discount the future rapidly, or perhaps the value you place on other values falls slowly with value distance. In these cases value drift may not disturb you much. Otherwise, the situation described above may seem pretty dire. Even if previous generations had to accept the near inevitability of value drift, you might not accept it now. You may be willing to reach for difficult and dangerous changes that could remake the whole situation. Such as perhaps a world government. Personally I see that move as too hard and dangerous for now, but I could understand if you disagree.

The people today who seem most concerned about value drift also seem to be especially concerned about humans or ems being replaced by other forms of artificial intelligence. Many such people are also concerned about a “foom” scenario of a large and sudden influence drift: one initially small computer system suddenly becomes able to grow far faster than the rest of the world put together, allowing it to quickly take over the world.

To me, foom seems unlikely: it posits an innovation that is extremely lumpy compared to historical experience, and in addition posits an unusually high difficulty of copying or complementing this innovation. Historically, innovation value has been distributed with a long thin tail: most realized value comes from many small innovations, but we sometimes see lumpier innovations. (Alpha Zero seems only weak evidence on the distribution of AI lumpiness.) The past history of growth rate increases suggests that within a few centuries we may see something, perhaps a very lumpy innovation, that causes a growth rate jump comparable in size to the largest jumps we’ve ever seen, such as at the origins of life, culture, farming, and industry. However, as over history the ease of copying and complementing such innovations has been increasing, it seems unlikely that copying and complementing will suddenly get much harder.

While foom seems unlikely, it does seem likely that within a few centuries we will develop machines that can outcompete biological humans for most all jobs. (Such machines might also outcompete ems for jobs, though that outcome is much less clear.) The ability to make such machines seems by itself sufficient to cause a growth rate increase comparable to the other largest historical jumps. Thus the next big jump in growth rates need not be associated with a very lumpy innovation. And in the most natural such scenarios, copying and complementing remain relatively easy.

However, while I expect machines that outcompete humans for jobs, I don’t see how that greatly increases the problem of value drift. Human cultural plasticity already ensures that humans are capable of expressing a very wide range of values. I see no obvious limits there. Genetic engineering will allow more changes to humans. Ems inherit human plasticity, and may add even more via direct brain modifications.

In principle, non-em-based artificial intelligence is capable of expressing the entire space of possible values. But in practice, in the shorter run, such AIs will take on social roles near humans, and roles that humans once occupied. This should force AIs to express pretty human-like values. As Steven Pinker says:

Artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety.

If Pinker is right, the main AI risk mediated by AI values comes from AI value drift that happens after humans (or ems) no longer exercise such detailed frequent oversight.

It may be possible to create competitive AIs with protected values, i.e., so that parts where values are coded are small, modular, redundantly stored, and insulated from changes to the rest of the system. If so, such AIs may suffer much less from internal drift and cultural drift. Even so, the values of AIs with protected values should still drift due to influence drift and competition.
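
To make “protected values” a bit more concrete, here is a minimal sketch of one way such a module might be structured: value weights kept small and modular, stored redundantly in read-only copies, read via majority vote, and insulated from writes by the rest of the system. All names and details are my own illustrative assumptions, not a design from this post.

```python
# A minimal sketch of a "protected values" module: small, modular,
# redundantly stored, and insulated from changes to the rest of the system.
from dataclasses import dataclass
from collections import Counter
from types import MappingProxyType

@dataclass(frozen=True)          # frozen: fields cannot be rebound after creation
class ValueStore:
    weights: MappingProxyType    # read-only mapping of value name -> weight

def make_protected_values(weights, copies=3):
    # Keep several independent read-only copies of the value weights.
    return [ValueStore(MappingProxyType(dict(weights))) for _ in range(copies)]

def read_value(stores, name):
    # Majority vote across the redundant copies, so one corrupted copy is outvoted.
    votes = Counter(store.weights[name] for store in stores)
    return votes.most_common(1)[0][0]

stores = make_protected_values({"reproduction": 0.2, "art": 0.5, "comfort": 0.3})
print(read_value(stores, "art"))  # the rest of the agent only reads values, never writes them
```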

Thus I don’t see why people concerned with value drift should be especially focused on AI. Yes, AI may accompany faster change, and faster change can make value drift worse for people with intermediate discount rates. (Though it seems to me that altruistic discount rates should scale with actual rates of change, not with arbitrary external clocks.)

Yes, AI offers more prospects for protected values, and perhaps also for creating a world/universe government capable of preventing influence drift and competition. But in these cases if you are concerned about value drift, your real concerns are about rates of change and world government, not AI per se. Even the foom scenario just temporarily increases the rate of influence drift.

Your real problem is that you want long term stability in a universe that more naturally changes. Someday we may be able to coordinate to overrule the universe on this. But I doubt we are close enough to even consider that today. To quote a famous prayer:

God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.

For now value drift seems one of those possibly lamentable facts of life that we cannot change.


See A Wider View

Ross Douthat in the NYT:

From now on the great political battles will be fought between nationalists and internationalists, nativists and globalists. .. Well, maybe. But describing the division this way .. gives the elite side of the debate .. too much credit for being truly cosmopolitan.

Genuine cosmopolitanism is a rare thing. It requires comfort with real difference, with forms of life that are truly exotic relative to one’s own. .. The people who consider themselves “cosmopolitan” in today’s West, by contrast, are part of a meritocratic order that transforms difference into similarity, by plucking the best and brightest from everywhere and homogenizing them into the peculiar species that we call “global citizens.”

This species is racially diverse (within limits) and eager to assimilate the fun-seeming bits of foreign cultures — food, a touch of exotic spirituality. But no less than Brexit-voting Cornish villagers, our global citizens think and act as members of a tribe. They have their own distinctive worldview .. common educational experience, .. shared values and assumptions .. outgroups (evangelicals, Little Englanders) to fear, pity and despise. .. From London to Paris to New York, each Western “global city” .. is increasingly interchangeable, so that wherever the citizen of the world travels he already feels at home. ..

It is still possible to disappear into someone else’s culture, to leave the global-citizen bubble behind. But in my experience the people who do are exceptional or eccentric or natural outsiders to begin with .. It’s a problem that our tribe of self-styled cosmopolitans doesn’t see itself clearly as a tribe. .. They can’t see that paeans to multicultural openness can sound like self-serving cant coming from open-borders Londoners who love Afghan restaurants but would never live near an immigrant housing project.

You have values, and your culture has values. They are similar, and this isn’t a coincidence. Causation here mostly goes from culture to individual. And even if you did pick your culture, you have to admit that the young you who did wasn’t especially wise or well-informed. And you were unaware of many options. So you have to wonder if you’ve too easily accepted your culture’s values.

Of course your culture anticipates these doubts, and is ready with detailed stories on why your culture has the best values. Actually most stories you hear have that as a subtext. But you should wonder how well you can trust all this material.

Now, you might realize that for personal success and comfort, you have little to gain, and much to lose, by questioning your culture’s values. Your associates mostly share your culture, and are comforted more by your loyalty displays than your intellectual cleverness. Hey, everyone agrees cultures aren’t equal; someone has to be best. So why not give yours the benefit of the doubt? Isn’t that reasonable?

But if showing cleverness is really important to you, or if perhaps you really actually care about getting values right, then you should wonder what else you can do to check your culture’s value stories. And the obvious option is to immerse yourself in the lives and viewpoints of other cultures. Not just via the stories or trips your culture has set up to tell you of its superiority. But in ways that give those other cultures, and their members, a real chance. Not just slight variations on your culture, but big variations as well. Try to see a wider landscape of views, and then try to see the universe from many widely dispersed points on that landscape.

Yes if you are a big-city elite, try to see the world from Brexit or Trump fan views. But there are actually much bigger view differences out there. Try an Islamic fundamentalist, or a Chinese nationalist. But even if you grow to be able to see the world as do most people in the world today, there still remain even bigger differences out there. Your distant ancestors were quite human, and yet they saw the universe very differently. Yes, they were wrong on some facts, but that hardly invalidates most of their views. Learn some ancient history, to see their views.

And if you already know some ancient history, perhaps the most alien culture you have yet to encounter is that of your human-like descendants. But we can’t possibly know anything about that yet, you say? I beg to differ. I introduce my new book with this meet-a-strange-culture rationale: Continue reading "See A Wider View" »
