Tag Archives: Values

Exploring Value Space

If you have enough of a following, Twitter polls are a great resource for exploring how people think. I’ve just finished asking 8 polls each regarding 12 different questions that make people choose between the following 16 features, either in themselves or in others:

attractiveness, confidence, empathy, excitement, general respect, grandchildren, happiness, improve world, income, intelligence, lifespan, pleasure, productive hrs/day, professional success, serenity, wit.

The questions were, in the order they were asked (links give more detail):

  1. UpSelf: Which feature of you would you most like to increase by 1%?
  2. Advice: For which feature do you most want a respected advisor’s advice?
  3. ToMind: Which feature of yourself came to your mind most recently?
  4. WorkedOn: Which feature did you most try to improve in the last year?
  5. UpOthers: Which feature of your associates would you most like to increase by 1%?
  6. City: To which city would you move, with options labeled by the feature on which people there are, on average, better?
  7. KeepSelf: If all your features are to decline a lot, which feature would you save from declining?
  8. Aliens: What feature would you use to decide which civilization survives?
  9. Voucher: On which feature would you spend $10K to improve?
  10. World: Which feature of yours would you most like to improve to become world class?
  11. Obit: Which feature would you feel proudest to have mentioned in your obituary?
  12. KeepOthers: If all of your closest associates’ features will decline a lot, which feature would you save from declining?

Each poll gives four options, and for each poll I fit the response % to a simple model where each feature has a positive priority, and each feature is chosen in proportion to its priority. The max priority feature is set to have priority 100. And here are the results:

This shows, for each question, the average number who responded to each poll, the RMS error of the model fit, in percentage points, and then the priorities of each feature for each question. Notice how much variation there is in priorities for different questions. Overall, intelligence is the clear top priority, while grandkids is near the bottom. What would Darwin say?
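
For concreteness, here is a minimal sketch (in Python) of the kind of fit described above. The data layout, variable names, and numbers are hypothetical placeholders, not the actual poll results, and a generic optimizer stands in for whatever fitting method was actually used.

    # Minimal sketch of the priority fit described above (hypothetical data).
    # Each poll offers 4 of the 16 features; a feature's predicted response share
    # is its priority divided by the sum of the priorities of the 4 options shown.
    import numpy as np
    from scipy.optimize import minimize

    features = ["attractiveness", "confidence", "empathy", "excitement"]  # ... all 16 in practice

    # Hypothetical poll data: indices into `features`, plus observed response %.
    polls = [
        {"options": [0, 1, 2, 3], "shares": [40.0, 25.0, 20.0, 15.0]},
        # ... the other polls for this question
    ]

    def rms_error(log_priorities):
        priorities = np.exp(log_priorities)  # exponentiate to keep priorities positive
        errors = []
        for poll in polls:
            opts = np.array(poll["options"])
            predicted = 100 * priorities[opts] / priorities[opts].sum()
            errors.extend(predicted - np.array(poll["shares"]))
        return np.sqrt(np.mean(np.square(errors)))  # RMS error, in percentage points

    fit = minimize(rms_error, x0=np.zeros(len(features)), method="Nelder-Mead")
    priorities = np.exp(fit.x)
    priorities = 100 * priorities / priorities.max()  # rescale so max priority = 100
    print(dict(zip(features, np.round(priorities, 1))))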

Here are correlations between these priorities, both for features and for questions:

Darker colors show higher correlations. Credit to Daniel Martin for making these diagrams, and to Anders Sandberg for the idea. We have ordered these by hand to try to put the stronger correlations closer to the diagonal.
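
As a rough sketch of how the two correlation matrices can be computed (the matrix shape is assumed from the description above, and random numbers stand in for the fitted priorities):

    import numpy as np

    # Hypothetical stand-in for the fitted priorities: 12 questions x 16 features.
    P = np.random.rand(12, 16) * 100

    question_corr = np.corrcoef(P)    # 12 x 12: how similarly pairs of questions rank the features
    feature_corr = np.corrcoef(P.T)   # 16 x 16: how similarly pairs of features score across questions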

Notice that both features and questions divide neatly into self-oriented and other-oriented versions. That seems to be the main way our values vary: we want different internal versus external features, and different features in ourselves versus others.

Added 20Jan: Some observations:

There are three packages of features, Impressive, Feelings, and Miscellaneous, plus two pretty disconnected features, intelligence and grandkids. It is striking that grandkids is so weak a priority, and negatively correlated with everything else; grandkids neither make us feel better, nor look impressive.

The Impressive package includes: attractiveness, professional success, income, confidence, and lifespan. The inclusion of lifespan in that package is surprising; do we mainly want to live longer to be impressive, not to enjoy the extra years? Also note that intelligence is only weakly connected with Impressive, and negatively with Feelings.

The Feelings package includes: serenity, pleasure, happiness, and excitement. These all make sense together. The Miscellaneous set is more weakly connected internally, and includes wit, respect, empathy, and improve world, which is the most weakly connected of the set. Empathy and respect are strongly connected, as are wit and excitement. Do we want to be respected because we can imagine how others feel about us, or are we empathetic because that is a “good look”?

There are two main packages of questions: Self and Other. The Other package is UpOthers, City, Aliens, and KeepOthers, about what we want in associates. The Self package is Voucher, World, ToMind, WorkedOn, and Advice, about how we choose to improve ourselves. UpSelf and KeepSelf are connected but less so, which I interpret as being more influenced by what we’d like others to think we care about.

KeepSelf and KeepOthers are an intermediate package, influenced both by what we want in ourselves and what we’d like others to think we care about. Thus what we want in others is close to what we’d like others to think we want in ourselves. It seems that we are more successfully empathetic when we think about the losses of others, rather than their gains. We can more easily feel their pain than their joy.

Obit is more connected to the Other than the Self package, suggesting we more want our Obits to contain the sorts of things we want in others, rather than what we want in ourself. 

Note that while the Impressive and Feelings feature packages are positively correlated, the Self and Other question packages are negatively correlated. Not sure why.

 


Much Talk Is Sales Patter

The world is complex and high dimensional. Even so, it sometimes helps to try to identify key axes of variation and their key correlates. This is harder when one cannot precisely define an axis, but merely gesture toward its correlates. Even so, that’s what I’m going to try to do in this post, regarding a key kind of difference in talk. Here are seven axes of talk:

1. The ability to motivate. Some kinds of talk can do more to move people to action, and fill them with meaning, in ways that other kinds of talk do not. In other kinds of talk, people are already sufficiently moved to act, and so seek such added motivation less.

2. The importance of subtext and non-literal elements of the talk, relative to the literal surface meanings. Particular words used, rhythms, sentence length, images evoked, tone of voice, background music, etc. Who says it, who listens, who overhears. Things not directly or logically connected to the literal claims being made, but that matter nonetheless for that talk.

3. Discussion of, reliance on, or connection to, values. While values are always relevant to any discussion, for some topics and contexts there are stable and well accepted values at issue, so that value discussions are just not very relevant. For other topics value discussion is more relevant, though we only rarely ever discuss values directly. We are quite bad at talking directly about values, and are reluctant to do so. This is a puzzle worth explaining.

4. Subjective versus objective view. Some talk can be seen as making sense from a neutral outside point of view, while other talk mainly makes sense from the view of a particular person with a particular history, feelings, connections, and concerns. They say that much is lost in trying to translate from a subjective view to an objective view, though not in the other direction.

5. Precision of language, and ease of abstraction. On some topics we can speak relatively precisely in ways that make it easy for others to understand us very clearly. Because of this, we can reliably build and share precise abstractions of such concepts. We can learn things, and then teach others by telling them what we’ve learned. Our most celebrated peaks of academic understanding are mostly toward this end of this axis.

6. Some talk is riddled with errors, lies, and self-deceptions. If you go through it sentence by sentence, you find a large fraction of misleading or wrong claims. In other kinds of talk, you’d have to look a long time before you found such errors.

7. Talk in the context of a well accepted system of thought. Like physics, game theory, etc. Where concepts are well defined relative to each other, and with standard methods of analysis. As opposed to talk wherein the concept meanings are still up for grabs and there are few accepted ways to combine and work with them.

It seems to me that these seven axes are all correlated with each other. I want to postulate a single underlying axis as causing a substantial fraction of that shared correlation. And I offer a prototype category to flag one end of this axis: sales patter.

The world is full of people buying and selling, and a big fraction of the cost of many products and services goes to pay for sales patter. Not just documents and analyses that you could read or access to help you figure out which versions are better quality or better suited to your needs. No, an actual person standing next to you, being friendly and chatting with you about the product or whatever else you feel like.

You can’t at all trust this person to be giving you neutral advice. Even if you do come to “trust” them. And their sales patter isn’t usually very precise, integrated into systems of analysis, or well documented with supporting evidence. It is chock full of extra padding, subtext, and context that influences without being directly informative. It is even full of lies and invitations to self-deception. Even so, it actually motivates people to buy. And thus it must, and usually does, connect substantially to values. And it is typically oriented to the subjective view of its target.

At the opposite end of the spectrum from sales patter is practical talk in well defined areas where people know well why they are talking about it. And already have accepted systems of analysis. Consider as a prototypical example talk about how to travel from A to B under constraints of cost, time, reliability, and comfort. Or talk about the financial budget of some organization. Or engineering talk about how to make a building, rebuild a car engine, or write software.

In these areas our purposes and meanings are the simplest and clearest, and we can usefully abstract the most. And yet people tend to pick from areas like these when they offer examples of a “meaningless” existence or soul-crushing jobs. Such talk is the most easily painted by non-participants as failing to motivate, and being inhuman, the result of our having been turned into mindless robots by mean capitalists or some other evil force.

The worlds of such talk are said to be “dead”, “empty”, “colorless”, and in need of art. In fact people often justify art as offering a fix for such evils. Art talk, and art itself, is in fact much more like sales patter, being vague, context dependent, value-laden, and yet somehow motivating.

There’s an awful lot of sales talk in the world, and a huge investment goes into creating it. Yet there are very few collected works of the best sales patter ever. Op-eds are a form of sales talk, as is romantic seduction talk, but we don’t try to save the best of those. That’s in part because sales patter tends to be quite context dependent. It also doesn’t generalize very well, and so there are few systems of thought built up around it.

So why does sales patter differ in these ways from practical systematic talk? My best guess is that this is mostly about hidden motives. People don’t just want to buy stuff, they also like to have a relation with a particular human seller. They want sellers to impress them, to connect to them, and to affirm key cherished identities. All from their personal subjective point of view. They also want similar connections to artists.

But these are all hidden motives, not to be explicitly acknowledged. Thus the emphasis on subtext, context, and subjectivity, which make such talk poor candidates for precision and abstraction. And the tolerance for lies and self-deception in the surface text; the subtext matters more. Our being often driven by hidden motives makes it hard for us to talk about values, since we aren’t willing to acknowledge our true motives, even to ourselves. To claim to have some motives while actually acting on others, we can’t allow talk about our decisions to get too precise or clear, especially about key values.

We keep clear precise abstract-able talk limited to areas where we agree enough on, and can be honest enough about, some key relevant values. Such as in traveling plans or financial accounting. But these aren’t usually our main ultimate values. They are instead “values” derived from constraints that our world imposes on us; we can’t spend more money than we have, and we can’t jump from one place to another instantly. Constraints only motivate us when we have other more meaningful goals that they constrain. But goals we can’t acknowledge or look at directly.

If, as I’ve predicted, our descendants will have a simple, conscious, and abstract key value, for reproduction, they will be very different creatures from us.


On What Is Advice Useful?

In what areas of our lives do we think advisors can usefully advise? It depends on some combination of how much they actually know, how well we can evaluate and incentivize their advice to get them to tell us what they know, and how possible it is to change the feature in question.

Yesterday I had an idea for how to find this out via polls. Ask people on which feature of themselves they’d most like a respected advisor’s advice on how to improve, and also ask them which of these same features they’d most like to increase by 1%. The ratio of their priorities for getting advice, relative to just increasing the feature, should say how effective they think advice is regarding each feature.

So I picked these 16 features: attractiveness, confidence, empathy, excitement, general respect, grandchildren, happiness, improve world, income, intelligence, lifespan, pleasure, productive hrs/day, professional success, serenity, wit.

Then on Twitter I did two sets of eight (four answer) polls, one asking “Which feature of you would you most like to increase by 1%?”, and the other asking “For which feature do you most want a respected advisor’s advice?” I fit the responses to estimate relative priorities for each feature on each kind of question. And here are the answers (max priority = 100):

According to the interpretation I had in mind in creating these polls, advisors are very effective on income and professional success, pretty good at general respect and time productivity, terrible at grandchildren, and relatively bad at happiness, wit, pleasure, intelligence, and excitement.
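
As a small illustration of that ratio interpretation (the numbers below are made up, not the fitted priorities):

    # Hypothetical fitted priorities for a few features, from the two questions.
    advice_priority = {"income": 100, "general respect": 45, "happiness": 15, "grandchildren": 2}
    increase_priority = {"income": 40, "general respect": 30, "happiness": 60, "grandchildren": 10}

    # Higher ratio = advice is seen as relatively more useful for improving that feature.
    advice_effectiveness = {f: advice_priority[f] / increase_priority[f] for f in advice_priority}
    print(advice_effectiveness)  # {'income': 2.5, 'general respect': 1.5, 'happiness': 0.25, 'grandchildren': 0.2}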

However, staring at the result I suspect people are being less honest on what they want to increase than on what they want advice. Getting advice is a more practical choice which puts them in more of a near mode, where they are less focused on what choice makes them look good.

However, I don’t believe people really care zero about grandchildren either. So, alas, these results are a messy mix of these effects. But interesting, nonetheless.

Added 11am: The advice results might be summarized by my grand narrative that industry moved us toward more forager-like attitudes in general, but toward hyper-farmer attitudes regarding work, where we accept more domination and conformity pressures.

Added 24Jan: I continued with more related questions until I had a set of 12, then did this deeper analysis of them all.


On Evolved Values

Biological evolution selects roughly for creatures that do whatever it takes to have more descendants in the long run. When such creatures have brains, those brains are selected for having supporting habits. And to the extent that such brains can be described as having beliefs and values that combine into actions via expected utility theory, then these beliefs and values should be ones which are roughly behaviorally-equivalent to the package of having accurate beliefs, and having values to produce many descendants (relative to rivals). Equivalent at least within the actual environments in which those creatures were selected.

Humans have unusually general brains, with which we can think unusually abstractly about our beliefs and values. But so far, we haven’t actually abstracted our values very far. We instead have a big mess of opaque habits and desires that implicitly define our values for us, in ways that we poorly understand. Even though what evolution has been selecting for in us can in fact be described concisely and effectively in an abstract way.

Which leads to one of the most disturbing theoretical predictions I know: with sufficient further evolution, our descendants are likely to directly and abstractly know that they simply value more descendants. In diverse and varying environments, such a simpler more abstract representation seems likely to be more effective at helping them figure out which actions would best achieve that value. And while I’ve personally long gotten used to the idea that our distant descendants will be weird, to (the admittedly few) others who care about the distant future, this vision must seem pretty disturbing.

Oh there are some subtleties regarding whether all kinds of long-term descendants get the same weight, to what degree such preferences are non-monotonic in time and number of descendants, and whether we care the same about risks that are correlated or not across descendants. But those are details: evolved descendants should more simply and abstractly value more descendants.

This applies whether our descendants are biological or artificial. And it applies regardless of the kind of environments our descendants face, as long as those environments allow for sufficient selection. For example, if our descendants live among big mobs, who punish them for deviations from mob-enforced norms, then our descendants will be selected for pleasing their mobs. But as an instrumental strategy for producing more descendants. If our descendants have a strong democratic world government that enforces rules about who can reproduce how, then they will be selected for gaining influence over that government in order to gain its favors. And for an autocratic government, they’d be selected for gaining its favors.

Nor does this conclusion change greatly if the units of future selection are larger than individual organisms. Even if entire communities or work teams reproduce together as single units, they’d still be selected for valuing reproduction, both of those entire units and of component parts. And if physical units are co-selected with supporting cultural features, those total physical-plus-cultural packages must still tend to favor the reproduction of all parts of those packages.

Many people seem to be confused about cultural selection, thinking that they are favored by selection if any part of their habits or behaviors is now growing due to their actions. But if, for example, your actions are now contributing to a growing use of the color purple in the world, that doesn’t at all mean that you are winning the evolutionary game. If wider use of purple is not in fact substantially favoring the reproduction of the other elements of the package by which you are now promoting purple’s growth, and if those other elements are in fact reproducing less than their rivals, then you are likely losing, not winning, the evolutionary game. Purple will stop growing and likely decline after those other elements sufficiently decline.

Yes of course, you might decide that you don’t care that much to win this evolutionary game, and are instead content to achieve the values that you now have, with the resources that you can now muster. But you must then accept that tendencies like yours will become a declining fraction of future behavior. You are putting less weight on the future compared to others who focus more on reproduction. The future won’t act like you, or be as much influenced by acts like yours.

For example, there are “altruistic” actions that you might take now to help out civilization overall, such as building a useful bridge, or finding some useful invention. But if by such actions you hurt the relative long-term reproduction of many or most of the elements that contributed to your actions, then you must know you are reducing the tendency of descendants to do such actions. Ask: is civilization really better off with more such acts today, but fewer such acts in the future?

Yes, we can likely identify some parts of our current packages which are hurting, not helping, our reproduction. Such as genetic diseases. Or destructive cultural elements. It makes sense to dump such parts of our reproduction “teams” when we can identify them. But that fact doesn’t negate the basic story here: we will mainly value reproduction.

The only way out I see is: stop evolution. Stop, or slow to a crawl, the changes that induce selection of features that influence reproduction. This would require a strong civilization-wide government, and it only works until we meet the other grabby aliens. Worse, in an actually changing universe, such stasis seems to me to seriously risk rot. Leading to a slowly rotting civilization, clinging on to its legacy values but declining in influence, at least relative to its potential. This approach doesn’t at all seem worth the cost to me.

But besides that, have a great day.

Added 7p: There may be many possible equilibria, in which case it may be possible to find an equilibrium in which maximizing reproduction also happens to maximize some other desired set of values. But it may be hard to maintain the context that allows that equilibrium over long time periods. And even if so, the equilibrium might itself drift away to support other values.

Added 8Dec: This basic idea expressed 14 years ago.


The Master and His Emissary

I had many reasons to want to read Iain McGilchrist’s 2009 book The Master and His Emissary.

  1. It’s an ambitious big-picture book, by a smart knowledgeable polymath. I love that sort of book.
  2. I’ve been meaning to learn more about brain structure, and this book talks a lot about that.
  3. I’ve been wanting to read more literary-based critics of economics, and of sci/tech more generally.
  4. I’m interested in critiques of civilization suggesting that people were better off in less modern worlds.

This video gives an easy to watch book summary:

McGilchrist has many strong opinions on what is good and bad in the world, and on where civilization has gone wrong in history. What he mainly does in his book is to organize these opinions around a core distinction: the left vs right split in our brains. In sum: while we need both left and right brain style thinking, civilization today has gone way too far in emphasizing left styles, and that’s the main thing that’s wrong with the world today.

McGilchrist maps this core left-right brain distinction onto many dozens of other distinctions, and in each case he says we need more of the right version and less of the left. He doesn’t really argue much for why right versions are better (on the margin); he mostly sees that as obvious. So what his book mainly does is help people who agree with his values organize their thinking around a single key idea: right brains are better than left.

Here is McGilchrist’s key concept of what distinguishes left from right brain reasoning: Continue reading "The Master and His Emissary" »


On Value Drift

The outcomes within any space-time region can be seen as resulting from 1) preferences of various actors able to influence the universe in that region, 2) absolute and relative power and influence of those actors, and 3) constraints imposed by the universe. Changes in outcomes across regions result from changes in these factors.

While you might mostly approve of changes resulting from changing constraints, you might worry more about changes due to changing values and influence. That is, you likely prefer to see more influence by values closer to yours. Unfortunately, the consistent historical trend has been for values to drift over time, increasing the distance between current values and those of the future. As this trend looks like a random walk, we see no obvious limit to how far values can drift. So if the value you place on the values of others falls rapidly enough with the distance between values, you should expect long term future values to be very wrong.

What influences value change?
Inertia – The more existing values are tied to important entrenched systems, the less they change.
Growth – On average, over time civilization collects more total influence over most everything.
Competition – If some values consistently win key competitive contests, those values become more common.
Influence Drift – Many processes that change the world produce random drift in agent influence.
Internal Drift – Some creatures, e.g., humans, have values that drift internally in complex ways.
Culture Drift – Some creatures, e.g., humans, have values that change together in complex ways.
Context – Many of the above processes depend on other factors, such as technology, wealth, a stable sun, etc.

For many of the above processes, rates of change are roughly proportional to overall social rates of change. As these rates of change have increased over time, we should expect faster future change. Thus you should expect values to drift faster in the future than they did in the past, leading faster to wrong values. Also, people are living longer now than they did in the past. So even if past people didn’t live long enough to see changes big enough to greatly bother them, future people may live to see much more change.

Most increases in the rates of change have been concentrated in a few sudden large jumps (associated with the culture, farmer, and industry transitions). As a result, you should expect that rates of change may soon increase greatly. Value drift may continue at past rates until it suddenly goes much faster.

Perhaps you discount the future rapidly, or perhaps the value you place on other values falls slowly with value distance. In these cases value drift may not disturb you much. Otherwise, the situation described above may seem pretty dire. Even if previous generations had to accept the near inevitability of value drift, you might not accept it now. You may be willing to reach for difficult and dangerous changes that could remake the whole situation. Such as perhaps a world government. Personally I see that move as too hard and dangerous for now, but I could understand if you disagree.

The people today who seem most concerned about value drift also seem to be especially concerned about humans or ems being replaced by other forms of artificial intelligence. Many such people are also concerned about a “foom” scenario of a large and sudden influence drift: one initially small computer system suddenly becomes able to grow far faster than the rest of the world put together, allowing it to quickly take over the world.

To me, foom seems unlikely: it posits an innovation that is extremely lumpy compared to historical experience, and in addition posits an unusually high difficulty of copying or complementing this innovation. Historically, innovation value has been distributed with a long thin tail: most realized value comes from many small innovations, but we sometimes see lumpier innovations. (Alpha Zero seems only weak evidence on the distribution of AI lumpiness.) The past history of growth rate increases suggests that within a few centuries we may see something, perhaps a very lumpy innovation, that causes a growth rate jump comparable in size to the largest jumps we’ve ever seen, such as at the origins of life, culture, farming, and industry. However, as over history the ease of copying and complementing such innovations has been increasing, it seems unlikely that copying and complementing will suddenly get much harder.

While foom seems unlikely, it does seem likely that within a few centuries we will develop machines that can outcompete biological humans for most all jobs. (Such machines might also outcompete ems for jobs, though that outcome is much less clear.) The ability to make such machines seems by itself sufficient to cause a growth rate increase comparable to the other largest historical jumps. Thus the next big jump in growth rates need not be associated with a very lumpy innovation. And in the most natural such scenarios, copying and complementing remain relatively easy.

However, while I expect machines that outcompete humans for jobs, I don’t see how that greatly increases the problem of value drift. Human cultural plasticity already ensures that humans are capable of expressing a very wide range of values. I see no obvious limits there. Genetic engineering will allow more changes to humans. Ems inherit human plasticity, and may add even more via direct brain modifications.

In principle, non-em-based artificial intelligence is capable of expressing the entire space of possible values. But in practice, in the shorter run, such AIs will take on social roles near humans, and roles that humans once occupied. This should force AIs to express pretty human-like values. As Steven Pinker says:

Artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety.

If Pinker is right, the main AI risk mediated by AI values comes from AI value drift that happens after humans (or ems) no longer exercise such detailed frequent oversight.

It may be possible to create competitive AIs with protected values, i.e., so that parts where values are coded are small, modular, redundantly stored, and insulated from changes to the rest of the system. If so, such AIs may suffer much less from internal drift and cultural drift. Even so, the values of AIs with protected values should still drift due to influence drift and competition.

Thus I don’t see why people concerned with value drift should be especially focused on AI. Yes, AI may accompany faster change, and faster change can make value drift worse for people with intermediate discount rates. (Though it seems to me that altruistic discount rates should scale with actual rates of change, not with arbitrary external clocks.)

Yes, AI offers more prospects for protected values, and perhaps also for creating a world/universe government capable of preventing influence drift and competition. But in these cases if you are concerned about value drift, your real concerns are about rates of change and world government, not AI per se. Even the foom scenario just temporarily increases the rate of influence drift.

Your real problem is that you want long term stability in a universe that more naturally changes. Someday we may be able to coordinate to overrule the universe on this. But I doubt we are close enough to even consider that today. To quote a famous prayer:

God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.

For now value drift seems one of those possibly lamentable facts of life that we cannot change.


See A Wider View

Ross Douthat in the NYT:

From now on the great political battles will be fought between nationalists and internationalists, nativists and globalists. .. Well, maybe. But describing the division this way .. gives the elite side of the debate .. too much credit for being truly cosmopolitan.

Genuine cosmopolitanism is a rare thing. It requires comfort with real difference, with forms of life that are truly exotic relative to one’s own. .. The people who consider themselves “cosmopolitan” in today’s West, by contrast, are part of a meritocratic order that transforms difference into similarity, by plucking the best and brightest from everywhere and homogenizing them into the peculiar species that we call “global citizens.”

This species is racially diverse (within limits) and eager to assimilate the fun-seeming bits of foreign cultures — food, a touch of exotic spirituality. But no less than Brexit-voting Cornish villagers, our global citizens think and act as members of a tribe. They have their own distinctive worldview .. common educational experience, .. shared values and assumptions .. outgroups (evangelicals, Little Englanders) to fear, pity and despise. .. From London to Paris to New York, each Western “global city” .. is increasingly interchangeable, so that wherever the citizen of the world travels he already feels at home. ..

It is still possible to disappear into someone else’s culture, to leave the global-citizen bubble behind. But in my experience the people who do are exceptional or eccentric or natural outsiders to begin with .. It’s a problem that our tribe of self-styled cosmopolitans doesn’t see itself clearly as a tribe. .. They can’t see that paeans to multicultural openness can sound like self-serving cant coming from open-borders Londoners who love Afghan restaurants but would never live near an immigrant housing project.

You have values, and your culture has values. They are similar, and this isn’t a coincidence. Causation here mostly goes from culture to individual. And even if you did pick your culture, you have to admit that the young you who did wasn’t especially wise or well-informed. And you were unaware of many options. So you have to wonder if you’ve too easily accepted your culture’s values.

Of course your culture anticipates these doubts, and is ready with detailed stories on why your culture has the best values. Actually most stories you hear have that as a subtext. But you should wonder how well you can trust all this material.

Now, you might realize that for personal success and comfort, you have little to gain, and much to lose, by questioning your culture’s values. Your associates mostly share your culture, and are comforted more by your loyalty displays than your intellectual cleverness. Hey, everyone agrees cultures aren’t equal; someone has to be best. So why not give yours the benefit of the doubt? Isn’t that reasonable?

But if showing cleverness is really important to you, or if perhaps you really actually care about getting values right, then you should wonder what else you can do to check your culture’s value stories. And the obvious option is to immerse yourself in the lives and viewpoints of other cultures. Not just via the stories or trips your culture has set up to tell you of its superiority. But in ways that give those other cultures, and their members, a real chance. Not just slight variations on your culture, but big variations as well. Try to see a wider landscape of views, and then try to see the universe from many widely dispersed points on that landscape.

Yes, if you are a big-city elite, try to see the world from Brexit or Trump fan views. But there are actually much bigger view differences out there. Try an Islamic fundamentalist, or a Chinese nationalist. But even if you grow to be able to see the world as do most people in the world today, there still remain even bigger differences out there. Your distant ancestors were quite human, and yet they saw the universe very differently. Yes, they were wrong on some facts, but that hardly invalidates most of their views. Learn some ancient history, to see their views.

And if you already know some ancient history, perhaps the most alien culture you have yet to encounter is that of your human-like descendants. But we can’t possibly know anything about that yet, you say? I beg to differ. I introduce my new book with this meet-a-strange-culture rationale: Continue reading "See A Wider View" »
