Search Results for: conformity

Conformity Excuses

From a distance it seems hard to explain a lot of human behavior without presuming that we humans have strong desires to conform to the behaviors of others. But when we look at our conscious thoughts and motivations regarding our specific behaviors, we find almost no conformity pressures. We are only rarely aware that we do anything, or avoid doing other things, because we want to conform.

The obvious explanation is that we make many excuses for our conformity – we make up other mostly-false explanations for why we like the same things that others like, and dislike other things. And since we do a lot of conforming, there must be a lot of bias here. So we can uncover and understand a lot of our biases if we can identify and understand these excuses. Here are a few possibilities that come to mind. I expect there are many others.

I picked my likes first, my group second. We like to point out that we are okay with liking many things that many others in the world don’t like. Yes, the people around us tend to like those same things, but that isn’t us conforming to those social neighbors, because we picked the things we like first, and then picked those people around us as a consequence. Or so we say. But we conform far more to our neighbors than can plausibly be explained by our limited selection power.

I just couldn’t be happy elsewhere. We tend to tell ourselves that we couldn’t be happy in a different profession, city, or culture, in part to excuse our reluctance to deviate from the standard practices of such things. We’d actually adjust fine to much larger moves than we are willing to consider.

I actually like small differences. We notice that we don’t like to come to a party in the exact same dress as someone else. We also want different home decorations and garden layouts, and we don’t want to be reading the exact same book as everyone else at the moment. We then extrapolate and think we don’t mind being arbitrarily different.

In future, this will be more popular. We are often okay with doing something different today because we imagine that it will become much more popular later. Then we can be celebrated for being one of the first to like it. If we were sure that few would ever like it, we’d be much less willing to like it now.

Second tier folks aren’t remotely as good. While we personally can tell the difference between someone who is very bad and someone who is very good, we usually just don’t have the discernment to tell the difference in quality between the most popular folks and second tier folks who are much less popular. But we tell ourselves that we can tell the difference, to justify our strong emphasis on those most popular folks.

Unpopular things are objectively defective. We probably make many specific excuses about unpopular things, to justify our neglect of them.


Conformity Shows Loyalty

"The world has too many people showing too much loyalty to their groups.  That is why I’m so proud to be member of ALU, anti-loyalists united, where we refuse to show loyalty to any other groups. My local chapter just kicked out George for suspicion of showing loyalty to California, and we chastised Ellen for expressing doubts about the latest anti-loyalty directives from headquarters.  We’ll only lick loyalty by showing we are united behind our courageous ALU leaders.  All hail ALU!"

Sounds pretty silly, right?  But I hear something pretty similar when I hear folks say they are proud to be part of a group that fights conformity by pushing their unusual beliefs.  Especially when such folks seem more comfortable claiming their beliefs contribute to diversity than that they are true.   

We use belief conformity to show loyalty to particular groups, relative to other groups.  We rarely bother to show loyalty to humanity as a whole, because non-humans threaten little.  So we rarely bother to try to conform our beliefs with humanity as a whole, which is why herding experiments with random subjects show no general conformity tendencies.

Our conformity efforts instead target smaller in-groups, with more threatening out-groups.  And we are most willing to conform our beliefs on abstract ideological topics, like politics or religion, where our opinions have few other personal consequences.  Our choices show to which conflicting groups we feel the most allied.   

You just can’t fight "conformity" by indulging the evil pleasure of enjoying your conformity to a small tight-knit group of "non-conformists."  All this does is promote some groups at the expense of other groups, and poisons your mind in the process.  It is like fighting "loyalty" by dogged devotion to an anti-loyalty alliance.

Best to clear your mind and emotions of group loyalties and resentments and ask, if this belief gave me no pleasure of rebelling against some folks or identifying with others, if it was just me alone choosing, would my best evidence suggest that this belief is true?  All else is the road to rationality ruin. 


What Belief Conformity?

I wrote:

We feel a deep pleasure from realizing that we believe something in common with our friends, and different from most people. … This feeling is EVIL.

Patri Friedman responded:

I see this bias as counteracting the bias of groupthink. The opposite bias is for people to enjoy believing what everyone else believes. This leads to homogeneity of viewpoints, less generation and testing of new hypotheses, and stasis. The people who enjoy believing they have a secret truth are those who nurture non-mainstream but plausible hypotheses, and accumulate new evidence to possibly challenge the mainstream. I think this is very valuable.

Yes, we want to explore a diversity of hypotheses, but this doesn’t require a diversity of beliefs; we can believe similar things while exploring different things.  Yes, groupthink seems to exist, but not as a general bias to conform to average beliefs; groupthink is a bias to conform to in-group beliefs against out-groups.  Thus by their nature groupthink biases of in-groups come already countered by out-groups. 

When a particular group (such as academia) rewards in-group conformity, you may at times be right to resist that.  But by doing so you would not be resisting some general pressure to conform with a global average; you would instead be favoring one group less than others.  I see no general conformity pressure in need of resisting; I instead see particular groupthinks, some of which may be preferred to others.

For example, in herding experiments, subjects must choose between a few acts (e.g., which movie to watch), where some acts pay better than others.  One at a time, subjects choose an act, after seeing both a private clue about act quality and others’ previous choices.  I’ve just reviewed 16 papers on this (including a dozen linked in the original post).
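
For readers who want the setup spelled out, here is a minimal simulation sketch of this kind of herding experiment; it assumes two acts, a fixed private-clue accuracy, and a simple majority-style decision rule, and all parameter values and names are illustrative rather than taken from the reviewed papers.

```python
import random

def herding_trial(n_subjects=10, clue_accuracy=0.7, good_act=1, seed=None):
    """One sequential-choice trial: each subject gets a private clue about
    which of two acts pays better, and also sees all earlier choices."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_subjects):
        # The private clue points to the good act with probability clue_accuracy.
        clue = good_act if rng.random() < clue_accuracy else 1 - good_act
        # Simple decision rule: weigh each earlier choice as if it were a clue,
        # breaking ties with one's own clue. (Fully Bayesian subjects would
        # discount choices made inside a cascade; this is only a sketch.)
        support_1 = choices.count(1) + (clue == 1)
        support_0 = choices.count(0) + (clue == 0)
        choices.append(1 if support_1 > support_0 else 0 if support_0 > support_1 else clue)
    return choices

# How often does a group end up herding on the worse act?
trials = [herding_trial(seed=i) for i in range(2000)]
bad_herds = sum(t[-3:] == [0, 0, 0] for t in trials) / len(trials)
print(f"Share of trials ending with three straight bad choices: {bad_herds:.2f}")
```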

Continue reading "What Belief Conformity?" »


Conformity Questions

Follow-up to: Conformity Myths

Robin posted earlier about a NYT Magazine article on conformity. I was able to find an online copy of the scientific paper here: http://psr.sagepub.com/cgi/reprint/10/1/2.

The synopsis from the NYT is not complete. Of the 12 times that people were challenged to disagree with the social consensus, the most popular choice was to agree 0 times. 25% of the subjects did this. The second most common was to agree 3 times, done by 14%. Third most common was agreeing 8 times, 11%. Only 5% went along with the crowd all 12 times.

I think it’s quite significant that 25% of subjects never went along with the crowd and stuck to their own perceptions. In total, only 32% of the answers were wrong.

I’m not sure I follow Robin’s comments on this. It seems to me that this re-interpretation of the classic experiment suggests that people are not as conformist as generally thought. That would mean that we do more than merely give lip service to celebrating independence, and that culturally we are quite effective at following the ideal of independent thinking.

The key question is, what is the right thing to do here? Should one conform when presented with 8 people denying the evidence of one’s own senses? I argue that conforming is the right thing to do.

Now of course, if you know you’re in a psychological experiment, maybe you can’t help but be suspicious that something fishy is going on. But in general, in real life, if 8 people come in and tell you that your perceptions are completely wrong, you should take it very seriously. I imagine that in the history of the world, in the great majority of such situations, the 8 were right and the one was wrong. As an example that some may be familiar with, if a bunch of friends come in and tell you you’re drinking too much, while your perception is that you can easily handle the alcohol, you should probably listen to them.

I would suggest that conformity is the right thing to do in these situations, and to that extent I am rather dismayed that the subjects were as non-conformist as this data shows.


Conformity Myths

Conformity gets a bad rap.  From NYT Mag:

The psychologists Bert Hodges and Anne Geyer recently took a new look at a well-known experiment devised by Asch in the 1950s.  Asch’s subjects were asked to look at a line printed on a white card and then tell which of three similar lines was the same length. The answer was obvious, but the catch was that each volunteer was sitting in a small group whose other members were actually in on the experiment. Asch found that when those other people all agreed on the wrong answer, many of the subjects went along with the group, against the evidence of their own senses.

But the question (Which of these lines matches the one on the card?) was not posed just once. Each subject saw 18 sets of lines, and the group answer was wrong for 12 of them. Examining all the data, Hodges and Geyer found that many people were varying their answers, sometimes agreeing with the group, more often sticking up for their own view. (The average participant gave in to the group three times out of 12.)

This means that the subjects in the most famous "people are sheep" experiment were not sheep at all – they were human beings who largely stuck to their guns, but now and then went along with the group.

Our culture gives lip service to celebrating independence and denigrating conformity, but not only do we not actually discourage conformity much, it is not obvious that conformity as typically practiced is such a bad thing.


Asch’s Conformity Experiment

Solomon Asch, with experiments originally carried out in the 1950s and well-replicated since, highlighted a phenomenon now known as “conformity”.  In the classic experiment, a subject sees a puzzle like the one in the nearby diagram:  Which of the lines A, B, and C is the same size as the line X?  Take a moment to determine your own answer…

The gotcha is that the subject is seated alongside a number of other people looking at the diagram – seemingly other subjects, actually confederates of the experimenter.  The other “subjects” in the experiment, one after the other, say that line C seems to be the same size as X.  The real subject is seated next-to-last.  How many people, placed in this situation, would say “C” – giving an obviously incorrect answer that agrees with the unanimous answer of the other subjects?  What do you think the percentage would be?

Continue reading "Asch’s Conformity Experiment" »


Best Case Contrarians

Consider opinions distributed over a continuous parameter, like the chance of rain tomorrow. Averaging over many topics, accuracy is highest at the median, and falls away for other percentile ranks. This is bad news for contrarians, who sit at extreme percentile ranks. If you want to think you are right as a contrarian, you have to think your case is an exception to this overall pattern, due to some unusual feature of you or your situation. A feature that suggests you know more than them.
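
A tiny simulation can illustrate this pattern, under the simplifying assumption that each person’s opinion is just the true value plus independent noise; the noise model and numbers here are purely illustrative.

```python
import random

def avg_error_by_rank(n_people=21, n_topics=5000, noise_sd=10.0, seed=0):
    """Average absolute error of the opinion at each percentile rank,
    assuming each opinion equals the truth plus independent normal noise."""
    rng = random.Random(seed)
    errors = [0.0] * n_people
    for _ in range(n_topics):
        truth = rng.uniform(0, 100)  # e.g., the chance of rain tomorrow
        opinions = sorted(rng.gauss(truth, noise_sd) for _ in range(n_people))
        for rank, opinion in enumerate(opinions):
            errors[rank] += abs(opinion - truth)
    return [e / n_topics for e in errors]

errs = avg_error_by_rank()
print("middle rank error: ", round(errs[len(errs) // 2], 2))
print("extreme rank error:", round(errs[0], 2))
# The middle ranks show the smallest average error, and error grows toward
# the extreme percentile ranks, where contrarians sit.
```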

Yet I am often tempted to hold contrarian opinions. In this post I want to describe the best case for being a contrarian. I’m not saying that most contrarians are actually in this best case. I’m saying that this is the case you most want to be in as a contrarian, as it can most justify your position.

I recently posted on how innovation is highest for more fragmented species, as species so often go wrong via conformity traps. For example, peacocks are now going wrong together with overly long tails. To win their local competitions, each peacock needs to have and pick the tails that are sexy to other peacocks, even if that makes them all more vulnerable to predators.

Salmon go wrong by having to swim up hard hazard-filled rivers to get to their mating groups. Only a third of them survive to return from that trip. Now imagine a salmon sitting in the ocean at the mouth of the river, saying to the other salmon:

We are suffering from a conformity trap here. I’m gonna stay and mate here, instead of going up river. If you stay here and mate with me, then we can avoid all those river hazards. We’ll survive, with more energy to help our kids, and win out over the others. Who’s with me?

Now salmon listening to this should wonder if genetic losers are especially likely to make such contrarian speeches. After all, they are the least likely to survive the river, and so the most desperate to avoid it. For all its harms, the river does function to sort out the salmon with the best genes. If you make it to the end, you know your mating partner will also be unusually fit.

So yes, those less likely to pass the river test are more likely to become salmon contrarians. But they aren’t the only ones. Also more likely are:
A) those who can better sort good from bad mates in other ways,
B) those who can better see the conformity traps, and see they are especially big,
C) those who can better see which are the best places to start alternatives to the conformity traps, and
D) those who happen to have invested less in, and thus are less tied to, existing traps. Like the young.

Our world suffers from myriad conformity traps. Investors who must coordinate with other investors (e.g., via the different levels of venture capital) may feel they must do crypto, as that’s what the others are doing, even if they don’t think much of crypto. Academics in fields that use too much math feel they too need to do too much math if they are to be respected there. And journalists and think tank pundits feel they must write on the topics everyone else is talking about, even if other topics are more important.

In all of these cases, it can make sense to try to initiate a contrarian alternative. If many others know about the existing conformity traps, they may also be looking for a chance to escape. The questions are then: when is the right time and place to initiate a contrarian move to escape such a trap? Who is best placed to initiate it, and how? And what is the ratio of the gains of success to the costs of failure?

In situations like this, the people who actually try contrarian initiatives may not be at all wrong on their estimates about the truth. They will be different in some ways yes, but not necessarily overall on truth accuracy. In fact, they are likely to be more informed on average in the sense of being better able to judge the overall conformity trap situation, and to evaluate partners in unusual ways.

That is, they can better judge how bad is the overall conformity trap, where are promising alternatives, and who are promising partners. Even if, yes, they are also probably worse on average at winning within the usual conformity-trapped system. Compared to others, contrarians are on average better at being contrarians, and worse at being conformists. Duh.

And that’s the best case for being a contrarian. Not so much because you are just better able to see truth in general. But because you are likely better in particular at seeing when it is time to bail on a collective that is all going wrong together. If the gains from success are high relative to the costs of failure, then most such bids should fail, making the contrarian bid “wrong” most of the time. But not making most bids themselves into mistakes.


On What Is Advice Useful?

In what areas of our lives do we think advisors can usefully advise? It depends on some combination of whether they actually know stuff, whether we can evaluate and incentivize their advice enough to get them to tell us what they know, and how possible it is to change the feature in question.

Yesterday I had an idea for how to find this out via polls. Ask people which feature of themselves they’d most like a respected advisor’s advice on improving, and also ask, for these same features, which they’d most like to increase by 1%. The ratio of their priority for getting advice, relative to just increasing the feature, should say how effective they think advice is regarding each feature.
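
As a sketch of that ratio rule, with made-up numbers rather than the actual poll responses, the computation would look something like this:

```python
# Hypothetical fitted priorities (max = 100); NOT the actual poll results.
advice_priority   = {"income": 90, "happiness": 30, "grandchildren": 2}
increase_priority = {"income": 45, "happiness": 80, "grandchildren": 10}

# Perceived advice effectiveness: priority for advice / priority for a 1% boost.
effectiveness = {f: advice_priority[f] / increase_priority[f] for f in advice_priority}
for feature, ratio in sorted(effectiveness.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {ratio:.2f}")
```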

So I picked these 16 features: attractiveness, confidence, empathy, excitement, general respect, grandchildren, happiness, improve world, income, intelligence, lifespan, pleasure, productive hrs/day, professional success, serenity, wit.

Then on Twitter I did two sets of eight (four answer) polls, one asking “Which feature of you would you most like to increase by 1%?”, and the other asking “For which feature do you most want a respected advisor’s advice?” I fit the responses to estimate relative priorities for each feature on each kind of question. And here are the answers (max priority = 100):

According to the interpretation I had in mind in creating these polls, advisors are very effective on income and professional success, pretty good at general respect and time productivity, terrible at grandchildren, and relatively bad at happiness, wit, pleasure, intelligence, and excitement.

However, staring at the results, I suspect people are being less honest about what they want to increase than about what they want advice on. Getting advice is a more practical choice, which puts them in more of a near mode, where they are less focused on which choice makes them look good.

However, I don’t believe people really care zero about grandchildren either. So, alas, these results are a messy mix of these effects. But interesting, nonetheless.

Added 11am: The advice results might be summarized by my grand narrative that industry moved us toward more forager-like attitudes in general, but to hyper-farmer attitudes regarding work, where we accept more domination and conformity pressures.

Added 24Jan: I continued with more related questions until I had a set of 12, then did this deeper analysis of them all.


To Innovate, Unify or Fragment?

In the world around us, innovation seems to increase with the size of an integrated region of activity. For example, human and computer languages with more users acquire more words and tools at a faster rate. Tech ecosystems, such as those collected around Microsoft, Apple, or Google operating systems, innovate faster when they have more participating suppliers and users. And there is more innovation per capita in larger cities, firms, and economies. (All else equal, of course.)

We have decent theories to explain all this: larger communities try more things, and each trial has more previous things to combine and build on. The obvious implication is that innovation will increase as our world gets larger, more integrated, and adopts more wider-shared standards and tech ecosystems. More unification will induce more innovation.

Simple theory also predicts that species evolve faster when they have larger populations. And this seems to have applied across human history. But if this were generally true across species, then we should expect most biological innovation to happen in the largest species, which would live in the largest most integrated environmental niches. Like big common ocean areas. And most other species to have descended from these big ones.

But in fact, more biological innovation happens where the species are the smallest, which happens where mobility is higher and environments are more fragmented and changing. For example, over the last half billion years, we’ve seen a lot more innovation on land than in the sea, more on the coasts than on the interiors of land or sea, and more closer to rivers. All more mobile and fragmented places. How can that be?

Maybe big things tend to be older, and old things rot. Maybe the simple theory mentioned above focuses on many small innovations, but doesn’t apply as well to the few biggest innovations, that require coordinating many supporting innovations. Or maybe phenomena like sexual selection, as illustrated by the peacock’s tail, show how conformity and related collective traps can bedevil species, as well as larger more unified tech ecosystems. It seems to require selection between species to overcome such traps; individual species can’t fix them on their own.

If so, why hasn’t the human species fallen into such traps yet? Maybe the current fertility decline is evidence of such a trap, or maybe such problems just take a long time to arise. Humans fragmenting into competing cultures may have saved us for a while. Individual cultures do seem to have often fallen into such traps. Relatively isolated empires consistently rise and then fall. So maybe cultural competition is mostly what has saved us from cultures falling into traps.

While one might guess that collective traps are a rare problem for species and cultures, the consistent collapse of human empires and our huge dataset on bio innovation suggest that such problems are in fact quite common. So common that we really need larger-scale competition, such as between cultures or species, to weed them out. To innovate, the key to growth, we need to fragment, not unify.

Which seems a big red loud warning sign about our current trend toward an integrated world culture, prey to integrated world collective traps, such as via world mobs. They might take some time to reveal themselves, but then be quite hard to eradicate. This seems to me the most likely future great filter step that we face.

Added 10Jan: There are papers on how to design a population structure to maximize the rate of biological evolution.


We Don’t Have To Die

You are mostly the mind (software) that runs on the brain (hardware) in your head; your brain and body are tools supporting your mind. If our civilization doesn’t collapse but instead advances, we will eventually be able to move your mind into artificial hardware, making a “brain emulation”. With an artificial brain and body, you could live an immortal life, a life as vivid and meaningful as your life today, where you never need feel pain, disease, grime, and your body always looks and feels young and beautiful. That person might not be exactly you, but they could (at first) be as similar to you as the 2001 version of you was to you today. I describe this future world of brain emulations in great detail in my book The Age of Em.

Alas, this scenario can’t work if your brain is burned or eaten by worms soon. But the info that specifies you is now only a tiny fraction of all the info in your brain and is redundantly encoded. So if we freeze all the chemical processes in your brain, either via plastination or liquid nitrogen, quite likely enough info can be found there to make a brain emulation of you. So “all” that stands between you and this future immortality is freezing your brain and then storing it until future tech improves.

If you are with me so far, you now get the appeal of “cryonics”, which over the last 54 years has frozen ~500 people when the usual medical tech gave up on them. ~3000 are now signed up for this service, and the [2nd] most popular provider charges $28K, though you should budget twice that for total expenses. (The 1st most popular charges $80K.) If you value such a life at a standard $7M, this price is worth it even if this process has only a 0.8% chance of working. It’s worth more if an immortal life is worth more, and more if your loved ones come along with you.
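
To spell out that arithmetic as a back-of-the-envelope check, using the figures above and assuming a total budget of roughly twice the $28K price:

```python
total_cost = 2 * 28_000      # budget about twice the $28K price, per the above
value_of_life = 7_000_000    # a standard statistical value of life
breakeven_chance = total_cost / value_of_life
print(f"breakeven chance of working: {breakeven_chance:.1%}")  # ~0.8%
```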

So is this chance of working over 0.8%? Some failure modes seem to me unlikely: civilization collapses, frozen brains don’t save enough info, or you die in a way that prevents freezing. And if billions of people used this service, there’d be a question of whether the future is willing, able, and allowed to revive you. But with only a few thousand others frozen, that’s just not a big issue. All these risks together have well below a 50% chance, in my opinion.

The biggest risk you face then is organizational failure. And since you don’t have to pay them if they aren’t actually able to freeze you at the right time, your main risk re your payment is re storage. Instead of storing you until future tech can revive you, they might instead mismanage you, or go bankrupt, allowing you to thaw. This already happened at one cryonics org.

If frozen today, I judge your chance of successful revival to be at least 5%, making this service worth the cost even if you value such an immortal future life at only 1/6 of a standard life. And life insurance makes it easier to arrange the payment. But more important, this is a service where the reliability and costs greatly improve with more customers. With a million customers, instead of a thousand, I estimate cost would fall, and reliability would increase, each by a factor of ten.
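
The same back-of-the-envelope arithmetic checks the claim above: at a 5% revival chance, the purchase breaks even if the future life is valued at roughly one sixth of a standard $7M life.

```python
total_cost = 2 * 28_000    # same total budget as before
revival_chance = 0.05      # the revival chance judged above
breakeven_value = total_cost / revival_chance
print(breakeven_value)     # 1,120,000
print(7_000_000 / 6)       # ~1,166,667: about one sixth of a $7M life
```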

Also, with more customers cryonics providers could afford to develop plastination, already demonstrated in research, into a practical service. This lets people be stored at room temperature, and thus ends most storage risk. Yes, with more customers, each might need to also pay to have future folks revive them, and to have something to live on once revived. But long time delays make that cheap, and so with enough customers total costs could fall to less than that of a typical funeral today. Making this a good bet for most everyone.

When the choice is between a nice funeral for Aunt Sally and having Aunt Sally not actually die, who will choose the funeral? And by buying cryonics for yourself, you also help move us toward the low cost cryonics world that would be much better for everyone. Most people prefer to extend existing lives over creating new ones.

Thus we reach the title claim of this post: if we coordinated to have many customers, it would be cheap for most everyone to not die. That is: most everyone who dies today doesn’t actually need to die! This is possible now. Ancient Egyptians, relative rationalists among the ancients, paid to mummify millions, a substantial fraction of their population, and also a similar number of animals, in hope of later revival. But we now actually can mummify in a way that allows revival, yet we have done that for only 500 people, over a period when over 4 billion people have died.

Why so few cryonics customers? When I’ve taught health economics, over 10% of students judge the chances of cryonics working to be high enough to justify a purchase. Yet none ever buy. In a recent poll, 31.5% of my followers said they planned to sign up, but few have. So the obstacle isn’t supporting beliefs, it is the courage to act on such beliefs. It looks quite weird to act on a belief in cryonics. So weird that spouses often divorce those who do. (But they don’t divorce those who spend a similar amount to send their ashes into space, which looks much less weird.) We like to think we tolerate diversity, and we do for unimportant stuff, but for important stuff we in fact strongly penalize diversity.

Sure it would help if our official medical experts endorsed the idea, but they are just as scared of non-conformity, and also stuck on a broken concept of “science” which demands someone actually be revived before they can declare cryonics feasible. Non-medical scientists who reasoned like that would insist we can’t say our sun will burn out until it actually does, or that rockets could take humans to Mars until a human actually stands on Mars. The fact that their main job is to prevent death, and that they could in fact prevent most death, doesn’t weigh much on them relative to showing allegiance to a broken science concept.

Severe conformity pressures also seem the best explanation for the bizarre range of objections offered to cryonics, objections that are not offered re other ways to cut death rates. The most common objection offered is just that it seems “unnatural”. My beloved colleague Tyler said reducing your death rate this way is selfish, you might be tortured if you stay alive, and in an infinite multiverse you can never die. Others suggest that freezing destroys your soul, that it would hurt the environment, that living longer would slow innovation, that you might be sad to live in a world different from that of your childhood, or that it is immoral to buy products that not absolutely everyone can afford.

While I wrote a pretty similar post a year ago, I wrote this as my Christmas present to Alex Tabarrok, who requested this topic.

Added 17Dec: The chance the future would torture a revived you is related to the chance we would torture an ancient revived today:

Answers were similar re a random older person alive today. And people today are actually tortured far less often than this suggests, as we organize society to restrain random individual torture inclinations. We should expect the future to also organize to prevent random torture, including of revived cryonics patients.

Also, if there were millions of such revived people, they could coordinate to revive each other and to protect each other from torture. Torture really does seem a pretty minor issue here.
