Tag Archives: Talk

Socializers Clump

Imagine that this weekend you and others will volunteer time to help tend the grounds at some large site – you’ll trim bushes, pull weeds, plant bulbs, etc. You might have two reasons for doing this. First, you might care about the cause of the site. The site might hold an orphanage, or a historical building. Second, you might want to socialize with others going to the same event, to reinforce old connections and to make new ones.

Imagine that instead of being assigned to work in particular areas, each person was free to choose where on the site to work. These different motives for being there are likely to reveal themselves in where people spend their time grounds-tending. The more that someone wants to socialize, the more they will work near where others are working, so that they can chat while they work, and while taking breaks from work. Socializing workers will tend to clump together.

On the other hand, the more someone cares about the cause itself, the more they will look for places that others have neglected, so that their efforts can create maximal value. These will tend to be places away from where socially-motivated workers are clumped. Volunteers who want more to socialize will tend more to clump, while volunteers who want more to help will tend more to spread out.

This same pattern should also apply to conversation topics. If your main reason for talking is to socialize, you’ll want to talk about whatever everyone else is talking about. Like say the missing Malaysia Airlines plane. But if instead your purpose is to gain and spread useful insight, so that we can all understand more about things that matter, you’ll want to look for relatively neglected topics. You’ll seek topics that are important and yet little discussed, where more discussion seems likely to result in progress, and where you and your fellow discussants have a comparative advantage of expertise.

You can use this clue to help infer the conversation motives of the people you talk with, and of yourself. I expect you’ll find that almost everyone mainly cares more about talking to socialize, relative to gaining insight.


Can a tiny bit of noise destroy communication?

If everyone knows a tenth of the population dishonestly claims to observe alien spaceships, this can make it very hard for the honest alien-spaceship-observer to communicate the fact that she has actually seen an alien spaceship.

In general, if the true state of the world is seen as not much more likely than your somehow sending the corresponding message falsely, it’s hard to communicate the true state.

You might think there needs to be quite a bit of noise relative to true claims, or for acting on true claims to be relatively unimportant, for the signal to get drowned out. Yet it seems to me that a relatively small amount of noise could overwhelm communication, via feedback.

Suppose you have a network of people communicating one-on-one with one another. There are two possible mutually exclusive states of the world – A and B – which individuals occasionally get some info about directly. They can tell each other about info they got directly, and also about info they heard from others. Suppose that everyone wants both themselves and others to believe the truth, but they also like to say that A is true (or to suggest that it is more likely). However, making pro-A claims is a bit costly for some reason, so it’s not worthwhile if A is false. Then everyone is honest, and everyone can trust what others say.

Now suppose that the costs people experience from making claims about A vary among the population. In the lowest reaches of the distribution, it’s worth lying about A. So there is a small amount of noise from people falsely claiming A. Also suppose that nobody knows anyone else’s costs specifically, just the distribution that costs are drawn from.

Now when someone gives you a pro-A message, there’s a small chance that it’s false. This slightly reduces the benefits to you of passing on such pro-A messages, since the value from bringing others closer to the truth is diminished. Yet you still bear the same cost. If the costs of sending pro-A messages were near the threshold of being too high for you, you will now stop sending pro-A messages.

From the perspective of other people, this decreases the probability that a given message of A is truthful, because some of the honest A messages have been removed. This makes passing on messages of A even less valuable, so more people further down the spectrum of costs find it not worthwhile. And so on.

At the same time as the value of passing on A-claims declines due to their likely falsehood, it also declines due to others anticipating their falsehood and thus not listening to them. So even if you directly observe evidence of A in nature, the value of passing on such claims declines (though it is still higher than for passing on an indirect claim).

I haven’t properly modeled this, but I guess that for many distributions of costs this soon reaches an equilibrium where everyone who still claims A honestly finds it worthwhile. But it seems that for some distributions, eventually nobody claims A honestly (though some still say A anyway, and sometimes A happens to be true).
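As a toy illustration of this unraveling (all parameters are hypothetical: costs uniform on an interval, a fixed exogenous fraction of lying claims, and listeners’ trust equal to the fraction of A-claims that are true), one can iterate trust to a fixed point:

```python
def equilibrium_trust(cost_lo, cost_hi, value=1.0, p_obs=0.5, noise=0.05, iters=200):
    """Iterate listeners' trust t = P(an A-claim is true) to a fixed point.

    Costs of sending a pro-A claim are uniform on [cost_lo, cost_hi].
    An honest observer passes on A iff value * t exceeds her cost, so the
    honest-sender fraction is F(value * t) under the uniform cost CDF.
    A fixed mass `noise` of A-claims are exogenous lies. All numbers here
    are illustrative assumptions, not taken from the post.
    """
    def cdf(x):  # uniform CDF of costs
        return min(1.0, max(0.0, (x - cost_lo) / (cost_hi - cost_lo)))

    t = 1.0
    for _ in range(iters):
        honest = p_obs * cdf(value * t)   # mass of true A-claims still sent
        total = honest + noise            # plus the fixed lying mass
        t = honest / total if total > 0 else 0.0
    return t

# Costs spread well below the value of honesty: a small amount of noise
# only slightly erodes trust.
print(equilibrium_trust(0.0, 1.0))   # interior equilibrium, t ≈ 0.9
# Costs bunched near the threshold: the same small noise unravels everything.
print(equilibrium_trust(0.8, 1.0))   # collapses to t = 0.0
```

The two calls show both regimes the post guesses at: a stable interior equilibrium, and a cascade in which each drop in trust pushes more honest senders below their cost threshold until no one passes on A at all.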

In this model the source of noise was liars at the bottom of the distribution of costs. These should also change during the above process. As the value of passing on A-claims declines, the cost threshold below which it is worth lying about such claims lowers. This would offset the new liars at the top of the spectrum, so lead to equilibrium faster. If the threshold becomes lower than the entire population, lying ceases. If others knew that this had happened, they could trust A-claims again. This wouldn’t help them with dishonest B-claims, which could potentially be rife, depending on the model. However they should soon lose interest in sending false B-claims, so this would be fixed in time. However by that time it will be worth lying about A again. This is less complicated if the initial noise is exogenous.


Could risk aversion be from friend thresholds?

If you are going for a job that almost nobody is going to get, it’s worth trying to be unusual. Better that one in a hundred employers loves you and the rest hate you than all of them think you’re mediocre.

On the other hand, if you are going for a job that almost everybody who applies is going to get, best to be as close to normal as possible.

In general, if you expect to fall on the bad side of some important threshold, it’s good to increase your variance and maybe make it over. If you expect to fall on the good side, it’s good to decrease your variance and stay there. This is assuming you can change your variance without changing your mean too much.
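A minimal worked example of this threshold logic, with a hypothetical hiring bar of 50 and your outcome uniformly distributed around your expected performance:

```python
def p_clear_threshold(mean, spread, threshold):
    """P(outcome >= threshold) when the outcome is uniform on
    [mean - spread, mean + spread]. Numbers below are illustrative."""
    if spread == 0:
        return 1.0 if mean >= threshold else 0.0
    return min(1.0, max(0.0, (mean + spread - threshold) / (2 * spread)))

T = 50  # the hiring bar (hypothetical)
# Expecting to fall short (mean 40): more variance helps.
print(p_clear_threshold(40, 5, T), p_clear_threshold(40, 20, T))   # 0.0 vs 0.25
# Expecting to clear it (mean 60): more variance hurts.
print(p_clear_threshold(60, 5, T), p_clear_threshold(60, 20, T))   # 1.0 vs 0.75
```

Raising the spread from 5 to 20 moves the chance of clearing the bar from 0 to 25% when you expect to fall short, and from 100% down to 75% when you expect to clear it, holding the mean fixed.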

This suggests people should be risk seeking sometimes, and risk averse other times, depending on where the closest or most important thresholds are for them.

Prospect theory and its collected evidence say that people are generally risk averse for gains, and risk seeking for losses. That is, if you offer them fifty dollars for sure or half a chance of a hundred, they’ll take the sure fifty. If you offer them minus fifty dollars for sure, or half a chance of minus one hundred, they’ll take the gamble. The proposed value function is concave over gains, convex over losses, and steeper for losses than for gains.
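That shape can be sketched with the standard functional form, using the parameter estimates from Tversky and Kahneman’s 1992 paper (α ≈ 0.88, λ ≈ 2.25):

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper for losses (parameters from Tversky & Kahneman, 1992)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Risk aversion for gains: the sure 50 beats half a chance of 100.
print(pt_value(50), 0.5 * pt_value(100))     # ≈ 31.3 vs ≈ 28.8
# Risk seeking for losses: half a chance of -100 beats a sure -50.
print(pt_value(-50), 0.5 * pt_value(-100))   # ≈ -70.4 vs ≈ -64.8
```

The concave exponent makes the sure fifty worth more than the gamble over gains, while the same curvature flipped through the reference point makes the gamble preferable over losses.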

The zero point is a ‘reference point’, usually thought to be something like expectations or the status quo. This means people feel differently about gaining fifty dollars versus a fifty percent chance of one hundred, and about being given one hundred and then offered minus fifty versus a fifty percent chance of minus one hundred, even though these situations are equivalent in final payoffs.

Risk aversion in gains and risk seeking in losses is what you would expect if people were usually sitting right near an important threshold, regardless of how much they had gained or lost in the past. What important threshold might people always be sitting on top of, regardless of their movement?

One that occurs to me is their friends’ and acquaintances’ willingness to associate with them. Which I will explain in a minute.

Robin has suggested that people should have high variance when they are getting to know someone, to make it over the friend threshold. Then they should tone it down if they make it over, so they don’t fall back under again.

This was in terms of how much information a person should reveal. But suppose people take into account how successful your life is in deciding whether they want to associate with you. For a given friend’s admiration, you don’t have that much to gain by getting a promotion say, because you are already good enough to be their friend. You have more to lose by being downgraded in your career, because there is some chance they will lose interest in associating with you.

Depending on how good the friend is, the threshold will be some distance below you. But never above you, because I specified friends, not potential friends. This is relevant, because it is predominantly friends, not potential friends, who learn about details of your life. Because of this selection effect, most of the small chances you take run the risk of sending bad news to existing friends more than sending good news to potential friends.

If you think something is going to turn out well, you should be risk averse because there isn’t much to gain sending better news to existing friends, but there is a lot to lose from maybe sending bad news. If you think something is going to go a tiny bit badly, you still want to be risk averse, as long as you are a bit above the thresholds of all your acquaintances. But if you think it’s going to go more badly, a small chance of it not going badly at all might be more valuable than avoiding it going more badly.

This is less clear when things go badly, because the thresholds for each of your friends can be spread out in the space below you, so there might be quite a distance where losing twice as much loses you twice as many friends. But it is less clear that people are generally risk seeking in losses. They do buy insurance for instance. It’s also plausible that most of the thresholds are not far below you, if people try to associate with the best people who will have them.

Another feature of the prospect theory value function is that the loss region is steeper than the gain region. That also fits with the present theory, where mostly you just have things to lose.

In sum, people’s broad patterns of risk aversion according to prospect theory seem explicable in terms of thresholds of association with a selection effect.

Can you think of a good way to test that?


Henson On Ems

Keith Henson, of whom I’ve long been a fan, has a new article where he imagines our descendants fragmenting, Roman-Empire-like, into distinct cultures, each a ~300 meter sphere sitting in the ocean (for cooling), holding ~30 million ems, each running ~1 million times faster than a human, and together using ~1TW of power. The 300m radius comes from a max of two subjective seconds of communication delay, and the 30 million number comes from assuming a shell of ~10cm cubes, each holding an em.

The 10cm size could be way off, but the rest is reasonable, at least given Henson’s key assumptions that 1) competition to seem sexy would push ems to run as fast as feasible, and 2) the scale of em “population centers” and culture is set by the distance at which talk suffers a two subjective seconds delay.
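As a back-of-envelope check (assuming light-speed signaling, and trying both a 150 m and a 300 m radius, since the article’s “300 meter” figure could be read either way):

```python
import math

C = 299_792_458        # speed of light, m/s
SPEEDUP = 1_000_000    # Henson's assumed em speedup
CUBE = 0.10            # one em per 10 cm cube

for radius in (150.0, 300.0):
    # Light delay across the sphere's diameter, in subjective seconds.
    delay = (2 * radius / C) * SPEEDUP
    # Ems in a one-cube-thick spherical shell at that radius.
    ems = 4 * math.pi * radius ** 2 / CUBE ** 2
    print(f"r = {radius:.0f} m: crossing delay ≈ {delay:.1f} subjective s, "
          f"shell holds ≈ {ems:.1e} ems")
```

A 150 m radius reproduces the ~30 million shell count (≈ 2.8 × 10⁷), while the full two-subjective-second crossing delay corresponds to a 300 m radius (which would hold ≈ 1.1 × 10⁸ in a shell); either way the quoted figures are the right order of magnitude.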

Alas those are pretty unreasonable assumptions. Ems don’t reproduce via sex, and would be selected for not devoting lots of energy to sex. Yes, sex is buried deep in us, so ems would still devote some energy to it. But not so much as to make sex the overwhelming factor that sets em speeds. Not given em econ competitive pressures and the huge selection factors possible. I’m sure it is sexy today to spend money like a billionaire, but most people don’t because they can’t afford to. Since running a million times faster should cost a million times more, ems might not be able to afford that either.

Also, the scale at which we can talk without delay has just not been that important historically in setting our city and culture scales. We had integrated cultures even when talking suffered weeks of delay, we now have many cultures even though we can all talk without much delay, and city scales have been set more by how far we can commute in an hour than by communication delays. So while ems might well have a unit of organization corresponding to their easy-talk scale, important interactions should also exist at larger scales.



Talk Rules Are Classist

Our society claims to be concerned about less-favored races, religions, genders, sexual preferences, etc. But our most visible and well-enforced policies for showing such concern are rules about what folks may not say. And these rules are heavily classist, imposing much larger burdens on lower classes. Let me explain.

Humans have complex coalition politics, wherein we jockey for allies, test potential allies for weaknesses, and try to undermine rivals. We often communicate at several levels at once, with overt talk that better withstands outside scrutiny, and covert talk that is more free.

Lower “working” class cultures tend to talk more overtly. Insults are more direct and cutting, friends and co-workers often tease each other about their weaknesses. Nicknames often express weakness – a fat man might be nicknamed “slim.”

Upper class culture, in contrast, tends more to emphasize politeness and indirect communication. This helps to signal intelligence and social awareness, and distinguishes upper from lower classes. Upper class folks can be just as cruel, but their words have more plausible deniability.

The enforcement of laws against racist, sexist, etc. expressions is limited by the ability of courts and related observers to agree on the intent of what was said. Observers will not have access to all the local context and history that local folks use to interpret each others’ words. Now since official observers like judges tend to be upper class, they do tend to be better able to interpret the intent of upper class words. But this advantage seems insufficient to compensate for the much greater indirection and politeness of upper class talk.

So when an upper and a lower class person both express disfavor with a certain race, religion, gender, sexual preference, etc., the lower class expression is more likely to be legally and socially verifiable as racist, sexist, etc. If we add in the general reluctance of legal and social systems to punish upper class folks relative to lower class folks, we see that the burden of such policies mostly falls on the lower classes.

Could it be that advantaged folks are especially eager to support policies to help the disadvantaged when the costs of such policies are mainly borne by someone else?

(Idea stolen from a conversation with Katja Grace.)
