Author Archives: Katja Grace

More signaling

Centurion: Where is Brian of Nazareth?
Brian: You sanctimonious bastards!
Centurion: I have an order for his release!
Brian: You stupid bastards!
Mr. Cheeky: Uh, I’m Brian of Nazareth.
Brian: What?
Mr. Cheeky: Yeah, I – I – I’m Brian of Nazareth.
Centurion: Take him down!
Brian: I’m Brian of Nazareth!
Victim #1: Eh, I’m Brian!
Mr. Big Nose: I’m Brian!
Victim #2: Look, I’m Brian!
Brian: I’m Brian!
Victims: I’m Brian!
Gregory: I’m Brian, and so’s my wife!

– Monty Python’s Life of Brian

It’s easy for everyone to claim to be Brian. What Brian (and those who wish to identify him) need is a costly signal: an action that’s only worth doing if you are Brian, given that anyone who does the act will be released. In Brian’s life-or-death situation it is pretty hard to arrange such a thing. But in many other situations, costly signals can be found. An unprotected posture can be a costly signal of confidence in your own fighting ability, if this handicap is small for a competent fighter but dangerous for a bad fighter. College can act as a costly signal of diligence, if lazy, disorganized people who don’t care for the future would find attending college too big a cost for the improved job prospects.

A situation requires costly signaling when one party wishes to treat two types of people differently, but both types of people want to be treated in the better way. One way to think of this as a game: Nature decides between A and -A, then the sender observes Nature’s choice and gives a signal to the receiver, B or -B. Then the receiver takes an action, C or -C. The sender always wants the receiver to do C, but the receiver wants to do C if A and -C if -A. To stop the sender from lying, you can modify the costs to the sender of B and -B.
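Here is a minimal sketch of the separating condition in such a game, assuming, as in the fighter and college examples, that the signal B is cheaper to send when A is actually true; the payoff numbers are made up for illustration.

```python
def honest_signaling_works(gain_from_C, cost_of_B_if_A, cost_of_B_if_not_A):
    """Separating ('honest') conditions for a sender who always wants C,
    when the receiver chooses C only after seeing the signal B.

    Honesty requires that paying for B is worthwhile when A is true,
    and not worthwhile when A is false. All numbers are illustrative."""
    worth_signaling_when_A = gain_from_C - cost_of_B_if_A >= 0
    not_worth_faking_when_not_A = gain_from_C - cost_of_B_if_not_A <= 0
    return worth_signaling_when_A and not_worth_faking_when_not_A

# A costless claim can't separate: everyone says "I'm Brian".
print(honest_signaling_works(gain_from_C=10, cost_of_B_if_A=0, cost_of_B_if_not_A=0))   # False
# A signal that is cheap given A but ruinous given -A can separate.
print(honest_signaling_works(gain_from_C=10, cost_of_B_if_A=2, cost_of_B_if_not_A=15))  # True
```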

Suppose instead that the sender and the receiver perfectly agreed: either both wanted C always, or both wanted C if A and -C if -A. Then the players can communicate perfectly well even if all of the signals are costless – the sender has every reason to tell the receiver the truth.

If players can have either of these two kinds of preferences, then with two players these are the two kinds of signaling equilibria you can have (if the receiver always wants C, then he doesn’t listen to signals anyway).

Most of the communication in society involves far more than two players. But you might suppose it can be basically decomposed into two player games. That is, if two players who talk to each other both want C iff A, you might suppose they can communicate costlessly, regardless of who the first one got the message from and where the message goes afterwards. If the first always wants C, you might expect costly signaling. If the second always wants C, you might expect the message to be unable to pass that part of the chain. This modularity is important, because we mostly want to model little bits of big communication networks using simple models.

Surprisingly, this is not how signaling pairs fit together. To see this, consider the simplest more complicated case: a string of three players, playing Chinese Whispers. Nature chooses, the sender sees and tells an intermediary, who tells a receiver, who acts. Suppose the sender and the intermediary both always want C, while the receiver wants to act appropriately to Nature’s choice. By the above modular thesis, there will be a signaling equilibrium where the first two players talk honestly for free, and the second and third use costly signals between them.

Suppose everyone is following this strategy: the sender tells the intermediary whatever she sees, and the intermediary also tells the receiver honestly, because when he would like to lie, the signal to do so is too expensive. Suppose you are the sender, and looking at Nature you see -A. You know that the other players follow the above strategy. So if you tell the intermediary -A, he will transmit this to the receiver, though, setting aside signal prices, he would rather not. And that’s too bad for you, because you want C.

Suppose instead you lie and say A. Then the intermediary will pay the cost to send this message to the receiver, since he assumes you too are following the above set of strategies. Then the receiver will do what you want: C. So of course you lie to the intermediary, and send the message you want with all the signaling costs of doing so accruing to the intermediary. Your values were aligned with his before taking into account signaling costs, but now they are so out of line you can’t talk to each other at all. Given that you behave this way, he will quickly stop listening to you. There is no signaling equilibrium here.

In fact, to get the sender to communicate honestly with the intermediary, you need the signals between the sender and the intermediary to be costly too: just as costly as the ones between the intermediary and the receiver, assuming the other payoffs involved are the same for each of them. So if you add an honest signaling game before a costly signaling game, you get something that looks like two costly signaling games.
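A rough numerical version of this argument (the numbers are again made up; here I give the sender and intermediary a larger gain from C when A is true, which is another way of making the two-player costly signaling stage work):

```python
# Illustrative payoffs: gross benefit to the sender and the intermediary if
# the receiver does C. Both are positive (they always want C), but the
# benefit is larger when A is true, so a suitably priced message can separate.
BENEFIT_IF_A, BENEFIT_IF_NOT_A = 10, 4
SIGNAL_COST = 6   # price of sending an 'A' message to the receiver

# Two-player check: with this price, someone who observes Nature directly
# and talks straight to the receiver is honest.
assert BENEFIT_IF_A - SIGNAL_COST > 0       # worth paying for the signal when A
assert BENEFIT_IF_NOT_A - SIGNAL_COST < 0   # not worth faking it when -A

# Three players, naive 'modular' profile, true state -A: the sender reports
# for free, the intermediary relays and pays SIGNAL_COST for 'A' messages,
# and the receiver does C iff he hears 'A'.
sender_payoff_if_truthful = 0              # receiver ends up doing -C
sender_payoff_if_lying = BENEFIT_IF_NOT_A  # intermediary pays the cost, receiver does C
print(sender_payoff_if_lying > sender_payoff_if_truthful)   # True: the sender lies for free

# Fix: charge the sender the same price for reporting 'A'.
print(BENEFIT_IF_NOT_A - SIGNAL_COST > 0)  # False: lying no longer pays
```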

For example, take a simple model where scientists observe results, and tell journalists, who tell the public. The scientist and the journalist might want the public to be excited regardless of the results, whereas the public might want to keep their excitement for exciting results. In order for journalists who have exciting news to communicate it to the public, they need to find a way of sending signals that can’t be cheaply imitated by the unlucky journalists. However now that the journalists are effectively honest, scientists have reason to misrepresent results to them. So before information can pass through the whole chain, the scientists need to use costly signals too.

If you have an arbitrarily long chain of people talking to each other in this way, with any combination of these two payoff functions among the intermediaries, everyone who starts off always wanting C must face costly signals, of the same size as if they were in an isolated two player signaling game. Everyone who wants C iff A can communicate for free. It doesn’t matter whether communicating pairs are cooperative or not, before signaling costs. So for instance a whole string of people who apparently agree with one another can end up using costly signals to communicate because the very last one talks to someone who will act according to the state of the world.

So such things are not modular in the way you might first expect, though they are easily predicted by other simple rules. I’m not sure what happens in more complicated networks than strings. The aforementioned results might influence how networks form, since in practice it should be effectively cheaper overall to direct information through smaller numbers of people with the wrong type of payoffs. Anyway, this is something I’ve been working on lately. More here.

Filters and bottlenecks

Lots of processes have filters: a certain proportion of the time they fail at that stage. There are filters in the path from dead stars to booming civilizations. There are filters in the path from being a baby to being an old person. There are filters in the path from having an idea to having a thriving business.

Lots of processes also have bottlenecks. These look similar, in that many things fail at that point. For instance the path to becoming a Nobel Prize winner is bottlenecked by there only being so many Nobel Prizes every year. Rather than a fixed fraction of people getting past that barrier, a fixed number of people do.

It’s worth noticing whether something is a filter or a bottleneck, because you should often treat them differently. Either way you can increase the fraction reaching the end by widening the filter or bottleneck to let more past. But with a filter it might also be worth widening any other stage in the process, whereas with a bottleneck, widening earlier stages is pointless. You can’t get more Nobel Prize winners by improving education, but you might get more thriving businesses.
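A toy sketch of the difference, with all the rates and capacities invented for illustration:

```python
def filter_stage(n, pass_fraction):
    """A filter: a fixed proportion gets through."""
    return n * pass_fraction

def bottleneck_stage(n, capacity):
    """A bottleneck: at most a fixed number gets through."""
    return min(n, capacity)

def thriving_businesses(ideas):
    # Two filters in sequence (illustrative rates): widening either stage helps.
    return filter_stage(filter_stage(ideas, 0.01), 0.1)

def nobel_winners(students):
    # A filter followed by a bottleneck (illustrative numbers): once the
    # bottleneck binds, improving the earlier stage changes nothing.
    return bottleneck_stage(filter_stage(students, 1e-6), capacity=11)

print(thriving_businesses(1_000_000), thriving_businesses(2_000_000))   # 1000.0 2000.0
print(nobel_winners(50_000_000), nobel_winners(100_000_000))            # 11 11
```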

Can a tiny bit of noise destroy communication?

If everyone knows a tenth of the population dishonestly claims to observe alien spaceships, this can make it very hard for the honest alien-spaceship-observer to communicate the fact that she has actually seen an alien spaceship.

In general, if the true state of the world isn’t seen as much more likely than your sending the corresponding message falsely for some other reason, it’s hard to communicate the true state.

You might think there needs to be quite a bit of noise relative to true claims, or for acting on true claims to be relatively unimportant, for the signal to get drowned out. Yet it seems to me that a relatively small amount of noise could overwhelm communication, via feedback.

Suppose you have a network of people communicating one-on-one with one another. There are two possible mutually exclusive states of the world – A and B – which individuals occasionally get some info about directly. They can tell each other about info they got directly, and also about info they heard from others. Suppose that everyone likes themselves and others to believe the truth, but they also like to say that A is true (or to suggest that it is more likely). However, making pro-A claims is a bit costly for some reason, so it’s not worthwhile if A is false. Then everyone is honest, and everyone can trust what the others say.

Now suppose that the costs people experience from making claims about A vary among the population. In the lowest reaches of the distribution, it’s worth lying about A. So there is a small amount of noise from people falsely claiming A. Also suppose that nobody knows anyone else’s costs specifically, just the distribution that costs are drawn from.

Now when someone gives you a pro-A message, there’s a small chance that it’s false. This slightly reduces the benefit to you of passing on such pro-A messages, since the value from bringing others closer to the truth is diminished. Yet you still bear the same cost. If your costs of sending pro-A messages were near the threshold of being too high, you will now stop sending them.

From the perspective of other people, this decreases the probability that a given message of A is truthful, because some of the honest A messages have been removed. This makes passing on messages of A even less valuable, so more people further down the spectrum of costs find it not worthwhile. And so on.

At the same time as the value of passing on A-claims declines due to their likely falsehood, it also declines due to others anticipating their falsehood and thus not listening to them. So even if you directly observe evidence of A in nature, the value of passing on such claims declines (though it is still higher than for passing on an indirect claim).

I haven’t properly modeled this, but I guess that for lots of cost distributions this soon reaches an equilibrium where everyone who still claims A honestly finds it worthwhile. But it seems that for some distributions, eventually nobody ever claims A honestly (though sometimes they would have said A either way, and in fact A happened to be true).

In this model the source of noise was liars at the bottom of the distribution of costs. These should also change during the above process. As the value of passing on A-claims declines, the cost threshold below which it is worth lying about such claims falls too. The resulting loss of liars at the bottom of the spectrum would offset the loss of honest senders at the top, and so lead to equilibrium faster. If the lying threshold falls below everyone’s costs, lying ceases. If others knew that this had happened, they could trust A-claims again. This wouldn’t help them with dishonest B-claims, which could potentially be rife, depending on the model. However, people should soon lose interest in sending false B-claims, so this would be fixed in time. But by that time it will be worth lying about A again. This is all less complicated if the initial noise is exogenous.
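Here is a rough simulation sketch of the simpler, exogenous-noise version of this feedback. The functional form (the value of honestly passing on a pro-A claim scales with the fraction of pro-A claims that are true) and the cost distributions are assumptions for illustration.

```python
import numpy as np

def equilibrium(costs, v=1.0, noise=0.2):
    """Iterate the feedback with a fixed (exogenous) volume of false pro-A
    claims. Assumed functional form: the value of honestly passing on a
    pro-A claim scales with the fraction of pro-A claims that are true;
    senders whose cost exceeds that value drop out, lowering the fraction
    further, and so on until nothing changes."""
    honest = costs <= v   # initially: everyone for whom honesty pays if fully trusted
    while True:
        true_volume = honest.mean()
        trust = true_volume / (true_volume + noise) if true_volume > 0 else 0.0
        new_honest = costs <= v * trust
        if (new_honest == honest).all():
            return honest.mean(), trust
        honest = new_honest

rng = np.random.default_rng(0)
cheap_honesty = rng.uniform(0.0, 1.0, size=10_000)     # illustrative: many find honesty cheap
marginal_honesty = rng.uniform(0.6, 1.0, size=10_000)  # illustrative: honesty near the margin for all

for name, costs in [("cheap honesty", cheap_honesty), ("marginal honesty", marginal_honesty)]:
    frac, trust = equilibrium(costs, v=1.0, noise=0.2)
    print(f"{name}: honest fraction {frac:.2f}, trust in pro-A claims {trust:.2f}")
```

With costs spread down to zero the unraveling stops partway, at an equilibrium where the remaining honest claimers still find sending worthwhile; with costs bunched near the margin, the same modest amount of noise destroys honest A-claims entirely.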

Could risk aversion be from friend thresholds?

If you are going for a job that almost nobody is going to get, it’s worth trying to be unusual. Better that one in a hundred employers loves you and the rest hate you than all of them think you’re mediocre.

On the other hand, if you are going for a job that almost everybody who applies is going to get, best to be as close to normal as possible.

In general, if you expect to fall on the bad side of some important threshold, it’s good to increase your variance and maybe make it over. If you expect to fall on the good side, it’s good to decrease your variance and stay there. This is assuming you can change your variance without changing your mean too much.

This suggests people should be risk seeking sometimes, and risk averse other times, depending on where the closest or most important thresholds are for them.

Prospect theory and its collected evidence says that people are generally risk averse for gains, and risk seeking for losses. That is, if you offer them fifty dollars for sure or half a chance of a hundred, they’ll take the sure fifty. If you offer them minus fifty dollars for sure, or half a chance of minus one hundred, they’ll take the gamble. The proposed value function looks something like this:
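A common way of writing that curve is the Tversky and Kahneman (1992) functional form

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \\
-\lambda\,(-x)^{\beta} & \text{if } x < 0
\end{cases}
\]

with their estimated parameters roughly α ≈ β ≈ 0.88 and λ ≈ 2.25: concave for gains, convex for losses, and steeper for losses than for gains.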

The zero point is a ‘reference point’, usually thought to be something like expectations or the status quo. This means people feel differently about a choice between gaining fifty dollars and a fifty percent chance of gaining one hundred, and a choice, after being given one hundred, between losing fifty and a fifty percent chance of losing one hundred, even though the two situations are equivalent in payoffs.

Risk aversion in gains and risk seeking in losses is what you would expect if people were usually sitting right near an important threshold, regardless of how much they had gained or lost in the past. What important threshold might people always be sitting on top of, regardless of their movement?

One that occurs to me is their friends’ and acquaintances’ willingness to associate with them. Which I will explain in a minute.

Robin has suggested that people should have high variance when they are getting to know someone, to make it over the friend threshold. Then they should tone it down if they make it over, so they don’t fall back under again.

This was in terms of how much information a person should reveal. But suppose people take into account how successful your life is in deciding whether they want to associate with you. For a given friend’s admiration, you don’t have that much to gain by getting a promotion say, because you are already good enough to be their friend. You have more to lose by being downgraded in your career, because there is some chance they will lose interest in associating with you.

Depending on how good the friend is, the threshold will be some distance below you. But never above you, because I specified friends, not potential friends. This is relevant, because it is predominantly friends, not potential friends, who learn about details of your life. Because of this selection effect, most of the small chances you take run the risk of sending bad news to existing friends more than sending good news to potential friends.

If you think something is going to turn out well, you should be risk averse because there isn’t much to gain sending better news to existing friends, but there is a lot to lose from maybe sending bad news. If you think something is going to go a tiny bit badly, you still want to be risk averse, as long as you are a bit above the thresholds of all your acquaintances. But if you think it’s going to go more badly, a small chance of it not going badly at all might be more valuable than avoiding it going more badly.

This is less clear when things go badly, because the thresholds for each of your friends can be spread out in the space below you, so there might be quite a distance where losing twice as much loses you twice as many friends. But it is less clear that people are generally risk seeking in losses. They do buy insurance for instance. It’s also plausible that most of the thresholds are not far below you, if people try to associate with the best people who will have them.
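Here is a quick sketch of this threshold story with made-up numbers. Friends’ thresholds are drawn at or below your current status (the selection effect), under two illustrative assumptions about where they sit, and each gamble is compared to a sure outcome with the same mean. Note that the ‘gain’ gamble includes some chance of bad news, which is what does the work.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_friends_kept(outcomes, probs, thresholds):
    """Expected number of friends whose association threshold you stay above,
    for a gamble over changes to your status (current status = 0)."""
    return sum(p * (thresholds <= o).sum() for p, o in zip(probs, outcomes))

n = 100_000
spread_out = rng.uniform(-1.0, 0.0, size=n)   # illustrative: thresholds anywhere below you
just_below = rng.uniform(-0.2, 0.0, size=n)   # illustrative: thresholds clustered just below you

for name, th in [("spread-out", spread_out), ("just-below", just_below)]:
    sure_gain = expected_friends_kept([+0.5], [1.0], th)
    risky_gain = expected_friends_kept([-0.5, +1.5], [0.5, 0.5], th)   # same mean, more variance
    sure_loss = expected_friends_kept([-0.5], [1.0], th)
    risky_loss = expected_friends_kept([-1.0, 0.0], [0.5, 0.5], th)    # same mean, more variance
    print(f"{name}: gains {sure_gain:.0f} vs {risky_gain:.0f}, "
          f"losses {sure_loss:.0f} vs {risky_loss:.0f}")
```

With thresholds spread anywhere below you, the sure option beats the gamble for gains while the losses are roughly a wash; with thresholds clustered just below you, there is also risk seeking for losses.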

Another feature of the prospect theory value function is that the loss region is steeper than the gain region. That also fits with the present theory, where mostly you just have things to lose.

In sum, people’s broad patterns of risk aversion according to prospect theory seem explicable in terms of thresholds of association combined with a selection effect.

Can you think of a good way to test that?

Why underestimate acceptable partners?

The romantic view of romance in Western culture says a very small fraction of people would make a great partner for you, customarily one.

Some clues suggest that in fact quite a large fraction of people would make a suitable spouse for a given person. Arranged marriages apparently go pretty well rather than terribly. Relationships are often formed between the only available people in a small group, forced together. ‘If I didn’t have you‘ by Tim Minchin is funny. It could be that relationships chosen in constrained circumstances are a lot worse than others, though I haven’t heard that. But they are at least common enough that people find them worthwhile. And the fraction of very good mates must be at least a lot greater than suggested by the romantic view, as evidenced by people ever finding them.

So it seems we overstate the rarity of good matches. Why would we do that? One motive would be to look like you have high standards, which suggests that you are good enough yourself to support such standards.

But does this really make sense? In practice, most of the ways a person could be especially unusual such that it is hard for them to find a suitable mate are not in the direction of greatness. Most of them are just in various arbitrary directions of weirdness.

If I merely sought mates with higher mate value than me, they wouldn’t be that hard to find. They are mostly hard to find because I just don’t really get on well with people unless they are on some kind of audacious quest to save the world, in the top percentile of ‘overthinking things’ and being explicit, don’t much mind an above average degree of neuroticism on my part, and so on.

The romantic view is much closer to the truth for weird people than normal people. So while endorsing the romantic view should make you look more elite, by this argument it should much more make you look weird. In most cases – especially during romance – people go to a lot of trouble to not look weird. So it seems this is probably not how it is interpreted.

Most of anyone’s difficulty in finding mates should be due to them being weird, not awesome. So why does considering a very small fraction of people suitable make you seem good rather than weird?

The value of time as a student

When I was at college, many of my associates had part time jobs, or worked during school breaks. They were often unpleasant, uninspiring, and poorly paid jobs, such as food preparation. Some were better, such as bureaucracy. But they were generally much worse than any job we would expect to have after graduating. I think this is normal.

It was occasionally suggested that I too should become employed. This seemed false to me, for the following reasons. There are other activities I want to spend a lot of time on in my life, such as thinking about things. I expect the nth hour of thinking about things to be similarly valuable regardless of when it happens. Whether I spend a hundred extra hours on it this year or a hundred extra hours in five years, I expect to have about the same amount of understanding at the end, and I expect hours spent in ten years to be about as valuable either way.

Depending on what one is thinking about, moving hours of thinking earlier might make them more valuable. Understanding things early on probably adds value to other activities, and youth is purportedly helpful for thinking. Also a better understanding early on probably makes later observations (which automatically happen with passing time) more useful.

This goes for many things. Learning an instrument, reading about a topic, writing. Some things are even more valuable early on in life, such as making friends, gaining respect and figuring out efficient lifestyle logistics.

Across many periods of time, work is roughly like this. It is the total amount of work you do that matters. But between before and after graduating, this is not so!

If activity A is a lot more valuable in the future, and activity B is about as valuable now or in the future, all things equal I should trade them and do B now.

Yes, work before graduating might get you a better wage after graduating, but so will the same amount of work after graduating, and it will be paid more at the time. Yes, you will be a year behind say, but you will have done something else for a year that you no longer need to do in the future.

On the other hand, working seems a great option if you have pressing needs for money now, or a strong aversion to indebtedness. My guess is that the latter played a large part in others’ choices. In Australia, most youth whose families aren’t wealthy can get enough money to live on from the government, and anyone can defer paying tuition indefinitely.

It seems that college students generally treat their time as low value. Not only do they work for low wages, but they go to great efforts to get free food, and are happy to spend an hour of three people’s time to acquire discarded furniture they wouldn’t spend a hundred dollars on. This seems to mean they don’t think these activities they could do at any time in their life are valuable. If you are willing to trade an hour you could be reading for $10 worth of value, you don’t value reading much. When these people are paid a lot more, will they give up activities like reading altogether? If not, it seems they must think reading is also more valuable in the future than now, with the relative values jumping roughly in line with the value of working at those times. Or do they just make an error? Or am I just making some error?

The transitivity of trust

Suppose you tell a close friend a secret. You consider them trustworthy, and don’t fear for its release. Suppose they request to tell the secret to a friend of theirs who you don’t know. They claim this person is also highly trustworthy. I think most people would feel significantly less secure agreeing to that.

In general, people trust their friends. Their friends trust their own friends, and so on. But I think people trust friends of friends, or friends of friends of friends, less than proportionally. e.g. if you act like there’s a one percent chance of your friend failing you, you don’t act like there’s a 1-(.99*.99) (about two percent) chance of your friend’s friend failing you.

One possible explanation is that we generally expect the people we trust to have much worse judgement about who to trust than about the average thing. But why would this be so? Perhaps everyone does just have worse judgement about who to trust than they do about other things. But to account for what we observe, people would on average have to think themselves better in this regard than others. Which might not be surprising, except that they would have to think their advantage over others is bigger in this domain than in other domains. Otherwise they would just trust others less in general. Why would this be?

Another possibility I have heard suggested is that we trust our friends more than is warranted by their true probability of defecting, for non-epistemic purposes. In which case, which purposes?

Trusting a person involves choosing to make your own payoffs depend on their actions in a circumstance where it would not be worth doing so if you thought they would defect with high probability. If you think they are likely to defect, you only rely on them when there are particularly large gains from them cooperating combined with small losses from them defecting. As they become more likely to cooperate, trusting them in more cases becomes worthwhile. So trusting for non-epistemic purposes involves relying on a person in a case where their probability of defecting should make it not worthwhile, for some other gain.

What other gains might you get? Such trust might signal something, but consistently relying too much on people doesn’t seem to make one look good in any way obvious to me. It might signal to that person that you trust them, but that just brings us back to the question of how trusting people excessively might benefit you.

Maybe merely relying on a person in such a case could increase their probability of taking the cooperative action? This wouldn’t explain the intransitivity on its own, since we would need a model where trusting a friend’s friend doesn’t cause the friend’s friend to become more trustworthy.

Another possibility is that merely trusting a person does not get such a gain, but a pair trusting one another does. This might explain why you can trust your friends above their reliability, but not their friends. By what mechanism could this happen?

An obvious answer is that a pair who keep interacting might cooperate a lot more than they naturally would to elicit future cooperation from the other. So you trust your friends the correct amount, but they are unusually trustworthy toward you. My guess is that this is what happens.

So here the theory is that you trust friends substantially more than friends of friends because friends have the right incentives to cooperate, whereas friends of friends don’t. But if your friends are really cooperative, why would they give you unreliable advice – to trust their own friends?

One answer is that your friends believe trustworthiness is a property of individuals, not relationships. Since their friends are trustworthy for them, they recommend them to you. But this leaves you with the question of why your friends are wrong about this, yet you know it. Particularly since generalizing this model, everyone’s friends are wrong, and everyone knows it.

One possibility is that everyone learns these things from experience, and they categorize the events in obvious ways that are different for different people. Your friend Eric sees a series of instances of his friend James being reliable and so he feels confident that James will be reliable. You see a series of instances of different friends of friends not being especially reliable and see James most easily as one of that set. It is not that your friends are more wrong than you, but that everyone is more wrong when recommending their friends to others than when deciding whether to trust such recommendations, as a result of sample bias. Eric’s sample of James mostly contains instances of James interacting with Eric, so he does overstate James’ trustworthiness. Your sample is closer to the true distribution of James’ behavior. However you don’t have an explicit model of why your estimate differs from Eric’s, which would allow you to believe in general that friends overestimate the trustworthiness of their friends to others, and thus correct your own such biases.

Rude research

Bryan Caplan says intelligence research is very unpopular because it looks so bad to call half of people stupider than average, let alone stupid outright. Calling people stupid is rude.

But if this is the main thing going on, many other kinds of research should be similarly hated. It’s rude to call people lazy, ugly bastards whose mothers wouldn’t love them. Yet there is little hostility regarding research into conscientiousness, physical attractiveness, parental marriage status, or personal relationships. At least as far as I can tell. Is there? Or what else is going on with intelligence?

Significance and motivation

Over at Philosophical Disquisitions, John Danaher is discussing Aaron Smuts’ response to Bernard Williams’ argument that immortality would be tedious. Smuts’ thesis, in Danaher’s words, is a familiar one:

Immortality would lead to a general motivational collapse because it would sap all our decisions of significance.

This is interestingly at odds with my observations, which suggest that people are much more motivated to do things that seem unimportant, and have to constantly press themselves to do important things once in a while. Most people have arbitrary amounts of energy for reading unimportant online articles, playing computer games, and talking aimlessly. Important articles, serious decisions, and momentous conversations get put off.

Unsurprisingly then, people also seem to take more joy from apparently long-run insignificant events. Actually I thought this was the whole point of such events. For instance people seem to quite like cuddling and lazing in the sun and eating and bathing and watching movies. If one had any capacity to get bored of these things, I predict it would happen within the first century. While significant events also bring joy, they seem to involve a lot more drudgery in preceding build up.

So it seems to me that living forever could only take the pressure off and make people more motivated and happy. Except inasmuch as the argument is faulty in other ways, e.g. impending death is not the only time constraint on activities.

Have I missed something?

OB/LW Party, Berkeley

Robin and I will both be in the Bay Area early for the Singularity Summit. We’d like to warm up for the big weekend with a little Overcoming Bias/LessWrong meetup party. The folks of 2135 Oregon St have kindly lent their home for this purpose, so please join us there between 7 and 10pm on Thurs 11 October to practice chatting about important and interesting things and maybe dance a bit. There’ll be some snacks and drinks, but feel free to bring more. There’ll be street parking, and the Ashby BART station is half a mile away. Hopefully there’ll be Robin dancing.
