Monthly Archives: October 2012

Impatient Idealism

Humans have long lives. We are unusually dependent on our parents when young, and we then slowly gain competence over a lifetime, usually reaching peak productivity in our forties and fifties. Most of the time we are aware of this. For example, we count on our peak earning years by taking out loans as young students, and later saving for retirement. And we prefer leaders at those peak ages.

But when people get idealistic, they tend to forget this. Young idealists often ask me and others what they can do to most help the world. Which is a fine question. But such folks tend to be impatient – they want to know how to most help the world in the next few years, not over their lifetime. So when they consider joining an idealistic project, they focus more on whether the project will succeed than on what skills and contacts they would acquire.

Yet young folks shouldn’t expect to have their biggest influence when young. Yes young folks have higher variance, and so sometimes get very lucky, but they should expect to prepare and learn while young, and then have their biggest influence in their peak years. Why such a short-term focus? Especially since idealism should if anything induce a far view. Yes young folks are often short-sighted, but why be more so about altruism than about school, relationships, etc.?

This seems related to the puzzle of why people don’t leverage the power of compound interest to donate to help the future needy, instead of today’s needy. Some argue that the future won’t have any needy, or that helping today’s needy automatically helps future needy, at a rate growing faster than investment rates of return. I’m pretty skeptical about both of these claims.
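
As a rough illustration of the compounding at stake, here is a minimal sketch. The 5% real return and 30-year horizon are arbitrary assumptions for illustration, not figures from the post; the argument only goes through if helping today’s needy compounds more slowly than this.

```python
# Minimal sketch: give a donation today, or invest it and give the proceeds later.
# The 5% real annual return and 30-year horizon are illustrative assumptions.
donation = 1_000.0
rate = 0.05
years = 30

future_value = donation * (1 + rate) ** years
print(f"Give ${donation:,.0f} today, or ~${future_value:,.0f} in {years} years.")
# Unless helping today's needy compounds faster than the investment return,
# the delayed gift buys more help.
```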

One plausible explanation is that a habit of extra youthful altruism evolved as a way to signal one’s attractiveness to potential associates. People tend more to form associations when young, associations that they tend more to rely on when old. And potential associates like to see altruism, because it correlates with generosity and cooperation (as near-far theory predicts). But if you save money to help the future needy, or if you invest now in skills useful in future idealistic projects, that is less clearly a signal of altruism, because you might later change your mind and use that money or those skills for other purposes.

So to signal your youthful idealism to potential associates, you must spend the money and time now, even if such spending is less effective toward the idealistic cause. But hey, at least the cause gets something.

Lean Pork

If their representatives don’t bring home the bacon, Americans are free to fire them on Election Day. But what if members of Congress didn’t run for reelection in their home districts but were randomly assigned to run somewhere else? A 2011 paper, “Randomizing Districts for Reelections: A Thought Experiment,” tried to find benefits in a legislature divorced from geography. Under such a system, “legislators cannot focus their attention upon pleasing a geographically-concentrated special interest while neglecting the broader national interest.” (more)

A cute idea, but it would hurt incentives for info specialization, where representatives learn their district’s issues, and voters learn their incumbent’s record. I think my 17-year-old proposal would work better:

Congressfolk seeking re-election seek, among other things, concrete benefits they can bring to their district, which they can claim clear credit for. Thus they focus on getting dams, grants, etc. directed to their district, and seek tariffs or subsidies for industries especially concentrated in their district. They tend to give only lip-service for issues, like say health-care reform, which might benefit everyone in the nation, and which lots of congressfolk would be involved in developing — the benefits and the credit to be claimed are both diffuse and unconcentrated. …

So my simple proposal is to allow federal tax rates to vary by congressional district. Given this, taxes would suddenly become a concentrated benefit. Incumbents could brag about how much lower taxes were in their district, and challengers could complain how high they were. Incumbents would have clear incentives to trade votes to get taxes lowered in their district, and the credit would be clear – who else would want to push for lower taxes in that district? (more)

Admittedly, while excess local pork is a problem, it may not be the main problem our political system faces at the moment.

The value of time as a student

When I was at college, many of my associates had part-time jobs, or worked during school breaks. These were often unpleasant, uninspiring, and poorly paid jobs, such as food preparation. Some were better, such as bureaucratic work. But they were generally much worse than the jobs any of us expected to have after graduating. I think this is normal.

It was occasionally suggested that I too should become employed. This seemed mistaken to me, for the following reasons. There are other activities I want to spend a lot of time on in my life, such as thinking about things. I expect the nth hour of thinking about things to be similarly valuable regardless of when it happens. Whether I think for a hundred extra hours this year or for a hundred extra hours in five years, I expect to have about the same amount of understanding at the end, and I expect my hours of thinking in ten years to be about as valuable either way.

Depending on what one is thinking about, moving hours of thinking earlier might make them more valuable. Understanding things early on probably adds value to other activities, and youth is purportedly helpful for thinking. Also a better understanding early on probably makes later observations (which automatically happen with passing time) more useful.

This goes for many things. Learning an instrument, reading about a topic, writing. Some things are even more valuable early on in life, such as making friends, gaining respect and figuring out efficient lifestyle logistics.

Over most stretches of time, work is roughly like this too: it is the total amount of work you do that matters, not exactly when you do it. But between before and after graduating, this is not so!

If activity A is a lot more valuable in the future, and activity B is about as valuable now or in the future, all things equal I should trade them and do B now.

Yes, work before graduating might get you a better wage after graduating, but so will the same amount of work done after graduating, and it will be paid more at the time. Yes, you will be a year behind, say, but you will have done something else for a year that you no longer need to do in the future.

On the other hand, working seems a great option if you have pressing needs for money now, or a strong aversion to indebtedness. My guess is that the latter played a large part in others’ choices. In Australia, most youth whose families aren’t wealthy can get enough money to live on from the government, and anyone can defer paying tuition indefinitely.

It seems that college students generally treat their time as low value. Not only do they work for low wages, but they go to some lengths to get free food, and are happy to spend an hour of three people’s time to acquire discarded furniture they wouldn’t spend a hundred dollars on. This seems to mean they don’t think the activities they could otherwise do at any time in their life are valuable. If you are willing to trade an hour you could spend reading for $10 worth of value, you don’t value reading much. When these people are paid a lot more, will they give up activities like reading altogether? If not, it seems they must think reading is also more valuable in the future than now, and that its relative value moves roughly in line with the value of their working time. Or do they just make an error? Or am I just making some error?

Two Kinds of Panspermia

Caleb A. Scharf offers an interesting argument against interstellar panspermia:

You and I, or fluffy bunnies and daffodils are all unlikely candidates for interplanetary or interstellar transferral. The sequence of events involved in panspermia will weed out all but the toughest or most serendipitously suited organisms. So, let’s suppose that galactic panspermia has really been going on for the past ten billion years or so – what do we end up with? …

Life driven by cosmic dispersal will probably end up being completely dominated by the super-hardy, spore-forming, radiation resistant, chemical-eating, and long-lived but prolific type of critters. …

The problem, and the potential paradox, is that if evolved galactic panspermia is real it’ll be capable of living just about everywhere. There should be stuff on the Moon, Mars, Europa, Ganymede, Titan, Enceladus, even minor planets and cometary nuclei. Every icy nook and cranny in our solar system should be a veritable paradise for these ultra-tough lifeforms, honed by natural selection to make the most of appalling conditions. So if galactic panspermia exists why haven’t we noticed it yet? (more)

I see two rather different interstellar panspermia scenarios:

  1. Space-centered – As Scharf says, life might mainly drift from one harsh space environment to another. Yes, sometimes life would fall onto and then prosper on a place like Earth, but being poorly adapted to space, such planet life would contribute less to future space life. Under this scenario life must on average grow in common space environments, and so we should see a lot of life out there in such environments.
  2. Planet-centered – Alternatively, space life might usually die away, and only grow greatly in special rare places like planets (or perhaps comets). In this scenario the progress of life would alternate between growth on planets (or comets) and decay in space. A similar scenario plays out when seeds like coconuts drift between islands in the ocean – seeds die away during ocean journeys, and then multiply on islands. In this scenario life would be adapted both to grow well on planets, and to decay as slowly as possible in space.

Scharf’s argument weighs against a space-centered scenario, but not a planet-centered scenario. Of course there is actually a range of intermediate scenarios, depending on how wide a range of environments let life grow.

Female Overconfidence

Men are famously more overconfident in war, in investments, in choosing firm projects, in their performance as managers (but not auditors), as math and econ students, and about their IQ. But these are traditional male areas (i.e., abilities expected more of men in traditional societies). I suspect, however, that women tend to be more overconfident in traditional female areas, such as parenting, housework, shopping, nurturing, and maintaining family relationships. Alas, though I found dozens of papers on overconfidence in traditional male areas, I couldn’t find any on traditional female areas. The closest I found was:

In both the lab and the field, female subjects tend to show greater confidence in their groups than in themselves, while male subjects show greater confidence in themselves than in their groups. (more)

This seems a nice opening for enterprising psych or econ experimentalists.

The transitivity of trust

Suppose you tell a close friend a secret. You consider them trustworthy, and don’t fear for its release. Now suppose they ask to tell the secret to a friend of theirs whom you don’t know. They claim this person is also highly trustworthy. I think most people would feel significantly less secure agreeing to that.

In general, people trust their friends. Their friends trust their own friends, and so on. But I think people trust friends of friends, or friends of friends of friends, less than proportionally. E.g., if you act like there’s a one percent chance of your friend failing you, you don’t act like there’s only a 1-(.99*.99) ≈ 2% chance of your friend’s friend failing you; you act like the chance is much higher.
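
To make the arithmetic explicit, here is a minimal sketch of how failure risk should compound along a chain of trust, under the simplifying assumption (for illustration only) that each link fails independently with the same probability:

```python
# Probability that at least one link in a trust chain fails, assuming each link
# fails independently with the same probability.
def chain_failure_prob(per_link_failure: float, depth: int) -> float:
    return 1 - (1 - per_link_failure) ** depth

for depth in (1, 2, 3):
    print(depth, round(chain_failure_prob(0.01, depth), 4))
# depth 1: 0.01   (your friend)
# depth 2: 0.0199 (your friend's friend)
# depth 3: 0.0297 (a friend of a friend of a friend)
```

The puzzle is that actual trust seems to fall off much faster than this gentle compounding.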

One possible explanation is that we generally expect the people we trust to have much worse judgement about who to trust than about the average thing. But why would this be so? Perhaps everyone does just have worse judgement about who to trust than they do about other things. But to account for what we observe, people would on average have to think themselves better in this regard than others. Which might not be surprising, except that their perceived advantage over others would have to be larger in this domain than in other domains. Otherwise they would just trust others less in general. Why would this be?

Another possibility I have heard suggested is that we trust our friends more than is warranted by their true probability of defecting, for non-epistemic purposes. In which case, which purposes?

Trusting a person involves choosing to make your own payoffs depend on their actions in a circumstance where it would not be worth doing so if you thought they would defect with high probability. If you think they are likely to defect, you only rely on them when there are particularly large gains from them cooperating combined with small losses from them defecting. As they become more likely to cooperate, trusting them in more cases becomes worthwhile. So trusting for non-epistemic purposes involves relying on a person in a case where their probability of defecting should make it not worthwhile, for some other gain.
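
A minimal way to formalize “worth relying on”, assuming (purely for illustration) a fixed gain if they cooperate and a fixed loss if they defect:

```python
# Rely on someone iff the expected value of doing so is positive, given a
# hypothetical gain from cooperation and loss from defection.
def worth_relying(p_cooperate: float, gain: float, loss: float) -> bool:
    return p_cooperate * gain - (1 - p_cooperate) * loss > 0

# With a likely defector, only lopsided stakes justify reliance:
print(worth_relying(0.50, gain=1.0, loss=10.0))   # False
print(worth_relying(0.50, gain=100.0, loss=1.0))  # True
print(worth_relying(0.95, gain=1.0, loss=10.0))   # True: more cases become worthwhile
```

“Trusting for non-epistemic purposes” then means relying even when this inequality fails.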

What other gains might you get? Such trust might signal something, but consistently relying too much on people doesn’t seem to make one look good in any way obvious to me. It might signal to that person that you trust them, but that just brings us back to the question of how trusting people excessively might benefit you.

Maybe merely relying on a person in such a case could increase their probability of taking the cooperative action? This wouldn’t explain the intransitivity on its own, since we would need a model where trusting a friend’s friend doesn’t cause the friend’s friend to become more trustworthy.

Another possibility is that merely trusting a person does not get such a gain, but a pair trusting one another does. This might explain why you can trust your friends above their reliability, but not their friends. By what mechanism could this happen?

An obvious answer is that a pair who keep interacting might cooperate a lot more than they naturally would to elicit future cooperation from the other. So you trust your friends the correct amount, but they are unusually trustworthy toward you. My guess is that this is what happens.

So here the theory is that you trust friends substantially more than friends of friends because friends have the right incentives to cooperate, whereas friends of friends don’t. But if your friends are really cooperative, why would they give you unreliable advice – to trust their own friends?

One answer is that your friends believe trustworthiness is a property of individuals, not relationships. Since their friends are trustworthy for them, they recommend them to you. But this leaves you with the question of why your friends are wrong about this, yet you know it. Particularly since generalizing this model, everyone’s friends are wrong, and everyone knows it.

One possibility is that everyone learns these things from experience, and they categorize the events in obvious ways that are different for different people. Your friend Eric sees a series of instances of his friend James being reliable and so he feels confident that James will be reliable. You see a series of instances of different friends of friends not being especially reliable and see James most easily as one of that set. It is not that your friends are more wrong than you, but that everyone is more wrong when recommending their friends to others than when deciding whether to trust such recommendations, as a result of sample bias. Eric’s sample of James mostly contains instances of James interacting with Eric, so he does overstate James’ trustworthiness. Your sample is closer to the true distribution of James’ behavior. However you don’t have an explicit model of why your estimate differs from Eric’s, which would allow you to believe in general that friends overestimate the trustworthiness of their friends to others, and thus correct your own such biases.

Respectable Resentment

Assume for the purpose of this post that used car sales folks are exploitive and socially unproductive – they mainly trick buyers into spending more than they need. I don’t actually believe this, but I don’t want this post to be distracted by the issue of which professions are or are not socially productive.

So, imagine that you are competing to be a successful used car salesperson. But you find that you face real biases. Buyers are unfairly less willing to buy from you because you are female, or young, or the wrong ethnicity, or the wrong personality type. Or perhaps it is managers at used car sales firms who are biased against hiring people like you. In any case, you have a legitimate complaint of bias, and you can legitimately resent that bias.

Even so, I don’t feel very sympathetic to your cause. Oh, on the margin I’d prefer that you win your battle against such biases. It’s just that I don’t see it as a high priority. Why? Because your cause is mostly selfish. Oh sure, the used car sales industry might be slightly more efficient if they weren’t unfairly biased against your sort. But by assumption what they’d get more efficient at is mostly exploiting ignorant buyers. Not a cause I can get behind.

Now imagine that you run a charity, and that while your charity is especially effective at its cause, e.g., reducing African poverty, it suffers from the bias that donors care more about using their donations to seem to help, than to actually help. You resent the fact that your charity doesn’t do so well because it isn’t as good at helping donors look caring. This time, I’m a huge supporter of your cause. Why? Because the bias you oppose is hurting us all, a lot.

So if you face gender bias getting hired as a cancer doctor, but for a type of cancer where doctors actually do little to help patients live longer, then I’m only mildly sympathetic. But if you suffer as a doctor because patients are biased to “do something,” and dislike your correctly telling them they are better off doing nothing, then I’m a huge fan and supporter.

If you suffer bias in academia because you are religious, but your chosen research area is mostly a pointless exercise in showing off math skills, I’m not going to get too worked up for you. But if your academic career suffers because your research is focused on a way of actually making important intellectual progress, which doesn’t happen to be a good way to show off math skills, I’ll shout your cause from the rooftops.

If you suffer from a bias based on the kind of person you are, you have a legitimate complaint. But it may not be an especially noble cause. However, if you suffer because of a common bias against doing a sort of thing that is especially useful, you may have a very noble cause. I can much more respect your resentment of a bias against doing good, than a bias against who you are.

Arresting irrational information cascades

Usually people don’t agree with one another as much as they should. Aumann’s Agreement Theorem (AAT) finds:

two people acting rationally (in a certain precise sense) and with common knowledge of each other’s beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posteriors, then their posteriors must be equal.[1]

The surprising part of the theorem isn’t that people should agree once they have heard the rationale for each of their positions and deliberated on who is right. The amazing thing is that their positions should converge, even if they don’t know how the other person reached their conclusion. Robin has reached a similar result using even weaker assumptions.

If we sincerely applied this finding in real life, it would go a long way toward correcting the confirmation bias which makes us unwilling to adjust our positions in response to new information. But having a whole community take this theorem out of the lab and into real life is problematic, because using it in an imperfect and human way will leave its members vulnerable to ‘information cascades’ (HT to Geoff Anders for the observation):

An information (or informational) cascade occurs when people observe the actions of others and then make the same choice that the others have made, independently of their own private information signals. A cascade develops, then, when people “abandon their own information in favor of inferences based on earlier people’s actions”.[1] Information cascades provide an explanation for how such situations can occur, how likely they are to cascade incorrect information or actions, how such behavior may arise and desist rapidly, and how effective attempts to originate a cascade tend to be under different conditions.

There are four key conditions in an information cascade model:

  1. Agents make decisions sequentially
  2. Agents make decisions rationally based on the information they have
  3. Agents do not have access to the private information of others
  4. A limited action space exists (e.g. an adopt/reject decision).[3]

This is a fancy term for something we are all familiar with – ideas can build their own momentum as they move through a social group. If you observe your friends’ decisions or opinions and think they can help inform yours, but you aren’t motivated to double-check their evidence, then you might simply free-ride by copying them. Unfortunately, if everyone copies in this way, we can all end up doing something foolish, so long as the first few people can be convinced to trigger the cascade. A silly or mischievous tail can end up wagging an entire dog. As a result, producing useful original research for social groups, for instance about which movies or restaurants are best, is a ‘public good’ which we reward with social status.
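
Here is a minimal simulation in the spirit of the four-condition model above. It is only a sketch: the binary adopt/reject choice, the 70% signal accuracy, and the rule of copying the crowd once its lead outweighs a single private signal are illustrative simplifications, not a claim about any particular published model.

```python
import random

def simulate_cascade(n_agents=20, signal_accuracy=0.7, true_state=1, seed=0):
    """Sequential agents with private binary signals and an adopt(1)/reject(0) choice."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal: matches the true state with probability `signal_accuracy`.
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        lead = sum(1 if c == 1 else -1 for c in choices)  # net prior support for "adopt"
        if lead >= 2:
            choice = 1        # earlier adopters outweigh my single signal: copy them
        elif lead <= -2:
            choice = 0        # earlier rejecters outweigh my single signal: copy them
        else:
            choice = signal   # otherwise my own signal decides
        choices.append(choice)
    return choices

# How often does the group end up settled on the wrong choice?
wrong = sum(simulate_cascade(seed=s)[-1] != 1 for s in range(1000))
print(f"runs ending on the wrong choice: {wrong}/1000")
```

A couple of unlucky early signals can lock everyone after them into the wrong choice, even though each private signal points the right way most of the time.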

Now, nobody lives by agreement theorems in real life. We are biased towards thinking that when other people disagree with us, they are probably wrong – in part because copying others makes us seem submissive or less informed. Despite this, information cascades still seem to occur all over the place.

How much worse will this be for a group that seriously thinks every rational person should automatically copy every other rational person, and makes a virtue of not wasting the effort to confirm the data and reasoning that ultimately underlie their views? This is a bastardisation of any real agreement theorem, in which both sides should adjust their views a bit, which I expect would prevent a cascade from occurring. But mutual updating is hard and unnatural. Simply ‘copying’ the higher-status members of the group is how humans are likely to end up agreeing in practice.

Imagine: Person A – a significant member of the community – comes to the group and expresses a casual opinion based on only a little bit of information. Person B listens to this, has no information of their own, and so automatically adopts A’s belief, without probing their evidence or confidence level. Person C hears that both A and B believe something, respects them both as rational Bayesians, and so adopts their position by default. A hears that C has expressed the same opinion, thinks this represents an independent confirmation of the view, and as a result of this ‘pseudo-replication’, becomes more confident. And so the cycle grows until everyone holds a baseless view.

I can think of a few ways to try to arrest such a cascade.

Firstly, you can try to apply agreement theorems more faithfully, by ensuring that two people discussing their views both update, up and down, rather than one just copying the other. I am skeptical that this will happen.

Secondly, you could stop the first few people from forming incorrect opinions, or from sharing conclusions without being quite confident they are correct. That is difficult, prevents you from aggregating tentative evidence, and also increases the credibility of any remaining views that are expressed.

Thirdly, you could take agreement theorems with a grain of salt, and make sure at least some people in a group refuse to update their beliefs without looking into the weight of evidence that backs them up, and sound the alarm for everyone else if it is weak. In a community where most folks are automatically copying one another’s beliefs, doing this work – just in case someone has made a mistake or is not the ‘rational Bayesian’ you thought they were – has big positive externalities for everyone else.

Fourthly, if you are part of a group of people trying to follow AAT, you could all unlearn the natural habit of being more confident about an idea just because many people express it. In such a situation, it’s entirely possible that they are all relying on the same evidence, which could be little more than hearsay.

Alms is not about alms experts

In September Robin suggested that there might be an Alms Expert Opening:

Today the three spending categories of medicine, school, and alms make up ~40% of US GDP, a far larger fraction than in 1800. …

Today, two of these three classic charities have very powerful associated “professions”: doctors and teachers. These professions are powerful because they are seen as representing the good in those causes – doctors are our official authorities on what is good for patients, and teachers are our official authorities on what is good for students…

The missing group here is alms experts: we have no strong profession of those who specialize in helping the poor, crippled, etc.

Are alms experts punching below their weight, given the large fraction of GDP spent on alms? I think not, because alms spending mostly bypasses the work of alms experts.

Medical spending mostly goes to pay doctors, nurses, and other medical professionals, or to provide facilities and equipment that support their work: there were over 7.5 million technically skilled healthcare workers in 2011. In education, elementary school, high school, and post-secondary teachers added up to over 4.4 million people, with other spending going to school buildings, principals, utilities, libraries, and so forth.

But consider the largest alms program in the United States, the Social Security Administration, which makes cash payments to the elderly, the disabled, and surviving family members of certain deceased workers. Its budget request projects that in 2013 it will pay out some $873 billion to beneficiaries while spending less than $12 billion for operations, with only 80,000 state and federal employees.

The relatively small role for administration recurs elsewhere, e.g., the food voucher program SNAP disbursed $76 billion in 2011 with administrative costs of $6.9 billion, and the Earned Income Tax Credit disbursed $59.5 billion with direct administrative costs of less than one percent. Staffing can be higher for programs involving social workers and foreign assistance, but less is spent on these than on the large formula-driven programs.
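
As a rough check on that claim, here is the arithmetic using only the figures cited above (treating administrative cost as a share of total outlays; these are back-of-the-envelope numbers, not official overhead rates):

```python
# Administrative cost as a share of total outlays, from the figures cited above
# (all values in billions of dollars).
programs = {
    "Social Security (2013 request)": (873.0, 12.0),
    "SNAP (2011)": (76.0, 6.9),
}
for name, (benefits, admin) in programs.items():
    share = admin / (benefits + admin)
    print(f"{name}: ~{share:.1%} of outlays go to administration")
```

Compare that with medicine and education, where the bulk of spending goes to the professionals themselves.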

Since alms employees are relatively scarce, they can directly deliver fewer votes or political contributions than teachers or medical workers. And since their role in the provision of alms is so much less central, it is harder for others to see them as “representing the good in those causes.” Instead, organizations of recipients can take on the role of defenders of the alms they receive. For alms influence and status, look to the 38 million members of the AARP, not 80,000 Social Security workers.

On Play Hell

Our activities split into work and play. And positive and negative extremes are described as heavens and hells. So there are four possible work-play extremes: work heaven, work hell, play heaven, and play hell.

Among common scenarios we discuss and imagine, we know of many work hells, such as galley slaves. We have fewer work heavens, such as where one gets work credit for a play-like activity. We also have a great many play heavens. But we rarely talk about play hells.

But consider: it might take you years to find out that you are embarrassingly bad at your chosen hobby or sport. The radical science theory you pursue for decades could just be wrong. You might go out dancing every evening hoping to catch someone’s attention, only to always see him or her go home with someone else. Your so-called best friend could spread nasty rumors about you. Your kids could despise you. Your lover could cheat on you. You could get divorced. These are play hells, most every bit as hellish as typical work hells.

In the US today, only 14% (24/168) of adult hours each week are devoted to formal work. Since we devote far more time to play than work, I’d guess that most of the actual hells around us are play hells. Yet such play hells seem neglected. There are far fewer charities devoted to helping folks cope with them. And there are far fewer regulations designed to reduce them. The law also slights them – rarely can one sue about harms that arise from romance and friendship. Storybook heroes sally off to rid the world of work hells far more often than play hells.

I suspect we inherited this tendency from our forager ancestors. Foragers have many rules about fights, hunts, and sharing the product of work, but far fewer rules on romance and friends. To foragers, work was more overt, play more covert.
