How to motivate women to speak up

In mixed groups, women don’t talk as much as men. This is perhaps related to women being perceived as “bitches” if they do, i.e. pushy, domineering creatures whom one would best loathe and avoid. Lindy West at Jezebel comments:

…it just goes back to that hoary old double standard—when men speak up to be heard they are confident and assertive; when women do it we’re shrill and bitchy. It’s a cliche, but it’s true. And it leaves us in this chicken/egg situation—we have to somehow change our behavior (i.e. stop conceding and start talking) while simultaneously changing the perception of us (i.e. asserting that assertiveness does not equal bitchiness). But how do you assert that your assertiveness isn’t bitchiness to a culture that perceives assertiveness as bitchiness? And how do you start talking to change the perception of how you talk when that perception is actively keeping you from talking? Answer: UGH, I HAVE NO IDEA…

One problem with asserting that your assertiveness doesn’t indicate bitchiness is that it probably does. If all women know that assertiveness will be perceived as bitchiness, then those who are going to be perceived as bitches anyway (due to their actual bitchiness) and those who don’t mind being seen as bitches (and therefore are more likely to be bitches) will be the ones with the lowest costs to speaking up. So mostly the bitches speak, and the stereotype is self-fulfilling.

This model makes it clearer how to proceed. If you want to credibly communicate to the world that women who speak up are not bitches, first you need for the women who speak up to not be bitches. This can happen through any combination of bitches quietening down and non-bitches speaking up. Both are costly for the people involved, so they will need altruism or encouragement from the rest of the anti-stereotype conspiracy. Counterintuitively, not all women should be encouraged to speak more. The removal of such a stereotype should also be somewhat self-fulfilling – as it is reduced, the costs of speaking up decline, and non-bitchy women do it more often.
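To see how strong this selection effect can be, here is a toy simulation (a minimal sketch with invented numbers, and with ‘pushy’ standing in for the less polite word above): each woman speaks up if her private benefit from speaking exceeds the reputational cost she expects, pushy women barely mind the label, and observers keep updating the stereotype to match the speakers they actually see.

import numpy as np

rng = np.random.default_rng(0)

N = 100_000
base_rate = 0.2                  # assumed fraction of women who really are pushy
benefit = rng.uniform(0, 1, N)   # each woman's private benefit from speaking up
pushy = rng.uniform(0, 1, N) < base_rate
# Pushy women barely mind the label; everyone else bears the full reputational cost.
cost_weight = np.where(pushy, 0.1, 1.0)

stereotype = 0.5                 # initial perceived P(pushy | speaks up)
for _ in range(50):
    speaks = benefit > cost_weight * stereotype   # speak only if it's worth the expected cost
    stereotype = pushy[speaks].mean()             # observers update to what they actually observe

print(f"base rate of pushiness:           {base_rate:.2f}")
print(f"equilibrium P(pushy | speaks up): {stereotype:.2f}")
print(f"fraction of women who speak up:   {speaks.mean():.2f}")

In this toy world the women who speak really are disproportionately pushy, so the stereotype confirms itself; only changing who speaks – quieter pushy women, louder non-pushy ones – moves the equilibrium.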

Interestingly and sadly, this is exactly opposite to the strategy that Lindy finds self-evident:

…But I guess I will start with this pledge I just made up: I, Lindy West, a shrill bitch, do hereby pledge to talk really really loud in meetings if I have something to say, even if dudes are talking louder and they don’t like me. I refuse to be a turtle—unless it is some really loud species of brave turtle with big ideas. I will not hold back just because I’m afraid of being called a loudmouth bitch (or a “trenchmouth loud ass,” which I was called the other day and as far as I can tell is some sort of pirate insult). Also, I will use the fuck out of the internet, because they can’t drown you out on the internet. The end. Amen or whatever.

Signaling bias in philosophical intuition

Intuitions are a major source of evidence in philosophy. Intuitions are also a significant source of evidence about the person having the intuitions. In most situations where onlookers are likely to read something into a person’s behavior, people adjust their behavior to look better. If philosophical intuitions are swayed in this way, this could be quite a source of bias.

A first step in judging whether signaling motives change intuitions is to determine whether people read personal characteristics into philosophical intuitions. It seems to me that they do, at least for many intuitions. If you claim to find libertarian arguments intuitive, I think people will expect you to have other libertarian personality traits, even if on consideration you aren’t a libertarian. If consciousness doesn’t seem intuitively mysterious to you, one can’t help wondering if you have a particularly unnoticeable internal life. If it seems intuitively correct to push the fat man in front of the train, you will seem like a cold, calculating sort of person. If it seems intuitively fine to kill children in societies with pro-children-killing norms, but you choose to condemn it for other reasons, you will have all kinds of problems maintaining relationships with people who learn this.

So I think people treat philosophical intuitions as evidence about personality traits. Is there evidence of people responding by changing their intuitions?

People are enthusiastic to show off their better looking intuitions. They identify with some intuitions and take pleasure in holding them. For instance, in my philosophy of science class the other morning, a classmate proudly dismissed some point, declaring, ‘my intuitions are very rigorous’. If his intuitions are different from most, and average intuitions actually indicate truth, then his are especially likely to be inaccurate. Yet he seems particularly keen to talk about them, and chooses positions based much more strongly on them than on others’ intuitions.

I see similar urges in myself sometimes. For instance, consistent answers to the Allais paradox are usually so intuitive to me that I forget which way one is supposed to err. This seems good to me. So when folks seek to change normative rationality to fit their more popular intuitions, I’m quick to snort at such a project. Really, they and I have the same evidence from intuitions, assuming we believe one another’s introspective reports. My guess is that we don’t feel like coming to agreement because they want to cheer for something like ‘human reason is complex and nuanced and can’t be captured by simplistic axioms’ and I want to cheer for something like ‘maximize expected utility in the face of all temptations’ (I don’t mean to endorse such behavior). People identify with their intuitions, so it appears they want their intuitions to be seen and associated with their identity. It is rare to hear a person claim to have an intuition that they are embarrassed by.

So it seems to me that intuitions are seen as a source of evidence about people, and that people respond at least by making their better looking intuitions more salient. Do they go further and change their stated intuitions? Introspection is an indistinct business. If there is room anywhere to unconsciously shade your beliefs one way or another, it’s in intuitions. So it’s hard to imagine there not being manipulation going on, unless you think people never change their beliefs in response to incentives other than accuracy.

Perhaps this isn’t so bad. If I say X seems intuitively correct, but only because I guess others will think seeing X as intuitively correct is morally right, then I am doing something like guessing what others find intuitively correct. Which might be a bit of a noisy way to read intuitions, but at least isn’t obviously biased. That is, if each person is biased in the direction of what others think, this shouldn’t obviously bias the consensus. But there is a difference between changing your answer toward what others would think is true, and changing your answer to what will cause others to think you are clever, impressive, virile, or moral. The latter will probably lead to bias.

I’ll elaborate on an example, for concreteness. People ask if it’s ok to push a fat man in front of a trolley to stop it from killing some others. What would you think of me if I said that it at least feels intuitively right to push the fat man? Probably you lower your estimation of my kindness a bit, and maybe suspect that I’m some kind of sociopath. So if I do feel that way, I’m less likely to tell you than if I feel the opposite way. So our reported intuitions on this case are presumably biased in the direction of not pushing the fat man. So what we should really do is likely further in the direction of pushing the fat man than we think.

Surplus splitting strategy

When negotiating over the price of a nice chair at a garage sale, it can be useful to demonstrate there is only twenty dollars in your wallet. When determining whether your friend will make you a separate meal or you will eat something less preferable, it can be useful to have a long-term commitment to vegetarianism. In all sorts of situations where a valuable trade is to be made, but the distribution of the net benefits between the traders is yet to be determined, it can be good to have your hands tied.

If you can’t have your hands tied, the next best thing is to have a salient place to split the benefits. The garage sale owner did this when he put a price tag on the chair. If you want to pay something other than the price on the tag, you have to come up with some kind of reason, such as a credible commitment to not paying over $20. Many buyers will just pay the asking price.

This means manipulating salient ways to split benefits could be pretty profitable, so people should probably be doing it on purpose. I’m curious to know if and how they do.

Often the default is to keep the way the benefits naturally fall without money (or anything else ‘extra’) changing hands. For instance, suppose you come to lunch at my place and we both enjoy this to some extent. The default here is to keep the happiness we got from this, rather than, say, me paying you $10 on top.

So in such cases manipulating the division of benefits should mostly be done by steering toward more personally favorable variations on the basic plan. e.g. my suggesting you come to my place before you suggest that I come to yours. A straightforward way to get gains here is to just race to be the first to suggest a favorable option, but this is hard because it looks domineering to try to manipulate things in your favor in such a way. Unless you have some particular advantage at suggesting things fast and smoothly, such a race seems costly in expectation.

If in general trying to manipulate a group’s choice seems like a status-move or dominance-move, subtle ways to do this are valuable. Instead of a race to suggest options, you can have a prior race to make the options that you might want to suggest seem easier to suggest. For instance, if you’d rather have others come to your place than go to others’ places, you can put a pool at your place, so suggestions to go to your place seem like altruism. If you know a lot of details about another person, you can use one of them to justify assuming that a particular outcome will be better for them. e.g. ‘We all know how much John likes steak, so we could hardly not go to Sozzy’s steak sauna!’. None of this works unless it’s ambiguous which way your own preferences go.

On the other hand if your preferences are very unambiguous, you can also do well. This is because others know your preferences without your having to execute a dominance move to inform them. If their preferences are less clear, it’s hard for them to compete with yours without contesting your status themselves. So arranging for others to know your preferences some other way could be strategic. e.g. If you and I are choosing which dessert to split, and it is common knowledge that I consider chocolate cake to be the high point of human experience, it is unlikely that we will get the carrot cake, even if you prefer it quite strongly.

So, strategy: if it’s clear that you have a pretty strong preference, make it quite obvious but not explicit. If you have a less clear preference, make it look like you have no preference, then position to get the thing you want based on apparently irrelevant considerations.

Even if the default is to transfer no cash, there can be a range of options that are clearly incrementally better for you and worse for me, with no salient division. e.g. If I invite you over for lunch, there are a range of foods I could offer you, some better for you, some cheaper for me. This seems quite similar to determining how much money to pay, given that someone will pay something.

In the lunch case I get to decide how good what I offer you is, and you have to take it or leave it. You can retaliate by thinking better or worse of me. You can’t very explicitly tell me how much you will think better or worse of me though, and you probably have little control over it. Your interpretation of my level of generosity toward you (and thus your feelings) and my expectations of your feelings are both heavily influenced by relevant social norms. So it’s not clear that either of us has much influence over which point is chosen. You could try to seem unforgiving or I could try to seem unusually ascetic, but these have many other effects, so are extreme ways to procure better lunching deals. I suspect this equilibrium is unusually hard to influence personally because there’s basically no explicit communication.

There are then cases where money or peanut butter sandwiches or something does change hands naturally, so ‘no transfer’ is not a natural option. Sometimes there is another default, such as the cost of procuring whatever is being traded. By default businesses put prices on items rather than consumers doing it, which appears to be an issue of convenience. If it’s clear how much surplus is being split, a natural way is to split it evenly. For instance if you and I make $20 busking in the street, it would be strange for you to take more than $10, even if you are a better singer. This fairness norm is again hard to manipulate personally, except by making it more or less salient. But it’s a nice example of a large scale human project to alter default surplus division.

When there are different norms among different groups, you can potentially reap more of the surplus by changing groups. e.g. if you are a poor woman, you might do better in circles where men are expected to pay for many things.

These are just a random bunch of considerations that spring to mind. Do you notice people trying to manipulate default surplus divisions? How?

On the goodness of Beeminder

Beeminder.com improves my life a lot. This is surprising: few things improve my life much, and when they do it’s usually because I’m imagining it. Or because they are things that everyone has known about for ages and I am slow on the uptake (e.g. not moving house three times a year, making a habit of eating breakfast, making habits at all). But Beeminder is new, and it definitely helps.

One measurable instrumental benefit of Beeminder is that I have exercised for half an hour or an hour per day on average since last October. Previously I exercised if I needed to get somewhere or if the fact that exercise is good for people crossed my mind particularly forcibly, or if some even less common events occurred. So this is big. It seems to help a lot for other things too, such as working, but the evidence there is weaker since I used to work pretty often anyway. I’m sorry that  I didn’t keep better track.

Unlike many other improvements to my life, I have some guesses about why this is so useful. But first let me tell you the basic concept of Beeminder.

Take a thing you can measure, such as how many pages you have written. Suppose you measure this every day, and enter the data as points in a graph. Suppose also that the graph contains a ‘road’ stretching up ahead of your data, to days that have not yet happened. Then you could play a game of keeping your new data points above the road. A single day below the road and you lose. It turns out this can be a pretty compelling game. This is basically Beeminder.
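For concreteness, here is a minimal sketch of that game in code, assuming a simple straight-line road and a made-up goal of three twenty-minute work blocks per day (Beeminder’s actual rules are more elaborate than this):

from datetime import date, timedelta

def road_value(start: date, day: date, start_total: float, rate_per_day: float) -> float:
    # Minimum cumulative total required on `day` for a road that climbs linearly.
    return start_total + (day - start).days * rate_per_day

start = date(2012, 1, 1)
rate = 3.0                       # hypothetical goal: three work blocks per day
cumulative = 0.0

daily_blocks = [4, 3, 5, 0, 2]   # made-up data entered each evening
for i, blocks in enumerate(daily_blocks):
    today = start + timedelta(days=i + 1)
    cumulative += blocks
    required = road_value(start, today, 0.0, rate)
    status = "above the road" if cumulative >= required else "below the road: you lose"
    print(f"{today}  total={cumulative:4.0f}  required={required:4.0f}  -> {status}")

Notice that the surplus from a good day lets you coast through a zero day without falling below the road, which is the averaging flexibility discussed below.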

There are more details. You can change the steepness of the road, but only for a week in the future. So you can fine-tune the challengingness of a goal, but can’t change it out of laziness unless you are particularly forward thinking about your laziness (in which case you probably won’t sign up for this).

There is a lot of leeway in what indicators you measure, and some I tried didn’t help much. The main things I measure lately are:

  • number of 20 minute blocks of time spent working. They have to be continuous, though a tiny bit of interruption is allowed if someone else causes it
  • time spent exercising, weighted by the type of exercise, e.g. running = 2x dancing = 2x walking
  • points accrued for doing tasks on my to-do list. When I think of anything I want to do I put it on the list, whether it’s watching a certain movie or figuring out how to make the to do list system better. Some things stay there permanently, e.g. laundry. I assign each task a number of points, which goes up every Sunday if it’s still on the list. I have to get 15 points per day or I lose.

At first glance, it looks like Beeminder is basically a commitment contract: that it gets its force from promising to take your money if you lose. In my experience this seems very minor. I often forget how much money is riding on goals, and seem to keep the ones with no money on about as well as the others. So at least for me the threat of losing money isn’t what’s going on.

What is going on? I think Beeminder – especially the way I use it – actually does a nice job of combining a bunch of good principles of motivation. Here are some I hypothesize:

Concrete steps

In order to use Beeminder for a goal, you need to be clear on how you will quantify progress toward it. This means being explicit about the parts it is made of. You can’t just intend to read more, you have to intend to read one philosophy paper every day. You can’t just intend to do your taxes, you have to intend to finish one of five forms every week. You can’t just intend to ponder whether you’re doing the right thing with your life, you have to intend to spend twenty minutes per week thinking up alternatives. Making a goal concrete enough to quantify it destroys ugh fields and makes it easier to start. ‘What gets measured gets done’ – just making a concrete metric salient makes it easier to work toward than a similar vague goal.

Small steps

To Beemind a goal, you need to divide it into many small parts, so you can track progress. ‘Finish making my presentation’ might be explicit enough to measure, but the measure will be zero for a long time, then one. Breaking goals up into small steps has nice side effects. It removes ugh fields, induces near mode, makes success likely at any particular step. In Luke Muehlhauser’s terminology, it increases ‘expectancy’ and allows ‘success spirals’*. Trading long term goals for short term ones also avoids the kind of delay that might make it easy to succumb to procrastination.

Don’t break the chain 

Otherwise known as the Seinfeld hack. This might be the main thing that motivates me to keep my Beeminder goals, in the place of the money. Imagine you are skipping rope. You have made it to 70 skips. It was kind of hard, but you’re not so exhausted that you have to stop. You probably feel more compelled to keep going and make it to 80 than you did when you started. In general, once you have successfully done something a string of times, doing it again seems more desirable. Perhaps this is particular to OCD kinds of people, but a Google search suggests many find it useful.

Beeminder is a nicely flexible implementation of this, because the chain is a bit removed from what you are doing. You only have to maintain an average, so you can work extra one day to slack off the next. This doesn’t seem to undermine the motivational effect.

Hard lines in middle grounds

Firm commitments are naturally made to extremes. This is partly due to principled moral stances, which tend to be both extreme and firm. But that’s not all that’s going on. It’s hard to manage a principle of eating 40% less meat. If people want to eat less meat, they either eat none at all, or however much they feel like, pushed down in a vague fashion with some bad feelings. The middle of the meat-eating spectrum is too slippery for a hard line – it’s hard to tell how much you eat and annoying to track it. ‘None’ is salient and verifiable. In other realms intermediate lines are required: your diet can’t cut eating to zero. So often diets are more vague, which makes them harder to keep.

Similarly, it’s easy to commit to doing something every day, or every Sunday, or every month. It’s harder to commit to do a thing 2.7 times per week on average, because it’s awkward to track or remember this ‘habit’.

Compromise positions are often more desirable than extremes, and desired frequencies are unlikely to match memorable periods. So it’s a pity that vague commitments are harder to keep than firm ones. Often people don’t make commitments at all, because the readily available firm ones are too extreme. This is a big loss.

Beeminder helps with making firm commitments to intermediate positions. Since you only ever need to notice if the slope of your data isn’t steep enough, any rate is as easy to use as a goal. You can commit to eating 40% less meat, you just have to estimate once what 40% is, then record any meat you eat. I’ve used Beeminder to journal on average five nights per week. This is better than every night or no night, but would otherwise be annoying to track.

A small threat to overcome tiny temptations

While working, there are various moments when it would be easier to stop than to continue, particularly if you mostly feel the costs and benefits available in the next second or so, and if you assume that you could start again shortly (related). It is in these moments that I tend to stop and get a drink, or look out of the window, or open my browser or whatnot.

Counting short blocks of continuous time working pretty much solves this problem for me. The rule is that if you stop at all the whole block doesn’t count. So at any given moment there might be a tiny short term benefit to stopping for a second, but there is a huge cost to it. In my case this seems to remove stopping as an option, in the same way that a hundred dollar price on a menu item removes it as an option without apparent expense of willpower.

I originally thought it would be good to measure the amount of work I got done, rather than time spent doing it. This is because I want to get work done, not waste time on it. But given that I am working, I strongly prefer to do good work, fast. So there’s not much need for an added incentive there. I just need an incentive to begin, and one to not stop when a particular moment makes stopping look tasty. In Luke’s terminology, this kills impulsiveness.

Less stress

The long term threat of failing to write an essay is converted into a short term pleasure of winning each night at Beeminder. I’m not sure why this seems like a pleasure, rather than a threat of losing, but it does to me. Probably because losing at Beeminder isn’t that unpleasant or shameful. And how could getting points or climbing a scale not seem like winning? (This is about value in Luke’s terms).

More accuracy

It’s harder to maintain planning fallacy, overconfidence, or expectation of perfection in the future, in light of detailed quantitative data, and a definite trend line.

Just the difference between ‘I should do that’, and ‘I should do that, so how much time will it take?… About two hours, so I guess it should get 20 points… that probably won’t be enough to compel me to do it soon, but that’s ok, it’s not urgent’ seems to change the mindset to one more sensitive to reality.

***

In sum, I think Beeminder partly works well because it causes you to think of your goals in small, concrete parts which can easily be achieved. It also makes achieving the parts more satisfying, and strings them into an addictive chain of just the right challengingness. Finally it lends itself to experimentation with a wide range of measures of success, such as measuring time blocks or ‘points’, at arbitrary rates. The value from innovations there is probably substantial. So, averse as I am to giving lifestyle advice, if you’re curious about the psychology of motivation in humans, or if you want to improve your life a lot, you should probably take a look at Beeminder.

*you can also increase expectancy by measuring something like time rather than progress.

Grace-Hanson Podcast

Robin and I have a new podcast on the subject of play (mp3, wav, m4a). Older ones are here.

Don’t be thrown by a bit of silence at the start of the m4a one. We also don’t have the time right now to figure out how to put it in better formats. Sorry about that. If anyone else does, and posts such files, I’ll link to them.

Disorganized collection growth

When I was a teenager, I lived in a nice house with my mother, stepfather and three younger brothers. The contents of the house were what you would expect if you took a normal house, multiplied the number of things in it by ten, then shook it very hard. Almost – a greater proportion of the things were in boxes or containers of some kind than you would expect by chance, and also there were narrow trails cleared along the important thoroughfares. For instance there was a clear path to the first few chairs in the living room, from which the more athletic members of the household could jump to most of the other chairs.

This state of affairs interested me. From what little I had seen of other families’ houses, it was pretty unusual. Yet looking at the details of the processes which produced it, I couldn’t see what was unusual. I don’t remember my exact thoughts, but I figured it had to be something that affected the relative inflow and outflow of stuff from the house. But it wasn’t that we had way more spending power than other families, or that we kept a lot of garbage. Most of the things in the house were useful, or would be if you had a non-negligible chance of finding them. It seemed like my family bought usual kinds of things for usual kinds of reasons. A set of Lego for the children to play with, a blender because sometimes we wanted to blend things, a box of second-hand books or two because they were only 50c.

The last one there looks a bit problematic, but is not that unusual. People often buy marginally valuable items because they are cheap. There were a few other things like that that looked a bit problematic – a tendency to keep drawings, an inclination to buy several shirts if you found one that was good. But nothing that should obviously cause this massive phase transition into chaos.

In the end I’m still not sure what the dominant problem was, or if there was one. But I can tell you about one kind of failure that I think contributed, which I also notice in other places.

Suppose you have a collection of things, for instance household items. You want to use one, for instance a pair of scissors. Depending on the organization of your collection of household items, it can be more or less tricky to find the scissors. At a certain level of trickiness, it is cheaper to just buy some new scissors than to find the old ones. So you buy the new scissors.

Once you have the new scissors, you add them to your collection of things. This is both the obvious thing to do with items you possess, and the obvious solution to scissors having apparently been too rare amongst your possessions.

Unfortunately adding more scissors also decreases the density of every other kind of thing in the collection. So next time you are looking for a pen it is just a little bit harder to find. If pens are near the threshold where it’s easier to get new pens than find your old pens, you buy some more pens. Which pushes a couple of other items past the threshold. On it goes, and slowly it again becomes hard to find scissors.

In short, a given amount of organization can only support being able to cheaply find so much stuff. You can respond to this constraint by only keeping that much stuff, for instance borrowing or buying then discarding items if they are beyond what your system can keep track of. Or you can respond by continually trying to push the ratios of different things to something impossible, which leads to a huge disorganized pile of stuff.
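Here is a toy simulation of that feedback loop (a sketch with invented numbers): finding an item costs more the more other stuff it is buried under, and whenever finding costs more than some fixed nuisance price of buying, a duplicate joins the pile.

import random

random.seed(0)

N_TYPES = 50
BUY_COST = 10.0          # nuisance cost of just buying a replacement (made-up units)
FIND_SCALE = 0.15        # finding gets harder as total clutter grows (assumption)

counts = [1] * N_TYPES   # one of each household item: everything is findable
print("before the book box:", sum(counts), "objects")

counts[0] += 60          # a cheap box of second-hand books joins the collection
print("after the book box: ", sum(counts), "objects")

for _ in range(20_000):  # thousands of "where are the scissors?" moments
    item = random.randrange(N_TYPES)
    # An item is harder to find the more other stuff it is buried under.
    find_cost = FIND_SCALE * sum(counts) / counts[item]
    if find_cost > BUY_COST:
        counts[item] += 1   # cheaper to buy a duplicate, which deepens the pile for everything else

print("after the cascade:  ", sum(counts), "objects")

In this toy version, the single box of books pushes almost every other item past the buy-rather-than-find threshold, repeatedly, and the house ends up with several times as much stuff as it started with.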

Another place I notice this is in writing. Suppose you write a blog post. Sadly it is a bit too long for the average reader to remember a key point in the second paragraph. You suspect they will forget it and just fill in what they would expect, consequently missing the whole point. To avoid this, you emphasize the point again in the second last paragraph. But now the post is even longer, and it is not clear whether they will also remember another key part. So you add some more about that point in the conclusion. But now it’s so long the whole argument is probably too hard to piece together, so you add a bit of an outline. Perhaps this eventually reaches an equilibrium in which all the points have been repeated and emphasized and exemplified so much that nobody can fail to understand. Often it would nonetheless have been better to just quit early on.

I think I had a better list of such examples, in a half written post which I put in my collection of blog drafts. Unfortunately my collection is so sprawling and poorly organized that it seemed easier to just write the post again than to find the old one. So here you have it. It’s tempting to add this post too to my blog draft collection and look for it again when I find some more things to add, but no good lies in this direction.

Ethical heuristics

I would like to think I wouldn’t have been friends with slave owners, anti-semites or wife-beaters, but then again most of my friends couldn’t give a damn about the suffering of animals, so I guess I would have been. – Robert Wiblin

I expect the same friends would have been any of those things too, given the right place and period of history. The same ‘faults’ appear to be responsible for most old-fashioned or foreign moral failings: not believing that anything bad is happening if you don’t feel bad about it, and not feeling bad about anything unless there is a social norm of feeling bad about it.

People here and now are no different in these regards, as far as I can tell. We may think we have better social norms, but the average person has little more reason to believe this than the average person five hundred years ago did. People are perhaps freer here and now to follow their own hearts on many moral issues, but that can’t make much difference to issues where the problem is that people’s hearts don’t automatically register a problem. So even if you aren’t a slave-owner, I claim you are probably using a similar decision procedure to that which would lead you to be one in different circumstances.

Are these really bad ways for most people to behave? Or are they pretty good heuristics for non-ethicists? It would be a huge amount of work for everyone to independently figure out for themselves the answer to every ethical question. What heuristics should people use?

Robot ethics returns

People are often interested in robot ethics. I have argued before that this is strange. I offered two potential explanations:

  1. Ethics seems deep and human, so it’s engagingly eerie to combine it with heartless AI
  2. People vastly misjudge how much ethics contributes to the total value society creates

A more obvious explanation now: people are just more interested in ethics when the subject is far away, for instance in the future. This is the prediction of construal level theory. It says thinking about something far away makes you think more abstractly, and in terms of goals and ideals rather than low level constraints. Ethics is all this.

So a further prediction would be that when we come to use robots a lot, expertise from robot ethicists will be in as little demand as expertise from washing machine ethicists is now.

Some other predictions, to help check this theory:

  • Emerging or imagined technologies should arouse ethical feelings more than present technologies do in general
  • International trade should prompt more ethical feelings than local trade
  • Stories of old should be more moralizing than stories of now
  • Historical figures should be seen in a more moral light than present-day celebrities
  • Space travel should be discussed in terms of more moral goals than Earth travel.
  • Ethical features of obscure cultures should be relatively salient compared to familiar cultures

More? Which of these are actually true?

There is definitely some conflicting evidence, for instance people feel more compelled to help people in front of them than those in Africa (there was an old OB post on this, but I can’t find it). There are also many other reasons the predictions above may be true. Emerging technologies might prompt more ethical concerns because they are potentially more dangerous, for instance. The ethical dimension to killing everyone is naturally prominent. Overall, construal level theory still seems to me a promising model for variations in ethical concern.

Added: I’m not confident that there is disproportionate interest compared to other topic areas. I seem to have heard about it too much, but this could be a sampling bias.

Responsibility and Clicking

Sometimes when people hear obvious arguments regarding emotive topics, they just tentatively accept the conclusion instead of defending against it until they find some half satisfactory reason to dismiss it. Eliezer Yudkowsky calls this ‘clicking’, and wants to know what causes it:

My best guess is that clickiness has something to do with failure to compartmentalize – missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.

The Hansonian explanation (not necessarily endorsed by Robin Hanson) would say something about clicky people tending to operate in Near mode.  (Why?)

pjeby remarks (with 96 upvotes),

One of the things that I’ve noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think “guessing the teacher’s password”, but not just in school or knowledge, but about everything.

Such people have no problem with the idea of magic, because everything is magic to them, even science….

Hypothesis: people expect reality to make sense roughly in proportion to how personally responsible for manipulating it they feel. If you think of yourself as in charge of strategically doing something, you are eager to understand how doing that thing works, and automatically expect understanding to be possible. If you are driving a car, you insist the streets fit intuitive geometry. If you are engaging in office politics, you feel there must be some reason Gina said that thing.

If you feel like some vague ‘they’ is responsible for most things, and is meant to give you stuff that you have a right to, and that you are meant to be a good person in the meantime, you won’t automatically try to understand things or think of them as understandable. Modeling how things work isn’t something you are ‘meant’ to do, unless you are some kind of scientist. If you do dabble in that kind of thing, you enjoy the pretty ideas rather than feel any desperate urge for them to be sound or complete. Other people are meant to look after those things.

The usual observation is that understanding things properly allows you to manipulate them. I posit that thinking of them as something you might manipulate automatically makes you understand them better. This isn’t particularly new either. It’s related to ‘learned blankness’, and searching vs. chasing, and near mode vs. far mode. The follow-up point is that chasing the one correct model of reality, which has to make sense, straightforwardly leads to ‘clicking’ when you hear a sensible argument.

According to this hypothesis, the people who feel most personally responsible for everything a la Methods Harry Potter would also be the people most likely to notice whether things make sense. The people who trust doctors and churches less to look after them on the way to their afterlives are the ones who notice that cryonics makes sense.

To see something as manipulable is to see it in the same light that science does, rather than as wallpaper. This is expensive, not just because a detailed model is costly to entertain, but because it interferes with saying socially advantageous things about the wallpaper. So you quite sensibly only do it when you actually want to manipulate a thing and feel potentially empowered to do so, i.e. when you hold yourself responsible for it.

Fragmented status doesn’t help

David Friedman wrote, and others have claimed similarly:

It seems obvious that, if one’s concern is status rather than real income, we are in a zero sum game. If my status increases relative to yours, yours has decreased relative to mine. … Like many things that seem obvious, this one is false. …

…what matters to me is my status as I perceive it; what matters to you is your status as you perceive it. Since each of us has his own system of values, it is perfectly possible for my status as I view it to be higher than yours and yours as you view it to be higher than mine…

Status is about what other people think your status is, but Friedman’s argument is that you at least get some choice in whose views to care about. People split off into many different groups, and everyone may see their group as quite important, so see themselves as quite statusful. Maybe I feel good because I win at board games often, but you don’t feel bad if you don’t – you just quit playing board games and hang out with people who care about politics instead, because you have a good mind for that. As Will Wilkinson says:

I think that there are lots of pastors, PTA presidents, police chiefs, local scenesters, small town newspaper editors, and competitive Scrabble champions who are pretty pleased with their high relative standing within the circle they care about. Back where I come from, a single blue ribbon for a strawberry rhubarb pie at the State Fair could carry a small-town lady for years.

This is a popular retort to the fear that seeking status is zero sum, so any status I get comes at the cost of someone else’s status. I think it’s very weak.

There are two separate issues: whether increasing one person’s status decreases someone else’s status just as much (whether status seeking is constant sum) and whether the total benefits from status come to zero, or to some other positive or negative amount (whether status seeking is zero-sum in particular).

That people split into different pools and think theirs is better than others suggests (though does not prove) that the net value of status is more than zero. Disproportionately many people think they are above average, so as long as status translates to happiness in the right kind of way, disproportionately many people are happy.

The interesting question though – and the one that the above argument is intended to answer – is whether my gaining more status always takes away from your status. Here it’s less clear that the separation of people into different ponds makes much difference:

  1. One simple model would be that the difference between each person’s perception of the status ladder is that they each view their own pond as being at the top (or closer to the top than others think). But then when they move up in their pond, someone else in their pond moves down, and vice versa. So it’s still constant sum.
  2. Another simple model would be that people all agree on their positions on the status ladder, but they care a lot more about where they are relative to some of the people on the ladder (those in their pond). For instance I might agree that the queen of England is higher status than me, but mostly just think about my position in the blogosphere.  Here of course status is constant sum (since we don’t disagree on status). But the hope would be that at least the status we care more about isn’t constant sum. But it is. However much I move up relative to people in my pond, people in my pond move down relative to me (a person in their pond). So again involving ponds doesn’t change the constant-sumness of people gaining or losing status.
  3. But perhaps changing the number or contents of the ponds could increase the total status pie? Increasing the number of ponds could make things better – for instance if people measure status as distance from the top of one’s favorite pond. It could also make things worse – for instance if people measure status as the number of people under one in one’s favorite pond. It could also not change the total amount of status, if people measure status as something like proportion of the way up a status ladder. Instead of one big ladder there could be lots of little parallel ladders. This would stop people from having very high or very low status, but not change the total. It seems to me that some combination of these is true. The maker of the best rhubarb pie at the State Fair might feel statusful, but nowhere near as statusful as the president of America. Probably not even as statusful as someone at the 90th percentile of wealth. So I don’t think we just pay attention to the number of people above us in the group we care about most. Nor just our rank on some ladder – being further up a bigger ladder is better. So it’s not clear to me that increasing the number of ponds should make for more status, or more enjoyment of status.
  4. Maybe moving people between ponds can help? Will Wilkinson tells of how he moved between ponds until he found one where he had a chance to excel. It seems likely that he feels higher status now. However the people in the ponds he left now have fewer people under them, and their ponds are smaller. Either of these might diminish their status. In his new pond, Will is probably better than others who were already competing. This lowers their status. It’s unclear whether everyone’s more statusful or better off overall than if they had all been in one big pond.

It might sound intuitive that more ponds mean more status for all, but in most straightforward models the number of ponds doesn’t change the size of the status pie.
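For concreteness, here is a toy tally of everyone’s status under the three measures from point 3, comparing one big pond of 1200 people with twelve ponds of 100 (the sizes are arbitrary, and each pond is assumed to be a strict ranking):

import numpy as np

def total_status(pond_sizes, metric):
    # Sum everyone's status, with each pond a strict ranking from 1 (bottom) to n (top).
    total = 0.0
    for n in pond_sizes:
        ranks = np.arange(1, n + 1)
        if metric == "proportion up the ladder":
            total += np.sum((ranks - 1) / (n - 1))   # 0 at the bottom, 1 at the top
        elif metric == "people below you":
            total += np.sum(ranks - 1)
        elif metric == "distance from the top":
            total += np.sum(-(n - ranks))            # 0 at the top, more negative further down
    return total

for metric in ["proportion up the ladder", "people below you", "distance from the top"]:
    one_pond = total_status([1200], metric)
    many_ponds = total_status([100] * 12, metric)
    print(f"{metric:26s}  one pond: {one_pond:10.1f}   twelve ponds: {many_ponds:10.1f}")

Under ‘proportion of the way up the ladder’ the totals come out identical; ‘distance from the top’ favors many ponds and ‘people below you’ favors one big pond. That is just the ambiguity from point 3: splitting into ponds redistributes status, but whether it grows the pie depends entirely on which measure you think people care about.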
