Monthly Archives: September 2012

How to motivate women to speak up

In mixed groups, women don’t talk as much as men. This is perhaps related to women being perceived as “bitches” if they do, i.e. pushy, domineering creatures whom one would best loathe and avoid. Lindy West at Jezebel comments:

…it just goes back to that hoary old double standard—when men speak up to be heard they are confident and assertive; when women do it we’re shrill and bitchy. It’s a cliche, but it’s true. And it leaves us in this chicken/egg situation—we have to somehow change our behavior (i.e. stop conceding and start talking) while simultaneously changing the perception of us (i.e. asserting that assertiveness does not equal bitchiness). But how do you assert that your assertiveness isn’t bitchiness to a culture that perceives assertiveness as bitchiness? And how do you start talking to change the perception of how you talk when that perception is actively keeping you from talking? Answer: UGH, I HAVE NO IDEA…

One problem with asserting that your assertiveness doesn’t indicate bitchiness is that it probably does. If all women know that assertiveness will be perceived as bitchiness then those who are going to be perceived as bitches anyway (due to their actual bitchiness) and those who don’t mind being seen as bitches (and therefore are more likely to be bitches), will be the ones with the lowest costs to speaking up. So mostly the bitches speak, and the stereotype is self-fulfilling.

This model makes it clearer how to proceed. If you want to credibly communicate to the world that women who speak up are not bitches, first you need for the women who speak up to not be bitches. This can happen through any combination of bitches quietening down and non-bitches speaking up. Both are costly for the people involved, so they will need altruism or encouragement from the rest of the anti-stereotype conspiracy. Counterintuitively, not all women should be encouraged to speak more. The removal of such a stereotype should also be somewhat self-fulfilling – as it is reduced, the costs of speaking up decline, and non-bitchy women do it more often.
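The selection dynamic described above can be made concrete with a toy model (all parameters here are invented for illustration, not estimates of anything real): suppose a fixed fraction of women are bitches who don’t mind the stigma of speaking, everyone else weighs the current stigma against a random benefit of speaking, and the stigma itself tracks the actual fraction of speakers who are bitches.

```python
# Toy model of the self-fulfilling stereotype (illustrative numbers only).
# A fraction `base_rate` of women are "bitches" who don't mind the stigma
# of speaking; everyone else pays a reputational cost equal to the current
# stigma. Benefits of speaking are uniform on [0, 1], so a non-bitch speaks
# with probability (1 - stigma) while a bitch always speaks. The stigma
# then updates to the actual fraction of speakers who are bitches.

def update_stigma(stigma, base_rate=0.2):
    bitches_speaking = base_rate  # all of them speak
    others_speaking = (1 - base_rate) * max(0.0, 1 - stigma)
    return bitches_speaking / (bitches_speaking + others_speaking)

def equilibrium(stigma, base_rate=0.2, steps=200):
    for _ in range(steps):
        stigma = update_stigma(stigma, base_rate)
    return stigma

print(equilibrium(0.5))    # converges near 0.25: speakers overrepresent bitches
print(update_stigma(1.0))  # 1.0 is also a fixed point: no non-bitch ever speaks
```

Even the moderate equilibrium of this sketch has speakers who are more bitchy than the population (25% vs. a 20% base rate), and there is a second, fully self-fulfilling equilibrium in which the stigma is total and no non-bitch ever speaks.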

Interestingly and sadly, this is exactly opposite to the strategy that Lindy finds self-evident:

…But I guess I will start with this pledge I just made up: I, Lindy West, a shrill bitch, do hereby pledge to talk really really loud in meetings if I have something to say, even if dudes are talking louder and they don’t like me. I refuse to be a turtle—unless it is some really loud species of brave turtle with big ideas. I will not hold back just because I’m afraid of being called a loudmouth bitch (or a “trenchmouth loud ass,” which I was called the other day and as far as I can tell is some sort of pirate insult). Also, I will use the fuck out of the internet, because they can’t drown you out on the internet. The end. Amen or whatever.


3 Em Econ Talks Soon

Instead of my usual scattershot, these days I’m successfully focusing. My next three talks:

  1. Sept. 26, 6-7:30p, GMU Econ Society, Student Union II (= Hub) front ballroom, GMU, Fairfax, VA (audio, slides)
  2. Oct. 2, 6-7:30p, Pittsburgh Less Wrong, Baker Hall 150, CMU, Pittsburgh, PA (audio, slides)
  3. Oct. 14, 11:00-11:30a, Singularity Summit, 1111 Calif. St., San Fran., CA. ($100 off code: OVERCOMINGBIAS) (video, slides)

All three will be on:

Em Econ 101: An Economic Analysis of Brain Emulation

The three most disruptive transitions in history were the introduction of humans, farming, and industry. If another transition lies ahead, a good guess for its source is artificial intelligence in the form of whole brain emulations, or “ems,” sometime in the next century. I apply standard social science to this unusual situation, to identify a relatively-likely reference scenario set modestly far into a post-em-transition world. I consider families, reproduction, life plans, daily activities, inequality, work training, property rights, firm management, industrial organization, urban agglomeration, security, and governance.


The Alms Expert Opening

Around 1800 in England and Russia, the three main do-gooder activities were medicine, school, and alms (= food/shelter for the weak, such as the old or crippled). Today the three spending categories of medicine, school, and alms make up ~40% of US GDP, a far larger fraction than in 1800. …

Foragers who personally taught kids, cared for sick folks, and gave food/shelter to weak folks, credibly signaled their loyalty to allies, at least when such needy were allies. Weak group selection helped encourage such aid as ways to signal loyalty. … [Today,] votes supporting spending taxes on medicine, school and alms are interpreted as showing loyal “caring” for one’s community. (more)

Today, two of these three classic charities have very powerful associated “professions”: doctors and teachers. These professions are powerful because they are seen as representing the good in those causes – doctors are our official authorities on what is good for patients, and teachers are our official authorities on what is good for students. So we tend to back these experts when they fight with other related organizations, such as when docs fight with insurance companies, or when teachers fight with mayors. This allows such experts to be very well paid and pampered relative to other professionals.

The missing group here is alms experts: we have no strong profession of those who specialize in helping the poor, crippled, etc. While there are of course people who specialize in such roles, they are not united together under a single recognized label to leverage public sympathy, and they do not speak as a unit, or negotiate as a unit with related organizations.

But, given the example of docs and teachers, it seems plausible that if alms experts were to create an encompassing profession of “feeders”, and if they as a unit publicly challenged other related organizations, like charities or government funders, this feeding profession could often get their way. Of course they’d probably mostly use their power to benefit themselves. To guess if they would help the world, ask yourself if organized docs and teachers help the world.

Even so, there does seem to be an as yet largely unused opening for a feeding profession.


Missing Life-Lessons

We learn many things over the space of our lives. With language, we can share such things with many others distant in space and time. With such a fantastic capacity, you might think we humans would hardly ever have to learn anything important directly for ourselves. But while we do learn many things from textbooks and mentors, we are surprisingly bad at teaching the most important life lessons. Like, for example, what it’s like to be married a long time, how to stay married, and when that is worth the trouble.

One contributing factor is that folks, late in life, almost never write essays, or books, on “what I’ve learned about life.” It would only take a few pages, and would seem to offer great value to others early in their lives. Why the silence? Some possible explanations:

  1. People don’t actually learn much that can be abstracted from their life details.
  2. People don’t want to hear the truth, and they won’t find lies useful, so why bother.
  3. Young folks already think they know all the answers, so won’t listen.
  4. It seems arrogant to offer lessons from your life when few others do this.
  5. When folks write on their life, they care much more to brag about what they did.
  6. Useful lessons will suggest the author had average success, which is shameful.
  7. The lessons of folks with way above average success aren’t useful to average folks.
  8. People are too weak to write when they feel old enough to tell lessons.
  9. Few care what people will think of them after they are dead.
  10. Most lessons have been written, but few can be bothered to read them.

None of these explanations seem especially satisfactory. What’s going on?


Signaling bias in philosophical intuition

Intuitions are a major source of evidence in philosophy. Intuitions are also a significant source of evidence about the person having the intuitions. In most situations where onlookers are likely to read something into a person’s behavior, people adjust their behavior to look better. If philosophical intuitions are swayed in this way, this could be quite a source of bias.

One first step to judging whether signaling motives change intuitions is to determine whether people read personal characteristics into philosophical intuitions. It seems to me that they do, at least for many intuitions. If you claim to find libertarian arguments intuitive, I think people will expect you to have other libertarian personality traits, even if on consideration you aren’t a libertarian. If consciousness doesn’t seem intuitively mysterious to you, one can’t help but wonder if you have a particularly unnoticeable internal life. If it seems intuitively correct to push the fat man in front of the train, you will seem like a cold, calculating sort of person. If it seems intuitively fine to kill children in societies with pro-children-killing norms, but you choose to condemn it for other reasons, you will have all kinds of problems maintaining relationships with people who learn this.

So I think people treat philosophical intuitions as evidence about personality traits. Is there evidence of people responding by changing their intuitions?

People are enthusiastic to show off their better looking intuitions. They identify with some intuitions and take pleasure in holding them. For instance, in my philosophy of science class the other morning, a classmate proudly dismissed some point, declaring, ‘my intuitions are very rigorous’. If his intuitions are different from most, and average intuitions actually indicate truth, then his are especially likely to be inaccurate. Yet he seems particularly keen to talk about them, and chooses positions based much more strongly on them than on others’ intuitions.

I see similar urges in myself sometimes. For instance, consistent answers to the Allais paradox are usually so intuitive to me that I forget which way one is supposed to err. This seems good to me. So when folks seek to change normative rationality to fit their more popular intuitions, I’m quick to snort at such a project. Really, they and I have the same evidence from intuitions, assuming we believe one another’s introspective reports. My guess is that we don’t feel like coming to agreement because they want to cheer for something like ‘human reason is complex and nuanced and can’t be captured by simplistic axioms’ and I want to cheer for something like ‘maximize expected utility in the face of all temptations’ (I don’t mean to endorse such behavior). People identify with their intuitions, so it appears they want their intuitions to be seen and associated with their identity. It is rare to hear a person claim to have an intuition that they are embarrassed by.
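For concreteness, here is the classic Allais setup in code (the standard payoffs; the utility functions are illustrative assumptions). Whatever the utility function, an expected utility maximizer must answer the two Allais questions consistently, because the two pairs of gambles differ by the same common term.

```python
# The classic Allais gambles (payoffs in $ millions; this standard
# formulation and the utility functions below are illustrative assumptions).
import math

def eu(gamble, u):
    # Expected utility of a gamble given as (probability, payoff) pairs.
    return sum(p * u(x) for p, x in gamble)

A = [(1.00, 1.0)]                              # $1M for sure
B = [(0.10, 5.0), (0.89, 1.0), (0.01, 0.0)]
C = [(0.11, 1.0), (0.89, 0.0)]
D = [(0.10, 5.0), (0.90, 0.0)]

# EU(A) - EU(B) and EU(C) - EU(D) both equal .11*u(1) - .10*u(5) - .01*u(0),
# so an expected utility maximizer prefers A to B iff she prefers C to D.
utilities = [lambda x: x, math.sqrt, math.log1p,
             lambda x: x ** 2, lambda x: 1 - math.exp(-3 * x)]
for u in utilities:
    prefers_A, prefers_C = eu(A, u) > eu(B, u), eu(C, u) > eu(D, u)
    print(prefers_A, prefers_C)  # the two booleans always match
```

Which pair is preferred flips with the curvature of the utility function (the sharply risk-averse last one prefers A and C, the rest prefer B and D), but no utility function ever mixes the pairs the way the common empirical response does.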

So it seems to me that intuitions are seen as a source of evidence about people, and that people respond at least by making their better looking intuitions more salient. Do they go further and change their stated intuitions? Introspection is an indistinct business. If there is room anywhere to unconsciously shade your beliefs one way or another, it’s in intuitions. So it’s hard to imagine there not being manipulation going on, unless you think people never change their beliefs in response to incentives other than accuracy.

Perhaps this isn’t so bad. If I say X seems intuitively correct, but only because I guess others will think seeing X as intuitively correct is morally right, then I am doing something like guessing what others find intuitively correct. Which might be a bit of a noisy way to read intuitions, but at least isn’t obviously biased. That is, if each person is biased in the direction of what others think, this shouldn’t obviously bias the consensus. But there is a difference between changing your answer toward what others would think is true, and changing your answer to what will cause others to think you are clever, impressive, virile, or moral. The latter will probably lead to bias.

I’ll elaborate on an example, for concreteness. People ask if it’s ok to push a fat man in front of a trolley to stop it from killing some others. What would you think of me if I said that it at least feels intuitively right to push the fat man? Probably you lower your estimation of my kindness a bit, and maybe suspect that I’m some kind of sociopath. So if I do feel that way, I’m less likely to tell you than if I feel the opposite way. So our reported intuitions on this case are presumably biased in the direction of not pushing the fat man. So what we should really do is likely further in the direction of pushing the fat man than we think.
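A toy calculation (all numbers invented for illustration) shows how such selective reporting shifts the visible consensus, and how one would correct for it:

```python
# Illustrative numbers only: how selective reporting shifts the observed
# consensus away from the true distribution of intuitions.
true_share_push = 0.40  # assumed: fraction who privately find pushing intuitive
report_if_push = 0.50   # assumed: they admit it only half the time
report_if_not = 1.00    # "don't push" is safe to report honestly

reported_push = true_share_push * report_if_push
reported_not = (1 - true_share_push) * report_if_not
observed_share = reported_push / (reported_push + reported_not)
print(observed_share)  # ~0.25: the observed consensus understates the true 0.40

def true_share(observed, r=report_if_push):
    # Invert observed = t*r / (t*r + (1 - t)) to recover the true share t.
    return observed / (r + observed * (1 - r))

print(true_share(observed_share))  # ~0.40: correcting moves us toward "push"
```

The correction only runs in this direction under the assumed asymmetry, i.e. that admitting the "push" intuition is costlier than admitting the reverse.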


Placebos Show Care

Something similar to the placebo effect occurs in many animals. … Siberian hamsters do little to fight an infection if the lights above their lab cage mimic the short days and long nights of winter. But changing the lighting pattern to give the impression of summer causes them to mount a full immune response.

Likewise, those people who think they are taking a drug but are really receiving a placebo can have a response which is twice that of those who receive no pills. In Siberian hamsters and people, intervention creates a mental cue that kick-starts the immune response. …

The Siberian hamster subconsciously acts on a cue that it is summer because food supplies to sustain an immune response are plentiful at that time of year. We subconsciously respond to treatment – even a sham one – because it comes with assurances that it will weaken the infection, allowing our immune response to succeed rapidly without straining the body’s resources. … Farming and other innovations in the past 10,000 years mean that many people have a stable food supply and can safely mount a full immune response at any time – but our subconscious switch has not yet adapted to this. (more)

OK, but the key question is: why would getting a placebo pill ever have been a credible signal that you could safely turn on your immune system? If for our ancestors treatments like pills tended to be very effective at improving health, you might think that a pill would give you so much extra energy that you could afford to spend some of that extra on your immune system. But pills are rarely that effective, and your body would quickly notice that fact.

My “showing that you care” theory, that the main function of medicine is to signal concern, fits well here. The idea is that we are reassured by the fact that people take the trouble to take care of us.

The most severe part of our ancestors’ environment wasn’t the weather, it was other humans. When people were sick, they worried that their rivals and enemies would use that opportunity to hurt them. If such harms were coming, they had to be attentive, wary, and ready to act — they couldn’t afford to turn on their immune system, which would make them lethargic.

But if someone had caretakers, who spent time and other resources to take care of them when they were sick, why then such caretakers would probably also protect them from rivals. So they could afford to turn on their immune system. If your associates spend resources to buy you pills, and then take time to make sure you take certain pills at certain times, they probably care enough to protect you from rivals.


Functions /= Tautologies


Calling the mind a computer is just a metaphor – and using metaphors to infer literal truths about the world is a fallacy.


I’m saying that your mind is literally a signal processing system. … While minds have a great many features, a powerful theory, in fact our standard theory, to explain the mix of features we see associated with minds, is that minds fundamentally function to process signals, and that brains are the physical devices that achieve that function.


The “standard theories of minds as signal processors” that Robin refers to aren’t theories at all. They’re just eccentric tautologies. As Robin has frankly admitted to me several times, he uses the term “signal processors” so broadly that everything whatsoever is a signal processor. On Robin’s terms, a rock is a signal processor. What “signals” do rocks “process”? By moving or not moving, rocks process signals about the mass and distance of other objects in the universe.

Consider an analogy. Our theory of table legs is that they function mainly for structural support; table legs hold up tables. Yes, anything can be analyzed for the structural support it provides, and most objects can be arranged so as to provide some degree of structural support to something else. But that doesn’t make our theories of structural support tautologies. Our theories can tell us how efficient and effective any given arrangement of objects is at achieving this function. If we believe that something was designed to be a table leg, our theories of structural support make predictions about what sort of object arrangement it will be. And if our table is missing a leg, such theories recommend object arrangements to use as a substitute table leg.

Similarly, while any object arrangement can be analyzed in terms of the signals it sends out and the ways that it transforms incoming signals into outgoing signals, all of these do not function equally well as signal processors. If we know that something was designed as a signal processor, and know something about the kinds of signals it was designed to process for what purposes, then our theories of signal processing make predictions about how this thing will be designed. And if we find ourselves missing a part of a signal processor, such theories tell us what sort of replacement part(s) can efficiently restore the signaling function.

Animal brains evolved to direct animal actions. Fish, for example, swim toward prey and away from predators. So fish brains need to take in external signals about the locations of other fish, and process those signals into useful directions to give muscles about how to change the direction and intensity of swimming. This makes all sorts of predictions about how fish brains will be designed by evolution.

Human brains evolved to achieve many more functions than merely to direct our speed and direction of motion. But we understand many of those functions in quite some detail, and that understanding implies many predictions about how human brains are efficiently designed to simultaneously achieve these functions.

This same combination of general signal processing theory and specific understandings about the functions evolution designed human brains to perform also implies predictions on how to substitute wholesale for human brain functions. For example, knowing that brain cells function mainly to take signals coming from other cells, transform them, and pass them on to other cells, implies predictions on what cell details one needs to emulate to replicate the signaling function of a human brain cell. It also makes predictions like:

In order to manage its intended input-output relation, a single processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other “extra” physical degrees of freedom. (more)

All of which goes to show that signal processing theory is far from a tautology, even if every object can be seen as in some way processing signals.
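The function-versus-implementation point can be sketched in code (a toy example, with no claim to model brains): two structurally different "devices" that realize the same signal-processing function, here a three-point moving average, are interchangeable because only the input-output mapping matters.

```python
# Two structurally different "devices" computing the same signal-processing
# function: a 3-point moving average. A toy illustration that the function
# is defined by the input-output mapping, not the internal arrangement.

def avg_direct(signal, k=3):
    # "Device" 1: recompute each window sum from scratch.
    return [sum(signal[i:i + k]) / k for i in range(len(signal) - k + 1)]

def avg_running(signal, k=3):
    # "Device" 2: maintain a running sum, updated incrementally.
    out, total = [], sum(signal[:k])
    out.append(total / k)
    for i in range(k, len(signal)):
        total += signal[i] - signal[i - k]
        out.append(total / k)
    return out

sig = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]
print(avg_direct(sig))   # identical outputs from different internal structure
print(avg_running(sig))
```

Signal processing theory is what tells us these two arrangements are substitutes for each other, and equally what would count as a working replacement part for either one.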


Theories vs. Metaphors

I have said things like:

We should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor.

Bryan Caplan says I make:

the Metaphorical Fallacy. Its general form:

1. X is metaphorically Y.

2. Y is literally Z.

3. Therefore, X is literally Z.

…. To take a not-so-random example, … Robin says many crazy things … like:

1. The human mind is a computer.

2. Computers’ data can be uploaded to another computer.

3. Therefore, the human mind can be uploaded to a computer.

No, I’m pretty sure that I’m saying that your mind is literally a signal processing system. Not just metaphorically; literally. That is, while minds have a great many features, a powerful theory, in fact our standard theory, to explain the mix of features we see associated with minds, is that minds fundamentally function to process signals, and that brains are the physical devices that achieve that function. And our standard theories of how physical devices achieve signal processing functions predicts that we can replicate, or “emulate”, the same signal processing functions in quite different physical devices. In fact, such theories tell us how to replicate such functions in other devices.

Of course you can, like Bryan, disagree with our standard theory that the main function of minds is to process signals. Or you could disagree with our standard theories of how that function is achieved by physical devices. Or you could note that since the brain is a signal processor of unparalleled complexity, we are a long way away from knowing how to replicate it in other physical hardware.

But given how rich and well developed are our standard theories of minds as signal processors, signal processors in general, and the implementation of signal processors in physical hardware, it hardly seems fair to reject my conclusion based on a mere “metaphor.”


A Survey Question

Here is a simple one question survey that I’d like to get a hundred or so folks to answer. It is a surprisingly interesting question, and I have a bet with Bryan Caplan on it, but I won’t say more now, so as not to bias your answer.

Added 9p: The survey(s) are now closed. I did three of them: a few of my facebook friends, 100 folks via Survey Monkey, and 1000 folks via Quick Survey. The question was “In a popular book with a modestly technical readership, what word should be used to refer to the set that includes both humans and artificial/robot intelligences?” Here are answer counts:

Term                FB   SM    QS  Total
Sentients            0   20   282    302
Intelligent Agents   3   29   238    270
Agents              26   35   181    242
Intelligences       17   23   176    216
Entities             1    8    98    107
Sapients             0    8    90     98
Beings               1    8    86     95
Turings              2    1    82     85
Actors               1    8    60     69
Persons              5    6    52     63
People               1    6    38     45
Players              1    1    22     24
Folks                3    1    16     20
Creatures            3    4    10     17
Souls                0    1    14     15
Total               64  159  1445   1668


My bet with Bryan was on if a larger survey would confirm the initial small fb survey. He was right that results changed lots – the final winning item got no votes initially!  The initial survey put the temporarily leading items at the top of the list, while the other surveys randomized the order each time. Also in the initial survey, people could see three comments saying sentients is an incorrect term relative to sapients.

Right now it’s hard to imagine filling a book with phrases like “a population of roughly a quadrillion sentients.” But perhaps I’d warm to it. And it would flow more smoothly than the “intelligent agents” phrase. A term like “sentients” probably wouldn’t make Bryan happy though; he wants a term agnostic on if robots are really conscious.

I’m struck by just how varied are people’s intuitions on how to talk about and compare humans and robots.

Added 9Sept: Bryan is ok with using “sentients”, to avoid reader confusion. That helps me warm to it.


Information won’t set you free by itself

Information storage and communication increase our ability to discover and accumulate knowledge. And if Steven Pinker is to be believed, humans have become more peaceful over time. However, the connection between better access to information and our softer world is dubious at best, according to Adam Gopnik:

N.Y.U. professor Clay Shirky—the author of “Cognitive Surplus” and many articles and blog posts proclaiming the coming of the digital millennium—is the breeziest and seemingly most self-confident. … Shirky believes that we are on the crest of an ever-surging wave of democratized information: the Gutenberg printing press produced the Reformation, which produced the Scientific Revolution, which produced the Enlightenment, which produced the Internet, each move more liberating than the one before. Though it may take a little time, the new connective technology, by joining people together in new communities and in new ways, is bound to make for more freedom. It’s the Wired version of Whig history: ever better, onward and upward, progress unstopped. In John Brockman’s anthology “Is the Internet Changing the Way You Think?,” the evolutionary psychologist John Tooby shares the excitement—“We see all around us transformations in the making that will rival or exceed the printing revolution”—and makes the same extended parallel to Gutenberg: “Printing ignited the previously wasted intellectual potential of huge segments of the population. . . . Freedom of thought and speech—where they exist—were unforeseen offspring of the printing press.”

Shirky’s and Tooby’s version of Never-Betterism has its excitements, but the history it uses seems to have been taken from the back of a cereal box. The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare. In the seventeen-fifties, more than two centuries later, Voltaire was still writing in a book about the horrors of those other books that urged burning men alive in auto-da-fé. Buried in Tooby’s little parenthetical—“where they exist”—are millions of human bodies. If ideas of democracy and freedom emerged at the end of the printing-press era, it wasn’t by some technological logic but because of parallel inventions, like the ideas of limited government and religious tolerance, very hard won from history.

Of course, if you stretch out the time scale enough, and are sufficiently casual about causes, you can give the printing press credit for anything you like. But all the media of modern consciousness—from the printing press to radio and the movies—were used just as readily by authoritarian reactionaries, and then by modern totalitarians, to reduce liberty and enforce conformity as they ever were by libertarians to expand it. As Andrew Pettegree shows in his fine new study, “The Book in the Renaissance,” the mainstay of the printing revolution in seventeenth-century Europe was not dissident pamphlets but royal edicts, printed by the thousand: almost all the new media of that day were working, in essence, for …

Even later, full-fledged totalitarian societies didn’t burn books. They burned some books, while keeping the printing presses running off such quantities that by the mid-fifties Stalin was said to have more books in print than Agatha Christie. (Recall that in “1984” Winston’s girlfriend works for the Big Brother publishing house.) If you’re going to give the printed book, or any other machine-made thing, credit for all the good things that have happened, you have to hold it accountable for the bad stuff, too. The Internet may make for more freedom a hundred years from now, but there’s no historical law that says it has to.

Some more gems from this piece are below the fold.

