Monthly Archives: January 2008

OB Meetup: Millbrae, Thu 21 Feb, 7pm

The Overcoming Bias meetup has been scheduled for Thursday, February 21st, at 7pm.  We’re going to look at locating this in Millbrae within walking distance of the BART / Caltrain station.  The particular restaurant I had in mind turns out to be booked for Thursdays, so if you know a good Millbrae restaurant (with a private room?) in walking distance of the train station, please post in the comments.  I’ll be looking at restaurants shortly.

Why not schedule it for a day other than Thursday, you ask?

Because:

Robin Hanson will be in the Bay Area and attending!  Woohoo!

If you would be able to make Thursday the 21st, 7pm, in Millbrae, somewhere near the BART/Caltrain, please vote below.  No, seriously, please vote, now – the kind of restaurant I have to find depends on how many people will be attending.

Continue reading "OB Meetup: Millbrae, Thu 21 Feb, 7pm" »


Newcomb’s Problem and Regret of Rationality

Followup to: Something to Protect

The following may well be the most controversial dilemma in the history of decision theory:

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game.  In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.

Box A is transparent and contains a thousand dollars.
Box B is opaque, and contains either a million dollars, or nothing.

You can take both boxes, or take only box B.

And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.

Omega has been correct on each of 100 observed occasions so far – everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.  (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

Before you make your choice, Omega has flown off and moved on to its next game.  Box B is already empty or already full.

Omega drops two boxes on the ground in front of you and flies off.

Do you take both boxes, or only box B?

And the standard philosophical conversation runs thusly:

One-boxer:  "I take only box B, of course.  I’d rather have a million than a thousand."

Two-boxer:  "Omega has already left.  Either box B is already full or already empty.  If box B is already empty, then taking both boxes nets me $1000, taking only box B nets me $0.  If box B is already full, then taking both boxes nets $1,001,000, taking only box B nets $1,000,000.  In either case I do better by taking both boxes, and worse by leaving a thousand dollars on the table – so I will be rational, and take both boxes."
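The two payoff calculations above can be made concrete in a toy expected-value sketch. The prediction accuracy p is an illustrative assumption (the dilemma itself only reports 100 observed successes out of 100); note also that this calculation conditions box B's contents on your choice, which is exactly what the two-boxer's dominance argument refuses to do.

```python
# Toy expected-value comparison for Newcomb's Problem.
# Illustrative assumption: Omega predicts your choice with accuracy p.
def expected_payoffs(p):
    # One-boxing: with probability p, Omega predicted it and filled box B.
    one_box = p * 1_000_000 + (1 - p) * 0
    # Two-boxing: with probability p, Omega predicted it and left B empty.
    two_box = p * 1_000 + (1 - p) * 1_001_000
    return one_box, two_box

# With p = 0.99: one-boxing yields about $990,000 in expectation,
# two-boxing about $11,000.  One-boxing dominates for any p above ~0.5005.
one, two = expected_payoffs(0.99)
print(one, two)
```

The sketch shows why one-boxers end up rich; the two-boxer's objection is that since the boxes are already filled, your choice cannot cause the probabilities above to apply to you.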

One-boxer:  "If you’re so rational, why ain’cha rich?"

Two-boxer:  "It’s not my fault Omega chooses to reward only people with irrational dispositions, but it’s already too late for me to do anything about that."

Continue reading "Newcomb’s Problem and Regret of Rationality" »


Dark Police

Years ago many were concerned that new computer and surveillance techs were driving a loss of privacy.  David Brin once thoughtfully argued that we were better off in a transparent society, as long as the light shined also on those in power, such as the police.  Sadly, it seems "privacy" laws now keep light off the police, even while it shines brightly on the rest of us.  From the Volokh Conspiracy:

Last month, I linked to a story about someone who was "convicted of violating state wiretapping laws" for "conceal[ing] a camera to videotape a Boston University police sergeant … during a 2006 political protest." I wrote that this was outrageous, but entirely consistent with a 2001 Massachusetts Supreme Judicial Court decision in Commonwealth v. Hyde, which is based on Massachusetts’ extremely broad privacy law. The court there upheld the conviction of a person who had been "secretly tape recording statements made by police officers during a routine traffic stop" of himself. … Now … the Massachusetts Lawyers Weekly reports:

[Simon Glik] will stand trial on Jan. 29 in Boston Municipal Court on charges of wiretapping, aiding an escape and disturbing the peace for allegedly using his cell phone to record the arrest of a 16-year-old juvenile in a drug case….

Maybe we live in a police state, but thank God it’s a democratic police state …


Contingent Truth Value

Does allowing prophets, whistle-blowers, and dissidents to tell people truths they don’t want to hear help those other people or hurt them?  Today I heard an excellent talk (see slides and paper) by Roland Benabou explaining how it can help or hurt, depending on the situation:

HURT: If your future is likely to be enjoyable, and if, in the meantime, anticipating that great future gives you enough joy, then when you come across bad news suggesting otherwise you might enjoy your life more overall by quickly looking the other way and forgetting about it.  Even if later on you realize you are the sort of person who would forget such news, you’d still reasonably guess you had a good chance of an enjoyable future, and you’d enjoy savoring that prospect, at least for a while.  Someone who forced you to pay attention to the bad news could do you a real harm.

HELP: On the other hand, if a group of you worked together to build an enjoyable future, how hard you each worked might depend on the chances you each assigned to your efforts working out well.  Given that you expected other people to avoid looking at bad news, you might also find it in your interest to avoid looking at bad news, so that you were all in an equilibrium where you all avoided bad news.  But for certain parameter values you might all be better off in a different equilibrium where you all expect each other to look at bad news and change your behavior in response.  In this case someone who collected bad news, saved it, and later forced you all to pay attention to the bad news you had tried to forget could upgrade your equilibrium.  This could do you all a favor, a favor you were individually not willing to do for yourselves. 
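The two-equilibria point in the HELP case can be illustrated with a toy symmetric game (the payoff numbers below are made up for illustration, not taken from Benabou's paper): two agents each choose whether to look at bad news, and both "everyone ignores" and "everyone looks" are stable, but one is jointly better.

```python
# Toy coordination game: each of two agents chooses "look" or "ignore".
# Payoff numbers are illustrative assumptions, not from Benabou's model.
payoff = {  # (my action, other's action) -> my payoff
    ("ignore", "ignore"): 2,  # both savor rosy beliefs; effort misallocated
    ("ignore", "look"):   1,  # I stay rosy while the other adjusts
    ("look",   "ignore"): 0,  # I face bad news alone; coordination fails
    ("look",   "look"):   3,  # both adjust their behavior; jointly best
}

def is_equilibrium(a, b):
    """Neither player gains by unilaterally switching actions."""
    flip = lambda x: "look" if x == "ignore" else "ignore"
    return (payoff[(a, b)] >= payoff[(flip(a), b)] and
            payoff[(b, a)] >= payoff[(flip(b), a)])

print(is_equilibrium("ignore", "ignore"))  # stable: nobody wants to look alone
print(is_equilibrium("look", "look"))      # also stable, and jointly better
```

With these numbers, both all-ignore and all-look are equilibria, but no one will switch unilaterally; a whistle-blower who forces everyone to look at once moves the group from the worse equilibrium to the better one.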

The value of truth is contingent, and depends on the details of your world and values.  It is not guaranteed.  So honesty demands that my commitment to truth also be contingent.


Deliberative Prediction Markets — A Reply

Robin suggests that a more robust model of deliberative prediction markets would be useful, and I agree. Experimentation in the field would be even more useful. But my original paper and book section explain the logic of this market approach clearly, with both words and what I acknowledged was a "simple model."

I doubt, in any event, that a more elaborate model will change the basic conclusion: that the deliberative prediction market provides at least some increased incentive to reveal information. I’ll let interested readers look at the original paper for a more developed argument (including math), but it boils down to a very simple point. A prediction market, as Robin and I both note, already provides some incentives to reveal information. But if a trader’s payoff depends on whether the trader actually succeeds at persuading others rather than on whether the trader turns out in the long run to be correct, the trader will have an additional incentive to reveal that information.
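That very simple point can be sketched numerically (the numbers below are illustrative, not from the paper, and the sketch deliberately isolates a single already-purchased position; as noted above, an outcome-settled market does give some reveal incentives through further trading):

```python
# Illustrative sketch: a trader with private info that the true probability
# is 0.8 has bought one share at the current price of 0.5.

def standard_market_profit(true_prob, buy_price):
    # Settles on the long-run outcome: this position's expected profit is
    # fixed once bought, whether or not the trader reveals her reasoning.
    return true_prob - buy_price

def deliberative_market_profit(settle_price, buy_price):
    # Settles on a later market price: profit depends on where the trader's
    # persuasion efforts move that price before settlement.
    return settle_price - buy_price

print(standard_market_profit(0.8, 0.5))      # ~0.3 whether she reveals or not
print(deliberative_market_profit(0.5, 0.5))  # conceal: price unmoved, 0.0
print(deliberative_market_profit(0.8, 0.5))  # reveal and persuade: ~0.3
```

In the price-settled version, concealing earns nothing on this position while successful persuasion earns the full gain, which is the additional incentive to reveal.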

Continue reading "Deliberative Prediction Markets — A Reply" »


Something to Protect

Followup to: Tsuyoku Naritai, Circular Altruism

In the gestalt of (ahem) Japanese fiction, one finds this oft-repeated motif:  Power comes from having something to protect.

I’m not just talking about superheroes that power up when a friend is threatened, the way it works in Western fiction.  In the Japanese version it runs deeper than that.

In the X saga it’s explicitly stated that each of the good guys draws their power from having someone – one person – who they want to protect.  Who?  That question is part of X‘s plot – the "most precious person" isn’t always who we think.  But if that person is killed, or hurt in the wrong way, the protector loses their power – not so much from magical backlash, as from simple despair.  This isn’t something that happens once per week per good guy, the way it would work in a Western comic.  It’s equivalent to being Killed Off For Real – taken off the game board.

The way it works in Western superhero comics is that the good guy gets bitten by a radioactive spider; and then he needs something to do with his powers, to keep him busy, so he decides to fight crime.  And then Western superheroes are always whining about how much time their superhero duties take up, and how they’d rather be ordinary mortals so they could go fishing or something.

Similarly, in Western real life, unhappy people are told that they need a "purpose in life", so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes.  You should be careful not to pick something too expensive, though.

In Western comics, the magic comes first, then the purpose:  Acquire amazing powers, decide to protect the innocent.  In Japanese fiction, often, it works the other way around.

Of course I’m not saying all this to generalize from fictional evidence. But I want to convey a concept whose deceptively close Western analogue is not what I mean.

I have touched before on the idea that a rationalist must have something they value more than "rationality":  The Art must have a purpose other than itself, or it collapses into infinite recursion.  But do not mistake me, and think I am advocating that rationalists should pick out a nice altruistic cause, by way of having something to do, because rationality isn’t all that important by itself.  No.  I am asking:  Where do rationalists come from?  How do we acquire our powers? 

Continue reading "Something to Protect" »


Deliberation in Prediction Markets

Monday I wrote:

Abramowicz has let his imagination run free searching for ways we could use prediction markets in governance. … The main problem with using Abramowicz’s book as a "technical manual", however, is that he’s never actually seen, much less touched, most of the blocks he describes.  His conclusions are not supported or tested by math models, computer simulations, lab experiments, field trials, nor a track record of successful past proposals – it is all based on his untested intuitions. … My intuitions about what will work how well differ in many ways. 

Abramowicz countered:

I previously offered a mathematical elaboration of "deliberative markets."

Today let me disagree about that.  Here’s Abramowicz in his book:

Continue reading "Deliberation in Prediction Markets" »


Trust in Bayes

Followup to: Beautiful Probability, Trust in Math

In Trust in Math, I presented an algebraic proof that 1 = 2, which turned out to be – surprise surprise – flawed.  Trusting that algebra, correctly used, will not carry you to an absurd result, is not a matter of blind faith.  When we see apparent evidence against algebra’s trustworthiness, we should also take into account the massive evidence favoring algebra which we have previously encountered.  We should take into account our past experience of seeming contradictions which turned out to be themselves flawed.  Based on our inductive faith that we will likely have a similar experience in the future, we look for a flaw in the contrary evidence.
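For readers who have not seen such a proof, the classic version of the genre runs as follows (this is an illustration, not necessarily the exact proof from "Trust in Math"; the flaw is the step that divides both sides by a − b, which is zero):

```latex
\begin{align*}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b && \text{(invalid: divides both sides by } a - b = 0\text{)} \\
2b &= b \\
2 &= 1
\end{align*}
```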

This seems like a dangerous way to think, and it is dangerous, as I noted in "Trust in Math".  But, faced with a proof that 2 = 1, I can’t convince myself that it’s genuinely reasonable to think any other way.

The novice goes astray and says, "The Art failed me."
The master goes astray and says, "I failed my Art."

To get yourself to stop saying "The Art failed me", it’s helpful to know the history of people crying wolf on Bayesian math – to be familiar with seeming paradoxes that have been discovered and refuted.  Here an invaluable resource is "Paradoxes of Probability Theory", Chapter 15 of E. T. Jaynes’s Probability Theory: The Logic of Science (available online).

I’ll illustrate with one of Jaynes’s examples:

Continue reading "Trust in Bayes" »


Predictocracy — A Preliminary Response

Thanks to Robin for posting a mini-review of Predictocracy. We’ve promised to debate the relative merits of a “futarchy” and a “predictocracy” later.

I’ll use this opportunity to respond briefly to his criticism (while gratefully accepting his praise). I agree that it’s best when technical designs for prediction markets can be supported by mathematical models or empirical evidence. At the same time, I didn’t want to scare away readers by including math. Meanwhile, I agree that field experiments can be helpful, and I am developing a web site that will test some of the ideas of the book (subject, of course and unfortunately, to legal restrictions). While recognizing the contributions of experimental economics, I doubt that laboratory experiments will be of much use in persuading skeptics that prediction markets can be useful in real-world institutions.

Nonetheless, almost all of the market designs that I describe in the book already have some support of the kind that Robin recommends (in some cases by Robin himself). For example, I previously offered a mathematical elaboration of “deliberative markets,” which seek to encourage participants to seek to persuade others that their predictions are correct.

Admittedly, there are a few exceptions. The incentives provided by two of my technical proposals (the decentralized subsidy approach and the nobody-loses prediction market) are sufficiently straightforward that math seems superfluous to me, though I agree that field tests comparing these with alternatives would be useful. Two of the proposals (the text-authoring market and the market web) could certainly benefit from experimentation, but the software needed to implement them would be considerably more complicated than what is needed for existing prediction markets.

A concluding thought: Robin’s articles are generally ridiculously underplaced in comparison to both their quality and their influence. But certainly I’m glad that Robin didn’t wait to publish his articles on science claims and futarchy until he had developed mathematical models or laboratory experiments. I don’t think that they would have added much. Academia may well be biased against articles whose primary thrust is to propose new institutions; I’ve also generally had better luck in placing more conventional articles. But I still think that such articles perform a useful function, and while they should include support, there may be an efficient division of labor between those who sketch out broad ideas and those who elaborate them (with or without mathematical models) or test them (in laboratory and field experiments). This is particularly so when the practical reality is that many different forms of elaboration and confirmation will be necessary before new institutions can be adopted.


The “Intuitions” Behind “Utilitarianism”

Followup to: Circular Altruism.  Response to: Knowing your argumentative limitations, OR "one [rationalist’s] modus ponens is another’s modus tollens."

(Still no Internet access.  Hopefully they manage to repair the DSL today.)

I haven’t said much about metaethics – the nature of morality – because that has a forward dependency on a discussion of the Mind Projection Fallacy that I haven’t gotten to yet.  I used to be very confused about metaethics.  After my confusion finally cleared up, I did a postmortem on my previous thoughts.  I found that my object-level moral reasoning had been valuable and my meta-level moral reasoning had been worse than useless.  And this appears to be a general syndrome – people do much better when discussing whether torture is good or bad than when they discuss the meaning of "good" and "bad".  Thus, I deem it prudent to keep moral discussions on the object level wherever I possibly can.

Occasionally people object to any discussion of morality on the grounds that morality doesn’t exist, and in lieu of jumping over the forward dependency to explain that "exist" is not the right term to use here, I generally say, "But what do you do anyway?" and take the discussion back down to the object level.

Paul Gowder, though, has pointed out that both the idea of choosing a googolplex dust specks in a googolplex eyes over 50 years of torture for one person, and the idea of "utilitarianism", depend on "intuition".  He says I’ve argued that the two are not compatible, but charges me with failing to argue for the utilitarian intuitions that I appeal to.

Continue reading "The “Intuitions” Behind “Utilitarianism”" »
