Ian Maxwell, I would be happy to bet my $100 against your $10k that P vs. NP is shown to be undecidable in Peano arithmetic within the next 20 years. If you cap your losses at $100, it doesn't look to me worth the transaction costs to collect 20 years in the future, so I won't put up my $1. Similarly, I don't think my first offer gives you any upside.
Thanks - great story!
I'll take your bet at, hmm, 20 to 1. Up for it?

Anyone else want to post the odds they'd take the bet at?
Bollocks. I just flipped a coin. It either came up heads or tails. Are you really going to assign a probability of 1 or 0 to either outcome? How does your Dutch book work out if the proposition turns out to be true? Can you Dutch book someone who assigns 1:1 odds to heads?
Immediate revision to my last comment: In retrospect, I am prone to huge overestimates at the far end of my probability scale, so I should probably revise waaaay down to about 10:1 for P != NP and 100:1 for undecidability.
Interesting. I find it really strange that Ken Steiglitz wouldn't accept any odds steeper than 2:1. I know very little of P vs. NP beyond the question itself (which I do actually understand technically), and so you would think I would be less sure than this fellow of the answer---yet, thinking about it, the point at which I actually feel indifferent about betting is something like 25:1. By contrast, I find it much less likely that P vs. NP is ever proven independent of current axioms---I might take something like 500:1 on that. (I have good reason to think that such a proof is impossible, but I'm still only an amateur mathematician and I may have missed some subtlety.)
If anyone wants to offer me a small-stakes wager based on this, please do. I would be quite willing to bet twenty against one that P != NP, or one against thirty that P = NP, as long as I don't stand to lose more than $100 or so.
"Does anyone doubt that two to one better summarizes his evidence than a million to one?"
RH: would you also argue that 35:1 on 00 on a U.S. roulette wheel better summarizes the evidence than 37:1? After all, millions of dollars are bet at 35:1.
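The gap between those two numbers is easy to quantify. A minimal sketch, assuming the standard U.S. double-zero wheel (38 equally likely pockets, so the true odds against 00 are 37:1, while the house pays only 35:1):

```python
from fractions import Fraction

# U.S. double-zero wheel: 38 equally likely pockets.
p_win = Fraction(1, 38)   # probability that 00 hits (true odds 37:1 against)
payout = 35               # casino pays 35:1 on a winning single-number bet

# Expected value of a $1 bet on 00.
ev = p_win * payout - (1 - p_win) * 1
print(ev)  # -1/19, i.e. the bettor loses about 5.26 cents per dollar wagered
```

So millions of dollars being bet at 35:1 shows only that people will pay the house a ~5.3% edge for entertainment, not that 35:1 summarizes anyone's evidence about where the ball lands.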
You should have tried it with 1c vs $10K or even 1c vs $100. An amount like $1M is likely to spook most people because of the non-linear utility value of being completely broke.
But 2:1 says something. It's funny how people's 'intuition' (or left brain) shifts into gear when confronted with a situation with actual consequences. For this reason I like the expression 'Your mouth is writing checks that your ass can't cash'.
I think that Neel's comment is both tongue-in-cheek and serious at the same time.
Neel simply points out that the idea of a "rational" agent does not hold. A rational agent (in other words, an agent of infinite computational power) should be able to know whether P=NP or P!=NP, or whether the problem is undecidable. We do not expect time to resolve this question, as there is no random component to it.
So, a prediction market on this question assumes that the participants are not "rational", but rather operate under "bounded rationality" (where "rational" == calculating everything to the maximum degree possible).
Oops: I see that in fact replies two levels deep (but not more) are allowed, at which point I no longer find it plausible that it was easier to allow that than to allow arbitrary depths.
The other thing about the million-to-one bet is that it's worth asking Steiglitz what numbers he'd require to take the bet the other way. I, for example, would also say something like a million to one for P=NP, but I might take a $1 million-to-$1 bet if all I'm putting up is $1. I do not expect P to equal NP, but I've no odds-calculating-robot certainty of it. Gaining $1 isn't worth the risk of losing a million, but gaining a million is worth the risk of losing a dollar on a subject whose odds I don't think are as obviously against me as, say, a lottery ticket. I wouldn't take a lot of those bets, and if several were offered I might look for an opportunity that I think is more likely to pay off, but losing a dollar isn't a big deal.
I think this is likely, and we could test this. What would Steiglitz take as a bet if the amount of money were higher? For example, I think it likely that Steiglitz would take a hundred-to-one bet if someone offered him, say, $10,000 if P!=NP against his $1,000,000 if P=NP. I might even take that bet, but the thing about taking that sort of bet is that you can't push the million up much. If I had a billion dollars, I would take a $100,000 to $1 billion bet on that subject (10,000 to 1), but I can't take that bet right now because the consequences of being wrong, even if I think it's a million-to-one bet, are too extreme.
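This bankroll effect can be made concrete with a toy log-utility model (one common way, not the only one, to capture "the consequences of being wrong are too extreme"; the wealth figures below are illustrative assumptions, not anyone's actual finances):

```python
import math

def take_bet(wealth, stake, prize, p_win):
    """Accept iff expected log-wealth improves (a toy Kelly-style criterion)."""
    eu_bet = p_win * math.log(wealth + prize) + (1 - p_win) * math.log(wealth - stake)
    return eu_bet > math.log(wealth)

# Risking $1,000,000 to win $10,000, believing P != NP with probability 0.995.
# Dollar EV is positive: 0.995 * 10_000 - 0.005 * 1_000_000 = +$4,950.
p = 0.995

# A bettor worth $1.01M would be nearly wiped out by a loss, so they refuse...
print(take_bet(1_010_000, 1_000_000, 10_000, p))    # False
# ...while a bettor worth $100M, facing the same odds, accepts.
print(take_bet(100_000_000, 1_000_000, 10_000, p))  # True
```

The same beliefs and the same odds produce opposite decisions at different bankrolls, which is why refusing a steep bet doesn't pin down the refuser's probability by itself.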
"This then means you cannot use their propensity to bet to draw conclusions about the strength of their evidence, because that principle only works for rational agents."
This is just non-responsive to Robin's post. Humans are not logically omniscient, and different sets of evidence will give us (with our inductive biases and limited computational capabilities) different degrees of confidence in logical and mathematical propositions. Mathematicians and computer scientists use such degrees of confidence every day in selecting possible avenues for research and proof.
I think replies nested more than one level deep just aren't allowed. I'd guess that the reason might actually be simply that whoever wrote the software that runs OB found it easier that way.
Anyway. On reflection, I agree that I overstated the case, but not in quite the way I think you're suggesting. I didn't say -- at least, I didn't intend to, and I don't think I actually did -- that the mere fact that other explanations are *possible* makes it impossible to draw any conclusions about Steiglitz's beliefs. I do think that a considerable degree of divergence between "real" opinions and willingness to bet is (not only possible in principle but) commonplace, and exactly how much is (not only in principle but in practice, and to a considerable extent) dependent on just what sort of bet is proposed. 2:1 versus 1000000:1 does indeed seem ridiculous, and I'm pretty sure Steiglitz doesn't "really" think P(P=NP) is as small as 10^-6. I'm not sure I'm really prepared to defend 2:1 versus 1000:1. But, say, 10:1 versus 1000:1 or 2:1 versus 30:1? Seems entirely plausible to me, unfortunately.
One of Cox's postulates, from which he derived the axioms of Bayesian probability, is that the probability calculus is an extension of logic, so that probabilities must agree with logic on certainly-true and certainly-false statements, which means that the probability of logical truths has to be 1, and logical falsehoods have to be 0.
More concretely, in this particular case, you can also construct the world's easiest Dutch book. Suppose my probability for true is p < 1. Then my probability for false is 1-p. Offer me a bet with odds better than p/(1-p) that false holds. Collect my money, because false doesn't hold.
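That construction can be sketched in a few lines. A hypothetical bettor assigns p < 1 to the tautology "A or not A", hence 1-p > 0 to its negation, and accepts a bet on the negation at payout odds slightly better than their own fair odds of p/(1-p); but the negation holds in no possible world, so the stake is lost with certainty:

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Check a propositional formula by brute force over all truth assignments."""
    return all(formula(*bits) for bits in product([False, True], repeat=n_vars))

# The tautology in question: A or not A (excluded middle).
excluded_middle = lambda a: a or not a
assert is_tautology(excluded_middle, 1)

# Bettor's probability for the tautology is p < 1, so they put 1-p > 0 on its
# negation and accept payout odds a bit better than their fair p/(1-p).
p, stake = 0.999, 1.0
payout_if_negation = stake * (p / (1 - p)) * 1.01  # slightly better than fair

# The bettor's profit in the best possible world: the negation never holds,
# so they forfeit the stake no matter what -- a guaranteed loss.
bettor_profit = min(
    (payout_if_negation if not excluded_middle(a) else -stake)
    for a in [False, True]
)
print(bettor_profit)  # -1.0: the stake is lost in every world
```

Note this only bites for logical truths and falsehoods, where one side of the bet is settled in every possible world; it says nothing against assigning 1:1 odds to a genuinely uncertain coin flip.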
Where do the axioms of Bayesian probability say that P(logical truth) = 1 ?
g, I can't reply to your reply. I'm not sure if that's a feature because Robin doesn't want overly protracted discussions, or just due to column width. I guess I'll try posting this and see if it's allowed.
You seem overly reluctant to draw conclusions about people's beliefs about likelihoods from their actual behavior. Certainly nothing in this scenario constitutes a rigorous mathematical proof about what Ken does or does not believe, and all of the issues you've brought up indeed prevent us from making any airtight statements about Ken's beliefs. However, they don't strike me as overall adding up to a plausible explanation as to why someone (call him Ken_v2) who really believes something is 99.9% likely to occur would be willing to put up $2 to win $1, but not to put up $3; a theoretically possible explanation, yes, but not a plausible one. A much more plausible explanation is that the claim of 99.9% was a rhetorical flourish, rather than a true statement of his beliefs.
Yes, people are not calculating reasoning machines, but I disagree that "inferring his actual beliefs or evidence from his bets is a pretty hopeless enterprise." Your argument essentially appears to be that it's hopeless because one can always come up with arguments as to why complications, nonlinearities, and non-rational biases could conceivably explain his actions. That's true, but by the same logic, you could just as easily say "inferring a person's actual beliefs from his actions is a pretty hopeless enterprise." In the case of Ken_v2, I would ask what he means when he says "99.9% likely." Can he point me to other cases in which we have good frequentist methods of agreeing that the probability is roughly 99.9% (e.g. the probability that the weather in Denver is over 80 degrees Fahrenheit on a particular day in December), and in which he exhibits similar levels of risk aversion? If he can't, I think it significantly more plausible to label his claims as insincere, rather than his actions as foolish.