15 Comments

I do appreciate the "display a big bag of money" step, but I still feel that this is insufficient to change people's core belief that they won't get paid the big money! (Possibly I have not overcome my confirmation bias with respect to this belief, though!)

Displaying a big bag of money to me would just make me recheck my math, thinking "what the heck do they know that I don't?" Surely no one would offer a bet with negative expected gain to themselves... so something suspicious must be going on!

I hope I am not being stubborn on this point... I also admit I have not read the entire article (it is quite long... talk about info processing).

If you calculate the expected value over only the most likely 99.9999% of the outcomes, it is just $10. Over the most likely 99.9%, it is only $5.

To get an expected value that is worthwhile by repeatedly betting, you would have to include the value of your time, and at any reasonable value of your time the expected value becomes very low.
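A minimal sketch of that arithmetic, assuming the classic payoff of 2^(k-1) dollars when the first head lands on flip k (a payoff convention I'm supplying, not one from the paper):

```python
# Truncated expected value of the St. Petersburg gamble, keeping only the
# most likely outcomes. With payoff 2**(k-1) on the first head at flip k
# (probability 2**-k), every outcome contributes exactly $0.50 to the EV.

def truncated_ev(coverage):
    """EV over the smallest set of most-likely outcomes whose total
    probability reaches `coverage`."""
    ev, prob, k = 0.0, 0.0, 1
    while prob < coverage:
        ev += 2 ** -k * 2 ** (k - 1)   # p(k) * payoff(k) = 0.5
        prob += 2 ** -k
        k += 1
    return ev

for c in (0.999, 0.999999):
    print(f"coverage {c}: EV ~ ${truncated_ev(c):.2f}")
# coverage 0.999: EV ~ $5.00
# coverage 0.999999: EV ~ $10.00
```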

If a change in risk aversion doesn't explain why the poorer participants were even more irrational than the richer participants, then what is going on? The probabilities are the same for both groups, are they not, so shouldn't they neglect them equally...?

I agree entirely with Dan about this inclusion... in fact may I suggest scrolling down: http://en.wikipedia.org/wik...

Dog of Justice also mentions this further down... the true expected value of the St. Petersburg "paradox" is actually a quite reasonable $10 or so, because the tail probabilities of earning zillions are CORRECTLY ignored (set to 0%, since those payouts would literally never be paid).

Subjects are acting as if the experimenter can't pay more than 16 Euros! And yet the experimenters showed they could pay what was promised by displaying a big bag of money.

Interesting! This is related to a comment I wrote earlier over at Accelerating Future:

I don’t think that these discussions will lead anywhere because they miss the underlying reason for most of the superficial disagreement about risks from AI.

There are a few people who disagree about the possibility of AGI in general, for more or less weird reasons. Forget those. The problem becomes more obvious when you turn your attention to people like mathematician and climate activist John Baez, computer science professor Stan Franklin, Douglas Hofstadter, or organisations like GiveWell (I personally contacted many more experts about this topic). The disagreement all comes down to a general aversion to options that have a low probability of being factual, even when the stakes are high.

Should we really concentrate our efforts on such vague possibilities as risks from AI? Technically, from the standpoint of maximizing expected utility, given the absence of other existential risks, the answer might very well be yes. But even though we believe we understand this technical view of rationality very well in principle, it also leads to problems such as Pascal’s Mugging. And it doesn’t take a true Pascal’s Mugging scenario to make people feel deeply uncomfortable with what Bayes’ Theorem, the expected utility formula, and Solomonoff induction seem to suggest one should do.

There seems to be a fundamental problem with the formalized version of rationality. The problem might be human nature itself: some people are unable to accept what they should do if they want to maximize their expected utility. Or we are missing something else and our theories are flawed. Either way, to solve this problem we need to research those issues and thereby increase confidence in the very methods used to decide what to do about risks from AI, or increase confidence in risks from AI directly, enough to make it look like a sensible option, a concrete and discernible problem that needs to be solved.

If you view the criticism the SIAI encounters in light of the above, then what most people mean when they doubt the reputation of those who claim that risks from AI need to be taken seriously, or who say that AGI might be far off, is that risks from AI are too vague to be taken into account at this point: nobody knows enough to make predictions about the topic right now.

Many people perceive the whole world to be at stake already, whether due to climate change, war or engineered pathogens. Telling them about risks from AI, when nobody seems to have any idea about the nature of intelligence, let alone general intelligence or the possibility of recursive self-improvement, just sounds like another problem, one that is too vague to outweigh all the other risks. Most people already feel as if a gun is pointed at their heads; telling them about superhuman monsters that might turn them into paperclips then needs some really good arguments to outweigh the combined risk of all the other problems.

It is true that a lot of people already work to mitigate climate change while major existential risks are completely ignored. But that argument seems to fail if people perceive the existential risk in question to be under a certain threshold. There seems to be a point where things become vague enough that they get discounted completely.

Summary: The problem isn’t that people doubt the possibility of artificial general intelligence. It is that most people would sooner question their grasp of “rationality” than give five dollars to a charity that tries to mitigate risks from AI because their calculations claim doing so is “rational”.

Uh, isn't the dominant factor here counterparty risk? I.e. if you flip 50+ heads in a row, the guy offering the bet will only be able to afford to pay you as if you flipped ~20, so even without logarithmic utility this bet isn't worth more than 20.
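A quick sketch of that cap effect, again assuming a 2^(k-1) payoff and treating the counterparty's bankroll as a hard ceiling on the payout (the bankroll levels are purely illustrative):

```python
# Expected value of the St. Petersburg gamble when the counterparty's
# bankroll caps the payout at some maximum amount. Payoff is assumed to be
# 2**(k-1) on the first head at flip k, with probability 2**-k.

def capped_ev(bankroll, max_terms=200):
    ev = 0.0
    for k in range(1, max_terms + 1):
        ev += 2 ** -k * min(2 ** (k - 1), bankroll)
    return ev

for flips_coverable in (10, 20, 30):
    bankroll = 2 ** (flips_coverable - 1)
    print(f"can cover {flips_coverable} flips: EV ~ {capped_ev(bankroll):.2f}")
# can cover 10 flips: EV ~ 5.50
# can cover 20 flips: EV ~ 10.50
# can cover 30 flips: EV ~ 15.50
```

However deep the pockets, the value only grows by about half a unit per extra flip the counterparty could cover.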

I agree with Robin. Manipulating the shape of these unobservable utility functions can't explain away something like this. More likely the cognitive load of imagining very low probability events outweighs the benefit of considering them (in some sort of rational manner).

There is a case for considering humans as not only utility maximizers, but also as embodiments of the principle of least effort.

http://en.m.wikipedia.org/w...

The level of risk aversion required to explain this behavior is extreme, and completely inconsistent with lots of other risk-taking behavior by the same sort of subjects.

If utility is logarithmic in wealth, shouldn't the wealthier have disdained the gamble, since the gain is that much less utility for them? Yet:

> Offers increase significantly with income.

More evidence that the rich are able to be less risk averse...

Addendum: the utility function must be bounded (asymptotic), not merely non-linear, because for any unbounded utility function the paradox can be recreated by inflating the payoffs until the expected utility is infinite again.
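A small check of why boundedness (rather than mere concavity) is the key property, under payoff schedules I'm assuming for illustration: log utility already tames the standard game, but an inflated game paying exp(2^(k-1)) makes expected log utility diverge again:

```python
# Expected log utility of the payoff for two versions of the gamble.
from math import log

# Standard game: payoff 2**(k-1) with probability 2**-k. Expected log
# utility converges (to log 2), so concavity alone resolves this version.
standard = sum(2 ** -k * log(2 ** (k - 1)) for k in range(1, 60))
print(f"standard game: {standard:.4f}")   # ~0.6931

# Inflated game: payoff exp(2**(k-1)), so log(payoff) = 2**(k-1) and each
# term contributes 2**-k * 2**(k-1) = 0.5 -- the partial sums never settle.
inflated = sum(2 ** -k * 2 ** (k - 1) for k in range(1, 60))
print(f"inflated game, first 59 terms: {inflated:.1f}")   # 29.5 and climbing
```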

What would you pay for a lottery ticket with an expected value of infinity, even if the variance is huge?

Really should have included the link to the modern English explanation: http://en.wikipedia.org/wik...

Robin, this problem is well known. What is at work here is a logarithmic (or some other non-linear) utility function. Measured from the base case, a large loss (the initial payment) moves utility more than an equally large gain (the eventual pay-off) does, not because of loss aversion, but because of the non-linear utility function.
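A minimal sketch of how that works out numerically, pricing the gamble under u(w) = log(w); the 2^(k-1) payoff convention and the wealth levels are illustrative assumptions of mine, not figures from the paper. It also bears on the earlier question about income: under log utility the break-even price creeps up with wealth, just very slowly.

```python
# Willingness to pay for the St. Petersburg gamble under log utility of
# wealth. The break-even price c solves E[log(W - c + payoff)] = log(W).
from math import log

def certainty_equivalent(wealth, max_terms=100):
    def eu_if_paying(price):
        # Expected log-wealth after paying `price` and playing the gamble.
        return sum(2 ** -k * log(wealth - price + 2 ** (k - 1))
                   for k in range(1, max_terms + 1))
    lo, hi = 0.0, float(wealth)      # break-even price lies in this range
    for _ in range(100):             # bisect on the decreasing EU-of-price
        mid = (lo + hi) / 2
        if eu_if_paying(mid) > log(wealth):
            lo = mid                 # still a good deal: price can go higher
        else:
            hi = mid
    return lo

for w in (100, 10_000, 1_000_000):
    print(f"wealth {w:>9}: would pay up to about {certainty_equivalent(w):.2f}")
# The break-even price stays small and rises only slowly
# (roughly logarithmically) with wealth.
```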

The paper appears to be arguing that the effect can be explained by neglect of small probabilities and by diminishing marginal utility of money. What is making you say otherwise?

This isn't explained very clearly, but if I understand it right the expected payoff is rather obviously linear in the cutoff (series length): each additional allowed flip adds the same fixed amount. Could none of the subjects do induction?
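A tiny check of that linearity, under the same assumed 2^(k-1) payoff and (purely for simplicity) a payoff of zero if no head appears before the cutoff:

```python
# Expected payoff of the truncated gamble as a function of the cutoff.
def ev_with_cutoff(cutoff):
    return sum(2 ** -k * 2 ** (k - 1) for k in range(1, cutoff + 1))

print([ev_with_cutoff(n) for n in (5, 10, 20, 40)])
# [2.5, 5.0, 10.0, 20.0] -- each extra allowed flip adds half a unit
```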
