10 Comments
callinginthewilderness:

Is it really the case that the election was decided on the cultural/far-mode issues? An alternative theory is that *reporting* and *the debate* were biased toward those issues, because of how easy, interesting, and tractable it is to fight and gain influence on that front, while in reality voters cared primarily about the near-mode economic issues.

Chris B:

This is how I see it as well.

Dmitrii Zelenskii:

Perhaps, but even those were viewed in a myopic way. "Inflation bad => incumbent bad" isn't really how anything works, but voters around the world think otherwise.

Chris B:

The narrative was easy enough to shape, regardless of how divorced it was from reality. Lots of people felt crunched and were happy to let the guy before the guy try again.

spriteless:

Do we have anything like prediction markets that you could use? Or, what could you start studying now, and then develop variations on, so that it becomes more like a prediction market? Charity aggregators like CFC and Benevity? Currency markets in general?

Berder:

A prediction market would not work for quantifying existential threats such as an AI apocalypse. The problem is the payoff matrix. Suppose I believe an apocalypse will happen, and therefore buy a share that pays $1 if it does. If the apocalypse doesn't happen, I lose what I paid. If, on the other hand, I'm right and the apocalypse *does* happen, I also lose what I paid, because society collapses and even if I manage to collect my $1 it will shortly be worthless. So prediction markets would systematically underestimate existential threats.
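
A minimal sketch of that payoff argument (the numbers below are made up for illustration; the comment gives none): once the post-apocalypse payout is treated as worthless, the expected profit from buying the "apocalypse happens" share is negative at any positive price, no matter how strongly the buyer believes doom is coming.

```python
# Illustrative sketch of the payoff-matrix point above.
# All numbers are hypothetical; the share nominally pays $1 on "apocalypse".

def expected_profit(p_doom: float, share_price: float,
                    payout_value_if_doom: float = 0.0) -> float:
    """Expected profit from buying one 'apocalypse happens' share.

    payout_value_if_doom is the real value of the $1 payout in a collapsed
    world; the comment's assumption is that it is roughly zero.
    """
    if_doom = payout_value_if_doom - share_price   # collect a worthless $1
    if_no_doom = -share_price                      # share expires worthless
    return p_doom * if_doom + (1 - p_doom) * if_no_doom

# Even a bettor certain of doom expects to lose the full share price,
# so informed "yes" buyers never bid the price up toward their true belief.
for p in (0.1, 0.5, 1.0):
    print(f"believed P(doom) = {p}: expected profit = {expected_profit(p, 0.05):+.2f}")
```

Under these assumptions the expected profit is simply minus the share price for every believed probability, which is the sense in which such a market would systematically understate the threat.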

Robin Hanson:

For every existential harm event there are many near miss events one can bet on.

Berder:

That may be true for some types of existential threats (it works for asteroids), but predictions of AI apocalypse don't generally involve near misses. If an unfriendly AI enters a runaway self-improvement process, its victory is then inevitable, so goes the scenario; there is no chance of fending it off. There are lots of existential threats where we don't get multiple chances. Global thermonuclear war, for example. (There are some arguable near-miss events for this, like the Vasily Arkhipov incident, but it's difficult to relate the chance of such events to the chance of actual global thermonuclear war happening.)

Robin Hanson:

A lack of near misses is more a feature of the scenario analyst than of reality. https://www.overcomingbias.com/p/foom-liability

Berder:

That's just your individual opinion. The dream of prediction markets is that the market can do *better* than anyone's individual opinion.

If bettors in an AI risk prediction market have a different opinion from you, the market won't accurately reflect their beliefs, because the payoff matrix doesn't reward being right about a total apocalypse.

This is a problem if the market is used to guide decisions and those bettors do happen to be right.

Also, even if there are near-misses, that doesn't tell you what the risk of a *total* apocalypse is. How many more things would have to go wrong to turn one of those near-misses into a full doom scenario? That's a question we'd like the market to answer, but it can't.

And it's not just AI; I mentioned global thermonuclear war. Actually any risk that destroys the value of the winning prediction market shares is affected by the same problem. For example, a prediction market cannot be used to judge the probability of a terrorist destroying the prediction market's servers, or of hostile regulation being passed that prevents the prediction market from paying out bets, or even just bankruptcy of the prediction market.
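
One way to put the near-miss point from two paragraphs up in numbers (purely illustrative probabilities, and assuming any doom scenario would first pass through a stage that would otherwise register as a near miss): a market that prices near-miss contracts well still pins down only one factor of the doom probability; the escalation factor is exactly the part the payoff problem prevents it from rewarding.

```python
# Illustrative decomposition only; both probabilities below are invented.
#   P(doom) = P(near miss) * P(doom | near miss)
# (assuming every doom scenario passes through a would-be near-miss stage)

p_near_miss = 0.10             # in principle priceable by a market (Hanson's point)
p_doom_given_near_miss = 0.20  # "how many more things go wrong": the factor the
                               # market cannot reward anyone for estimating well

p_doom = p_near_miss * p_doom_given_near_miss
print(f"P(doom) = {p_near_miss} * {p_doom_given_near_miss} = {p_doom:.2f}")
```

Two markets could agree on the near-miss number and still imply very different doom probabilities, depending on an escalation factor neither can elicit.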
