21 Comments

I'm still waiting for a science fiction writer to go around to scientists in a bunch of fields, collect only the statements about each field's future that leading scientists are reasonably damn sure about, and then write a story that incorporates *all* of them.

As long as we're thinking realistically about politics: the probability of your vote deciding an election is low enough that voting isn't worth the bother.
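
A back-of-the-envelope sketch of that claim, assuming a toy model in which n other voters each independently back one candidate with probability p, so your vote matters only if the others split exactly evenly; the electorate sizes and p = 0.51 are illustrative assumptions, not data:

```python
# Toy pivotal-vote model: P(your vote decides) = P(the n other voters tie).
# Computed in log space to avoid float underflow; n and p are assumptions.
import math

def log_prob_tie(n_others: int, p: float) -> float:
    """Log-probability that an even number n_others of voters split m vs m."""
    m = n_others // 2
    log_binom = math.lgamma(n_others + 1) - 2 * math.lgamma(m + 1)
    return log_binom + m * math.log(p) + m * math.log(1 - p)

for n in (1_000, 100_000, 10_000_000):
    print(f"n = {n:>10,}: P(pivotal) ~ 10^{log_prob_tie(n, 0.51) / math.log(10):.1f}")
```

Even a 51/49 lean drives the pivotal probability down astronomically as the electorate grows, which is the commenter's point.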

We vote very frequently, in social interaction and pageantry.

When I look into the distant past, I'd be more inclined to say that the people could hope for big changes that preserved their values and expect big changes that destroyed their values. Small changes, even changes small enough to be comprehended beforehand, were never on the table over long periods of time.

A few people living on seasteads is a small change; millions so living is a bigger change. A few orgs using futarchy is a small change, whole nations using it is a big change. We can hope for big changes, but should expect small ones.

"Finally, I’m very curious as to whether Robin thinks futarchy is a big change or a little one, because I’m suspicious that people may be biased towards thinking their own ideas are plausible incremental changes while other people’s are wild extrapolations."

Me too.

Let me disclaim that I may just be talking my book - I have powerful reasons to want Robin to be wrong. That said, here is why I think he's wrong:

First, he equates small influence today with small differences in the future. This assumes that we live in a fundamentally non-chaotic world, one which is insensitive to initial conditions (where small differences in the current world lead to small differences in outcome). Or at least that any sensitivity to initial conditions is unpredictable - that we can't find small present changes which will lead to large future ones.
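
A minimal illustration of that sensitivity point, using the logistic map at r = 4 (a textbook chaotic system; the map and starting values are stand-ins, not a model of the world):

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started a
# billionth apart, diverge to order-one differences within ~30 steps.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1 - x)

x, y = 0.3, 0.3 + 1e-9   # tiny difference in initial conditions
for step in range(1, 46):
    x, y = logistic(x), logistic(y)
    if step % 15 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```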

I disagree - I think the SIAI party line that various "activities undertaken to change the world" differ by orders of magnitude in expected utility is correct. I think smart people who study history and talk a lot with other smart people can find levers for change, which contradicts Robin's claim that there are no predictable levers for change.

Second, there is decreasing marginal utility in any niche of activity, including the niche of looking for small improvements in the future. I am skeptical that there are so many small improvements to be made that the return of the millionth person looking for a small improvement is greater than that of the first person looking for a large one. Naturally, due to the male risk-seeking desire to win status tournaments, people will be biased towards thinking they should be looking for a big improvement, but that doesn't mean everyone should look for small ones.
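
The diminishing-returns intuition as a toy model, with entirely made-up numbers for the prize V and the per-searcher success probability q:

```python
# If each of k searchers in a niche independently finds the improvement with
# probability q, the k-th searcher adds expected value V * q * (1-q)**(k-1).
def marginal_return(V: float, q: float, k: int) -> float:
    """Expected value added by the k-th searcher in a crowded niche."""
    return V * q * (1 - q) ** (k - 1)

small = marginal_return(V=1.0, q=0.01, k=1_000)  # 1,000th small-improvement seeker
big = marginal_return(V=100.0, q=0.001, k=1)     # 1st large-improvement seeker
print(f"1,000th small-seeker adds ~{small:.1e}; 1st big-seeker adds ~{big:.1e}")
# The commenter's "millionth" searcher would add vastly less still.
```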

Finally, I'm very curious as to whether Robin thinks futarchy is a big change or a little one, because I'm suspicious that people may be biased towards thinking their own ideas are plausible incremental changes while other people's are wild extrapolations. I might think "we've never had a political system ruled by profit-seeking gamblers, that's a huge change!", while Robin thinks "large parts of our economic systems are ruled by profit-seeking gamblers, it's only a small change to extend that to the political sphere".

I wish more people took these values / ideas to politics.

That is, don't think in terms of what you think is the ideal set of laws / government, think in terms of what practical changes to the existing system help address the problems at hand.

This is my gripe with die-hard libertarians. It seems to me they live in a world where they think they can change all kinds of structures of society at once, and that only then will everything work out for more wealth and greater justice. I'm not sure the ideal really works, but the more immediate point is that it doesn't matter: all that is ever really on the table is a small adjustment to what we have going on, and in the context of what we have going on, it's sometimes an adjustment toward "larger government" or "more regulation" that actually grants greater liberty, financial or otherwise.

One popular hypothesis holding that our influence is limited is "technological determinism". However, the extent to which technological determinism actually holds is not terribly clear.

The conclusion of the post depends on a conditional statement that appears partway through: "if we are actually very constrained in our influence".

No case for that conditional being true seems to be made in the post - but by the time the end comes, it seems to have been forgotten, and its truth is simply assumed.

Robin, I think that one difference between the two of us is this: you think that far-mode thinking patterns only work when you use them, while I think that they only work when whoever is using them is also consciously using the scientific method (E.T. Jaynes' version), and that near-mode thinking patterns NEVER work very well, though they are fast. System 1 and System 2 all over again.

OB types need to develop better near-mode patterns to be effective in real time, but to understand the world, when time isn't an issue, they just need to get better at using far-mode patterns, and maybe at introspection. Most of all, to understand the world they simply need more data, which Tyler pursues impressively.

Yes of course, if you have time, consider a whole probability distribution of outcomes, and take expected values regarding actions. Beware of using these as excuses to focus on far-fetched corners of outcome space. We expect a strong far-view bias regarding the far future, overemphasizing far theories and ideals, and unlikely events. Surely one should typically first analyze the most typical outcomes.
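
A minimal sketch of that advice, with invented outcome probabilities and payoffs: score an action by its expected value over the whole distribution, but notice how much of that expectation can ride on a far-fetched corner.

```python
# Hypothetical outcome distribution and payoffs for some action.
outcomes = {"business as usual": 0.90, "moderate upheaval": 0.09,
            "radical transformation": 0.01}
payoff = {"business as usual": 1.0, "moderate upheaval": 5.0,
          "radical transformation": 200.0}

ev = sum(p * payoff[o] for o, p in outcomes.items())
tail = outcomes["radical transformation"] * payoff["radical transformation"]
print(f"expected value = {ev:.2f}; {tail / ev:.0%} of it rides on a 1% scenario")
```

Here roughly 60% of the expected value comes from the 1%-probability outcome - exactly the temptation the comment warns against.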

I also operate under the assumption that my ability to influence the future is likely quite limited, but I don't see why that implies I should spend most of my efforts trying to obtain a large probability of a small change to the "default" outcome, as opposed to aiming for a small probability of a large change.

It's unclear to me which is the better approach for trying to influence the future, and I tried to explain why in "Value Uncertainty and the Singleton Scenario". I'd be interested in your thoughts on my arguments in that post.
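
The trade-off in bare arithmetic, with illustrative numbers: under risk-neutral expected value the two strategies can tie exactly, so the choice turns on the shape of one's utility over the size of the change.

```python
# (P(success), size of change) for each hypothetical strategy.
strategies = {
    "small change, high probability": (0.50, 1.0),
    "large change, low probability":  (0.005, 100.0),
}
for name, (p, size) in strategies.items():
    print(f"{name}: EV = {p * size:.2f}")   # both print 0.50
# Diminishing utility in change-size favors the first strategy; threshold
# effects or increasing returns favor the second.
```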

Robin Hanson's argument is excellent.

I agree with the responses: don't just consider the most likely amount of influence you'll end up having, and the future is hard to predict.

Neither of these takes much wind out of the argument's sails.

Another consideration: it may be hard to predict how much influence several thousand low-influence folks can end up wielding collectively (even if uncoordinated), even though it's easy to predict that you yourself can make only the slightest difference (unless you're the one to accidentally unleash insufficiently friendly self-improving AI).

When you start to have a direct influence on the events you are trying to predict, probability theory (Bayes) starts to break down. You are not just predicting the future, you are also making it. Bayesian induction is simply the (limiting) special case of Categorization where your influence on external events is zero.

As I've said, when you are the market maker in a prediction market, you set the odds you desire for the future. Under conditions of high uncertainty and volatility, as Roko says, the probabilities jump around all over the place, and thus it is easier to be the market maker.
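
For concreteness, a minimal sketch of an automated prediction-market maker, using Hanson's logarithmic market scoring rule (LMSR); the liquidity parameter b and the trade below are invented for illustration:

```python
import math

def lmsr_cost(q_yes: float, q_no: float, b: float) -> float:
    """LMSR cost function C(q) = b * ln(exp(q_yes/b) + exp(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes: float, q_no: float, b: float) -> float:
    """Instantaneous implied probability of the YES outcome."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

b, q_yes, q_no = 100.0, 0.0, 0.0
print(f"initial P(yes) = {price_yes(q_yes, q_no, b):.2f}")           # 0.50
charge = lmsr_cost(q_yes + 50, q_no, b) - lmsr_cost(q_yes, q_no, b)  # buy 50 YES
q_yes += 50
print(f"after trade: P(yes) = {price_yes(q_yes, q_no, b):.2f}, paid {charge:.2f}")
```

One relevant property: the LMSR maker's worst-case loss is bounded by b * ln(2) for two outcomes, so volatile odds need not ruin the maker.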

This is all great until you realize that the ~century from now future is really hard to predict, because of massive logical and empirical uncertainty. Political change, demographic change, technological change, uncertainty about anthropics and metaethics all interact in the most hideously complex way. Put these together and 2100 is smeared out like a wide Gaussian.
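
A miniature of that smearing-out, under arbitrary assumptions (ninety years of small, independent log-normal shocks compounding):

```python
import random, statistics

random.seed(0)

def world_2100() -> float:
    """Compound ~90 years of independent, uncertain annual growth factors."""
    growth = 1.0
    for _year in range(90):
        growth *= random.lognormvariate(0.0, 0.1)  # shock size is an assumption
    return growth

samples = [world_2100() for _ in range(10_000)]
qs = statistics.quantiles(samples, n=20)  # 19 cut points: 5%, 10%, ..., 95%
print(f"5th pct ~ {qs[0]:.2f}x, median ~ {qs[9]:.2f}x, 95th pct ~ {qs[18]:.2f}x")
```

Even with modest per-year uncertainty, the century-out distribution spans more than an order of magnitude.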
