
"Nor does any data collected in the last century test his belief that the best governments are single rulers running city-sized polities with iron fists and complete discretion. It is not even clear what prior data makes his case."

It's nice to hear you actually go on the offense and criticize his favored system; however, your criticism isn't necessarily relevant to his claims against Futarchy (which can stand on their own against your system, albeit critically rather than constructively). You also haven't shown why induction is better than deduction, which by nature deals with generally applicable rules (if not, the logic becomes unsound). During the debate you were too busy defending your own views to really investigate his. If you actually looked into how he defends his neocameralist formalism, he does cite plenty of historical examples of his system working out. It's not hard to think of them. Off the top of my head, Monaco is a great one. Singapore is another.


Hanson, Moldbug, most of the commenters here - you are all would-be fascists, technocrats, nerds clutching for power. The idea of either of them or any of you having any power over anyone is truly repulsive.

"Friendly AI might be able to create such a design" Typical.


To draw an analogy, I think Mencius is arguing in favour of horse-drawn carriages because they have worked well in the past, and the present era is the earliest days of automobile (algorithmic government) design. I'm pretty sure that the earliest cars were also pretty bad compared to good carriages.

The horse is organic. The monarchy with the big man feels natural to most of humanity.

The horse is tried and tested technology. The automobile is relatively new. Some have done it well; most are not doing it that well.

There are limits to a horse's power, or even to the power of multiple horses. Similarly, there are always limits to the sovereign's thinking power. A well-designed automobile will beat any number of horses one day, but we don't have such a design yet.

I think Friendly AI might be able to create such a design.


http://online.wsj.com/artic...

The Supreme Court just struck down several limits on political spending by corporations.


So does this argument mean that we should replace Wall Street with two people trying to work out what asset prices should be?

Which two people?


The irony is that 'Less Wrong' (with many enthusiastic Libertarians) deploys a democratic voting system to 'direct focus of attention' to sub-agents supporting LW dogma, which actually runs contrary to the 'free market of ideas'. This voting system awards points (karma) in order to categorize sub-agents. Undoubtedly an AI would have to deploy something similar, and it would no doubt take precedence over Bayesian inference.
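
To make the mechanism concrete, here is a minimal sketch (my own illustration, not anything LW actually runs; all names and numbers are invented) of how a karma-style vote tally can decide which sub-agents get attention at all, before any inference happens:

```python
# Hypothetical sketch: karma-weighted attention allocation.
# Agent names, karma scores, and the attention budget are invented.

sub_agents = [
    {"name": "orthodox_view",   "karma": 120, "claim": "prior consensus"},
    {"name": "minor_variation", "karma": 45,  "claim": "small tweak"},
    {"name": "dissenting_view", "karma": -3,  "claim": "contrary evidence"},
]

def allocate_attention(agents, budget=2):
    """Read only the top-`budget` agents by karma; the rest are ignored."""
    ranked = sorted(agents, key=lambda a: a["karma"], reverse=True)
    return ranked[:budget]

for agent in allocate_attention(sub_agents):
    print(agent["name"], "gets read:", agent["claim"])
# The dissenting agent is filtered out before any Bayesian update happens,
# which is the sense in which the vote "takes precedence" over inference.
```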



"poor quality form of collective intelligence"

Agreed.

Are two people going to make a better decision if they write bets down on paper and reconcile the results with a market mechanism, or if they discuss the naturally fuzzy problem as people usually do? How about with 10 people, 100 people? Does any system enable 1000 people to make better decisions than 100 (once the participants are all moderately selected experts)?

As the group size increases, people stop talking face to face and fall into hierarchies, mechanisms for booting out uninformative people kick in, and various conscious and subconscious norms for how to participate in such an arrangement appear. People become constrained in what they can say, who they can say it to, etc. It's no longer artificially simple, sadly. I suppose that's the downside.


From an engineering perspective, Moldbug is correct. If you have a logical problem in your software design - i.e., a problem you have diagnosed via deduction - you do not ship that code. Even if the code is working on your staging environment, you do not ship it. If you accidentally release the code and it works, you still fix the logical problem. You have to assume that you are just getting lucky and that it will break at the worst possible time.
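
As a hypothetical illustration of that point (my example, not Moldbug's): code can pass every staging run and still carry a deductively identifiable flaw that only bites later.

```python
# Hypothetical example of a logical flaw that happens to pass testing.
# The bug is identifiable by deduction alone: the function assumes the
# list is non-empty, so it must eventually fail, even if staging data
# never happens to exercise that case.

def average_latency(samples_ms):
    return sum(samples_ms) / len(samples_ms)  # ZeroDivisionError when samples_ms == []

# Staging "works": traffic always produces at least one sample.
print(average_latency([12.0, 15.5, 11.2]))

# The deduced fix ships anyway, even though nothing has broken yet.
def average_latency_fixed(samples_ms):
    if not samples_ms:
        return 0.0  # or raise a clear, intentional error
    return sum(samples_ms) / len(samples_ms)
```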

So even if futarchy worked on small-scale systems, that would not convince me that it was a good design. It would both have to work, and the actual working of it would have to reveal the flaws in my logic and deduction.

Of course, if you have an idea that works via logic, you must also test it in the real world, in a safe, contained environment, before releasing it at large scale.

So neither deduction nor experimental results are enough alone. For any well-engineered system, you must have both.

It is not even clear what prior data makes his case – apparently it can’t be summarized in any concise form; you have to just read dozens of books and have a feel for it.

This is how actual management works in the real world, when extensive data is not available. And in the case of both futarchy and Moldbug's formalism, good data is not available, so a "feel for it" is all you've got right now. Also, books are a form of data, a much richer form of data than straight statistics where complicated problems have been reduced to a sterile number. First-hand accounts are also far richer than lab experiments with college students.


Betting markets of any kind can easily be beaten by manipulation, simply by applying collusion. Take poker as an example. It's only a fair game if each agent bets individually, but when two or more agents coordinate their betting they can easily rip off the other players. Agents simply have to coordinate to overpower any market and rip everyone off with ridiculous ease.
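
For what it's worth, here is a rough toy model (my own sketch, using Hanson's logarithmic market scoring rule with invented parameters) of the kind of coordinated trading this describes; it only shows how a bloc of traders moves the quoted probability, not whether doing so is profitable for them:

```python
import math

# Toy logarithmic market scoring rule (LMSR) market maker.
# b controls liquidity; q_yes/q_no are outstanding shares of each outcome.

def price_yes(q_yes, q_no, b=100.0):
    """Current implied probability of YES."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def cost(q_yes, q_no, b=100.0):
    """LMSR cost function; a trade costs the difference in C before/after."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

q_yes = q_no = 0.0
print("price before collusion:", round(price_yes(q_yes, q_no), 3))  # 0.5

# Five colluding traders each buy 60 YES shares, pushing the price up.
total_spent = 0.0
for _ in range(5):
    new_q_yes = q_yes + 60.0
    total_spent += cost(new_q_yes, q_no) - cost(q_yes, q_no)
    q_yes = new_q_yes

print("price after collusion:", round(price_yes(q_yes, q_no), 3))
print("total spent by the bloc:", round(total_spent, 2))
# Whether the bloc profits depends on who trades against them afterwards,
# which is exactly what Hanson's manipulation studies argue about.
```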

Many on transhumanist lists seem to be so utterly blinded by Libertarian dogma that they have lost touch with reality. For example, I see AI designers proposing ludicrous theories of mind in which the mind is treated as 'a mini market' of trading sub-agents. I tell you all again, Bayes is a bad basis for a rational foundation, and AI designers who try to use prediction-market-like systems (e.g. CEV) are headed for a tremendous beating.

The reasons have been touched on in Robin's earlier thread. Not every information-processing agent should be given equal attention, so there needs to be a way of deciding 'which sub-agents should be heard'. This deeper system is a system of coordination of agent behavior. As I mentioned, manipulating the attention given to each sub-agent totally warps the probabilistic reasoning of the system because of bias (missing info, selection effects, etc.).
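
A minimal sketch of the warping being described (again my own toy example, with invented numbers): if an attention filter silences some sub-agents, the pooled estimate is computed over a biased sample of reports.

```python
# Toy illustration: pooling estimates only from "attended" sub-agents
# biases the aggregate relative to pooling everyone.

reports = [
    # (estimated probability of some event, attention weight granted)
    (0.80, 1.0),  # popular sub-agents echoing the favoured conclusion
    (0.75, 1.0),
    (0.78, 0.9),
    (0.30, 0.1),  # unpopular sub-agents holding contrary information
    (0.25, 0.0),
]

def pooled(reps, min_attention=0.0):
    heard = [p for p, w in reps if w >= min_attention]
    return sum(heard) / len(heard)

print("pooling every sub-agent:   ", round(pooled(reports), 3))
print("pooling only 'heard' ones: ", round(pooled(reports, min_attention=0.5), 3))
# The gap between the two numbers is pure selection effect: no new
# evidence arrived, only a change in who was allowed to speak.
```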

The deeper system of agent coordination is analogical inference, the real basis for mind.


sconzey, in the linked Moldbug post Hanson is quoted as saying "We're both entertainers". So maybe it wasn't JUST for our entertainment.

Harold, I believe Hanson has said that the price has to stabilize around a certain range, so you can't disrupt it at the last minute without anyone being able to correct it.
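
As I understand that point, the rule is something like the following sketch (one possible version, with invented numbers, not necessarily the exact rule Hanson proposed): the decision only triggers if the price stays in range over a window, so a last-minute spike has time to be corrected.

```python
# Hypothetical "sustained price" decision rule: act only if the market
# price stays above a threshold for the last `window` observations.
# The exact rule Hanson favours may differ; this is just the general shape.

def decision_triggered(prices, threshold=0.6, window=5):
    recent = prices[-window:]
    return len(recent) == window and all(p >= threshold for p in recent)

steady = [0.55, 0.62, 0.64, 0.63, 0.66, 0.65]
spiked = [0.40, 0.42, 0.41, 0.43, 0.44, 0.95]  # manipulated at the last minute

print(decision_triggered(steady))  # True: price held above 0.6 for the window
print(decision_triggered(spiked))  # False: a single late spike is not enough
```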

tom, because lab experiments are limited, Hanson also bases his case on field data regarding attempts to manipulate larger markets.

Tim Tyler: what's a cooperative alternative to futarchy?

"That is mostly how the brain works, for example." That reminds me of a bit in Bertrand de Jouvenel's "On Power". Herbert Spencer used biological/anatomical analogies to justify quasi-anarchist libertarianism. But of course our bodies operate more along the lines of fascist principles!

"individual vs collective intelligence" Oddly enough, I think that widely-linked critique of the Phantom Menace helps make the point. When George Lucas had complete control, he did a much worse job than when he was hampered and had to compromise with others.

Brian Wang, recalling governors is an interesting analogue to firing CEOs. However, David Friedman's point about how bidding wars are different from decision markets remains.


I think comparing various democracies, with their checks and balances, to joint-stock corporations, with their far less complicated decision hierarchies, is a better comparison than the one to the brain.


I can see that volunteer studies may help confirm generally that manipulators may not poison a small market. But part of my comfort is that I think the burden of proof is pretty low to justify tests of things like CEO departure markets.

Haven't you thought that there are huge limits on the value of volunteer studies here when the incentives are necessarily so small (I'm guessing nobody ever got rich winning in your tests, or even paid his rent for a month) and so different in character from those of participants in a market with meaningful amounts of money?

If people can make a lot of money in your studies, please sign me up. I may even see if Moldbug would pay me to act a certain way.


Assume Y is less than X. Who is taking the other side of the X trade and what are their motivations? A side-market would also function as a market. Arbitrage between markets would make it such that it doesn't matter if Y < X.
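
A worked toy example of that arbitrage argument (my numbers, and a deliberately simplified two-market setup): if a manipulator with X dollars pushes the decision market's price away from a side market's price, anyone outside the original pool of Y dollars can lock in a riskless profit, and in doing so pushes the prices back together.

```python
# Toy arbitrage between a manipulated decision market and a side market
# quoting the same binary outcome. All numbers are invented.

p_manipulated = 0.70   # YES price pushed up by a manipulator in the main market
p_side        = 0.55   # YES price in the side market

# Buy NO in the main market (costs 1 - p_manipulated) and YES in the side
# market (costs p_side). Exactly one of the two contracts pays out $1 at
# resolution, so the bundle's payoff is $1 no matter what happens.
bundle_cost = (1.0 - p_manipulated) + p_side
guaranteed_profit = 1.0 - bundle_cost
print(round(guaranteed_profit, 2))   # 0.15 per contract pair, risk-free

# Because this profit is available to anyone, not just the Y dollars already
# in the decision market, the buying and selling pressure pulls the two
# prices back together; that's the sense in which Y < X stops mattering.
```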


"For me, government safety is like airplane safety. Not only do I want a watertight proof that Y is greater than X, I want two or three parallel and independent proofs."

If you're on a modern airliner, moving to an aeroplane designed by a social scientist that is statistically pretty good at staying aloft is a bad idea. But if you're currently on a plane whose main certification is that people have a "good feeling" about it, changing might be a good idea.
