29 Comments

I meant 1890 compared to 1960.

Concentration is modestly higher than in 1960.

Yes, the process never did go to the extremes that Marx envisioned, but compared to, say, 1960...

Ok, thanks for your responses.

I'm just not going to get into the habit of publicly accusing particular people of particular biases, at least not without a strong reason to do so.

Ah, thanks.

Slight change of topic: Do you think this bias ("expecting concentration to be very natural or harmful") is more present in Eliezer than in you, and that this difference is a substantial cause of y'all's disagreements about AGI dangers?

Most libertarians dislike and distrust large firms, just like most other people.

Ok. I think I understand you a bit better, but I'm confused about why "Note the strong parallels with the usual concern about large firms in capitalism" is mentioned. You seem to be saying that whether or not you have that concern is irrelevant to how biased you probably are. Is the mention just an example of the bias against concentration leading to a wrong conclusion?

Edit: Also, on the cultishness assertion, I look forward to clarification/response from McCluskey.

Being libertarian-ish isn't remotely enough to protect you against biases of expecting concentration to be very natural or harmful.

>It seems crazy cultish to me to, when guessing if this bias might be a problem, to put much weight on estimating the personal bias-resisting abilities of two particular people. It’s a big world, and they too are human.

I think a lot of the people who fear one or a few powerful AGIs are, like Eliezer, libertarian-ish, market-liking, decentralized-system-appreciating people. This goes well beyond two people, even if McCluskey only names two as examples.

The 1890 firm concentration was vastly smaller than Marx envisioned, workers then were on average better off than in 1800, and economists and historians disagree with you about the causes of firm size and of changes in it.

In the 1890s, the US economy was indeed characterized by the immiseration of workers, huge wealth inequality, and strong concentration of firms. This was changed by organized labor and government intervention.

Also, the reason that firms tend to grow even when it's not actually efficient is that growth increases the power and wealth of management, and management is who decides whether a firm grows. To paraphrase Walter White, lots of executives aren't in the profit-making business; they're in the *empire* business. (As for why nobody conquered the world, well, the Mongols came pretty damn close - there were literally no other military forces in the world that could stand up to them.)

>I presume you'd agree that ems could be plenty fast enough to fill an oversight role well.

No. Ems seem likely to be somewhat close to fast enough, but I consider it somewhat plausible that ems will be slow relative to a pure agent AGI (maybe due to working-memory constraints), or expensive (due to a need to run processes that were created to control bodies).

>If so, we should want ems sooner.

Yes, if that's the only important safety effect of ems, then it's highly desirable to have ems sooner.

I replied at the link.

I presume you'd agree that ems could be plenty fast enough to fill an oversight role well. If so, we should want ems sooner.

I'd say past bio, business, and software systems show similar patterns, making it very plausible that AI will also show similar patterns.

I think we agree on a lot here.

Drexler's paper deserves at least as much attention as Superintelligence, and I intended my post to help increase the attention that Drexler's ideas get.

I'm about 85% confident that Drexler's claims about modularity are close enough to being right for his purposes.

I'm only around 55% confident that Drexler is right about human oversight being fast enough to compete with agent AGIs - I think that's my biggest doubt.

For comparison, I think MIRI has more like a 5% chance of producing insights that are sufficient to avoid global catastrophe from AI, and that's good enough that I occasionally contribute a bit of money to them. So my doubts about Drexler's ideas are very much consistent with my generally favorable attitude toward this paper.

I don't see a strong correlation between people's attitudes toward business concentration and their attitudes toward software concentration. In particular, people associated with MIRI tend to be comfortable with market forces, while expecting software to be very centralized. That leads me to doubt that a business analogy describes the most important biases.

It seems like much more of the disagreement is explained by differing opinions about the relevance of past patterns of large software systems. I'll guess that that's often related to how much experience people have with writing software, but I don't see as much evidence as I'd like to see on this question.

(Also, I wasn't trying to evaluate people's overall resistance to bias.)
