My last post seems to be an example of an interesting general situation: abstractions from different fields conflicting on a particular topic. In the case of my last post, the topic was the relative growth rate feasible for a small project hoping to create superintelligence, and the abstractions that seem to conflict are the ones I use, mostly from economics, and the ones used by Bostrom, Yudkowsky, and many other futurists, drawn from computer practice and elsewhere.
What typically happens when it seems that abstractions from field A suggest X, while abstractions from field B suggest not X? Well first, since X and not X can't both be true, each field would likely see the situation as a threat to its reputation. If a field were forced to accept that the conflict exists, it would likely try to denigrate the other field. If one field is higher status, the other field would expect to lose a reputation fight, and so it would be especially eager to reject the claim that a conflict exists.
And in fact, it should usually be possible to reject a claim that a conflict exists. The judgment that a conflict exists would come from specific individuals studying the questions of whether A suggests X and whether B suggests not X. One could just suggest that some of those people were incompetent at analyzing the implications of the abstractions of a particular field. Or that they were talking past each other, misunderstanding what X and not X mean to the other side. So one would need especially impeccable credentials to publicly make these claims and make them stick.
The ideal package of expertise for investigating such an issue would be expertise in both fields A and B. This would position one well to notice that a conflict exists, and to minimize the chance of problems arising from misunderstandings about what X means. Unfortunately, our institutions for crediting expertise don't do well at encouraging combined expertise. For example, patrons are often interested in the intersection between fields A and B, and sponsor conferences, journal issues, etc. on this intersection. However, seeking maximal prestige, they usually prefer people with the most prestige in each separate field over people who actually know both fields simultaneously. Anticipating this, people usually choose to stay within a single field.
Anticipating this whole scenario, people will usually avoid seeking out or calling attention to such conflicts. To pursue a conflict, you'd have to be especially confident that your field would back you up in a fight, both because your credentials are impeccable and because the field thinks it could win a status contest with the other field. And even then you'd have to waste some time studying a field that your own field doesn't respect. Even if you won the fight, you might lose prestige within your own field.
This is unfortunate, because such conflicts seem to be especially useful clues for refining our important abstractions. By definition, abstractions draw inferences from reduced descriptions, descriptions which ignore many details. Usually that is useful, but sometimes it leads to errors, when the dropped details turn out to be especially relevant. Intellectual progress would probably be promoted if we could somehow induce more people to pursue apparent conflicts between the abstractions of different fields.
I'm strictly an amateur here, but let me take this chance to expand on my intuition: there was no blacksmith who invented industry and used it to take over the world. There were countries that advanced relatively quickly in industry and gained significant relative advantages. The continent of Europe pretty much did invent industry and use it to take over the world.
So, if interactions among firms during the AI transition end up being most like interactions among individuals during the industry transition, we might see some firms get rich but not to the point of hegemony. If firms are more like countries, we might see the top firms as a group come to dominate the world, but with power balanced among many firms. And if firms are more like continents, we might get a foom.
So why did industry have different distributional effects at different scales? It's not an easy question, but it's certainly within the field of economics and subject to economic modeling.
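To make the kind of modeling I have in mind concrete, here is a minimal toy sketch of my own (a hypothetical illustration, not any standard model from the literature): suppose each of N competitors grows exponentially, with growth rates spread more widely at coarser scales, where an innovation diffuses less evenly. The eventual concentration of output then depends on how many competitors there are and how spread out their growth rates are.

```python
import math
import random

def top_share(n_entities, mean_rate, rate_spread, years, seed=0):
    """Toy model: n_entities each grow exponentially for `years`,
    with annual growth rates drawn uniformly from
    [mean_rate - rate_spread, mean_rate + rate_spread].
    Returns the largest entity's share of total output at the end.
    (All parameters are illustrative, not calibrated to history.)"""
    rng = random.Random(seed)
    rates = [mean_rate + rng.uniform(-rate_spread, rate_spread)
             for _ in range(n_entities)]
    sizes = [math.exp(r * years) for r in rates]
    return max(sizes) / sum(sizes)

# Many individuals, small rate differences: some get rich, no hegemony.
print(top_share(n_entities=100_000, mean_rate=0.02, rate_spread=0.005, years=150))
# A few dozen countries, larger differences: a top group dominates.
print(top_share(n_entities=30, mean_rate=0.02, rate_spread=0.01, years=150))
# A handful of continents, one far ahead: something foom-like.
print(top_share(n_entities=5, mean_rate=0.02, rate_spread=0.02, years=150))
```

Directionally, the top entity's share is negligible with many competitors and narrow rate differences, modest with a few dozen, and approaches hegemony with a handful and wide differences, echoing the three outcomes above. A real model would need to add trade, imitation, and catch-up dynamics; this sketch only shows how scale and rate spread interact.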
This seems to suggest that there is much value in cross-field collaborations such as those fostered by the Santa Fe Institute.
[Entertaining-but-instructive aside: I just ran across a fascinating & hilarious interview with SFI cofounder Murray Gell-Mann, of whom Paul Kauffman once said he "...may know more things than any other single human being", but who (we learn in the interview) considers himself a major league slacker: http://www.achievement.org/.... He touches on many topics, including academic silos, practical economics, and his own many neuroses.]