6 Comments

This reminds me of Ernest Gellner's chapter "The Need for Philosophic History" in Plough, Sword and Book, which aims to give an explicit general theory of human history. Gellner was very much a generalist, but his approach wasn't as rigorous and formal as that suggested here.

"We inevitably assume a pattern of human history. There issimply no choice concerning whether we use such a pattern. Weare, all of us, philosophical historians malgre nous, whether wewish it or not. The only choice we do have is whether we makeour vision as explicit, coherent and compatible with availablefacts as we can, or whether we employ it more or lessunconsciously and incoherently. If we do the latter, we risk usingideas without examination and criticism, passed off tacitly assome kind of "common sense". ...

The joint result of our inescapable need for possessing some backcloth vision of history, and of the low esteem in which elaboration of global historical patterns is at present held, is a most paradoxical situation: the ideas of nineteenth-century philosophers of history such as Hegel, Marx, Comte, or Spencer are treated with scant respect and yet are everywhere in use."

http://14.139.206.50:8080/jspui/bitstream/1/2215/1/Gellner,%20Ernest%20-%20Plough,%20Sword,%20and%20Book%20The%20Structure%20of%20Human%20History%201989.pdf


"I’d suggest just picking some more limited category, such as perhaps government regulations, collecting some plausible data points, making some guesses about what useful features might be, and then just doing a quick survey of some social scientists where they each fill in the data table with their best guesses for data point features. If you ask enough people, you can average out a lot of individual noise, and at least have a data set about what social scientists think are features of items in this area. With this you could start to do some exploratory data analysis, and start to think about what theories might well account for the patterns you see."

This might be less tedious and labor-intensive than it seems. The machine-learning methods that are currently booming look like a perfect fit for making (and testing) predictions based on these features. You don't have to sift through these features; the algorithms will figure out which ones are relevant, and in what sense.
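The workflow described above — averaging out respondent noise, then sifting features for relevance — can be sketched in a few lines. Everything here is hypothetical: the three "surveys," the feature values, and the outcome column are made-up stand-ins for whatever a real survey of social scientists would produce.

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    # Plain Pearson correlation; a real analysis would likely use an
    # ML model (e.g. a random forest) to do this sifting automatically.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Three (hypothetical) social scientists each guess feature values for
# four regulations: rows are regulations, columns are guessed features.
surveys = [
    [[3, 1, 4], [2, 2, 5], [5, 1, 1], [4, 3, 2]],
    [[2, 1, 5], [3, 2, 4], [4, 2, 1], [4, 4, 2]],
    [[4, 2, 4], [2, 3, 5], [5, 1, 2], [3, 3, 3]],
]

# Average across respondents to wash out individual noise.
n_items, n_feats = len(surveys[0]), len(surveys[0][0])
averaged = [[mean([s[i][j] for s in surveys]) for j in range(n_feats)]
            for i in range(n_items)]

# A hypothetical outcome to predict, e.g. how often each regulation
# gets amended. Exploratory analysis = rank features by relevance.
outcome = [1.0, 0.5, 3.0, 2.0]
for j in range(n_feats):
    col = [row[j] for row in averaged]
    print(f"feature {j}: r = {pearson(col, outcome):+.2f}")
```

With more features and data points, swapping the correlation ranking for a learned model would let the algorithm decide not just which features matter but in what (possibly nonlinear) sense.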


"Almost all research into human behavior focuses on particular behaviors."

I take it you're on the "psychology is the study of behavior, not the study of the mind" side of the fence. :-)

I've been thinking a lot about this sort of problem. Academic publishing on the web produces and stores a massive amount of data on a seemingly endless variety of topics, and we're unable to translate it into something easily conveyed and understood by the layman. Even experts struggle with problems only slightly outside their field. The number of distinct terms for nearly identical concepts, for instance, is quite frustrating.

I originally got to thinking about this when reading an article about the Stanford Encyclopedia of Philosophy, and thinking how impressive what they managed to accomplish was. I also read into the failures of Citizendium. I think, had I been in Larry Sanger's shoes, I would have made the same mistakes.

Organizing a data set isn't a direction I had thought of, however. I was thinking in terms of relying on existing data sets and simply determining which are the best ones, along with the best explanations of their results.

I agree about the necessity of having multiple layers of review going beyond the existing ones. I also think there is an upper limit on how much content can be created while still being reasonably well organized. So it's important to think very carefully about a meta-strategy for determining what sort of content does and does not need to be added.

Another argument out there is Nick Bostrom's: that a superintelligence might be created simply by finding a way to organize the web to make better use of the collective intelligence of humanity. Such a superintelligence would, I think, be more likely to improve slowly rather than quickly, which would reduce the risk of a doomsday scenario.

The big trillion-dollar question to me: is there a better method to find information on the web than a search engine? How could such a method be created?


It is true that small changes ultimately bring about the big changes that reflect a country's focus and the hard work of its students.


But most of these academic papers on particular human behaviors do in fact go to the bother of substantially formalizing their data, their theories, or both. And if it is worth the bother to do this for all of these particular behaviors, it is hard to see why it wouldn't be worth the bother for the broader generalizations we make from them.

You've argued formalization in the social sciences often isn't worth the bother...


I'm no social scientist, but in some cases couldn't you design experiment templates which can be repeated in a variety of circumstances? For example, to test marginalism you might lower the price of many goods and services in many different markets and record the results.

To test near-far theory, you might ask participants to plan a task X time in advance, requiring that they perform the task as planned at the scheduled time. The experiment could be repeated for many different tasks, recording the success rates and plan divergence for different values of X.

etc.
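One way to make the "experiment template" idea concrete is a parameterized record that gets instantiated once per circumstance. This is only a sketch; the field names, tasks, and parameter values below are all hypothetical illustrations of the two templates described above.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentTemplate:
    """A reusable design: only the parameters vary across repetitions."""
    name: str
    manipulated_variable: str     # what the experimenter varies
    measured_outcomes: list       # what gets recorded each run

@dataclass
class ExperimentRun:
    template: ExperimentTemplate
    parameters: dict              # circumstance-specific settings
    results: dict = field(default_factory=dict)  # filled in after the run

# The marginalism template, instantiated in two different markets.
marginalism = ExperimentTemplate(
    name="marginalism",
    manipulated_variable="price_cut_pct",
    measured_outcomes=["quantity_sold"],
)
runs = [
    ExperimentRun(marginalism, {"good": "coffee", "price_cut_pct": 10}),
    ExperimentRun(marginalism, {"good": "haircuts", "price_cut_pct": 10}),
]

# The near-far template: vary lead time X across tasks, and record
# success rate and how far execution diverged from the plan.
near_far = ExperimentTemplate(
    name="near_far",
    manipulated_variable="lead_time_days",
    measured_outcomes=["success_rate", "plan_divergence"],
)
runs += [
    ExperimentRun(near_far, {"task": task, "lead_time_days": x})
    for task in ("write report", "exercise")
    for x in (1, 7, 30)
]

print(len(runs))  # 2 marginalism runs + 6 near-far runs = 8
```

Because every run of a template records the same outcome fields, results from many circumstances line up into one table, which is exactly the kind of cross-situation data set the post calls for.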
