I just noticed an interesting question that I’ve never heard anyone ask. To explain it, I need to review the basics of behavior, and of abstraction levels.
First, we explain agent behavior in terms of agent choices and the actual states of the world. Standard decision theory explains choices in terms of beliefs and preferences (also known as goals or motives), some conscious and others unconscious. Beliefs are explained using info and priors. Deviations from standard decision theory are often well explained in terms of random noise and habits, which come from patterns of prior choices, both by this agent and by others this agent has observed. Thus agent behavior is explained in terms of noise, the world, and agent info, priors, habits, and motives, both conscious and unconscious.
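For concreteness, here is a textbook sketch of that first layer (nothing specific to this post): beliefs are a posterior combining priors with info, and the chosen act maximizes expected utility under those beliefs,

$$ P(s \mid \text{info}) \propto P(\text{info} \mid s)\, P(s), \qquad a^* = \arg\max_a \sum_s P(s \mid \text{info})\, U(a, s), $$

with noise and habits covering whatever deviations remain.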
Second, construal level or “near-far” theory, which I’ve discussed often, says that human behavior varies greatly with level of abstraction. That is, some behavior concerns very small units of time and space and is driven by very local goals and considerations, while other behavior is driven by very broad stable goals and considerations, and made regarding large units of time and space. For example, re traveling to work, one makes many small choices that adjust one’s legs or a car steering wheel, intermediate choices about which routes to take, and big choices about what time of day to drive and where to work.
Okay, here is the interesting question: how does the relative explanatory power of noise, world, info, priors, habits, and motives vary with the level of abstraction of a choice?
Very small scale choices are mostly made unconsciously. They seem roughly explained by the world, our info, and our immediate conscious goals. But noise, unconscious goals, and learned habits also seem to make big contributions. For example, our body language is driven by status considerations that we’d deny, and experts can often find more efficient ways to achieve our goals than our habits do.
Mid scale choices seem especially well explained by our info and conscious goals, with less room for explanation by noise, unconscious goals, and habits. These are the choices that we mention when telling stories about our lives, and for which our languages are well designed. We are quite ready and able to explain that we went to the grocery store because we needed some groceries, and that we drove because it was too hot to walk. We locked the door to deter thieves, and we called the police on a thief because that’s what good citizens do. And such explanations do in fact account rather well for our specific choices at that level of abstraction.
In contrast, compared to noise and habits, our info and conscious goals seem to do much worse at explaining our high level choices, such as of where to live, which jobs to take, which people to marry, or which laws and norms to enforce in which ways. Our book *The Elephant in the Brain* shows how unconscious goals often do a much better job explaining many broad behaviors, like going to school or the doctor. And critics like Socrates have long shown how easy it is to find incoherence in the habits and rationales of our high level choices.
Why do our conscious motives do so much better a job of explaining mid level choices, compared to smaller or larger scale choices? Re smaller choices, a standard answer is that conscious thinking carries a large overhead that isn’t worth the bother at such a small scale. And re larger choices, a standard answer is that we make those choices a lot less often, and so get a lot less personal data on which to base them.
Yet our societies contain vast numbers of people making rather similar large scale choices; why can’t we learn from those? I think a better explanation here is that our larger scale choices are much more strongly set by cultural evolution. We copy the habits of associates, even when such habits are not especially coherent, and then make up incoherent rationales to justify them. These inherited habits are specified at some modestly high level of abstraction, and implemented via conscious calculation, which is what makes our intermediate scale choices so well explained by our conscious goals.
In a faster changing world, habits encoded at overly low levels of abstraction run real risks of not generalizing well to new environments. For this reason, we moderns have increased the level of abstraction at which we infer and encode our inherited behavioral habits. Alas, this makes it harder to reliably infer habits from observed behaviors, creating more risk of cultural drift.
Worse, “sincerity” describes a modern pattern of trying to modify our habit ideals to be more easily explained by a simple set of ideal motives, such as true love, true patriotism, or being truly saved. This might, if successful, make our high level choices as coherently justified, and as well explained by conscious motives, as our mid level choices. We might then feel satisfyingly rational.
However, such modified habits would plausibly fail to inherit a great deal of the adaptive information embodied in unmodified habits, risking even more cultural drift. At any one moment the process of natural selection offers an incoherent mishmash of inherited habits that are nevertheless more adaptive than a far more coherent great simplification of them would be. Here a foolish consistency is indeed the hobgoblin of little minds.
Added 13 May: Conscious motives also do a better job of explaining behavior at work, in sports, and in travel. They do a worse job for behavior labeled “cultural.”
> Yet our societies contain vast numbers of people making rather similar large scale choices; why can’t we learn from those?
This is exactly the insight that inspired me to pursue "open memetics"! Those datasets are *already* available at the big tech / social media companies / personal data brokers. The current culture plays a kind of cat-and-mouse game here: companies want more insight, consumers want more privacy, and we're in a deadlock.
But we can flip it: I want to know what decisions people in my cohort make, and how those work out for them. If they also want that to be known, we can self-select into a kind of open culture study to find out. If I can find a cohort of people in my age range / ideology / career / gender, etc., I can see what kinds of careers they end up in. If I notice that MOST people who fit my profile never become CEOs, or that when they do, they fail or burn out, that's very meaningful data for me.
Just by living my life & seeing how it turns out, I would be contributing to helping others like me. Think of it like OpenStreetMap, but for culture study / open psychometrics. No such project exists (yet).
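As a rough illustration of what such a self-selected cohort query might look like (a minimal sketch with made-up field names, not any existing project's API):

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical record a participant might volunteer to share.
@dataclass
class Profile:
    age: int
    ideology: str
    career: str
    gender: str
    outcome: str  # e.g. "became CEO", "burned out", "stable individual contributor"

def cohort_outcomes(me: Profile, shared: list[Profile], age_window: int = 5) -> Counter:
    """Count reported outcomes among people who roughly match my profile."""
    matches = [
        p for p in shared
        if abs(p.age - me.age) <= age_window
        and p.ideology == me.ideology
        and p.career == me.career
        and p.gender == me.gender
    ]
    return Counter(p.outcome for p in matches)

# Usage: if most matching profiles report "burned out", that's meaningful data for me.
# print(cohort_outcomes(my_profile, shared_profiles).most_common())
```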
I see this as related to the question of how the principle of conservatism applies in rapidly changing times.
In more static times the principle is simple: when confronted with a decision that you don't encounter very often, or that is too large to get your brain around, you defer to the solutions of the past as transmitted to you via "culture". This is probably the best guidance you'll get, since you lack enough information yourself to make a good judgment.
It's harder to apply this when times are changing quickly. The solutions that worked in the past may no longer be wholly useful in modern circumstances. This is especially acute in areas related to gender relations, career paths, technology, and global connectivity, where changes have been rapid and profound.
I think you've put your finger on a genuine problem here, which is how we should rationally make "high abstraction level" decisions when past experience (culture) is no longer as useful a guide. Cultural drift seems all but guaranteed. Indeed, even among those who think cultural drift is a problem, it isn't obvious how to apply the principle of conservatism to real-world problems, i.e., what exactly one is arguing FOR. It's like applying a conservative reading of the Constitution to modern legal questions about AI and digital copyright: how exactly one should do such a "conservative reading" is not obvious.