Tag Archives: Academia

Maps of Meaning

Like many folks recently, I decided to learn more about Jordan Peterson. Not being eager for self-help or political discussion, I went to his most well-known academic book, Maps of Meaning. Here is Peterson’s summary: 

I came to realize that ideologies had a narrative structure – that they were stories, in a word – and that the emotional stability of individuals depended upon the integrity of their stories. I came to realize that stories had a religious substructure (or, to put it another way, that well-constructed stories had a nature so compelling that they gathered religious behaviors and attitudes around them, as a matter of course). I understood, finally, that the world that stories describe is not the objective world, but the world of value – and that it is in this world that we live, first and foremost. … I have come to understand what it is that our stories protect us from, and why we will do anything to maintain their stability. I now realize how it can be that our religious mythologies are true, and why that truth places a virtually intolerable burden of responsibility on the individual. I know now why rejection of such responsibility ensures that the unknown will manifest a demonic face, and why those who shrink from their potential seek revenge wherever they can find it. (more)

In his book, Peterson mainly offers his best-guess description of common conceptual structures underlying many familiar cultural elements, such as myths, stories, histories, rituals, dreams, and language. He connects these structures to cultural examples, to a few psychology patterns, and to rationales of why such structures would make sense. 

But while he can be abstract at times, Peterson doesn’t go meta. He doesn’t tell readers what degree of certainty to place in his claims, nor distinguish in which claims he’s more confident. He doesn’t say how widely others agree with him, he doesn’t mention any competing accounts to his own, and he doesn’t consider examples that might go against his account. He seems to presume that the common underlying structures of past cultures embody great wisdom for human behavior today, yet he doesn’t argue for that explicitly, he doesn’t consider any other forces that might shape such structures, and he doesn’t consider how fast their relevance declines as the world changes. The book isn’t easy to read, with overly long and obscure words, and way too much repetition. He shouldn’t have used his own voice for his audiobook.

In sum, Peterson comes across as pompous, self-absorbed, and not very self-aware. But on the one key criterion by which such a book should most be judged, I have to give it to him: the book offers insight. The first third of the book felt solid, almost self-evident: yes such structures make sense and do underlie many cultural patterns. From then on the book slowly became more speculative, until at the end I was less nodding and more rolling my eyes. Not that most things he said even then were obviously wrong, just that it felt too hard to tell if they were right. (And alas, I have no idea how original is this book’s insight.)

Let me finish by offering a small insight I had while reading the book, one I haven’t heard from elsewhere. A few weeks ago I talked about how biological evolution avoids local maxima via highly redundant genotypes:

There are of course far more types of reactions between molecules than there are types of molecules. So using Wagner’s definitions, the set of genotypes is vastly larger than the set of phenotypes. Thus a great many genotypes result in exactly the same phenotype, and in fact each genotype has many neighboring genotypes with that same exact phenotype. And if we lump all the connected genotypes that have the same phenotype together into a unit (a unit Wagner calls a “genotype network”), and then look at the network of one-neighbor connections between such units, we will find that this network is highly connected.

That is, if one presumes that evolution (using a large population of variants) finds it easy to make “neutral” moves between genotypes with exactly the same phenotype, and hence the same fitness, then large networks connecting genotypes with the same phenotype imply that it only takes a few non-neutral moves between neighbors to get to most other phenotypes. There are no wide deep valleys to cross. Evolution can search large spaces of big possible changes, and doesn’t have a problem finding innovations with big differences. (more) 

It occurs to me that this is also an advantage of traditional ways of encoding cultural values. An explicit formal encoding of values, such as found in modern legal codes, is far less redundant. Most random changes to such an abstract formal encoding create big bad changes to behavior. But when values are encoded in many stories, histories, rituals, etc., a change to any one of them needn’t much change overall behavior. So the genotype can drift until it is near a one-step change to a better phenotype. This allows culture to evolve more incrementally, and avoid local maxima. 
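The redundancy argument above can be illustrated with a toy Python sketch. This is not Wagner’s actual model; the genotype length and the majority-vote phenotype map are made-up stand-ins. The sketch checks that all genotypes sharing a phenotype form one connected “genotype network” under single-bit neutral moves, and that the other phenotype is always reachable by neutral drift plus a single non-neutral mutation.

```python
from itertools import product
from collections import deque

n = 7  # toy genotype length (a made-up small size)

def phenotype(g):
    # toy many-to-one map: majority vote over the bits
    return int(2 * sum(g) > n)

def neighbors(g):
    # genotypes reachable by a single mutation (one bit flip)
    for i in range(n):
        yield g[:i] + (1 - g[i],) + g[i + 1:]

# the "genotype network" of all genotypes with phenotype 1
net = {g for g in product((0, 1), repeat=n) if phenotype(g) == 1}

# breadth-first search using only neutral (same-phenotype) moves
start = next(iter(net))
seen, queue = {start}, deque([start])
while queue:
    g = queue.popleft()
    for h in neighbors(g):
        if h in net and h not in seen:
            seen.add(h)
            queue.append(h)

print(len(seen) == len(net))  # the whole network is one connected unit

# some network members sit one non-neutral flip from the other phenotype,
# so neutral drift plus a single mutation suffices to switch phenotypes
boundary = {g for g in net if any(phenotype(h) == 0 for h in neighbors(g))}
print(len(boundary) > 0)
```

Here a genotype deep inside the network (say, all ones) can drift neutrally to the boundary and then switch phenotypes in one step, with no fitness valley to cross.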

Implicit culture seems more evolvable, at least to the extent slow evolution is acceptable. We today are changing culture quite rapidly, and often based on pretty abstract and explicit arguments. We should worry more about getting stuck in local maxima.  


Sloppy Interior Vs. Careful Border Travel

Imagine that you are floating weightless in space, and holding on to one corner of a large cube-shaped structure. This cube has only corners and struts between adjacent corners; the interior and faces are empty. Now imagine that you want to travel to the opposite corner of this cube. The safe thing to do would be to pull yourself along a strut to an adjacent corner, always keeping at least one hand on a strut, and then repeat that process two more times. If you are in a hurry you might be tempted to just launch yourself through the middle of the cube. But if you don’t get the direction right, you risk sailing past the opposite corner on into open space.

Now let’s make the problem harder. You are still weightless holding on to a cube of struts, but now you live in 1000 dimensional space, in a fog, and subject to random winds. Each corner connects to 1000 struts. Now it would take 1000 single-strut moves to reach the opposite corner, while the direct distance across is only about 32 times the length of one strut. You have only a limited ability to tell if you are near a corner or a strut, and now there are over 10^300 corners, which look a lot alike. In this case you should be a lot more reluctant to leave sight of your nearest strut, or to risk forgetting your current orientation. Slow and steady wins this race.
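The numbers here are easy to verify; a few lines of Python confirm the diagonal length and the corner count for a 1000-dimensional unit cube:

```python
import math

d = 1000          # number of dimensions
corners = 2 ** d  # a d-cube has 2^d corners

# the corner-to-opposite-corner diagonal, in units of one strut length
diagonal = math.sqrt(d)

print(round(diagonal))     # about 32 strut lengths straight across
print(len(str(corners)))   # corners has over 300 digits, i.e. > 10^300
```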

If you were part of a group of dozens of people tethered together, it might make more sense to jump across the middle, at least in the case of the ordinary three dimensional cube. If any one of you grabs a corner or strut, they could pull the rest of you in to there. However, this strategy looks a lot more risky in a thousand dimensions with fog and wind, where there are so many more ways to go wrong. Even more so in a million dimensions.

Let me offer these problems as metaphors for the choice between careful and sloppy thinking. In general, you start with what you know now, and seek to learn more, in part to help you make key decisions. You have some degree of confidence in every relevant claim, and these can combine to specify a vector in a high dimensional cube of possible beliefs. Your key choice: how to move within this belief cube.

In a “sloppy interior” approach, you throw together weak tentative beliefs on everything relevant, using any basis available, and then try to crudely adjust them via considerations of consistency, evidence, elegance, rhetoric, and social conformity. You think intuitively, on your feet, and respond to social pressures. That is, a big group of you throw yourselves toward the middle of the cube, and pull on the tethers when you think that could help others get to a strut or corner you see. Sometimes a big group splits into two main groups who have a tug-o-war contest along one main tether axis, because that’s what humans do.

In a “careful border” approach, you try to move methodically along, or at least within sight of, struts. You make sure to carefully identify enough struts at your current corner to check your orientation and learn which strut to take next. Sometimes you “cut a corner”, jumping more than one corner at a time, but only via carefully chosen and controlled moves. It is great when you can move with a large group who work together, as individuals can specialize in particular strut directions, etc. But as there are more different paths to reach the same destination on the border, groups there more naturally split up. If your group seems inclined toward overly risky jumps, you can split off and move more methodically along the struts. Conversely, you might try to cut a corner to jump ahead when others nearby seem excessively careful.

Today public conversations tend more to take a sloppy interior approach, while expert conversations tend more to take a careful border approach. Academics often claim to believe nothing unless it has been demonstrated to the rigorous standards of their discipline, and they are fine with splitting into differing non-interacting groups that take different paths. Outsiders often see academics as moving excessively slowly; surely more corners could be cut with little risk. Public conversations, in contrast, are centered in much larger groups of socially-focused discussants who use more emotional, elegant, and less precise and expert language and reasoning tools.

Yes, this metaphor isn’t exactly right; for example, there is a sense in which we start more naturally from the middle of a belief space. But I think it gets some important things right. It can feel more emotionally “relevant” to jump to where everyone else is talking, pick a position like others do there, use the kind of arguments and language they use, and then pull on your side of the nearest tug-o-war rope. That way you are “making a difference.” People who instead step slowly and carefully, making foundations they have sufficient confidence to build on, may seem to others as “lost” and “out of touch”, too “chicken” to engage the important issues.

And yes, in the short term sloppy interior fights have the most influence on politics, culture, and mob rule enforcement. But if you want to play the long game, careful border work is where most of the action is. In the long run, most of what we know results from many small careful moves of relatively high confidence. Yes, academics are often overly careful, as most are more eager to seem impressive than useful. And there are many kinds of non-academic experts. Even so, real progress is mostly in collecting relevant things one can say with high enough confidence, and slowly connecting them together into reliable structures that can reach high, not only into political relevance, but eventually into the stars of significance.


Social Innovation Disinterest Puzzle

Back in 1977, I started out college in engineering, then switched to physics, where I got a BS and MS. After that I spent nine years in computer research, at Lockheed and NASA. In physics, engineering, and software I saw that people are quite eager to find better designs, and that the world often pays a lot for them. As a result, it is usually quite hard to find even modestly better designs, at least for devices and mechanisms with modest switching costs.

Over time, I came to notice that many of our most important problems had core causes in social arrangements. So I started to study economics, and found many simple proposed social innovations that could plausibly lead to large gains. And trying my own hand at looking for innovations, I found more apparently plausible gains. So in 1993 I switched to social science, and started a PhD program at the late age of 34, then having two kids age 0 and 2. (For over a decade after, I didn’t have much free time.)

I naively assumed that the world was just as eager for better social designs. But in fact, the world shows far less interest in better designs for social arrangements. Which, I should have realized, is a better explanation than my unusual genius for why it seemed so easy to find better social designs. But that raises a fundamental puzzle: why does the world seem so much less interested in social innovation, relative to innovation in physical and software devices and systems?

I’ve proposed the thesis of our new book as one explanation. But as many other explanations often come to people’s minds, I thought I might go over why I find them insufficient. Here goes:


When Disciplines Disagree

Our new book, The Elephant in the Brain, can be seen as taking one side in a disagreement between disciplines. On one side are psychologists (among others) who say of course people try to spin their motives as being higher than they are, especially in public forums. People on this side find our basic book thesis, and our many specific examples, so plausible that they fear our book may be too derivative and unoriginal.

On the other side, however, are most experts in concrete policy analysis. They spend their time studying ways that schools could help people to learn more material, hospitals could help people get healthier, charities could better assist people in need, and so on. They thus implicitly accept the usual claims people make about what they are trying to achieve via schools, hospitals, charities, etc. And so the practice of policy experts disagrees a lot with our claims that people actually care more about other ends, and that this is why most people show so little interest in reforms proposed by policy experts. (The world shows great interest in new kinds of physical devices and software, but far less interest in most proposed social reforms.)

My first book The Age of Em can also be seen as expressing disagreement between disciplines. In that book I try to straightforwardly apply standard economics to the scenario where brain emulations are the first kind of AI to displace most all human workers. While the assumption of brain-emulation-based-AI seems completely standard and reasonable among large communities of futurists and technologists, it is seen as radical and doubtful in many other intellectual communities (including economics). And many in disciplines outside of economics are quite skeptical that economists know much of anything that can generalize outside of our particular social world.

Now if you are going to make claims with which whole disciplines of experts disagree, you should probably feel most comfortable doing so when you have at least a whole discipline supporting you. Then it isn’t just you the crazy outlier against a world of experts. Even so, this sort of situation is problematic, in part because disagreements usually don’t become debates. A book on one side of a disagreement between disciplines is usually ignored by the disciplines who disagree. And the disciplines that agree may also ignore it, if the result seems too obvious to them to be worth discussing within their discipline.

This sort of situation seems to me one of the worse failings of our intellectual world. We fail to generate a consistent consensus across the widest scope of topics. Smaller communities of experts often generate a temporary consistent consensus within each community, but these communities often disagree a lot at larger scopes. And then they mostly just ignore each other. Apparently experts and their patrons have little incentive to debate those from other disciplines who disagree.

When two disciplines disagree, you might think they would both turn especially to the people who have become experts in both disciplines. But in fact those people are usually ignored relative to the people who have the highest status within each discipline. If we generated our consensus via prediction markets, it would automatically be consistent across the widest scope of topics. But of course we don’t, and there’s little interest in moving in that direction.


Automatic Norms in Academia

In my career as a researcher and professor, I’ve come across many decisions where my intuition told me that some actions are prohibited by norms. I’ve usually just obeyed these intuitions, and assumed that everyone agrees. However, I only rarely observe what others think regarding the same situations. In these rare cases, I’m often surprised to see that others don’t agree with me.

I illustrate with the following set of questions on which I’ve noticed divergent opinions. Most academic institutions have no official rules to answer them, nor even an official person whom one can ask. Professors are just supposed to judge for themselves, which they usually do without consulting anyone. And yet many people treat these decisions as if they are governed by norms.

  1. What excuses are acceptable for students missing an assignment or exam?
  2. If a teacher will be out of town on a class day, must a substitute teacher always be found or can classes sometimes be cancelled? How often can this be done?
  3. Is there any limit on how much extra help or extra credit assignments teachers can offer only to particular students?
  4. Should students be excused for misunderstanding questions due to poor understanding of English?
  5. Is it okay in college to teach students to just remember and then spit back relatively dogmatic statements, instead of trying to teach them how to think about more complex problems?
  6. Is it okay to assign a final exam, but then toss the exams and give out final grades based on all prior assignments?
  7. Is it okay to give all grad students A grades, and to praise all their papers as brilliant, as a way to compete to get students to pick you as their PhD advisor?
  8. Is it okay to lecture while stumbling drunk?
  9. Must you cite the work that actually influenced your work if it is lowbrow like blogs, wikipedia, or working papers, or if it is outside your discipline?
  10. Can you cite prestigious papers that look good in your references if they did not influence your work?
  11. Is it okay to write as if the first work of any consequence on a topic was the first to appear in a top prestige venue, in effect presuming that lower prestige prior work was inadequate?
  12. Should you cite papers requested by journal referees if you don’t think them relevant?
  13. How much searching is okay, searching in theory assumptions or in statistical model specifications, in order to find the kind of result you wanted? Must you disclose such searching?
  14. Is it okay to publish roughly the same idea in several places as long as you don’t use the exact same words?

I expect the same holds in most areas of life. Most detailed decisions that people treat as norm-governed have no official rules or judges. Most people decide for themselves without much thought or discussion, assuming incorrectly that relevant norms are obvious enough that everyone else agrees.


News As If Info Mattered

In our new book, we argue that most talk, including mass media news and academic talk, isn’t really about info, at least the obvious base-level info. But to study talk, it helps to think about what it would in fact look like if it were mostly about info. And as with effective altruism, such an exercise can also be useful for those who see themselves as having unusually sincere preferences, i.e., who actually care about info. So in this post let’s consider what info based talk would actually look like.

From an info perspective, a piece of “news” is a package that includes a claim that can be true or false, a sufficient explanation of what this claim means, and some support, perhaps implicit, to convince the reader of this claim. Here are a few relevant aspects of each such claim:

Surprise – how low a probability a reader would have previously assigned to this claim.
Confidence – how high a probability a reader comes to assign after reading this news.
Importance – how much the probability of this claim matters to the reader.
Commonality – how many potential readers consider this topic important.
Recency – how recently this news became available.
Support Type – what kind of support is offered for a reader to believe this claim.
Support Space – how many words it takes to show the support to a reader.
Definition Space – how many words it takes to explain what this claim means.
Bandwidth – number of channels of communication used at once to tell reader about this news.
Chunk – size of a hard-to-divide unit containing news, such as a tweet or a book.

Okay, the amount of info that some news gives a reader on a claim is the ratio of its confidence to its surprise. The value of this info multiplies this info amount by the claim’s importance to that reader. The total value of this news to all readers (roughly) multiplies this individual value by its commonality. Valuable news tells many people to put high confidence in claims that they previously thought rather unlikely, on topics they consider important.
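As a sketch, the value definitions above can be written as a few Python functions. The function names and all the numbers are hypothetical illustrations, not anything from the post itself:

```python
# toy scoring of a single news item, following the post's definitions

def info_amount(confidence, surprise):
    # surprise: reader's prior probability; confidence: probability after reading
    return confidence / surprise

def reader_value(confidence, surprise, importance):
    # value of the info to one reader: info amount times importance
    return info_amount(confidence, surprise) * importance

def total_value(confidence, surprise, importance, commonality):
    # (roughly) individual value times the number of interested readers
    return reader_value(confidence, surprise, importance) * commonality

# a claim a reader thought 20% likely, now 80% likely, of importance 2,
# on a topic that 1000 potential readers consider important
print(total_value(confidence=0.8, surprise=0.2, importance=2.0, commonality=1000))
# → 8000.0
```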

A reader who knew most everything that is currently known would focus mostly on recent news. Real people, however, who know very little of what is known, would in contrast focus mostly on much less recent news. Waiting to process recent news allows time for many small pieces of news to be integrated into large chunks that share common elements of definition and support, and that make better use of higher bandwidth.

In a world mainly interested in getting news for its info, most news would be produced by specialists in particular news topics. And there’d be far more news on topics of common interest to many readers, relative to niche topics of interest only to smaller sets of readers.

The cost of reading news to a reader is any financial cost, plus a time cost for reading (or watching etc.). This time cost is mostly set by the space required for that news, divided by the effective bandwidth used. Total space is roughly definition space plus support space. If the claim offered is a small variation on many similar previous claims already seen by a reader, little space may be required for its definition. In contrast, claims strange to a reader may take a lot more space to explain.
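The cost side can be sketched the same way. Here is a hypothetical cost function, with made-up numbers, pricing a familiar claim (small definition space) that comes with a long supporting argument:

```python
# toy reading-cost calculation using the post's terms (numbers hypothetical)

def reading_cost(price, definition_space, support_space, bandwidth, cost_per_word):
    # total space in words, scaled down by effective bandwidth, gives time cost
    words = definition_space + support_space
    return price + cost_per_word * words / bandwidth

# a free article: 50 words to state a familiar claim, 950 words of argument
print(reading_cost(price=0.0, definition_space=50, support_space=950,
                   bandwidth=1.0, cost_per_word=0.01))
```

Doubling the effective bandwidth (say, combining speech with images) halves the time cost in this sketch, which is why chunking news into high-bandwidth packages can pay.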

When the support offered for a claim is popularity or authority, such support may be seen as weak, but it can often be given quite concisely. However, when the support offered is an explicit argument, that can seem strong, but it can also take a lot more space. Some claims are self-evident to readers upon being merely stated, or after a single example. If prediction markets were common, market odds could offer concise yet strong support for many claims. The smallest news items will usually not come with arguments.

Given the big advantages of modularity, in news as in anything else, we need a big gain to justify the modularity costs of clumping news together into hard-to-divide units, like articles and books. There are two obvious gain cases here: 1) many related claims, and 2) one focus claim requiring much explanation or support. The first case has a high correlation in reader interest across a set of claims, at least for a certain set of readers. Here a sufficient degree of shared explanation or support across these claims could justify a package that explains and supports them all together.

The second case is where a single focal claim requires either a great deal of explanation to even make clear what is being claimed, or it requires extensive detailed arguments to persuade readers. Or both. Of course there can be mixes of these two cases. For example, if in making the effort to support one main claim, one has already done most of the work needed to support a related but less important claim, one might include that related claim in the same chunk.

For most readers, most of the claims that are important enough to be the focus of a large chunk are also relatively easy to understand. As a result, most of the space in most large focused chunks is devoted to support. And as argument is the main support that requires a lot of space, most of the space in big chunks focused on a central claim is devoted to supporting arguments. Also, to justify the cost of a large chunk with a large value for the reader, most large focused chunks focus on claims to which readers initially assign a low probability.

So how does all this compare to our actual world of talk today? There are a lot of parallels, but also some big deviations. Our real world has a lot of local artisan production on topics of narrow interest. That is, people just chat with each other about random stuff. Even for news produced by efficient specialists, an awful lot of it seems to be on topics of relatively low importance to readers. Readers seem to care more about commonality than about importance. And there’s a huge puzzling focus on the most recently available news.

Books are some of our largest common chunks of news today, and each one usually purports to offer recent news on arguments supporting a central claim that is relatively easy to understand. It seems puzzling that so few big chunks are explicitly justified via shared explanation and justification of many related small claims, or that so many big chunks seem to cover neither many related claims nor a single central claim. It also seems puzzling that most focal claims of books are not very surprising to most readers. Readers do not seem to be proportionally more interested in books with more surprising focal claims. And given how much space is devoted to arguments for focal claims, it is somewhat surprising that books often neglect to even mention other kinds of support, such as popularity or authority.

While I do think alternative theories, in which news is not mainly about info, can explain many of these puzzles, a discussion of that will have to wait for another post.


Organic Prestige Doesn’t Scale

Some parts of our world, such as academia, rely heavily on prestige to allocate resources and effort; individuals have a lot of freedom to choose topics, and are mainly rewarded for seeming impressive to others. I’ve talked before about how some hope for a “Star Trek” future where most everything is done that way, and I’m now reading Walkaway, outlining a similar hope. I was skeptical:

In academia, many important and useful research problems are ignored because they are not good places to show off the usual kinds of impressiveness. Trying to manage a huge economy based only on prestige would vastly magnify that inefficiency. Someone is going to clean shit because that is their best route to prestige?! (more)

Here I want to elaborate on this critique, with the help of a simple model. But first let me start with an example. Imagine a simple farming community. People there spend a lot of time farming, but they must also cook and sew. In their free time they play soccer and sing folk songs. As a result of doing all these things, they tend to “organically” form opinions about others based on seeing the results of their efforts at such things. So people in this community try hard to do well at farming, cooking, sewing, soccer, and folk songs.

If one person put a lot of effort into proving math theorems, they wouldn’t get much social credit for it. Others don’t naturally see outcomes from that activity, and not having done much math they don’t know how to judge if this math is any good. This situation discourages doing unusual things, even if no other social conformity pressures are relevant.

Now let’s say that in a simple model. Let there be a community containing people j, and topic areas i where such people can create accomplishments a_ij. Each person j seeks a high personal prestige p_j = Σ_i v_i a_ij, where v_i is the visibility of area i. They also face a budget constraint on accomplishment, Σ_i a_ij² ≤ b_j. This assumes diminishing returns to effort in each area.

In this situation, each person’s best strategy is to choose a_ij proportional to v_i. Assume that people tend to see the areas where they are accomplishing more, so that visibility v_i is proportional to an average of a_ij over individuals j. We now end up with many possible equilibria having different visibility distributions. In each equilibrium, for all individuals j and areas i,k we have the same area ratios a_ij / a_kj = v_i / v_k.
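A quick numeric check of this model, with made-up visibility numbers: the Lagrangian of the budget-constrained problem gives the closed-form best response a_ij = v_i · sqrt(b_j) / ||v||, so accomplishments are proportional to visibilities, and any visibility profile reproduces itself as an equilibrium.

```python
import math

# person j maximizes p_j = sum_i v_i * a_ij subject to sum_i a_ij^2 <= b_j

def best_response(v, b):
    # closed form from the Lagrangian: a_i = v_i * sqrt(b) / ||v||
    norm = math.sqrt(sum(x * x for x in v))
    return [x * math.sqrt(b) / norm for x in v]

v = [3.0, 1.0, 0.5]            # current visibility of each area (made up)
a = best_response(v, b=4.0)    # one person's optimal accomplishments

# ratios a_i / a_k match v_i / v_k, so visibilities proportional to
# average accomplishments stay fixed: many equilibria are possible
print(all(abs(a[i] / a[0] - v[i] / v[0]) < 1e-9 for i in range(len(v))))

# the budget constraint binds at the optimum
print(abs(sum(x * x for x in a) - 4.0) < 1e-9)
```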

Giving individuals different abilities (such as via a budget constraint Σ_i a_ij² / x_ij ≤ b_j) could make individuals choose somewhat different accomplishments, but the same overall result obtains. Spillovers between activities in visibility or effort can have similar effects. Making some activities be naturally more visible might push toward those activities, but there could still remain many possible equilibria.

This wide range of equilibria isn’t very reassuring about the efficiency of this sort of prestige. But perhaps in a small foraging or farming community, group selection might over a long run push toward an efficient equilibrium where the high visibility activities are also the most useful activities. However, larger societies need a strong division of labor, and with such a division it just isn’t feasible for everyone to evaluate everyone else’s specific accomplishments. This can be solved either by creating a command and status hierarchy that assigns people to tasks and promotes by merit, or by an open market with prestige going to those who make the most money. People often complain that doing prestige in these ways is “inauthentic”, and they prefer the “organic” feel of personally evaluating others’ accomplishments. But while the organic approach may feel better, it just doesn’t scale.

In academia today, patrons defer to insiders so much regarding evaluations that disciplines become largely autonomous. So economists evaluate other economists based mostly on their work in economics. If someone does work both in economics and also in another area, they are judged mostly just on their work in economics. This penalizes careers working in multiple disciplines. It also raises doubts about whether different disciplines get the right relative support – who exactly can be trusted to make such a choice well?

Interestingly, academic disciplines are already organized “inorganically” internally. Rather than each economist evaluating each other economist personally, they trust journal editors and referees, and then judge people based on their publications. Yes they must coordinate to slowly update shared estimates of which publications count how much, but that seems doable informally.

In principle all of academia could be unified in this way – universities could just hire the candidates with the best overall publication (or citation) record, regardless of in which disciplines they did what work. But academia hasn’t coordinated to do this, nor does it seem much interested in trying. As usual, those who have won by existing evaluation criteria are reluctant to change criteria, after which they would look worse compared to new winners.

This fragmented prestige problem hurts me especially, as my interests don’t fit neatly into existing groups (academic and otherwise). People in each area tend to see me as having done some interesting things in their area, but too little to count me as high status; they mostly aren’t interested in my contributions to other areas. I look good if you count my overall citations, for example, but not if you count only my citations or publications in each specific area.


Intellectuals as Artists

Consider some related phenomena:

  1. Casual conversation norms say to wander across many topics, with each person staying relevant to each current topic. This functions well to test individual impressiveness. Today, academic and mass media conversations follow similar norms, though they did this much less in the ancient world.
  2. While ancient artists and musicians tried to perfect common styles, modern artists and musicians seek more distinctive personal styles. For example, while songs were once designed to sound good when ordinary folks sang them, now songs are designed to create a unique impressive performance by one artist.
  3. Politicians often go out of their way to do “position taking” on many issues, even on issues where they would have little chance of influencing policy while in office. Voters prefer systems like proportional representation, in which they can identify more closely with particular representatives, even if this doesn’t give them better outcomes overall. Knowing many of a politician’s positions helps voters to identify with them.
  4. “Sophomoric” thinkers, typically college sophomores, are eager to take positions on as many common topics as possible, even if this means taking poorly considered positions. They don’t feel they are adult until they have an opinion ready for most common intellectual conversations. This is more feasible when opinions on each topic area are reduced to choices between a small number of standard “isms”, offering integrated packages of answers. Sophomoric thinkers love isms.
  5. We often try to extract “isms” out of individuals, such as my colleagues Tyler Cowen or Bryan Caplan. We might ask “What is the Caplanian position on X?” That is, we wonder how they would answer random questions, presuming that we can infer a coherent style from past positions that would answer all future questions, at least within some wide scope. Intellectuals who desire wider attention often go out of their way to express opinions on many topics, chosen via a distinctive personal style.

We pretend that we search only for truth, picking each specific position only via the strongest specific evidence and arguments. And in many mundane contexts that’s not a bad approximation. But in many other grander contexts we seek more to become and associate with distinctive intellectual artists. Such artists impress both via the wide range of topics on which they can perform, and via having a distinctive personal style that they bring to bear across that range.

This all makes complete sense as an impressiveness contest, but far less sense as a way for the world to jointly estimate accurate Bayesian estimates on each topic. I’m sure you can make up reasons why distinctive intellectual styles that imply positions on wide ranges of topics are really great ways to produce accuracy. But they will mostly sound like excuses to me.

Sophomoric thinkers often retain for a lifetime the random opinions they quickly generate without much thought. Yet they don’t want to just inherit their parents’ positions; they need to generate their own new opinions. I wonder which effect will dominate when young ems choose opinions; will they tend to adopt standard positions of prior clan members, or generate their own new individual opinions?


Imagine Philosopher Kings

I just read Joseph Heath’s Enlightenment 2.0 (reviewed here by Alex). Heath is a philosopher who is a big fan of “reason,” which he sees as an accidentally-created uniquely-human mental capacity offering great gains in generality and accuracy over our other mental capacities. However, reason comes at the costs of being slow and difficult, requiring fragile social and environmental supports, and going against our nature.

Heath sees a recent decline in reliance on reason within our political system, which he blames much more on the right than the left, and he has a few suggestions for improvement. He wants the political process to take longer to consider each choice, to focus more on writing relative to sound and images, and to focus more on longer essays instead of shorter quips. Instead of people just presenting views, he wants more cross-examination and debate. Media coverage should focus more on experts than on journalists. (Supporting quotes below.)

It seems to me that academic philosopher Heath’s ideal of reason is the style of conversation that academic philosophers now use among themselves, in journals, peer review, and in symposia. Heath basically wishes that political conversations could be more like the academic philosophy conversations of his world. And I expect many others share his wish; there is after all the ancient ideal of the “philosopher king.”

It would be interesting if someone would explore this idea in detail, by trying to imagine just what governance would look like if it were run similar to how academic philosophers now run their seminars, conferences, journals, and departments. For example, imagine requiring a Ph.D. in philosophy to run for political office, and that the only political arguments that one could make in public were long written essays that had passed a slow process of peer review for cogency by professional philosophers. Bills sent to legislatures would also require such a peer-reviewed supporting essay. Imagine further incentives to write essays responding to others, rather than just presenting one’s own view. For example, one might have to publish two response essays before being allowed to publish one non-response essay.

Assume that this new peer review process managed to uphold intellectual standards roughly as well as does the typical philosophy subfield journal today. Even then, I don’t have much confidence that this would go well. But I’m not sure, and I’d love to see someone who knows the internal processes of academic philosophy in some detail, and also knows common governance processes in some detail, work out a plausible guess for what a direct combination of these processes would look like. Perhaps in the form of a novel. I think we might learn quite a lot about what exactly can go right and wrong with reason.

Other professions might plausibly also wish that we ran the government more according to the standards that they use internally. It could also be interesting to imagine a government that was run more like how an engineering community is run, or how a community of physicists is run. Or even a community of spiritualists. Such scenarios could be both entertaining and informative.

Those promised quotes from Enlightenment 2.0:


When Does Evidence Win?

Consider a random area of intellectual inquiry, and a random intellectual who enters this area. When this person first arrives, a few different points of view seem worthy of consideration in this area. This person then becomes expert enough to favor one of these views. Then over the following years and decades the intellectual world comes to more strongly favor one of these views, relative to the others. My key question is: in what situations do such earlier arrivals, on average, tend to approve of this newly favored position?

Now there will be many cases where favoring a point of view helps people to be seen as an intellectual of a certain standing. For example, jumping on an intellectual fashion could help one to better publish, and then get tenure. So if we look at tenured professors, we might well see that they tended to favor new fashions. To exclude this effect, I want to apply whatever standard is used to pick intellectuals before they chose their views on this area.

There will also be an effect whereby intellectuals move their work to focus on new areas even if they don’t actually think they are favored by the weight of evidence. (By “evidence” here I also mean to include relevant intellectual arguments.) So I don’t want to rely on the areas where people work to judge which areas they favor. I instead need something more like a survey that directly asks intellectuals which views they honestly think are favored by the weight of evidence. And I need this survey to be private enough for respondents to not fear retribution or disapproval for expressed views. (And I also want them to be intellectually honest in this situation.)

Once we are focused on people who were already intellectuals of some standing when they chose their views in an area, and on their answers to a sufficiently private survey, I want to further distinguish between areas where relevant strong and clear evidence did or did not arrive. Strong evidence favors one of the views substantially, and clear evidence can be judged and understood by intellectuals at the margins of the field, such as those in neighboring fields or with less intellectual standing. These can include students, reporters, grant givers, and referees.

In my personal observation, when strong and clear evidence arrives, the weight of opinion does tend to move toward the views favored by this evidence. And early arrivals to the field also tend to approve. Yes many such intellectuals will continue to favor their initial views because the rise of other views tends to cut the perceived value of their contributions. But averaging over people with different views, on net opinion moves to favor the view that evidence favors.

However, the effectiveness of our intellectual world depends greatly on what happens in the other case, where relevant evidence is not clear and strong. Instead, evidence is weak, so that one must weigh many small pieces of evidence, and evidence is complex, requiring much local expertise to judge and understand. If even in this case early arrivals to a field tend to approve of new favored opinions, that (weakly) suggests that opinion is in fact moved by the information embodied in this evidence, even when it is weak and complex. But if not, that fact (weakly) suggests that opinion moves are mostly due to many other random factors, such as new political coalitions within related fields.
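The contrast between these two cases can be sketched as a toy Bayesian model (every parameter here is an illustrative assumption, not from the post): each intellectual updates their log-odds for the true view by the evidence’s total log-likelihood ratio, plus noise standing in for the other random factors.

```python
import random

def update_log_odds(prior, signals, noise_sd, rng):
    # Bayesian update in log-odds space: add each signal's
    # log-likelihood ratio for the true view, plus idiosyncratic
    # noise standing in for random social factors.
    return prior + sum(signals) + rng.gauss(0, noise_sd)

def fraction_favoring_truth(n_agents, signals, noise_sd, seed=0):
    # Share of agents who, starting indifferent (log-odds 0),
    # end up favoring the true view after updating.
    rng = random.Random(seed)
    return sum(
        1 for _ in range(n_agents)
        if update_log_odds(0.0, signals, noise_sd, rng) > 0
    ) / n_agents

# Strong, clear evidence: one large shared likelihood ratio, little noise.
strong = fraction_favoring_truth(1000, signals=[3.0], noise_sd=0.5)

# Weak, complex evidence: many tiny signals, swamped by other factors.
weak = fraction_favoring_truth(1000, signals=[0.01] * 10, noise_sd=2.0)

print(f"strong/clear: {strong:.2f} favor truth")   # near 1.0
print(f"weak/complex: {weak:.2f} favor truth")     # near a coin flip
```

In this sketch, strong and clear evidence moves nearly everyone to the true view, while weak and complex evidence leaves the distribution of opinion dominated by the noise term, which is the worry raised above.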

While I’ve outlined how one might do such a survey, I have not actually done it. Even so, over the years I have formed opinions on areas where my opinions did not much influence my standing as an intellectual, and where strong and clear evidence has not yet arrived. Unfortunately, in those areas I have not seen much of a correlation between the views I see as favored on net by weak and complex evidence, and the views that have since become more popular. Sometimes fashion favors my views, and sometimes not.

In fact, most who choose newly fashionable views seem unaware of the contrary arguments against those views and for other views. Advocates for new views usually don’t mention them, and few potential converts ask for them. Instead what matters most is how plausible the evidence for a view, as offered by its advocates, seems to those who know little about the area. I see far more advertising than debate.

This suggests that most intellectual progress should be attributed to the arrival of strong and clear evidence. Other changes in intellectual opinion are plausibly due to a random walk in the space of other random factors. As a result, I have prioritized my search for strong and clear evidence on interesting questions. And I’m much less interested than I once was in weighing the many weak and complex pieces of evidence in other areas. Even if I can trust myself to judge such evidence honestly, I have little faith in my ability to persuade the world to agree.

Yes, if you weigh such weak and complex evidence, you might come to a conclusion, argue for it, and find a world that increasingly agrees with you. And you might then let yourself believe that you are in a part of the intellectual world making real and useful intellectual progress, progress to which you have contributed. Which would feel nice. But you should consider the possibility that this progress is illusory. Maybe for real progress, you need to instead chip away at hard problems, via strong and clear evidence.
