Tag Archives: Academia

Imagine Philosopher Kings

I just read Joseph Heath’s Enlightenment 2.0 (reviewed here by Alex). Heath is a philosopher who is a big fan of “reason,” which he sees as an accidentally-created uniquely-human mental capacity offering great gains in generality and accuracy over our other mental capacities. However, reason comes at the cost of being slow and difficult, requiring fragile social and environmental supports, and going against our nature.

Heath sees a recent decline in reliance on reason within our political system, which he blames much more on the right than the left, and he has a few suggestions for improvement. He wants the political process to take longer to consider each choice, to focus more on writing relative to sound and images, and to focus more on longer essays instead of shorter quips. Instead of people just presenting views, he wants more cross-examination and debate. Media coverage should focus more on experts than on journalists. (Supporting quotes below.)

It seems to me that academic philosopher Heath’s ideal of reason is the style of conversation that academic philosophers now use among themselves, in journals, peer review, and in symposia. Heath basically wishes that political conversations could be more like the academic philosophy conversations of his world. And I expect many others share his wish; there is after all the ancient ideal of the “philosopher king.”

It would be interesting if someone would explore this idea in detail, by trying to imagine just what governance would look like if it were run similar to how academic philosophers now run their seminars, conferences, journals, and departments. For example, imagine requiring a Ph.D. in philosophy to run for political office, and that the only political arguments that one could make in public were long written essays that had passed a slow process of peer review for cogency by professional philosophers. Bills sent to legislatures also require such a peer-reviewed supporting essay. Imagine further incentives to write essays responding to others, rather than just presenting one’s own view. For example, one might have to publish two response essays before being allowed to publish one non-response essay.

Assume that this new peer review process managed to uphold intellectual standards roughly as well as does the typical philosophy subfield journal today. Even then, I don’t have much confidence that this would go well. But I’m not sure, and I’d love to see someone who knows the internal processes of academic philosophy in some detail, and also knows common governance processes in some detail, work out a plausible guess for what a direct combination of these processes would look like. Perhaps in the form of a novel. I think we might learn quite a lot about what exactly can go right and wrong with reason.

Other professions might plausibly also wish that we ran the government more according to the standards that they use internally. It could also be interesting to imagine a government that was run more like how an engineering community is run, or how a community of physicists is run. Or even a community of spiritualists. Such scenarios could be both entertaining and informative.

Those promised quotes from Enlightenment 2.0:


When Does Evidence Win?

Consider a random area of intellectual inquiry, and a random intellectual who enters this area. When this person first arrives, a few different points of view seem worthy of consideration in this area. This person then becomes expert enough to favor one of these views. Then over the following years and decades the intellectual world comes to more strongly favor one of these views, relative to the others. My key question is: in what situations do such earlier arrivals, on average, tend to approve of this newly favored position?

Now there will be many cases where favoring a point of view helps people to be seen as intellectuals of a certain standing. For example, jumping on an intellectual fashion could help one to better publish, and then get tenure. So if we look at tenured professors, we might well see that they tended to favor new fashions. To exclude this effect, I want to apply whatever standard is used to pick intellectuals before they choose their view on this area.

There will also be an effect whereby intellectuals move their work to focus on new areas even if they don’t actually think those areas are favored by the weight of evidence. (By “evidence” here I also mean to include relevant intellectual arguments.) So I don’t want to rely on the areas where people work to judge which views they favor. I instead need something more like a survey that directly asks intellectuals which views they honestly think are favored by the weight of evidence. And I need this survey to be private enough for respondents to not fear retribution or disapproval for expressed views. (And I also want them to be intellectually honest in this situation.)

Once we are focused on people who were already intellectuals of some standing when they chose their views in an area, and on their answers to a private enough survey, I want to further distinguish between areas where relevant strong and clear evidence did or did not arrive. Strong evidence favors one of the views substantially, and clear evidence can be judged and understood by intellectuals at the margins of the field, such as those in neighboring fields or with less intellectual standing. These can include students, reporters, grant givers, and referees.

In my personal observation, when strong and clear evidence arrives, the weight of opinion does tend to move toward the views favored by this evidence. And early arrivals to the field also tend to approve. Yes many such intellectuals will continue to favor their initial views because the rise of other views tends to cut the perceived value of their contributions. But averaging over people with different views, on net opinion moves to favor the view that evidence favors.

However, the effectiveness of our intellectual world depends greatly on what happens in the other case, where relevant evidence is not clear and strong. Instead, evidence is weak, so that one must weigh many small pieces of evidence, and evidence is complex, requiring much local expertise to judge and understand. If even in this case early arrivals to a field tend to approve of new favored opinions, that (weakly) suggests that opinion is in fact moved by the information embodied in this evidence, even when it is weak and complex. But if not, that fact (weakly) suggests that opinion moves are mostly due to many other random factors, such as new political coalitions within related fields.

While I’ve outlined how one might do such a survey, I have not actually done it. Even so, over the years I have formed opinions on areas where my opinions did not much influence my standing as an intellectual, and where strong and clear evidence has not yet arrived. Unfortunately, in those areas I have not seen much of a correlation between the views I see as favored on net by weak and complex evidence, and the views that have since become more popular. Sometimes fashion favors my views, and sometimes not.

In fact, most who choose newly fashionable views seem unaware of the contrary arguments against those views and for other views. Advocates for new views usually don’t mention them, and few potential converts ask for them. Instead what matters most is how plausible the evidence for a view, as offered by its advocates, seems to those who know little about the area. I see far more advertising than debate.

This suggests that most intellectual progress should be attributed to the arrival of strong and clear evidence. Other changes in intellectual opinion are plausibly due to a random walk in the space of other random factors. As a result, I have prioritized my search for strong and clear evidence on interesting questions. And I’m much less interested than I once was in weighing the many weak and complex pieces of evidence in other areas. Even if I can trust myself to judge such evidence honestly, I have little faith in my ability to persuade the world to agree.

Yes if you weigh such weak and complex evidence, you might come to a conclusion, argue for it, and find a world that increasingly agrees with you. And you might let yourself then believe that you are in a part of the intellectual world with real and useful intellectual progress, progress to which you have contributed. Which would feel nice. But you should consider the possibility that this progress is illusory. Maybe for real progress, you need to instead chip away at hard problems, via strong and clear evidence.


The Elephant in the Brain

One of the most frustrating things about writing physical books is the long time delays. It has been 17 months since I mentioned my upcoming book here, and now, 8.5 months after we submitted the full book for review, & over 4 months after 7 out of 7 referees said “great book, as it is”, I can finally announce that The Elephant in the Brain: Hidden Motives in Everyday Life, coauthored with Kevin Simler, will officially be published January 1, 2018. Sigh. See summary & detailed outline at the book’s website.

A related sad fact is that the usual book publicity equilibrium adds to intellectual inequality. Since most readers want to read books about which they’ve heard much publicity lately from multiple sources, publishers try to concentrate publicity into a narrow time period around the official publication date. Which makes sense.

But to create that burst of publicity, one must circulate the book well in advance privately among “thought leaders”, who might blurb or review it, invite the authors to talk on it, or recommend it to others who might do these things. So people who plausibly fit these descriptions get to read such books long before others. This lets early readers seem to be wise judges of future popular talk directions. Not because they actually have better judgement, but because they get inside info.

Alas, I’m stuck in this same equilibrium. I have a full copy of my final book, except for minor copy-editing changes, and I can share it privately with possible publicity helpers. And when the cost to send an email is small relative to possible gains, a small chance may be enough. I’ll also give in to some requests based on friendship or prior help given me (as on my last book), especially when combined with promises to buy the book when it comes out.

But just as grading is the worst part of teaching, I hate being put in the role of bouncer, deciding who is cool enough to be let into my book club, or who has enough favors to trade. At least when teaching I’m expert in whatever topic I’m grading. But here I’m much less expert on deciding who can help book publicity. I’d really prefer the intellectual world to be more of an open competition without favoritism for those with inside connections. But here I am, forced to play favorites.

These are a few of the prices one pays today to publish books. But still, books remain an unparalleled way to call attention to ideas that need more space to explain than an article can offer. And for a relatively unknown author, established publishers still offer more attention than you could generate on your own. But maybe, just maybe, I can do something different with my third book, whatever that may be on.


Dream Themes

The following are the twenty most frequent dream themes recalled by 1181 Canadian and 1186 Hong Kong college freshmen (most frequent first):


Note that these vary greatly in realism. Some are common events, and some are rare events that were important for our ancestors. Some are about events that never actually happen: flying, being a child again, a person now dead being alive, being in a story. Many of these are tied to rare extremes, especially negative extremes. The themes of arriving too late and failing exams are strikingly modern, and suggest that we industrial folks are often quite traumatized by our era’s event timing and school exam requirements.


Missing Credentials

The typical modern credential (i.e., standard worker quality sign of widely understood significance) is based on a narrow written test of knowledge given early in one’s career on a pre-announced date at a quiet location. In this test, there is a list of questions to which one gives answers, answers then graded by independent judges who supposedly look only at the answers, and don’t take into account other things they know about the testee. In this post I want to point out that a much larger space of credentials is possible.

For example, you could be evaluated on actual products and contributions, based on your efforts over a long period, instead of being evaluated on short tests. You could be tested via tasks you must perform, instead of questions you must answer. After all, mostly we want to know what workers can do, not what questions they can answer. Since much of real question answering in the world is done verbally, test question-answering could also be done verbally, instead of in writing. And it could be done with frequent distractions and interruptions, as with most real question-answering.

However expressed, judges could take your first response as a starting point to ask you more questions (or give you more tasks), and dig deeper into your understanding. Judges could know you well, and choose questions specifically for you, and interpret your answers given all they know about you. This is, after all, closer to how most question-answering in the world actually goes.

Tests could be done at random days and times, and spread all through your career. Tests might be disguised as ordinary interactions, and not revealed to be tests until afterward. These approaches could discourage cramming for tests and other strategies that make you good only at tests, and not so much at remembering or using your knowledge at other times.

Finally, you could be tested on your ability to integrate knowledge from a wide range of topic areas, instead of on your knowledge of a narrow topic area. Yes you could show that you know many areas via passing tests for many areas, but that won’t show that you have integrated these diverse areas usefully together in your mind.

Of course I’m not saying that these variations are never explored, just that they are used much less often than the standard credential test. This vast space of possible credentials suggests that a lot of innovation may be possible, and I’m naturally especially interested in helping to develop better credentials for abilities that I have which are neglected by the usual credentials. For example, I’d love to see a polymath credential, for those who can integrate understanding of many fields, and a conversation credential, on one’s ability to get to the bottom of topics via a back & forth interaction.

The narrow range of most credentials compared to the vast possible space also seems to confirm Bryan Caplan’s emphasis on school as screening for conformity. Yes the usual kinds of tests can often be cheaper in many ways, but the lack of much variation even when credentials are very important, and so worth spending a bit more on, suggests that conformity is also an issue. It really does seem that people see non-standard tests as illicit in many ways.

The dominance of the usual credential test can also be seen as a way our society is unfairly dominated by the sort of writing-focused book-smart narrowly-skilled people who happen to be especially good at such tests. These people are in fact usually in charge of designing such tests.


Chip Away At Hard Problems

Catherine: And your own research.
Harold: Such as it is.
C: What’s wrong with it?
H: The big ideas aren’t there.
C: Well, it’s not about big ideas. It’s… It’s work. You got to chip away at a problem.
H: That’s not what your dad did.
C: I think it was, in a way. I mean, he’d attack a problem from the side, you know, from some weird angle. Sneak up on it, grind away at it.
(Lines from movie Proof; Catherine is a famous mathematician’s daughter.)

In math, plausibility arguments don’t count for much; proofs are required. So math folks have little choice but to chip away at hard problems, seeking weird angles where indirect progress may be possible.

Outside of math, however, we usually have many possible methods of study and analysis. And a key tradeoff in our methods is between ease and directness on the one hand, and robustness and rigor on the other. At one extreme, you can just ask your intuition to quickly form a judgement that’s directly on topic. At the other extreme, you can try to prove math theorems. In between these extremes, informal conversation is more direct, while statistical inference is more rigorous.

When you need to make an immediate decision, direct easy methods look great. But when many varied people want to share an analysis process over a longer time period, more robust rigorous methods start to look better. Easy direct methods tend to be more uncertain and context dependent, and so don’t aggregate as well. Distant others find it harder to understand your claims and reasoning, and to judge their reliability. So distant others tend more to redo such analysis themselves rather than building on your analysis.

One of the most common ways that wannabe academics fail is by failing to sufficiently focus on a few topics of interest to academia. Many of them become amateur intellectuals, people who think and write more as a hobby, and less to gain professional rewards via institutions like academia, media, and business. Such amateurs are often just as smart and hard-working as professionals, and they can more directly address the topics that interest them. Professionals, in contrast, must specialize more, have less freedom to pick topics, and must try harder to impress others, which encourages the use of more difficult robust/rigorous methods.

You might think their added freedom would result in amateurs contributing proportionally more to intellectual progress, but in fact they contribute less. Yes, amateurs can and do make more initial progress when new topics arise suddenly far from topics where established expert institutions have specialized. But then over time amateurs blow their lead by focusing less and relying on easier more direct methods. They rely more on informal conversation as analysis method, they prefer personal connections over open competitions in choosing people, and they rely more on a perceived consensus among a smaller group of fellow enthusiasts. As a result, their contributions just don’t appeal as widely or as long.

I must admit that compared to most academics near me, I’ve leaned more toward amateur styles. That is, I’ve used my own judgement more on topics, and I’ve been willing to use less formal methods. I clearly see the optimum as somewhere between the typical amateur and academic styles. But even so, I’m very conscious of trying to avoid typical amateur errors.

So instead of just trying to directly address what seem the most important topics, I instead look for weird angles to contribute less directly via more reliable/robust methods. I have great patience for revisiting the few biggest questions, not to see who agrees with me, but to search for new angles at which one might chip away.

I want each thing I say to be relatively clear, and so understandable from a wide range of cultural and intellectual contexts, and to be either a pretty obvious no-brainer, or based on a transparent easy-to-explain argument. This is partly why I try to avoid arguing values. Even so, I expect that the most likely reason I will fail is that I’ve allowed myself to move too far in the amateur direction.


Chronicle Review Profile

I’m deeply honored to be the subject of a cover profile this week in The Chronicle Review:


The profile, by David Wescott, is titled Is This Economist Too Far Ahead of His Time?, and dated October 16, 2016.

In academic journal articles where the author has an intended answer to a yes or no question, that answer is more often yes, and I think that applies here as well. The profile includes a lot about my book The Age of Em on a far future, and its title suggests that anyone who’d study a far future must be too far ahead of their time. But, when else would one study the far future other than well ahead of time? It seems to me that even in a rational world where everyone was of their time, some people would study other times. But perhaps the implied message is that we don’t live in such a world.

I’m honored to have been profiled, but broad-ranging profiles tend to be imprecisely impressionistic. I think David Wescott did a good job overall, but since these impressions are about me, I’ll bother to comment on some (and signal my taste for precision). Here goes.

You inhabit a robotic body, and you stand roughly two millimeters tall. This is the world Robin Hanson is sketching out to a room of baffled undergraduates at George Mason University on a bright April morning.

Honestly, “baffled” is how most undergrads look to most professors during lectures.

Hanson is .. determined to promote his theories in an academy he finds deeply flawed; a doggedly rational thinker prone to intentionally provocative ideas that test the limits of what typically passes as scholarship.

Not sure I’m any more determined to self-promote than a typical academic. I try to be rational, but of course I fail. I seek the possibility of new useful info, and so use the surprise of a claim as a sign of its interestingness. Surprise correlates with “provocative”, and my innate social-cluelessness means I’ll neglect the usual social signs to “avoid this topic!” I question if I’m “intentionally provocative” beyond these two factors.

Hanson, deeply skeptical of conventional intellectual discourse,

I’m deeply skeptical of all discourse, intellectual or not, conventional or not.

At Caltech he found that economists based their ideas on simple models, which worked well in experiments but often failed to capture the complexities of the real world.

That is true of simple models in all fields, not just economics, and it is a feature not a bug. Models can be understood, while the full complexity of reality cannot.

But out of 3600 words, that’s all I have to correct, so good job David Wescott.


Idea Talkers Clump

I keep encountering people who are mad at me, indignant even, for studying the wrong scenario. While my book assumes that brain emulations are the first kind of broad human-level AI, they expect more familiar AI, based on explicitly-coded algorithms, to be first.

Now the prospect of human-level ordinary AI is definitely what more people are talking about today – the topic is in fashion. There are AI companies, demos, conferences, media articles, and more serious intellectual discussion. In fact, I’d estimate that there is now at least one hundred times as much attention given to the scenario of human-level AI based on explicit coding (including machine learning code) as to brain emulations.

But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first. In fact, I’ll happily take bets at a factor of ten. You pay me $1000 if em-AI comes first, and I pay you $100 if other AI comes first.
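To make the implied odds concrete, here is a quick sanity check (a minimal Python sketch; the stakes are the ones quoted in the offer above) of the break-even belief at which such a bet is fair:

```python
# Break-even probability for the bet offered above: I receive `win`
# if em-based AI comes first, and pay `lose` if other AI comes first.
win = 1000.0   # received if em-AI comes first
lose = 100.0   # paid if other AI comes first

# The bet has zero expected value when p * win == (1 - p) * lose,
# which solves to p = lose / (win + lose).
breakeven = lose / (win + lose)
print(round(breakeven, 3))  # prints 0.091
```

So taking the $1000-vs-$100 side is favorable exactly when one assigns em-first a probability above about 9%, i.e., odds of roughly one in ten, consistent with the factor-of-ten framing.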

In addition, due to diminishing returns, intellectual attention to future scenarios should probably be spread out more evenly than are probabilities. The first efforts to study each scenario can pick the low hanging fruit to make faster progress. In contrast, after many have worked on a scenario for a while there is less value to be gained from the next marginal effort on that scenario.

Yes, sometimes there can be scale economies to work on a topic; enough people need to do enough work to pass a critical threshold of productivity. But I see little evidence of that here, and much evidence to the contrary. Even within the scope of working on my book I saw sharply diminishing returns to continued efforts. So even if em-based AI had only 1% the chance of the other scenario, we’d want much more than 1% of thinkers to study it. At least we would if our goal were better understanding.

But of course that is not usually the main goal of individual thinkers. We are more eager to jump on bandwagons than to follow roads less traveled. All those fellow travelers validate us and our judgement. We prefer to join and defend a big tribe against outsiders, especially smaller weaker outsiders.

So instead of praising my attention to a neglected if less-likely topic, those who think em-AI less likely mostly criticize me for studying the wrong scenario. And continue to define topics of articles, conferences, special journal issues, etc. to exclude em-AI scenarios.

And this is how it tends to work in general in the world of ideas. Idea talkers tend to clump onto the topics that others have discussed lately, leaving topics outside the fashionable clumps with less attention relative to their importance. So if you are a thinker with the slack and independence to choose your own topics, an easy way to make disproportionate intellectual progress is to focus on neglected topics.

Of course most intellectuals already know this, and choose otherwise.

Added:  Never mind about effort less proportional than chances; Owen Cotton-Barratt reminded me that if value diminishes with log of effort, optimal scenario effort is proportional to probability.
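Cotton-Barratt’s point follows from a short optimization: if studying scenario i, which has probability p_i, yields expected value p_i log e_i from effort e_i, then under a fixed total effort budget the optimum puts effort exactly in proportion to probability. A sketch of the derivation:

```latex
\max_{e}\ \sum_i p_i \log e_i
\quad\text{s.t.}\quad \sum_i e_i = E .
% First-order (Lagrange) condition: p_i / e_i = \lambda for every i,
% so e_i = p_i / \lambda. The budget then gives
% \lambda = \bigl(\sum_i p_i\bigr)/E = 1/E, and hence
e_i^{*} = p_i\, E ,
% i.e. optimal effort on each scenario is proportional to its probability.
```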

Added 11Oct: Anders Sandberg weighs in.


Write To Say Stuff Worth Knowing

I had the following thought, and then went looking for others who had said it before. Wasn’t hard to find:

There are two types of writers, Schopenhauer once observed, those who write because they have something they have to say and those who write for the sake of writing.

If you’re young and you think you want to be a writer, chances are you are already in the second camp. And all the advice you’ll get from other people about writing only compounds this terrible impulse.

Write all the time, they’ll tell you. Write for your college newspaper. Get an MFA. Go to writer’s groups. Send query letters to agents.

What do they never say? Go do interesting things.

I was lucky enough to actually get this advice. .. A fair amount of aspiring writers email me about becoming a writer and I always say: Well, that’s your first mistake.

The problem is identifying as a writer. As though assembling words together is somehow its own activity. It isn’t. It’s a means to an end. And that end is always to say something, to speak some truth or reach someone outside yourself.

Deep down, you already know this. Take any good piece of writing, something that matters to you. Why is it good? Because of what it says. Because what the writer manages to communicate to you, their reader. It’s because of what’s within it, not how they wrote it.

No one ever reads something and says, “Well, I got absolutely nothing out of this and have no idea what any of this means but it sure is technically beautiful!” But they say the opposite all the time, they say “Goddamn, that’s good” to things with typos, poor grammar and simple diction ..

So if you want to be a writer, put “writing” on hold for a while. When you find something that is new and different and you can’t wait to share with the world, you’ll beat your fat hands against the keyboard until you get it out in one form or another. (more)

I’ll actually go much further: hold yourself to a far higher standard than merely having something you feel passionate about saying, which many readers will like. Instead, find a way to contribute to a lasting accumulation of knowledge on topics that matter.

Yes, you could weigh in on some standard topic of opinion, one where many have already stated their opinion, and where little progress seems possible. This might make you and your readers feel good. But your one vote will contribute only a tiny amount to long-term human understanding.

You’d do better to focus on a topic where opinions seem to change over time in substantial part due to arguments. Then you could contribute to our collective learning by declaring your support for particular arguments. In this case you’d be voting on which arguments to give more weight. But if many others vote on such arguments, you’d still only make a small fractional contribution. And that fraction might be smaller than you think, if future folks don’t bother to remember your vote.

Better to find a topic where humanity seems to be able to make intellectual progress via arguments, and then also to specialize in a particular subtopic, a subtopic about which few others write. If you can then get other influential writers in overlapping topic areas to read and be persuaded by your argument, you might contribute to a larger process whereby we all learn faster by usefully dividing up the task of learning about everything. You could do your part, and the rest of us could do our parts, and we could all learn together. That can be writing worth reading.


Talks Not About Info

You can often learn about your own world by first understanding some other world, and then asking if your world is more like that other world than you had realized. For example, I just attended WorldCon, the top annual science fiction convention, and patterns that I saw there more clearly also seem echoed in wider worlds.

At WorldCon, most of the speakers are science fiction authors, and the modal emotional tone of the audience is one of reverence. Attendees love science fiction, revere its authors, and seek excuses to rub elbows with them. But instead of just having social mixers, authors give speeches and sit on panels where they opine on many topics. When they opine on how to write science fiction, they are of course experts, but in fact they mostly prefer to opine on other topics. By presenting themselves as experts on a great many future, technical, cultural, and social topics, they help preserve the illusion that readers aren’t just reading science fiction for fun; they are also part of important larger conversations.

When science fiction books overlap with topics in space, physics, medicine, biology, or computer science, their authors often read up on those topics, and so can be substantially more informed than typical audience members. And on such topics actual experts will often be included on the agenda. Audiences may even be asked if any of them happen to have expertise on such a topic.

But the more that a topic leans social, and has moral or political associations, the less inclined authors are to read expert literatures on that topic, and the more they tend to just wing it and think for themselves, often on their feet. They less often add experts to the panel or seek experts in the audience. And relatively neutral analysis tends to be displaced by position taking – they find excuses to signal their social and political affiliations.

The general pattern here is: an audience has big reasons to affiliate with speakers, but prefers to pretend those speakers are experts on something, and they are just listening to learn about that thing. This is especially true on social topics. The illusion is exposed by facts like speakers not being chosen for knowing the most about a subject discussed, and those speakers not doing much homework. But enough audience members are ignorant of these facts to provide a sufficient fig leaf of cover to the others.

This same general pattern repeats all through the world of conferences and speeches. We tend to listen to talks and panels full of not just authors, but also generals, judges, politicians, CEOs, rich folks, athletes, and actors. Even when those are not the best informed, or even the most entertaining, speakers on a topic. And academic outlets tend to publish articles and books more for being impressive than for being informative. However, enough people are ignorant of these facts to let audiences pretend that they mainly listen to learn and get information, rather than to affiliate with the statusful.

Added 22Aug: We feel more strongly connected to people when we together visibly affirm our shared norms/values/morals. Which explains why speakers look for excuses to take positions.
