Tag Archives: Academia

Dream Themes

The following are the twenty most frequent dream themes recalled by 1181 Canadian and 1186 Hong Kong college freshmen (most frequent first):


Note that these vary greatly in realism. Some are common events, and some are rare events that were important for our ancestors. Some are about events that never actually happen: flying, being a child again, person now dead as alive, being in a story. Many of these are tied to rare extremes, especially negative extremes. The themes of arriving too late and failing exams are strikingly modern, and suggest that we industrial folks are often quite traumatized by our era’s event timing and school exam requirements.


Missing Credentials

The typical modern credential (i.e., a standard worker-quality sign of widely understood significance) is based on a narrow written test of knowledge, given early in one’s career on a pre-announced date at a quiet location. In this test, there is a list of questions to which one gives answers, answers then graded by independent judges who supposedly look only at the answers, and don’t take into account other things they know about the testee. In this post I want to point out that a much larger space of credentials is possible.

For example, you could be evaluated on actual products and contributions, based on your efforts over a long period, instead of being evaluated on short tests. You could be tested via tasks you must perform, instead of questions you must answer. After all, mostly we want to know what workers can do, not what questions they can answer. Since much of real question answering in the world is done verbally, test question-answering could also be done verbally, instead of in writing. And it could be done with frequent distractions and interruptions, as with most real question-answering.

However expressed, judges could take your first response as a starting point to ask you more questions (or give you more tasks), and dig deeper into your understanding. Judges could know you well, and choose questions specifically for you, and interpret your answers given all they know about you. This is, after all, closer to how most question-answering in the world actually goes.

Tests could be done at random days and times, and spread all through your career. Tests might be disguised as ordinary interactions, and not revealed to be tests until afterward. These approaches could discourage cramming for tests and other strategies that make you good only at tests, and not so much at remembering or using your knowledge at other times.

Finally, you could be tested on your ability to integrate knowledge from a wide range of topic areas, instead of on your knowledge of a narrow topic area. Yes you could show that you know many areas via passing tests for many areas, but that won’t show that you have integrated these diverse areas usefully together in your mind.

Of course I’m not saying that these variations are never explored, just that they are used much less often than the standard credential test. This vast space of possible credentials suggests that a lot of innovation may be possible, and I’m naturally especially interested in helping to develop better credentials for abilities that I have which are neglected by the usual credentials. For example, I’d love to see a polymath credential, for those who can integrate understanding of many fields, and a conversation credential, on one’s ability to get to the bottom of topics via a back & forth interaction.

The narrow range of most credentials compared to the vast possible space also seems to confirm Bryan Caplan’s emphasis on school as emphasizing and screening for conformity. Yes, the usual kinds of tests can often be cheaper in many ways, but the lack of much variation even when credentials are very important, and so worth spending a bit more on, suggests that conformity is also an issue. It really does seem that people see non-standard tests as illicit in many ways.

The dominance of the usual credential test can also be seen as a way our society is unfairly dominated by the sort of writing-focused book-smart narrowly-skilled people who happen to be especially good at such tests. These people are in fact usually in charge of designing such tests.


Chip Away At Hard Problems

Catherine: And your own research.
Harold: Such as it is.
C: What’s wrong with it?
H: The big ideas aren’t there.
C: Well, it’s not about big ideas. It’s… It’s work. You got to chip away at a problem.
H: That’s not what your dad did.
C: I think it was, in a way. I mean, he’d attack a problem from the side, you know, from some weird angle. Sneak up on it, grind away at it.
(Lines from movie Proof; Catherine is a famous mathematician’s daughter.)

In math, plausibility arguments don’t count for much; proofs are required. So math folks have little choice but to chip away at hard problems, seeking weird angles where indirect progress may be possible.

Outside of math, however, we usually have many possible methods of study and analysis. And a key tradeoff in our methods is between ease and directness on the one hand, and robustness and rigor on the other. At one extreme, you can just ask your intuition to quickly form a judgement that’s directly on topic. At the other extreme, you can try to prove math theorems. In between these extremes, informal conversation is more direct, while statistical inference is more rigorous.

When you need to make an immediate decision fast, direct easy methods look great. But when many varied people want to share an analysis process over a longer time period, more robust and rigorous methods start to look better. Easy direct methods tend to be more uncertain and context dependent, and so don’t aggregate as well. Distant others find it harder to understand your claims and reasoning, and to judge their reliability. So distant others tend more to redo such analysis themselves rather than building on your analysis.

One of the most common ways that wannabe academics fail is by failing to sufficiently focus on a few topics of interest to academia. Many of them become amateur intellectuals, people who think and write more as a hobby, and less to gain professional rewards via institutions like academia, media, and business. Such amateurs are often just as smart and hard-working as professionals, and they can more directly address the topics that interest them. Professionals, in contrast, must specialize more, have less freedom to pick topics, and must try harder to impress others, which encourages the use of more difficult robust/rigorous methods.

You might think their added freedom would result in amateurs contributing proportionally more to intellectual progress, but in fact they contribute less. Yes, amateurs can and do make more initial progress when new topics arise suddenly, far from topics where established expert institutions have specialized. But then over time amateurs blow their lead by focusing less and relying on easier, more direct methods. They rely more on informal conversation as an analysis method, they prefer personal connections over open competitions in choosing people, and they rely more on a perceived consensus among a smaller group of fellow enthusiasts. As a result, their contributions just don’t appeal as widely or for as long.

I must admit that compared to most academics near me, I’ve leaned more toward amateur styles. That is, I’ve used my own judgement more on topics, and I’ve been willing to use less formal methods. I clearly see the optimum as somewhere between the typical amateur and academic styles. But even so, I’m very conscious of trying to avoid typical amateur errors.

So instead of just trying to directly address what seem the most important topics, I instead look for weird angles to contribute less directly via more reliable/robust methods. I have great patience for revisiting the few biggest questions, not to see who agrees with me, but to search for new angles at which one might chip away.

I want each thing I say to be relatively clear, and so understandable from a wide range of cultural and intellectual contexts, and to be either a pretty obvious no-brainer, or based on a transparent, easy-to-explain argument. This is partly why I try to avoid arguing values. Even so, I expect that the most likely reason I will fail is that I’ve allowed myself to move too far in the amateur direction.


Chronicle Review Profile

I’m deeply honored to be the subject of a cover profile this week in The Chronicle Review:


By David Wescott, the profile is titled Is This Economist Too Far Ahead of His Time?, October 16, 2016.

In academic journal articles where the author has an intended answer to a yes or no question, that answer is more often yes, and I think that applies here as well. The profile includes a lot about my book The Age of Em on a far future, and its title suggests that anyone who’d study a far future must be too far ahead of their time. But, when else would one study the far future other than well ahead of time? It seems to me that even in a rational world where everyone was of their time, some people would study other times. But perhaps the implied message is that we don’t live in such a world.

I’m honored to have been profiled, but broad-ranging profiles tend to be imprecisely impressionistic. I think David Wescott did a good job overall, but since these impressions are about me, I’ll bother to comment on some (and signal my taste for precision). Here goes.

You inhabit a robotic body, and you stand roughly two millimeters tall. This is the world Robin Hanson is sketching out to a room of baffled undergraduates at George Mason University on a bright April morning.

Honestly, “baffled” is how most undergrads look to most professors during lectures.

Hanson is .. determined to promote his theories in an academy he finds deeply flawed; a doggedly rational thinker prone to intentionally provocative ideas that test the limits of what typically passes as scholarship.

Not sure I’m any more determined to self-promote than a typical academic. I try to be rational, but of course I fail. I seek the possibility of new useful info, and so use the surprise of a claim as a sign of its interestingness. Surprise correlates with “provocative”, and my innate social-cluelessness means I’ll neglect the usual social signs to “avoid this topic!” I question if I’m “intentionally provocative” beyond these two factors.

Hanson, deeply skeptical of conventional intellectual discourse,

I’m deeply skeptical of all discourse, intellectual or not, conventional or not.

At Caltech he found that economists based their ideas on simple models, which worked well in experiments but often failed to capture the complexities of the real world.

That is true of simple models in all fields, not just economics, and it is a feature not a bug. Models can be understood, while the full complexity of reality cannot.

But out of 3600 words, that’s all I have to correct, so good job David Wescott.


Idea Talkers Clump

I keep encountering people who are mad at me, indignant even, for studying the wrong scenario. While my book assumes that brain emulations are the first kind of broad human-level AI, they expect more familiar AI, based on explicitly-coded algorithms, to be first.

Now the prospect of human-level ordinary AI is definitely what more people are talking about today – the topic is in fashion. There are AI companies, demos, conferences, media articles, and more serious intellectual discussion. In fact, I’d estimate that there is now at least one hundred times as much attention given to the scenario of human-level AI based on explicit coding (including machine learning code) than to brain emulations.

But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first. In fact, I’ll happily take bets at a factor of ten. You pay me $1000 if em-AI comes first, and I pay you $100 if other AI comes first.
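As a sanity check on those stakes, here is a minimal sketch of the break-even arithmetic behind accepting such a bet. The function name is my own illustration, not anything from the post; only the dollar figures come from it.

```python
def breakeven_prob(loss_if_em_first, gain_if_other_first):
    """Probability of em-AI-first at which accepting the bet is exactly fair.

    Accepting means you pay `loss_if_em_first` dollars if em-based AI comes
    first, and receive `gain_if_other_first` dollars otherwise. The bet is
    fair when expected loss equals expected gain.
    """
    return gain_if_other_first / (gain_if_other_first + loss_if_em_first)

# With the stakes above ($1000 vs $100), accepting is rational only if you
# put the chance of em-AI-first below 100/1100, i.e. about 9.1%.
p = breakeven_prob(1000, 100)
```

So taking the other side of this bet amounts to claiming that ordinary AI first is more than ten times as likely as em-AI first, which is exactly the factor-of-ten odds on offer.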

In addition, due to diminishing returns, intellectual attention to future scenarios should probably be spread out more evenly than are probabilities. The first efforts to study each scenario can pick the low hanging fruit to make faster progress. In contrast, after many have worked on a scenario for a while there is less value to be gained from the next marginal effort on that scenario.

Yes, sometimes there can be scale economies to work on a topic; enough people need to do enough work to pass a critical threshold of productivity. But I see little evidence of that here, and much evidence to the contrary. Even within the scope of working on my book I saw sharply diminishing returns to continued efforts. So even if em-based AI had only 1% the chance of the other scenario, we’d want much more than 1% of thinkers to study it. At least we would if our goal were better understanding.

But of course that is not usually the main goal of individual thinkers. We are more eager to jump on bandwagons than to follow roads less traveled. All those fellow travelers validate us and our judgement. We prefer to join and defend a big tribe against outsiders, especially smaller weaker outsiders.

So instead of praising my attention to a neglected if less-likely topic, those who think em-AI less likely mostly criticize me for studying the wrong scenario. And continue to define topics of articles, conferences, special journal issues, etc. to exclude em-AI scenarios.

And this is how it tends to work in general in the world of ideas. Idea talkers tend to clump onto the topics that others have discussed lately, leaving topics outside the fashionable clumps with less attention relative to their importance. So if you are a thinker with the slack and independence to choose your own topics, an easy way to make disproportionate intellectual progress is to focus on neglected topics.

Of course most intellectuals already know this, and choose otherwise.

Added: Never mind the suggestion above that effort should be spread less proportionally than chances; Owen Cotton-Barratt reminded me that if value diminishes with the log of effort, optimal scenario effort is proportional to probability.
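Cotton-Barratt’s point has a one-line derivation, under the stated log-value assumption. Suppose a total effort budget $E$ is split across scenarios with probabilities $p_i$, and the value of work on a scenario grows with the log of the effort spent on it:

```latex
\max_{e_1,\dots,e_n} \; \sum_i p_i \log e_i
\quad \text{s.t.} \quad \sum_i e_i = E .
```

The first-order conditions give $p_i / e_i = \lambda$ for every scenario $i$, so $e_i = p_i / \lambda$; summing over $i$ and using $\sum_i p_i = 1$ yields $\lambda = 1/E$, and hence $e_i = p_i E$: optimal effort is exactly proportional to probability.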

Added 11Oct: Anders Sandberg weighs in.


Write To Say Stuff Worth Knowing

I had the following thought, and then went looking for others who had said it before. Wasn’t hard to find:

There are two types of writers, Schopenhauer once observed, those who write because they have something they have to say and those who write for the sake of writing.

If you’re young and you think you want to be a writer, chances are you are already in the second camp. And all the advice you’ll get from other people about writing only compounds this terrible impulse.

Write all the time, they’ll tell you. Write for your college newspaper. Get an MFA. Go to writer’s groups. Send query letters to agents.

What do they never say? Go do interesting things.

I was lucky enough to actually get this advice. .. A fair amount of aspiring writers email me about becoming a writer and I always say: Well, that’s your first mistake.

The problem is identifying as a writer. As though assembling words together is somehow its own activity. It isn’t. It’s a means to an end. And that end is always to say something, to speak some truth or reach someone outside yourself.

Deep down, you already know this. Take any good piece of writing, something that matters to you. Why is it good? Because of what it says. Because what the writer manages to communicate to you, their reader. It’s because of what’s within it, not how they wrote it.

No one ever reads something and says, “Well, I got absolutely nothing out of this and have no idea what any of this means but it sure is technically beautiful!” But they say the opposite all the time, they say “Goddamn, that’s good” to things with typos, poor grammar and simple diction ..

So if you want to be a writer, put “writing” on hold for a while. When you find something that is new and different and you can’t wait to share with the world, you’ll beat your fat hands against the keyboard until you get it out in one form or another. (more)

I’ll actually go much further: hold yourself to a far higher standard than merely having something you feel passionate about saying, which many readers will like. Instead, find a way to contribute to a lasting accumulation of knowledge on topics that matter.

Yes, you could weigh in on some standard topic of opinion, one where many have already stated their opinion, and where little progress seems possible. This might make you and your readers feel good. But your one vote will contribute only a tiny amount to long-term human understanding.

You’d do better to focus on a topic where opinions seem to change over time in substantial part due to arguments. Then you could contribute to our collective learning by declaring your support for particular arguments. In this case you’d be voting on which arguments to give more weight. But if many others vote on such arguments, you’d still only make a small fractional contribution. And that fraction might be smaller than you think, if future folks don’t bother to remember your vote.

Better to find a topic where humanity seems to be able to make intellectual progress via arguments, and then also to specialize in a particular subtopic, a subtopic about which few others write. If you can then get other influential writers in overlapping topic areas to read and be persuaded by your argument, you might contribute to a larger process whereby we all learn faster by usefully dividing up the task of learning about everything. You could do your part, and the rest of us could do our parts, and we could all learn together. That can be writing worth reading.


Talks Not About Info

You can often learn about your own world by first understanding some other world, and then asking if your world is more like that other world than you had realized. For example, I just attended WorldCon, the top annual science fiction convention, and patterns that I saw there more clearly also seem echoed in wider worlds.

At WorldCon, most of the speakers are science fiction authors, and the modal emotional tone of the audience is one of reverence. Attendees love science fiction, revere its authors, and seek excuses to rub elbows with them. But instead of just having social mixers, authors give speeches and sit on panels where they opine on many topics. When they opine on how to write science fiction, they are of course experts, but in fact they mostly prefer to opine on other topics. By presenting themselves as experts on a great many future, technical, cultural, and social topics, they help preserve the illusion that readers aren’t just reading science fiction for fun; they are also part of important larger conversations.

When science fiction books overlap with topics in space, physics, medicine, biology, or computer science, their authors often read up on those topics, and so can be substantially more informed than typical audience members. And on such topics actual experts will often be included on the agenda. Audiences may even be asked if any of them happen to have expertise on such a topic.

But the more that a topic leans social, and has moral or political associations, the less inclined authors are to read expert literatures on that topic, and the more they tend to just wing it and think for themselves, often on their feet. They less often add experts to the panel or seek experts in the audience. And relatively neutral analysis tends to be displaced by position taking – they find excuses to signal their social and political affiliations.

The general pattern here is: an audience has big reasons to affiliate with speakers, but prefers to pretend those speakers are experts on something, and they are just listening to learn about that thing. This is especially true on social topics. The illusion is exposed by facts like speakers not being chosen for knowing the most about a subject discussed, and those speakers not doing much homework. But enough audience members are ignorant of these facts to provide a sufficient fig leaf of cover to the others.

This same general pattern repeats all through the world of conferences and speeches. We tend to listen to talks and panels full of not just authors, but also generals, judges, politicians, CEOs, rich folks, athletes, and actors. Even when those are not the best informed, or even the most entertaining, speakers on a topic. And academic outlets tend to publish articles and books more for being impressive than for being informative. However, enough people are ignorant of these facts to let audiences pretend that they mainly listen to learn and get information, rather than to affiliate with the statusful.

Added 22Aug: We feel more strongly connected to people when we together visibly affirm our shared norms/values/morals. Which explains why speakers look for excuses to take positions.


Sycophantry Masquerading As Bargains

The Catholic Church used to sell “indulgences”; you gave them cash and they gave you the assurance that God would let you sin without punishment. If you are at all suspicious about whether this church can actually deliver on their claim, this seems a bad deal. You give them something tangible and clearly valuable, and they give you a vague promise on something you can’t see, and can’t even check if anyone has ever received.

We make similar bad “bargains” with a few kinds of workers, to whom we grant extraordinary privileges of “self-regulation.” That is, we let certain “professionals” run their own organizations, which tell us how their job is to be done, and who can do it. In some areas, such as with doctors, these judgements are enforced by law: you can only buy medical services approved by doctors, and can only buy such services from those who the official medical organizations label “doctors.” In other areas, such as with academics, these judgements are more enforced by our strong eagerness to associate with high-prestige professionals: most everyone just accepts the word of key academic organizations on who is a good academic.

There is a literature which frames this as a “grand bargain”. The philosopher Donald Schön says:

In return for access to their extraordinary knowledge in matters of great human importance, society has granted them [professionals] a mandate for social control in their fields of specialization, a high degree of autonomy in their practice, and a license to determine who shall assume the mantle of professional authority.

In their book The Future of the Professions: How Technology Will Transform the Work of Human Experts, Richard and Daniel Susskind elaborate:

In acknowledgement of and in return for their expertise, experience, and judgement, which they are expected to apply in delivering affordable, accessible, up-to-date, reassuring, and reliable services, and on the understanding that they will curate and update their knowledge and methods, train their members, set and enforce standards for the quality of their work, and that they will only admit appropriately qualified individuals into their ranks, and that they will always act honestly, in good faith, putting the interests of clients ahead of their own, we (society) place our trust in the professions in granting them exclusivity over a wide range of socially significant services and activities, by paying them a fair wage, by conferring upon them independence, autonomy, rights of self-determination, and by according them respect and status.

Notice how in this supposed bargain, what we give the professionals is concrete and clearly valuable, while what they give us (over what we’d get without the deal) is vague and very hard for us to check. Like an indulgence. The Susskinds claim that while this bargain has been a good deal so far, we will soon cancel it:

We predict that increasingly capable machines, operating on their own or with non-specialist users, will take on many of the tasks that have been the historic preserve of the professions. We anticipate an ‘incremental transformation’ in the way that we produce and distribute expertise in society. This will lead eventually to a dismantling of the traditional professions.

This seems seriously mistaken to me. There is actually no bargain, there is just the rest of us submitting to professionals’ prestige. Cheaper yet outcome-effective substitutes to expensive professionals have long been physically available, and yet we have mostly not chosen those substitutes due to our eagerness to affiliate with prestigious professionals. We don’t choose nurses who can do primary care as well as doctors, and we don’t watch videos of the best professors from which we could learn as much as from attending typical lectures in person. And we aren’t interested in outcome track records for our lawyers. The existence of even more such future substitutes won’t change this situation much.


Missing Engagement

On the surface, there seems to have been a big debate over the last few years on how fast automation will displace jobs over the next decade or so. Some have claimed very rapid displacement, much faster than we’ve seen in recent decades (or centuries). Others have been skeptical (like me here, here, here, and here).

On October 13, David Mindell, Professor at MIT of both Aeronautics and Astronautics, and also History of Engineering and Manufacturing, weighed in on this debate, publishing Our Robots, Ourselves: Robotics and the Myths of Autonomy:

If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation. ..

“There’s an idea that progress in robotics leads to full autonomy. That may be a valuable idea to guide research … but when automated and autonomous systems get into the real world, that’s not the direction they head. We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it.” (more)

In his book, Mindell expertly supports his position with a detailed review of the history of automation in planes, spacecraft, and submarines. You might think that Mindell’s prestige, expertise, and detailed book on past automation rates and patterns would earn him a place in this debate on future rates of automation progress. Many of those who blurbed the book clearly think so:

“Mindell’s ingenious and profoundly original book will enlighten those who prophesy that robots will soon make us redundant.”—David Autor

“My thanks to the author for bringing scholarship and sanity to a debate which has run off into a magic la-la land in the popular press.”—Rodney Brooks

But looking over dozens of reviews of Mindell’s book in the 75 days since it was published, I find no thoughtful response from the other side! None. No one who expects rapid automation progress has bothered to even outline why they find Mindell’s arguments unpersuasive.

Perhaps this shows that people on the other side know Mindell’s arguments to be solid, making any response unpersuasive, and so they’d rather ignore him. Maybe they just don’t think the past is any guide to the future, at least in automation, making Mindell’s discussion of the past irrelevant to the debate. I’ve known people who think this way.

But perhaps a more plausible interpretation is that on subjects like this in our intellectual world, usually there just is no “debate”; there are just different sides who separately market their points of view. Just as in ordinary marketing, where firms usually pitch their products without mentioning competing products, intellectuals marketing points of view also usually ignore competing points of view. Instead of pointing out contrary arguments and rebutting them, intellectuals usually prefer to ignore contrary arguments.

This seems a sad state of affairs with respect to intellectual progress. But of course such progress is a public good, where individual contributions must trade a personal cost against a collective benefit, encouraging each of us to free-ride on the efforts of others. We might create intellectual institutions that better encourage more engagement with and response to contrary arguments, but unless these are global institutions others may prefer to free-ride and not contribute to local institutions.

You might think that academic norms of discourse are such global institutions encouraging engagement. And academics do give much lip service to that idea. But in fact it is mostly empty talk; academics don’t actually encourage much engagement and response beyond the narrow scope of prestigious folks in the same academic discipline.


Could Gambling Save Psychology?

A new PNAS paper:

Prediction markets set up to estimate the reproducibility of 44 studies published in prominent psychology journals and replicated in The Reproducibility Project: Psychology predict the outcomes of the replications well and outperform a survey of individual forecasts. … Hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%). … Prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications. (more; see also coverage at 538, Atlantic, Science, Gelman)

We’ve had enough experiments with prediction markets over the years, both lab and field experiments, to not be at all surprised by these findings of calibration and superior accuracy. If so, you might ask: what is the intellectual contribution of this paper?

When one is trying to persuade groups to try prediction markets, one encounters consistent skepticism about experiment data that is not on topics very close to the proposed topics. So one value of this new data is to help persuade academic psychologists to use prediction markets to forecast lab experiment replications. Of course for this purpose the key question is whether enough academic psychologists were close enough to the edge of making such markets a continuing practice that it was worth the cost of a demonstration project to create closely related data, and so push them over the edge.

I expect that most ordinary academic psychologists need stronger incentives than personal curiosity to participate often enough in prediction markets on whether key psychology results will be replicated (conditional on such replication being attempted). Such additional incentives could come from:

  1. direct monetary subsidies for market trading, such as via subsidized market makers,
  2. traders with better-than-average trading records bragging about it on their vitae, and getting hired etc. more because of that, or
  3. prediction market prices influencing key decisions such as what articles get published where, who gets what grants, or who gets what jobs.
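The first option above mentions subsidized market makers; the canonical design here is the logarithmic market scoring rule (LMSR). The following is a minimal sketch, not a production implementation; the class and parameter names are my own, and the liquidity parameter `b` is the sponsor’s subsidy knob, bounding the sponsor’s worst-case loss at `b * ln(n)` for `n` outcomes.

```python
import math

class LMSRMarketMaker:
    """Logarithmic market scoring rule (LMSR) automated market maker.

    Traders buy outcome shares from the market maker; each share pays $1
    if its outcome occurs. Prices move with the net shares sold, and can
    be read directly as the market's probability estimates.
    """

    def __init__(self, n_outcomes, b=100.0):
        self.b = b                      # liquidity / subsidy parameter
        self.q = [0.0] * n_outcomes     # net shares sold of each outcome

    def cost(self, q):
        """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, i):
        """Current price of outcome i; prices across outcomes sum to 1."""
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / z

    def buy(self, i, shares):
        """Charge the cost-function difference for buying `shares` of i."""
        new_q = list(self.q)
        new_q[i] += shares
        fee = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return fee
```

A journal could, for instance, seed one such two-outcome market per submitted empirical paper on “will the main result replicate, conditional on an attempt,” letting authors, rivals, and outsiders trade against the market maker, with the sponsor’s subsidy paying for the information the prices reveal.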

For example, imagine that one or more top psychology journals used prediction market chances that an empirical paper’s main result(s) would be confirmed (conditional on an attempt) as part of deciding whether to publish that paper. In this case the authors of a paper and their rivals would have incentives to trade in such markets, and others could be enticed to trade if they expected trades by insiders and rivals alone to produce biased estimates. This seems a self-reinforcing equilibrium; if good people think hard before participating in such markets, others could see those market prices as deserving of attention and deference, including in the journal review process.

However, the existing equilibrium also seems possible, where there are only a few small markets on such topics off to the side, markets that few pay much attention to and where there is little in the way of resources or status to be won. This equilibrium arguably results in less intellectual progress for any given level of research funding, but of course progress-inefficient academic equilibria are quite common.

Bottom line: someone is going to have to pony up some substantial scarce academic resources to fund an attempt to move this part of academia to a better equilibrium. If whoever funded this study didn’t plan on funding this next step, I could have told them ahead of time that they were mostly wasting their money in funding this study. This next move won’t happen without a push.
