Tag Archives: Disagreement

Caplan on Age of Em

As many have noted, ours is an era of ideological polarization. On topics where there are strong emotions, we tend to gravitate to extremes, and are less interested in intermediate positions. Which is a problem for my book; while most see it as too weird, others mostly see it as not weird enough. A tech futurism minority expects to soon see very rapid progress in artificial intelligence and machine learning, and so sees brain emulations as too slow and inefficient compared to the super-intelligence they foresee. And the majority to whom that seems pretty crazy also see brain emulations as similarly crazy; they don’t care much if ems seem a bit less crazy.

At least future tech enthusiasts who think my book not weird enough are willing to write reviews to say so. But those who think my book too weird mostly stay silent; I’ve heard privately of many who were going to cover the book until they fully realized what it was about. So I thank my colleague Bryan Caplan for being willing to say what others won’t, in his critical review. His review is long, with ten criticisms. This response will also be long, going point by point.

Six of his ten objections seem to be mainly about my language. (His review is indented, and often contains book quotes; my replies are not.)


Age of Em Criticism

My book’s topic seems to me so obviously important that I figure a reader’s main question must be whether he can trust me to actually know something on it. As a result, potential readers should be especially interested to hear criticisms; where do reviewers think my book gets it wrong? And as the book draws on many disciplines, readers should be especially interested in expert criticism, i.e., reviewers who find fault in an area they know well. Let us consider the reviews so far.

Three reviews so far can be seen as “mainstream media.” At the Financial Times, journalist Sarah O’Connor calls the book “alluring” and “fascinating”, but notes that not everyone will accept the premise that ems are possible or “that current economic and social theories will hold in this strange new world.” However, the closest she gets to direct criticism is:

Some of the forecasts seem old-fashioned, like the notion that male ems will prefer females with “signs of nurturing inclinations and fertility, such as youthful good looks” while females will prefer males with “signs of wealth and status”.

At the Guardian, journalist Zoe Williams uses the book to direct readers to her critical question: “In a world without work, how do we distribute resources?” At Reason, journalist Ronald Bailey calls the book “fascinating”, and summarizes it in detail, but doesn’t otherwise evaluate it, other than to note that “other futurists have projected other pathways” that the future might take.

There are 2.5 reviews at widely read blogs. Economist Tyler Cowen likes the book, but cares less about its official topic than its indirect uses, such as a “Straussian commentary on the world we actually live in” and “A reminder of how strange everything is.” Economist Bryan Caplan has posted half of a review, on “What’s Right in Robin Hanson’s The Age of Em”; his other shoe has yet to drop.

Psychiatrist Scott Alexander really likes and highly recommends the book, though he worries that it is not weird enough, and he thinks I overstate my case on prior futurist accuracy. Alexander assigns low moral value to the scenario I describe, even though he sees it as full of happy complex creatures. He fears it will get even worse, leading to ems who are only ever focused on their particular work task, with no mind-wandering, breaks from work, or socializing.

There are also five reviews at other blogs. (There are also three reviews at Goodreads, and one more at Amazon, which don’t mention author expertise or offer field-specific criticisms.)

Futurist and computational neuroscientist Anders Sandberg calls the book a “very rich synthesis of many ideas with a high density of fascinating arguments,” but warns “most readers will disagree with large parts of it” and “many elements presented as uncontroversial will be highly controversial.” He himself only complains that I put in “too little effort bolstering the plausibility” of the basic idea of an emulation, a topic to which he has devoted much effort.

Education reformer Neerav Kingsland calls the book “worth reading” though he would rather I had written more fiction. He questions our ability to foresee the results of changes this big, and he questions my prediction of low wages: “Perhaps it would become taboo to replicate yourself, akin to teenage pregnancy?”

Private investor Peter McCluskey calls the book “quite valuable” though he notes my key assumptions could end up being wrong. He wishes I had estimated wages relative to subsistence more precisely, though he felt I was borderline overconfident overall, and thought I devoted too much attention to topics like swearing, relative to topics like democracy.

Economist Peter St Onge says “The pacing is fast, chock-full of interesting ideas to play with .. Hanson has done a fantastic job.” But he sees me as “too pessimistic” because the cost to run an em is very low compared to the cost to maintain a human today, and he just can’t see marginal product of human-like labor falling that low, no matter how many workers there are.

Physicist Richard Jones, in contrast to the above nine reviewers, criticizes just about everything but my physics. He has long criticized Eric Drexler’s efforts to apply principles of mechanical engineering to tiny chemical systems. On Age of Em, he says:

Mind uploading .. will not be possible any time soon .. The brain .. is not the product of design, it is the product of evolution, and for this reason we can’t expect there to be such a digital abstraction layer. .. It would need to incorporate a molecularly accurate model of brain development and plasticity. .. His argument is that our understanding of human nature and the operations of human societies .. is now sufficiently robust that .. meaningful predictions can be made about the character of the resulting post-human societies. I don’t find this enormously convincing. .. Hanson often is simply unable to make firm predictions; this is commendably even-handed, but somewhat undermines his broader argument. .. How do we know what forager values actually were? Very few forager societies survived in any form into historical times, .. and what we know about their values is mediated by the biases of the anthropologists and ethnographers that recorded them.

So according to Jones, we can’t trust anthropologists to describe foragers they’ve met, we can’t trust economics when tech changes society, and familiar design principles fail for understanding brains and tiny chemical systems. Apparently only his field, physics, can be trusted well outside current experience. In reply, I say I’d rather rely on experts in each field, relative to his generic skepticism. Brain scientists see familiar design principles as applying to brains, even when designed by evolution, economists see economics as applying to past and distant societies with different tech, and anthropologists think they can understand cultures they visit.

Regarding O’Connor’s concerns about old-fashioned mate preferences, I cited a literature on that topic, and regarding Alexander’s zero-leisure fears, the book cites a literature on maximum-productivity breaks and vacations. Regarding Kingsland’s and St Onge’s hopes for high wages, I’ll note that through most of history before the industrial era, taboos against having kids didn’t prevent marginal productivity from typically being very low.

So far I’d say that reviews give readers reasons to suspect my emphasis is at times off, but not strong reasons to fear that Age of Em is so wrong as to be not worth reading. But more reviews are yet to come.


Missing Engagement

On the surface, there seems to have been a big debate over the last few years on how fast automation will displace jobs over the next decade or so. Some have claimed very rapid displacement, much faster than we’ve seen in recent decades (or centuries). Others have been skeptical (like me here, here, here, and here).

On October 13, David Mindell, Professor at MIT of both Aeronautics and Astronautics, and also History of Engineering and Manufacturing, weighed in on this debate, publishing Our Robots, Ourselves: Robotics and the Myths of Autonomy:

If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation. ..

“There’s an idea that progress in robotics leads to full autonomy. That may be a valuable idea to guide research … but when automated and autonomous systems get into the real world, that’s not the direction they head. We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it.” (more)

In his book, Mindell expertly supports his position with a detailed review of the history of automation in planes, spacecraft and submarines. You might think that Mindell’s prestige, expertise, and detailed book on past automation rates and patterns would earn him a place in this debate on future rates of automation progress. Many of those who blurbed the book clearly think so:

“Mindell’s ingenious and profoundly original book will enlighten those who prophesy that robots will soon make us redundant.”—David Autor

“My thanks to the author for bringing scholarship and sanity to a debate which has run off into a magic la-la land in the popular press.”—Rodney Brooks

But looking over dozens of reviews of Mindell’s book in the 75 days since it was published, I find no thoughtful response from the other side! None. No one who expects rapid automation progress has bothered to even outline why they find Mindell’s arguments unpersuasive.

Perhaps this shows that people on the other side know Mindell’s arguments to be solid, making any response unpersuasive, and so they’d rather ignore him. Maybe they just don’t think the past is any guide to the future, at least in automation, making Mindell’s discussion of the past irrelevant to the debate. I’ve known people who think this way.

But perhaps a more plausible interpretation is that on subjects like this in our intellectual world, usually there just is no “debate”; there are just different sides who separately market their points of view. Just as in ordinary marketing, where firms usually pitch their products without mentioning competing products, intellectuals marketing points of view also usually ignore competing points of view. Instead of pointing out contrary arguments and rebutting them, intellectuals usually prefer to ignore contrary arguments.

This seems a sad state of affairs with respect to intellectual progress. But of course such progress is a public good, where individual contributions must trade a personal cost against a collective benefit, encouraging each of us to free-ride on the efforts of others. We might create intellectual institutions that better encourage more engagement with and response to contrary arguments, but unless these are global institutions others may prefer to free-ride and not contribute to local institutions.

You might think that academic norms of discourse are such global institutions encouraging engagement. And academics do give much lip service to that idea. But in fact it is mostly empty talk; academics don’t actually encourage much engagement and response beyond the narrow scope of prestigious folks in the same academic discipline.


Why Have Opinions?

I just surprised some people here at a conference by saying that I don’t have opinions on abortion or gun control. I have little use for such opinions, and so haven’t bothered to form them. Since that attitude seems to be unusual among my intellectual peers, let me explain myself.

I see four main kinds of reasons to have opinions on subjects:

  • Decisions – Sometimes I need to make concrete decisions where the best choice depends on particular key facts or values. In such cases I am forced to have opinions on those subjects, in order to make good decisions. I may well just adopt, without much reflection, the opinions of some standard expert source. I have to make a lot of decisions and don’t have much time to reflect. But even so, I must have an opinion. And my incentives here tend to be toward having true opinions.
  • Socializing – A wide range of topics come up when talking informally with others, and people tend to like you to express opinions on at least some substantial subset of those topics. They typically aren’t very happy if you explain that you just adopted the opinion of some standard expert source without reflection, and so we are encouraged to “think for ourselves” to generate such opinions. Here my incentives are to have opinions that others find interesting or loyal, which is less strongly correlated with truth (though the correlation isn’t zero).
  • Research – As a professional intellectual, I specialize in particular topics. On those topics I generate opinions together with detailed supporting justifications for those opinions. I am evaluated on the originality, persuasiveness, and impressiveness of these opinions and justifications. These incentives are somewhat more strongly, but still only somewhat, correlated with truth.
  • Exploration – I’m not sure what future topics to research, and so continually explore a space of related topics which seem like they might have the potential to become promising research areas for me. Part of that process of exploration involves generating tentative opinions and justifications. Here it is even less important that these opinions be true than that they help reveal interesting, neglected areas especially well-suited to my particular skills and styles.

Most topics that are appropriate for research have little in the way of personal decision impact. So intellectuals focus more on research reasons for such topics. Most intellectuals also socialize a lot, so they also generate opinions for social reasons. Alas most intellectuals generate these different types of opinions in very different ways. You can almost hear their mind gears shift when they switch from being careful on research topics to being sloppy on social topics. Most academics have a pretty narrow speciality area, which they know isn’t going to change much, so they do relatively little exploration that isn’t close to their specialty area.

Research opinions are my best contribution to the world, and so are where I should focus my altruistic efforts. (They also give my best chance for fame and glory.) So I try to put less weight on socializing reasons for my opinions, and more weight on the exploration reasons. As long as I see little prospect of my research going anywhere near the abortion or gun control topics, I won’t explore there much. Topics diagnostic of left vs. right ideological positions seem especially unlikely to be places where I could add something useful to what everyone else is saying. But I do explore a wide range of topics that seem plausibly related to areas in which I have specialized, or might specialize. I have specialized in far more different areas than have most academics. And I try to keep myself honest by looking for plausible decisions I might make related to all these topics, though that tends to be hard. If we had more prediction markets this could get much easier, but alas we do not.

Of course if you care less about research, and more about socializing, your priorities could easily differ from mine.


Take Origins Seriously

We have a strong tendency to believe what we were taught to believe. This is a serious problem when we were taught different things. How can we rationally have much confidence in the beliefs we were taught, if we know that others were taught to believe other things? In order to overcome this bias, we either need to find a way to later question our initial teachings so well that we eliminate this correlation between our beliefs and our early teachings, or we need to find strong arguments for why one should expect more accurate beliefs to come from the source of our personal teaching, arguments that should persuade people regardless of their teaching. These are both hard standards to meet.

We also have strong tendencies to acquire tastes. Many of the things we like we didn’t like initially, but came to like after a time. In foods, kids don’t initially like spice or bitterness, or meat, especially raw. Kids don’t initially like jogging or structured exercise, or cold showers, or fist fights, but many claim later to love such things. People find they love the kinds of music they grew up with more than other kinds. People who grow up with arranged marriages generally like them, while those who don’t are horrified. Many kids find the very idea of sex repellent, but later come to love it. Particular sex practices seem repellent or not depending on how one is exposed to them.

Now some change in tastes over time could be due to new expressions of hormones at different ages, and some can be the honest discovery of a long-term compatibility between one’s genetic nature and particular practices. But honestly, these just aren’t very plausible explanations for most of our acquired tastes. Instead, it seems that we are designed to acquire tastes according to which things seem high status, make us look good, are endorsed by our community, etc.

Now one doesn’t need to doubt culturally-acquired tastes in the same way one should doubt culturally-acquired beliefs. Once you’ve gone through the early acquiring process your tastes may really be genuine, in the sense of really making you happy when satisfied. But you do have to wonder if you could come to acquire new tastes. And even if you are too old for that, you have to wonder what kind of tastes new kids could acquire. There seem to be huge gains from choosing the kinds of tastes to have new kids acquire. If they’d be just as happy with such tastes later, why not get kids to acquire tastes for hard work, for well-paid work, or for products that are easier to make? For example, why not encourage a taste for common products, instead of for massive product variety?

The points I’m making are old, and often go under the label “cultural relativity.” This is sometimes summarized as saying that nothing is true or good, except relative to a culture. Which is of course just wrong. But that doesn’t mean there aren’t huge important issues here. The strong ability of cultures to influence our beliefs and tastes does force us to question our beliefs and tastes. But on the flip side, this strong effect offers the promise of big gains in both belief accuracy and happiness efficiency, if only we can think through this culture stuff well.


Disciplines As Contrarian Correlators

I’m often interested in subjects that fall between disciplines, or more accurately that intersect multiple disciplines. I’ve noticed that it tends to be harder to persuade people of claims in these areas, even when one is similarly conservative in basing arguments on standard accepted claims from relevant fields.

One explanation is that people realize that they can’t gain as much prestige from thinking about claims outside their main discipline, so they just don’t bother to think much about such claims. Instead they default to rejecting claims if they see any reason whatsoever to doubt them.

Another explanation is that people in field X more often accept the standard claims from field X than they accept the standard claims from any other field Y. And the further away in disciplinary space is Y, or the further down in the academic status hierarchy is Y, the less likely they are to accept a standard Y claim. So an argument based on claims from both X and Y is less likely to be accepted by X folks than a claim based only on claims from X.

A third explanation is that people in field X tend to learn and believe a newspaper version of field Y that differs from the expert version of field Y. So X folks tend to reject claims that are based on expert versions of Y claims, since they instead believe the differing newspaper versions. Thus a claim based on expert versions of both X and Y claims will be rejected by both X and Y folks.

These explanations all have a place. But a fourth explanation just occurred to me. Imagine that smart people who are interested in many topics tend to be contrarian. If they hear a standard claim of any sort, perhaps 1/8 to 1/3 of the time they will think of a reason why that claim might not be true, and decide to disagree with this standard claim.

So far, this contrarianism is a barrier to getting people to accept any claims based on more than a handful of other claims. If you present an argument based on five claims, and your audience tends to randomly reject more than one fifth of claims, then most of your audience will reject your claim. But let’s add one more element: correlations within disciplines.

Assume that the process of educating someone to become a member of discipline X tends to induce a correlation in contrarian tendencies. Instead of independently accepting or rejecting the claims that they hear, they see claims in their discipline X as coming in packages to be accepted or rejected together. Some of them reject those packages and leave X for other places. But the ones who haven’t rejected them accept them as packages, and so are open to arguments that depend on many parts of those packages.

If people who learn area X accept X claims as packages, but evaluate Y claims individually, then they will be less willing to accept claims based on many Y claims. To a lesser extent, they also reject claims based on some Y claims and some X claims.
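The acceptance arithmetic above can be sketched in a small simulation. This is my own toy model, not from the post; the five-claim argument, the 1/5 per-claim rejection rate, and the function names are all illustrative assumptions:

```python
import random

def accepts_argument(n_claims, p_reject, package=False):
    """Does one random audience member accept an argument built on n_claims?

    package=False: each claim is evaluated independently, as with claims
    from an outside field Y. package=True: all claims are treated as one
    bundle accepted or rejected together, as within one's own field X.
    """
    if package:
        return random.random() > p_reject  # one roll for the whole bundle
    return all(random.random() > p_reject for _ in range(n_claims))

def acceptance_rate(n_claims, p_reject, package, trials=100_000):
    """Fraction of a simulated audience that accepts the argument."""
    hits = sum(accepts_argument(n_claims, p_reject, package)
               for _ in range(trials))
    return hits / trials

random.seed(0)
# Independent evaluation: (1 - 1/5)**5 ≈ 0.33, so most of the audience rejects.
print(acceptance_rate(5, 0.2, package=False))
# Package evaluation: 1 - 1/5 = 0.8 of the audience accepts.
print(acceptance_rate(5, 0.2, package=True))
```

With the same per-claim rejection rate, bundling the claims into a package more than doubles the share of the audience that accepts a five-claim argument, matching the disciplinary asymmetry described above.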

Note that none of these explanations suggest that these claims are actually false more often; they are just rejected more.


Show Outside Critics

Worried that you might be wrong? That you might be wrong because you are biased? You might think that your best response is to study different kinds of biases, so that you can try to correct your own biases. And yes, that can help sometimes. But overall, I don’t think it helps much. The vast depths of your mind are quite capable of tricking you into thinking you are overcoming biases, when you are doing no such thing.

A more robust solution is to seek motivated and capable critics. Real humans who have incentives to find and explain flaws in your analysis. They can more reliably find your biases, and force you to hear about them. This is of course an ancient idea. The Vatican has long had “devil’s advocates”, and many other organizations regularly assign critics to evaluate presented arguments. For example, academic conferences often assign “discussants” tasked with finding flaws in talks, and journals assign referees to criticize submitted papers.

Since this idea is so ancient, you might think that the people who talk the most about trying to overcome bias would apply this principle far more often than do others. But from what I’ve seen, you’d be wrong.

Oh, almost everyone circulates drafts among close associates for friendly criticism. But that criticism is mostly directed toward avoiding looking bad when they present to a wider audience. Which isn’t at all the same as making sure they are right. That is, friendly local criticism isn’t usually directed at trying to show a wider audience flaws in your arguments. If your audience won’t notice a flaw, your friendly local critics have little incentive to point it out.

If your audience cared about flaws in your arguments, they’d prefer to hear you in a context where they can expect to hear motivated capable outside critics point out flaws. Not your close associates or friends, or people from shared institutions via which you could punish them for overly effective criticism. Then when the flaws your audience hears about are weak, they can have more confidence that your arguments are strong.

And even if your audience only cared about the appearance of caring about flaws in your argument, they’d still want to hear you matched with apparently motivated capable critics. Or at least have their associates hear that such matching happens. Critics would likely be less motivated and capable in this case, but at least there’d be a fig leaf that looked like good outside critics matched with your presented arguments.

So when you see people presenting arguments without even a fig leaf of the appearance of outside critics being matched with presented arguments, you can reasonably conclude that this audience doesn’t really care much about appearing to care about hidden flaws in your argument. And if you are the one presenting arguments, and if you didn’t try to ensure available critics, then others can reasonably conclude that you don’t care much about persuading your audience that your argument lacks hidden flaws.

Now this criticism approach is often muddled by the question of which kinds of critics are in fact motivated and capable. So often “critics” are used who don’t in fact have much relevant expertise, or who have incentives that are opaque to the audience. And prediction markets can be seen as a robust solution to this problem. Every bet is an interaction between two sides who each implicitly criticize the other. Both are clearly motivated to be accurate, and have clear incentives to only participate if they are capable. Of course prediction market critics typically don’t give as much detail to explain the flaws they see. But they do make clear that they see a flaw.


Me At NIPS Workshop

Tomorrow I’ll present on prediction markets and disagreement, in Montreal at the NIPS Workshop on Transactional Machine Learning and E-Commerce. A video will be available later.


The Puzzle Of Persistent Praise

We often praise and criticize people for the things they do. And while we have many kinds of praise, one very common type (which I focus on in this post) seems to send the message “what you did was good, and it would be good if more of that sort of thing were done.” (Substitute “bad” for “good” to get the matching critical message.)

Now if it would be good to have more of some act, then that act is a good candidate for something to subsidize more. And if most people agreed that this sort of act deserved more subsidy, then politicians should be tempted to run for office on the platform that they will increase the actual subsidy given to that kind of act. After all, if we want more of some kind of act, why don’t we try to better reward those acts? And so good acts shouldn’t long remain with an insufficient subsidy. Or bad acts with an insufficient tax.

But in fact we seem to have big categories of acts which we consistently praise for being good, and where this situation persists for decades or centuries. Think charity, innovation, or artistic or sport achievement. Our political systems do not generate much political pressure to increase the subsidies for such things. Subsidy-increasing proposals are not even common issues in elections. Similarly, large categories of acts are consistently criticized, yet few politicians run on platforms proposing to increase taxes on such acts.

My best interpretation of this situation is that while our words of praise give the impression that we think that most people would agree that the acts we praise are good, and should be more common, we don’t really believe this. Either we think that the acts signal impressive or praise-worthy features, but shouldn’t be more common, or we think such acts should be more common, but we also see large opposing political coalitions who disagree with our assessment.

That is, my best guess is that when we look like we are praising acts for promoting a commonly accepted good, we are usually really praising impressiveness, or we are joining in a partisan battle on what should be seen as good.

Because my explanation is cynical, many people count it as “extraordinary”, and think powerful extraordinary evidence must be mustered before one can reasonably suggest that it is plausible. In contrast, the usual self-serving idealistic explanations people give for their behavior are ordinary, and therefore can be accepted on face value without much evidence at all being offered in their defense. People get mad at me for even suggesting cynical theories in short blog posts, where large masses of extraordinary evidences have not been mustered. I greatly disagree with this common stacking of the deck against cynical theories.

Even so, let us consider some seven other possible explanations of this puzzle of persistent praise (and criticism). And in the process make what could have been a short blog post considerably longer.


Beware Status Arrogance

Imagine that you are an expert in field A, and a subject in field B comes up at a party. You know that there may be others at the party who are expert in field B. How reluctant does this make you to openly speculate about this topic? Do you clam up and only cautiously express safe opinions, or do you toss out the thoughts that pop into your head as if you knew as much about the subject as anyone?

If you are like most people, the relative status of fields A and B will likely influence your choice. If the other field has higher status than yours, you are more likely to be cautious, while if the other field has lower status than yours, you are more likely to speculate freely. In both cases your subconscious will have made good guesses about the likely status consequences to you if an expert in B were to speak up and challenge your speculations. At some level you would know that others at the party are likely to back whomever has the higher status, even if the subject is within the other person’s area of expertise.

But while you are likely to be relatively safe from status losses, you should know that you are not safe from being wrong. When people from different fields argue about something within one of their areas of expertise, that expert is usually right, even when the other field has higher status. Yes people from your field may on average be smarter and harder-working, and your field may have contributed more to human progress. Even so, people who’ve studied more about the details of something usually know more about it.
