Tag Archives: Disagreement

When Does Evidence Win?

Consider a random area of intellectual inquiry, and a random intellectual who enters this area. When this person first arrives, a few different points of view seem worthy of consideration in this area. This person then becomes expert enough to favor one of these views. Then over the following years and decades the intellectual world comes to more strongly favor one of these views, relative to the others. My key question is: in what situations do such earlier arrivals, on average, tend to approve of this newly favored position?

Now there will be many cases where favoring a point of view helps people to be seen as intellectuals of a certain standing. For example, jumping on an intellectual fashion could help one to better publish, and then get tenure. So if we look at tenured professors, we might well see that they tended to favor new fashions. To exclude this effect, I want to apply whatever standard is used to pick intellectuals before they choose their views in this area.

There will also be an effect whereby intellectuals move their work to focus on new areas even if they don’t actually think they are favored by the weight of evidence. (By “evidence” here I also mean to include relevant intellectual arguments.) So I don’t want to rely on the areas where people work to judge which areas they favor. I instead need something more like a survey that directly asks intellectuals which views they honestly think are favored by the weight of evidence. And I need this survey to be private enough for respondents to not fear retribution or disapproval for expressed views. (And I also want them to be intellectually honest in this situation.)

Once we are focused on people who were already intellectuals of some standing when they chose their views in an area, and on their answers to a private enough survey, I want to further distinguish between areas where relevant strong and clear evidence did or did not arrive. Strong evidence favors one of the views substantially, and clear evidence can be judged and understood by intellectuals at the margins of the field, such as those in neighboring fields or with less intellectual standing. These can include students, reporters, grant givers, and referees.

In my personal observation, when strong and clear evidence arrives, the weight of opinion does tend to move toward the views favored by this evidence. And early arrivals to the field also tend to approve. Yes, many such intellectuals will continue to favor their initial views, because the rise of other views tends to cut the perceived value of their contributions. But averaging over people with different views, on net opinion moves to favor the view that evidence favors.

However, the effectiveness of our intellectual world depends greatly on what happens in the other case, where relevant evidence is not clear and strong. Instead, evidence is weak, so that one must weigh many small pieces of evidence, and evidence is complex, requiring much local expertise to judge and understand. If even in this case early arrivals to a field tend to approve of new favored opinions, that (weakly) suggests that opinion is in fact moved by the information embodied in this evidence, even when it is weak and complex. But if not, that fact (weakly) suggests that opinion moves are mostly due to many other random factors, such as new political coalitions within related fields.

While I’ve outlined how one might do such a survey, I have not actually done it. Even so, over the years I have formed opinions on areas where my opinions did not much influence my standing as an intellectual, and where strong and clear evidence has not yet arrived. Unfortunately, in those areas I have not seen much of a correlation between the views I see as favored on net by weak and complex evidence, and the views that have since become more popular. Sometimes fashion favors my views, and sometimes not.

In fact, most who choose newly fashionable views seem unaware of the contrary arguments against those views and for other views. Advocates for new views usually don’t mention them, and few potential converts ask for them. Instead what matters most is how plausible the evidence for a view, as offered by its advocates, seems to those who know little about the area. I see far more advertising than debate.

This suggests that most intellectual progress should be attributed to the arrival of strong and clear evidence. Other changes in intellectual opinion are plausibly due to a random walk in the space of other random factors. As a result, I have prioritized my search for strong and clear evidence on interesting questions. And I’m much less interested than I once was in weighing the many weak and complex pieces of evidence in other areas. Even if I can trust myself to judge such evidence honestly, I have little faith in my ability to persuade the world to agree.

Yes, if you weigh such weak and complex evidence, you might come to a conclusion, argue for it, and find a world that increasingly agrees with you. And you might then let yourself believe that you are in a part of the intellectual world with real and useful intellectual progress, progress to which you have contributed. Which would feel nice. But you should consider the possibility that this progress is illusory. Maybe for real progress, you need to instead chip away at hard problems, via strong and clear evidence.


Why We Mix Fact & Value Talk

For a while now I’ve been tired of the US political drama, and I’ve been hoping that others would tire of it as well. Then maybe we could talk about something else, like say, my books. So I was thinking of writing a post reminding folks about futarchy, saying that politics doesn’t have to be this way. That is, we could largely (if not entirely) separate the political processes that deal with facts and values. In this case, even when there’s a big change in which values set policy, the fact estimates that set policy could remain the same, and be very expert.

In contrast, most of our current political processes mix up facts and values. The candidates we vote for, the bills they adopt, and the rulings that agencies make, all represent bundles of opinions on both facts and values. As a result, the fact estimates implicit in policy choices are less than fully expert, as such estimates must appeal to the citizens, politicians, administrators, etc. who we choose in part for their value positions. And so, to influence the values that our systems use, we must each talk about facts as well, even when we aren’t personally very expert on those facts.

On reflection, however, I think I had it wrong. Most of those engaged by the current US political drama are enjoying it, even if they say otherwise. They get a rare chance to feel especially self-righteous, and to bond more strongly with political allies. And I think the usual mixing of facts and values actually helps them achieve these ends. Let me explain.

For the purpose of making effective decisions, on average the best mix of fact vs. value in analysis has over 90% of the attention go to facts. Yes, you need to pay some attention to values, but most of the devil is in the details, and most of the relevant details are on facts. This is true at all levels, including personal, family, firm, church, city, state, and national levels.

However, for the purpose of feeling self-righteous and bonding with allies, value talk is much more potent than fact talk. You need to believe that your values are superior to feel self-righteous, and shared values bond you with allies much more strongly than do shared facts. Yet even for this purpose, the ideal conversation isn’t more than 90% focused on values; something closer to a 50-50 mix works better.

The problem is that when we frame a debate as a pure value disagreement, we actually find it harder to feel obviously superior, and to dismiss the other side. We aren’t really as confident in our value positions as we pretend. We can see how observers might perceive a symmetry between us and our opponents, and label us unfair if we just try to crush the other side to achieve our values at the expense of their values.

However, by mixing enough facts into a value discussion, we can explain to ourselves and others why crushing them is really best for everyone. We can say that they just don’t understand that global warming is a real thing, or that kids really need two parents to grow up healthy. It is the other side’s failure to accept key facts that can justify to outsiders our uncompromising determination to crush them for a total win. Later on they may see we were right, and even thank us. But even if that doesn’t happen, right now we can feel justified in dismissing them.

I expect this dynamic plays out not only in national politics, but also in firm, church, and family politics. And it helps explain our widespread reluctance to adopt prediction markets, and other neutral fact estimation methods such as experiments, in relatively political contexts. We regularly want to support decisions that advance the values we share with our political allies, but we prefer the cover of seeming to be focused on estimating facts. To successfully use facts as a cover for values, we need to have enough fact issues mixed into our debates. And we need to avoid out-of-control fact estimation mechanisms that lack enough adjustment knobs to let us get the answers we want.


Surprising Popularity

This week Nature published some empirical data on a surprising-popularity consensus mechanism (a previously published mechanism, e.g., Science in 2004, with variations going by the name “Bayesian Truth Serum”). The idea is to ask people to pick from several options, and also to have each person forecast the distribution of opinion among others. The options that are picked surprisingly often, compared to what participants on average expected, are suggested as more likely true, and those who pick such options as better informed.
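The selection criterion just described can be sketched in a few lines. This is my own minimal illustration of the published idea (pick the answer whose actual frequency most exceeds its mean predicted frequency), not code from the Nature paper; the function name and data layout are assumptions for illustration.

```python
from collections import Counter

def surprisingly_popular(votes, predictions):
    """Pick the option whose actual popularity most exceeds its
    predicted popularity.

    votes: list of chosen options, one per respondent.
    predictions: list of dicts, one per respondent, mapping each
        option to that respondent's predicted fraction of others
        choosing it.
    """
    n = len(votes)
    actual = Counter(votes)  # raw vote counts per option
    options = set(votes)
    for p in predictions:
        options.update(p)
    best, best_gap = None, float("-inf")
    for opt in options:
        actual_freq = actual[opt] / n
        predicted_freq = sum(p.get(opt, 0.0) for p in predictions) / len(predictions)
        gap = actual_freq - predicted_freq  # surprise: actual minus expected
        if gap > best_gap:
            best, best_gap = opt, gap
    return best

# Stylized example in the spirit of the paper's capital-city questions:
# most respondents answer "yes", but nearly everyone also predicts that
# most others will say "yes". The minority "no" answer is thus more
# popular than predicted, so it is selected.
votes = ["yes"] * 6 + ["no"] * 4
predictions = [{"yes": 0.8, "no": 0.2}] * 10
print(surprisingly_popular(votes, predictions))  # "no"
```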

Compared to prediction markets, this mechanism doesn’t require that those who run the mechanism actually know the truth later. Which is indeed a big advantage. This mechanism can thus be applied to most any topic, such as the morality of abortion, the existence of God, or the location of space aliens. Also, incentives can be tied to this method, as you can pay people based on how well they predict the distribution of opinion. The big problem with this method, however, is that it requires that learning the truth be the cheapest way to coordinate opinion. Let me explain.

When you pay people for better predicting the distribution of opinion, one way they can do this prediction task is to each look for and report their best estimate of the truth. If everyone does this, and if participant errors and mistakes are pretty random, then those who do this task better will in fact have a better estimate of the distribution of opinion.
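The payment for predicting the opinion distribution can be implemented with a proper scoring rule. Here is a minimal sketch using a quadratic (Brier-style) score; the rule choice and function name are my own illustration, not a detail specified in the mechanism's papers.

```python
def prediction_score(predicted, actual):
    """Quadratic (Brier-style) score for a predicted distribution
    over options, versus the realized distribution. Higher is
    better; the maximum of 1.0 is reached only when the prediction
    matches the realized distribution exactly."""
    options = set(predicted) | set(actual)
    return 1.0 - sum(
        (predicted.get(o, 0.0) - actual.get(o, 0.0)) ** 2 for o in options
    )

# A respondent who predicted the realized split exactly gets the max score.
print(prediction_score({"yes": 0.8, "no": 0.2}, {"yes": 0.8, "no": 0.2}))  # 1.0
```

Because this rule is proper, a respondent maximizes their expected payment by reporting their honest best estimate of how others will answer.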

For example, imagine you are asked which city is the capital of a particular state. Imagine you are part of a low-incentive one-time survey, and you don’t have an easy way to find and communicate with other survey participants. In this case, your best strategy may well be to think about which city is actually the capital.

Of course even in this case your incentive is to report the city that most sources would say is the capital. If you (and a few others) in fact know that according to the detailed legal history another city is rightfully the capital, not the city that the usual records give, your incentive is still to go with usual records.

More generally, you want to join the largest coalition who can effectively coordinate to give the same answers. If you can directly talk with each other, then you can agree on a common answer and report that. If not, you can try to use prearranged Schelling points to figure out your common answer from the context.

If this mechanism were repeated, say daily, then a safe way to coordinate would be to report the same answer as yesterday. But since everyone can easily do this too, it doesn’t give your coalition much of a relative advantage. You only win against those who make mistakes in implementing this obvious strategy. So you might instead coordinate to change your group’s answer each day based on some commonly observed changing signal.

To encourage this mechanism to better track truth, you’d want to make it harder for participants to coordinate their answers. You might ask random people at random times to answer quickly, put them in isolated rooms where they can’t talk to others, and ask your questions in varying and unusual styles that make it hard to guess how others will frame those questions. Prefer participants with more direct personal reasons to care about telling related truth, and prefer those who used different ways to learn about a topic. Perhaps ask different people for different overlapping parts and then put the final answer together yourself from those parts. I’m not sure how far you could get with these tricks, but they seem worth a try.

Of course these tricks are nothing like the way most of us actually consult experts. We are usually eager to ask standard questions to standard experts who coordinate heavily with each other. This is plausibly because we usually care much more to get the answers that others will also get, so that we don’t look foolish when we parrot those answers to others. That is, we care more about getting a coordinated standard answer than a truthful answer.

Thus I actually see a pretty bright future for this surprisingly-popular mechanism. I can see variations on it being used much more widely to generate standard safe answers that people can adopt with less fear of seeming strange or ignorant. But those who actually want to find true answers, even when such answers are contrarian, will need something closer to prediction markets.


Trade Engagement?

First, let me invite readers, especially longtime/frequent readers, to suggest topics for me to blog on. I try to pick topics that are important, neglected, and where I can find something original and insightful to say. But I also like to please readers, and maybe I’m forgetting/missing topics that you could point out.

Second, many of my intellectual projects remain limited by a lack of engagement. I can write books, papers, and blog posts, but to have larger intellectual impact I need people to engage my ideas. Not to agree or disagree with them, but to dive into and critique the details of my arguments, and then publicly describe their findings. (Yes, journal referees engage submissions to some extent, but it isn’t remotely enough.)

This is more useful to me when such engagers have more relevant ability, popularity, and/or status. Since I also have modest ability, popularity, and status, at least in some areas, this suggests the possibility of mutually beneficial trade. I engage your neglected ideas and you engage mine. Of course there are many details to work out to arrange such trade.

First, there’s timing. I don’t want to put in lots of work engaging your ideas based on a promise that you’ll later engage mine, and then have you renege. So we may need to start small, back and forth. Or you can go first.

Second, there’s the issue of relative price. If we have differing levels of ability, popularity, and status, then we should agree to differing relative efforts to reflect those differences. If you are more able than I, maybe I should engage several ideas of yours in trade for your only engaging one of mine.

Third, we may disagree about our relevant differences. While it may be easy to quickly demonstrate one’s popularity, status, and overall intelligence, it can be harder to demonstrate one’s other abilities relevant to a particular topic. Yes if I read a bunch of your papers I might be able to see that your ability is higher than your status would suggest, but I might not have time for that.

Fourth, we may each fear adverse selection. Why should I be so stupid as to join a club that would stoop so low as to consider me as a member? The fact that you are seeking to trade for engagement, and willing to consider me as a trading partner, makes me suspect that your ideas, ability, and status are worse than they appear.

Fifth, we might prefer to disguise our engagement trade. When engagement is often a side effect of other processes, then it might look bad to go out of your way to trade engagements. (Trading engagement for money or sex probably looks even worse.) So people may prefer to hide their engagement trades within other processes that give plausible deniability about such trades. I just happened to invite you to talk at my seminar series after you invited me to talk at yours; move along, no trade to see here.

These are substantial obstacles, and may together explain the lack of observed engagement trades. Even so, I suspect people haven’t tried very hard to overcome such obstacles, and in the spirit of innovation I’m willing to explore such possibilities, at least a bit. My neglected ideas include em futures, hidden motives, decision markets, irrational disagreement, mangled worlds, and more.


When Is Talk Meddling Okay?

“How dare X meddle in Y’s business on Z?! Yes, X only tried to influence Y people on Z by talking, and said nothing false. But X talked selectively, favoring one position over another!”

Consider some possible triples X,Y,Z:

  • How dare my wife’s friend meddle in my marriage by telling my wife I treat her poorly?
  • How dare John try to tempt my girlfriend away from me by flirting with her?
  • How dare my neighbors tell my kids that they don’t make their kids do as many chores?
  • How dare Sue from another division suggest I ask too much overtime of my employees?
  • How dare V8 try to tempt cola buyers to switch by dissing cola ingredients?
  • How dare economists say that sociologists keep PhD students around too long?
  • How dare New York based media meddle in North Carolina’s transgender bathroom policy?
  • How dare westerners tell North Koreans that their government treats them badly?
  • How dare Russia tell US voters unflattering things about Hillary Clinton?

We do sometimes feel justly indignant at outsiders interfering in our “internal” affairs. In such cases, we prefer equilibria where we each stay out of others’ families, professions, or nations. But in many other contexts we embrace social norms that accept and even encourage criticism from a wide range of sources.

The usual (and good) argument for free speech (or really, free hearing) is that on average listeners can be better informed if they have access to more different info sources. Yes, it would be even better if each source fairly told everything relevant it knew, or at least didn’t select what it said to favor some views. But we usually think it infeasible to enforce norms against selectivity, and so limit ourselves to more enforceable norms against lying. As we can each adjust our response to sources based on our estimates of their selectivity, reasonable people can be better informed via having more sources to hear from, even when those sources are selective.

So why do we sometimes oppose such free hearing? Paternalism seems one possible explanation – we think many of us are unreasonable. But this fits awkwardly, as most expect themselves to be better informed if able to choose from more sources. More plausibly, we often don’t expect that we can limit retaliation against talk to other talk. For example, if you may respond with violence to someone overtly flirting with your girlfriend, we may prefer a norm against such overt flirting. Similarly, if nations may respond with war to other nations weighing in on their internal elections, we may prefer a norm of nations staying out of other nations’ internal affairs.

Of course the US has for many decades been quite involved in the internal affairs of many nations, including via assassination, funding rebel armies, bribery, academic and media lecturing, and selective information revelation. Some say Putin focused on embarrassing Clinton in retaliation for her previously supporting the anti-Putin side in Russian internal affairs. Thus it is hard to believe we really risk more US-Russian war if these two nations overtly talk about the others’ internal affairs.

Yes, we should consider the possibility that retaliation against talk will be more destructive than talk, and be ready to forgo the potentially large info gains from wider talk and criticism to push a norm against meddling in others’ internal affairs. But the international stage at the moment doesn’t seem close to such a situation. We’ve long since tolerated lots of such meddling, and the world is probably better for it. We should allow a global conversation on important issues, where all can be heard even when they speak selectively.


Beware Futurism As Political Allegory

Imagine that you are a junior in high school who expects to attend college. At that point in your life you have opinions related to frequent personal choices, like whether blue jeans feel comfortable or whether you prefer vanilla to chocolate ice cream. And you have opinions on social norms in your social world, like how much money it is okay to borrow from a friend, how late one should stay at a party, or what are acceptable excuses for breaking up with a boy/girlfriend. And you know you will soon need opinions on imminent major life choices, such as what college to attend, what major to have, and whether to live on campus.

But at that point in life you will have less need of opinions on what classes to take as a college senior, and where to live then. You know you can wait and learn more before making such decisions. And you have even less need of opinions on borrowing money, staying at parties, or breaking up as a college senior. Social norms on those choices will come from future communities, who may not yet have even decided on such things.

In general, you should expect to have more sensible and stable opinions related to choices you actually make often, and less coherent and useful opinions regarding choices you will make in the future, after you learn many new things. You should have less coherent opinions on how your future communities will evaluate the morality and social acceptability of your future choices. And your opinions on collective choices, such as via government, should be even less reliable, as your incentives to get those right are even weaker.

All of this suggests that you be wary of simply asking your intuition for opinions about what you or anyone else should do in strange distant futures. Especially regarding moral and collective choices. Your intuition may dutifully generate such opinions, but they’ll probably depend a lot on how the questions were framed, and the context in which questions were asked. For more reliable opinions, try instead to chip away at such topics.

However, this context-dependence is gold to those who seek to influence others’ opinions. Warriors attack where an enemy is weak. When seeking to convert others to a point of view, you can have only limited influence on topics where they have accepted a particular framing, and have incentives to be careful. But you can more influence how a new topic is framed, and when there are many new topics you can emphasize the few where your preferred framing helps more.

So legal advocates want to control how courts pick cases to review and the new precedents they set. Political advocates want to influence which news stories get popular and how those stories are framed. Political advocates also seek to influence the choices and interpretations of cultural icons like songs and movies, because, being less constrained by facts, such things are more open to framing.

As with the example above of future college choices, distant future choices are less thoughtful or stable, and thus more subject to selection and framing effects. Future moral choices are even less stable, and more related to political positions that advocates want to push. And future moral choices expressed via culture like movies are even more flexible, and thus more useful. So newly-discussed culturally-expressed distant future collective moral choices create a perfect storm of random context-dependent unreliable opinions, and thus are ideal for advocacy influence, at least when you can get people to pay attention to them.

Of course most people are usually reluctant to think much about distant future choices, including moral and collective ones. Which greatly limits the value of such topics to advocates. But a few choices related to distant futures have engaged wider audiences, such as climate change and, recently, AI risk. And political advocates do seem quite eager to influence such topics, due to their potency. They seem to select such topics from a far larger set of similarly important issues, in part for their potency at pushing common political positions. The science-fiction truism really does seem to apply: most talk on the distant future is really indirect talk on our world today.

Of course the future really will happen eventually, and we should want to consider choices today that importantly influence that future, some of those choices will have moral and collective aspects, some of these issues can be expressed via culture like movies, and at some point such issue discussion will be new. But as with big hard problems in general, it is probably better to chip away at such problems.

That is: Anchor your thoughts to reality rather than to fiction. Make sure you have a grip on current and past behavior before looking at related future behavior. Try to stick with analyzing facts for longer before being forced to make value choices. Think about amoral and decentralized choices carefully before considering moral and collective ones. Avoid feeling pressured to jump to strong conclusions on recently popular topics. Prefer robust and reliable methods even when they are less easy and direct. Mostly the distant future doesn’t need action today – decisions will wait a bit for us to think more carefully.


Smart Sincere Contrarian Trap

We talk as if we pick our beliefs mainly for accuracy, but in fact we have many social motives for picking beliefs. In particular, we use many kinds of beliefs as group affiliation/conformity signals. Some of us also use a few contrarian beliefs to signal cleverness and independence, but our groups have a limited tolerance for such things.

We can sometimes win socially by joining impressive leaders with the right sort of allies who support new fashions contrary to the main current beliefs. If enough others also join these new beliefs, they can become the new main beliefs of our larger group. At that point, those who continue to oppose them become the contrarians, and those who adopted the new fashions as they were gaining momentum gain more relative to latecomers. (Those who adopt fashions too early also tend to lose.)

As we are embarrassed if we seem to pick beliefs for any reason other than accuracy, this sort of new fashion move works better when supported by good accuracy-oriented reasons for changing to the new beliefs. This produces a weak tendency, all else equal, for group-based beliefs to get more accurate over time. However, many of our beliefs are about what actions are effective at achieving the motives we claim to have. And we are often hypocritical about our motives. Because of this, workable fashion moves need not just good reasons to believe claims about the efficacy of actions for stated motives, but also enough of a correspondence between the outcomes of those actions and our actual motives. Many possible fashion moves are unworkable because we don’t actually want to pursue the motives we proclaim.

Smarter people are better able to identify beliefs better supported by reasons, which all else equal makes those beliefs better candidates for new fashions. So those with enough status to start a new fashion may want to listen to smart people in the habit of looking for such candidates. But reasonably smart people who put in the effort are capable of finding a great many places where there are good reasons for picking a non-status-quo belief. And if they also happen to be sincere, they tend to visibly support many of those contrarian beliefs, even in the absence of supporting fashion movements with a decent chance of success. Which results in such high-effort smart sincere people sending bad group affiliation/conformity signals. So while potential leaders of new fashions want to listen to such people, they don’t want to publicly affiliate with them.

I fell into this smart sincere contrarian trap long ago. I’ve studied many different areas, and when I’ve discovered an alternate belief that seems to have better supporting reasons than a usual belief, I have usually not hesitated to publicly embrace it. People have told me that it would have been okay for me to publicly embrace one contrarian belief. I might then have had enough overall status to plausibly lead that as a new fashion. But the problem is that I’ve supported many contrarian beliefs, not all derived from a common core principle. And so I’m not a good candidate to be a leader for any of my groups or contrarian views.

Which flags me as a smart sincere person. Good to listen to behind the scenes to get ideas for possible new fashions, but bad to embrace publicly as a loyal group member. I might gain if my contrarian views eventually became winning new fashions, but my early visible adoption of those views probably discourages others from trying to lead them, as they can less claim to have been first with those views.

If the only people who visibly supported contrarian views were smart sincere people who put in high effort, then such views might become known for high accuracy. This wouldn’t necessarily induce most people to adopt them, but it would help. However, there seem to be enough people who visibly adopt contrarian views for other reasons to sufficiently muddy the waters.

If prediction markets were widely adopted, the visible signals of which beliefs were more accurate would tend to embarrass more people into adopting them. Such people do not relish this prospect, as it would have them send bad group affiliation signals. Smart sincere people might relish the prospect, but there are not enough of them to make a difference, and even the few there are mostly don’t seem to relish it enough to work to get prediction markets adopted. Sincerely holding a belief isn’t quite the same as being willing to work for it.


Caplan Debate Status

In this post I summarize my recent disagreement with Bryan Caplan. In the next post, I’ll dive into details of what I see as the key issue.

I recently said:

If you imagine religions, governments, and criminals not getting too far out of control, and a basically capitalist world, then your main future fears are probably going to be about for-profit firms, especially regarding how they treat workers. You’ll fear firms enslaving workers, or drugging them into submission, or just tricking them with ideology.

Because of this, I’m not so surprised by the deep terror many non-economists hold of future competition. For example, Scott Alexander (see also his review):

I agree with Robin Hanson. This is the dream time .. where we are unusually safe from multipolar traps, and as such weird things like art and science and philosophy and love can flourish. As technological advance increases, .. new opportunities to throw values under the bus for increased competitiveness will arise. .. Capitalism and democracy, previously our protectors, will figure out ways to route around their inconvenient dependence on human values. And our coordination power will not be nearly up to the task, assuming something much more powerful than all of us combined doesn’t show up and crush our combined efforts with a wave of its paw.

But I was honestly surprised to see my libertarian economist colleague Bryan Caplan also holding a similarly dark view of competition. As you may recall, Caplan had many complaints about my language and emphasis in my book, but in terms of the key evaluation criteria that I care about, namely how well I applied standard academic consensus to my scenario assumptions, he had three main points.

First, he called my estimate of an em economic growth doubling time of one month my “single craziest claim.” He seems to agree that standard economic growth models can predict far faster growth when substitutes for human labor can be made in factories, and that we have twice before seen economic growth rates jump by more than a factor of fifty, each time within less than a previous doubling time. Even so, he can’t see economic growth rates even doubling, because of “bottlenecks”:

Politically, something as simple as zoning could do the trick. .. the most favorable political environments on earth still have plenty of regulatory hurdles .. we should expect bottlenecks for key natural resources, location, and so on. .. Personally, I’d be amazed if an em economy doubled the global economy’s annual growth rate.
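For scale, it may help to spell out what a one-month doubling time implies when compounded over a year; a quick sketch of the arithmetic:

```python
# A one-month economy doubling time compounds to 2^12 per year,
# i.e. the economy grows by a factor of about four thousand annually.
doublings_per_year = 12
annual_growth_factor = 2 ** doublings_per_year
print(annual_growth_factor)  # 4096
```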

His other two points are that competition would lead to ems being very docile slaves. I responded that slavery has been rare in history, and that docility and slavery aren’t especially productive today. But he called the example of Soviet nuclear scientists “powerful” even though “Soviet and Nazi slaves’ productivity was normally low.” He rejected the relevance of our large literatures on productivity correlates and how to motivate workers, as little of that explicitly includes slaves. He concluded:

If, as I’ve argued, we would copy the most robot-like people and treat them as slaves, at least 90% of Robin’s details are wrong.

As I didn’t think the docility of ems mattered that much for most of my book, I challenged him to audit five random pages. He reported “Robin’s only 80% wrong”, though I count only 63% from his particulars, and half of those come from his seeing ems as very literally “robot-like”. For example, he says ems are not disturbed by “life events”, only by disappointing their masters. They only group, identify, and organize as commanded, not as they prefer or choose. They have no personality “in a human sense.” They never disagree with each other, and never need to make excuses for anything.

Caplan offered no citations with specific support for these claims, instead pointing me to the literature on the economics of slavery. So I took the time to read up on that and posted a 1600-word summary, concluding:

I still can’t find a rationale for Bryan Caplan’s claim that all ems would be fully slaves. .. even less .. that they would be so docile and “robot-like” as to not even have human-like personalities.

Yesterday, he briefly “clarified” his reasoning. He says ems would start out as slaves since few humans see them as having moral value:

1. Most human beings wouldn’t see ems as “human,” so neither would their legal systems. .. 2. At the dawn of the Age of Em, humans will initially control (a) which brains they copy, and (b) the circumstances into which these copies emerge. In the absence of moral or legal barriers, pure self-interest will guide creators’ choices – and slavery will be an available option.

Now I’ve repeatedly pointed out that the first scans would be destructive, so either the first scanned humans see ems as “human” and expect to not be treated badly, or they are killed against their will. But I want to focus instead on the core issue: like Scott Alexander and many others, Caplan sees a robust tendency of future competition to devolve into hell, held at bay only by contingent circumstances such as strong moral feelings. Today the very limited supply of substitutes for human workers keeps wages high, but if that supply were to greatly increase then Caplan expects that without strong moral resistance capitalist competition eventually turns everyone into docile inhuman slaves, because that arrangement robustly wins productivity competitions.

In my next post I’ll address that productivity issue.


See A Wider View

Ross Douthat in the NYT:

From now on the great political battles will be fought between nationalists and internationalists, nativists and globalists. .. Well, maybe. But describing the division this way .. gives the elite side of the debate .. too much credit for being truly cosmopolitan.

Genuine cosmopolitanism is a rare thing. It requires comfort with real difference, with forms of life that are truly exotic relative to one’s own. .. The people who consider themselves “cosmopolitan” in today’s West, by contrast, are part of a meritocratic order that transforms difference into similarity, by plucking the best and brightest from everywhere and homogenizing them into the peculiar species that we call “global citizens.”

This species is racially diverse (within limits) and eager to assimilate the fun-seeming bits of foreign cultures — food, a touch of exotic spirituality. But no less than Brexit-voting Cornish villagers, our global citizens think and act as members of a tribe. They have their own distinctive worldview .. common educational experience, .. shared values and assumptions .. outgroups (evangelicals, Little Englanders) to fear, pity and despise. .. From London to Paris to New York, each Western “global city” .. is increasingly interchangeable, so that wherever the citizen of the world travels he already feels at home. ..

It is still possible to disappear into someone else’s culture, to leave the global-citizen bubble behind. But in my experience the people who do are exceptional or eccentric or natural outsiders to begin with .. It’s a problem that our tribe of self-styled cosmopolitans doesn’t see itself clearly as a tribe. .. They can’t see that paeans to multicultural openness can sound like self-serving cant coming from open-borders Londoners who love Afghan restaurants but would never live near an immigrant housing project.

You have values, and your culture has values. They are similar, and this isn’t a coincidence. Causation here mostly goes from culture to individual. And even if you did pick your culture, you have to admit that the young you who did wasn’t especially wise or well-informed. And you were unaware of many options. So you have to wonder if you’ve too easily accepted your culture’s values.

Of course your culture anticipates these doubts, and is ready with detailed stories on why your culture has the best values. Actually most stories you hear have that as a subtext. But you should wonder how well you can trust all this material.

Now, you might realize that for personal success and comfort, you have little to gain, and much to lose, by questioning your culture’s values. Your associates mostly share your culture, and are comforted more by your loyalty displays than your intellectual cleverness. Hey, everyone agrees cultures aren’t equal; someone has to be best. So why not give yours the benefit of the doubt? Isn’t that reasonable?

But if showing cleverness is really important to you, or if perhaps you really actually care about getting values right, then you should wonder what else you can do to check your culture’s value stories. And the obvious option is to immerse yourself in the lives and viewpoints of other cultures. Not just via the stories or trips your culture has set up to tell you of its superiority. But in ways that give those other cultures, and their members, a real chance. Not just slight variations on your culture, but big variations as well. Try to see a wider landscape of views, and then try to see the universe from many widely dispersed points on that landscape.

Yes, if you are a big-city elite, try to see the world from Brexit or Trump fan views. But there are actually much bigger view differences out there. Try an Islamic fundamentalist, or a Chinese nationalist. But even if you grow to be able to see the world as do most people in the world today, there still remain even bigger differences out there. Your distant ancestors were quite human, and yet they saw the universe very differently. Yes, they were wrong on some facts, but that hardly invalidates most of their views. Learn some ancient history, to see their views.

And if you already know some ancient history, perhaps the most alien culture you have yet to encounter is that of your human-like descendants. But we can’t possibly know anything about that yet, you say? I beg to differ. I introduce my new book with this meet-a-strange-culture rationale: Continue reading "See A Wider View" »


Caplan Audits Age of Em

When I showed Bryan Caplan an early draft of my book, his main concern was that I didn’t focus enough on humans, as he doesn’t think robots can be conscious. In his first critical post, he focused mainly on language and emphasis issues. But he summarized “the reasoning simply isn’t very rigorous”, and he gave 3 substantive objections:

The idea that the global economy will start doubling on a monthly basis is .. a claim with a near-zero prior probability. ..

Why wouldn’t ems’ creators use the threat of ‘physical hunger, exhaustion, pain, sickness, grime, hard labor, or sudden unexpected death’ to motivate the ems? .. ‘torturing’ ems, .. why not? ..

Why wouldn’t ems largely be copies of the most “robot-like” humans – humble workaholics with minimal personal life, content to selflessly and uncomplainingly serve their employers?

He asked me direct questions on my moral evaluation of ems, so I asked him to estimate my overall book accuracy relative to the standard of academic consensus theories, given my assumptions. Caplan said:

The entire analysis hinges on which people get emulated, and there is absolutely no simple standard academic theory of that. If, as I’ve argued, we would copy the most robot-like people and treat them as slaves, at least 90% of Robin’s details are wrong.

Since I didn’t think the docility of ems matters that much for most of my book, I challenged him to check five random pages. Today, he reports back:

Limiting myself to his chapters on Economics, Organization, and Sociology, [half of the book’s six sections] .. After performing this exercise, I’m more inclined to say Robin’s only 80% wrong. .. My main complaint is that his premises about em motivation are implausible and crucial.

Caplan picked 23 quotes from those pages. (I don’t know how they were picked; I count ~35 claims.) In one of these (#22) he disputes the proper use of the word “participate”, and in one (#12) he says he can’t judge.

In two more, he seems to just misread the quotes. In #21, I say taxes can’t discourage work by retired humans, and he says but ems work. In #8 I say if most ems are in the few biggest cities, they must also be in the few biggest nations (by population). He says there isn’t time for nations to merge.

If I set aside all these, that leaves 19 evaluations, out of which I count 7 (#1,4,9,13,17,19,20) where he says agree or okay, making me only 63% wrong in his eyes. Now let’s go through the 12 disagreements, which fall into five clumps.

In #6, Caplan disagrees with my claim that “well-designed computers can be secure from theft, assault, and disease.” On page 62, I had explained:

Ems may use technologies such as provably secure operating system kernels (Klein et al. 2014), and capability-based secure computing systems, which limit the powers of subsystems (Miller et al. 2003).

In #5, I had cited sources showing that in the past most innovation has come from many small innovations, instead of a few big ones. So I said we should expect that for ems too. Caplan says that should reverse because ems are more homogeneous than humans. I have no idea what he is thinking here.

In #3,7, he disagrees with my applying very standard urban econ to ems:

It’s not clear what even counts as urban concentration in the relevant sense. .. Telecommuting hasn’t done much .. why think ems will lead to “much larger” em cities? .. Doesn’t being a virtual being vitiate most of the social reasons to live near others? ..

But em virtual reality makes “telecommuting” a nearly perfect substitute for in-person meetings, at least at close distances. And one page before, I had explained that “fast ems .. can suffer noticeable communication delays with city scale separations.” In addition, many ems (perhaps 20%) do physical tasks, and all are housed in hardware needing physical support.

In #2,23, Caplan disagrees with my estimating that the human fraction of income controlled slowly falls, because he says all ems must always remain absolute slaves; “humans hold 100% of wealth regardless .. ems own nothing.”

Finally, half of his disagreements (#10,11,14,15,16,18) stem from his seeing ems as quite literally “robot-like”. If not for this, he’d score me as only 31% wrong. According to Caplan, ems are not disturbed by “life events”, only by disappointing their masters. They only group, identify, and organize as commanded, not as they prefer or choose. They have no personality “in a human sense.” They never disagree with each other, and never need to make excuses for anything.

Remember, Caplan and I agree that the key driving factor here is that a competitive em world seeks the most productive (per subjective minute) combinations of humans to scan, mental tweaks and training methods to apply, and work habits and organization to use. So our best data should be the most productive people in the world today, or that we’ve seen in history. Yet the most productive people I know are not remotely “robot-like”, at least in the sense he describes above. Can Caplan name any specific workers, or groups, he knows that fit the bill?

In writing the book I searched for literatures on work productivity, and used many dozens of articles on specific productivity correlates. But I never came across anything remotely claiming “robot-like” workers (or tortured slaves) to be the most productive in modern jobs. Remember that the scoring standard I set was not personal intuition but the consensus of the academic literature. I’ve cited many sources, but Caplan has yet to cite any.

From Caplan, I humbly request some supporting citations. But I think he and I will make only limited progress in this discussion until some other professional economists weigh in. What incantations will summon the better spirits of the Econ blogosphere?
