Opinion Entrenchment

How do and should we form and change opinions? Logic tells us to avoid inconsistencies and incoherences. Language tells us to attend to how meaning is inferred from ambiguous language. Decision theory says to distinguish values from fact opinion, and says exactly how decisions should respond to these. Regarding fact opinion, Bayesian theory says to distinguish priors from likelihoods, and says exactly how fact opinion should respond to evidence.

Simple realism tells us to expect errors in actual opinions, relative to all of these standards. Computing theory says to expect larger errors on more complex topics, and opinions closer to easily computed heuristics. And many kinds of human and social sciences suggest that we see human beliefs as often like clothes, which in mild weather we use more to show our features to associates than to protect ourselves from the elements. Beliefs are especially useful for showing loyalty and morality.

There’s another powerful way to think about opinions that I’ve only recently appreciated: opinions get entrenched. In biology, natural selection picks genes that are adaptive, but adds error. These gene choices change as environments change, except that genes which are entangled with large complex and valued systems of genes change much less; they get entrenched.

We see entrenchment also all over our human systems. For example, at my university the faculty is divided into disciplines, the curricula into classes, and classes into assignments in ways that once made sense, but now mostly reflect inertia. Due to many interdependencies, it would be slow and expensive to change such choices, so they remain. Our legal system accumulates details that become precedents that many rely on, and which become hard to change. As our software systems accrue features, they get fragile and harder to change. And so on.

Beliefs also get entrenched. That is, we are often in the habit of building many analyses from the same standard sets of assumptions. And the more analyses that we have done using some set of assumptions, the more reluctant we are to give up that set. This attitude toward the set is not very sensitive to the evidential or logical support we see for each of its assumptions. In fact, we are often pretty certain that individual assumptions are wrong, but because they greatly simplify our analysis, we hope that they still enable a decent approximation as a set.

When we use such standard assumption sets, we usually haven’t thought much about the consequences of individually changing each assumption in the set. As long as we can see some plausible ways in which each assumption might change conclusions, we accept it as part of the set, and hold roughly the same reluctance to give it up as for all the other members.

For example, people often say “I just can’t believe Fred’s dead”, meaning not that the evidence of Fred’s death isn’t sufficient, but that it will take a lot of work to think through all the implications of this new fact. The existence of Fred had been a standard assumption in their analysis. A person tempted to have an affair is somewhat deterred from this because of their standard assumption that they were not the sort of person who has affairs; it would take a lot of work to think through their world under this new assumption. This similarly discourages people from considering that their spouses might be having affairs.

In academic theoretical analysis, each area tends to have standard assumptions, many of which are known to be wrong. But even so, there are strong pressures to continue using prior standard assumptions, to make one’s work comparable to that of others. The more different things that are seen to be explained or understood via an assumption set, the more credibility is assigned to each assumption in that set. Evidence directly undermining any one such assumption does little by itself to reduce use of the set.

In probability theory, the more different claims one adds to a bundle, the less likely is the conjunction of that bundle. However, the more analyses that one makes with an assumption set, the more entrenched it becomes. So by combining different assumption sets so that they all get credit for all of their analyses, one makes those sets more, not less, entrenched. Larger bundles get less probability but more entrenchment.
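The contrast between probability and entrenchment can be sketched with a toy calculation. All the numbers below are made-up illustrative assumptions, not estimates of anything:

```python
# Toy contrast between the probability and the entrenchment of an
# assumption bundle. All numbers here are illustrative assumptions.

claim_probs = [0.9, 0.8, 0.85, 0.7]  # assumed chance each claim is right

# Probability: the conjunction shrinks as claims are added
# (treating the claims as independent, itself a simplification).
conjunction = 1.0
for p in claim_probs:
    conjunction *= p
print(f"P(all claims true) = {conjunction:.4f}")  # 0.4284, below any single claim

# Entrenchment: grows with each analysis built on the bundle,
# regardless of how the conjunction probability shrinks.
analyses_using_set = 40  # crude proxy: count of analyses depending on the set
print(f"analyses depending on the set: {analyses_using_set}")
```

Nothing deep is happening here; the point is only that the two quantities move independently, so a bundle can become less probable and more entrenched at the same time.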

Note that fictional worlds that specify maximal detail are maximally large assumption sets, which thus maximally entrench.

Most people feel it is quite reasonable to disagree, and that claim is a standard assumption in most reasoning about reasoning. But a philosophy literature did arise wherein some questioned that assumption, in the context of a certain standard disagreement scenario. I was able to derive some strong results, but in a different and to my mind more relevant scenario. But the fact of my using a different scenario, and being from a different discipline, meant my results got ignored.

Our book Elephant in the Brain says that social scientists have tended to assume the wrong motives re many common behaviors. While our alternate motives are about as plausible and easy to work with as the usual motives, the huge prior investment in analysis based on the usual motives means that few are interested in exploring our alternate motives. There is not just theory analysis investment, but also investment in feeling that we are good people, a claim which our alternate assumptions undermine.

Even though most automation today has little to do with AI, and has long followed steady trends, with almost no effect on overall employment, the favored assumption set among talking elites recently remains this: new AI techniques are causing a huge trend-deviating revolution in job automation, soon to push a big fraction of workers out of jobs, and within a few decades may totally surpass humans at almost all jobs. Once many elites are talking in terms of this assumption set, others also want to join the same conversation, and so adopt the same set. And once each person has done a lot of analysis using that assumption set, they are reluctant to consider alternative sets. Challenging any particular item in the assumption set does little to discourage use of the set.

The key assumption of my book Age of Em, that human level robots will be first achieved via brain emulations, not AI, has a similar plausibility to AI being first. But this assumption gets far less attention. Within my book, I picked a set of standard assumptions to support my analysis, and for an assumption that has an X% chance of being wrong, my book gave far less than X% coverage to that possibility. That is, I entrenched my standard assumptions within my book.

Physicists have long taken one of their standard assumptions to be denial of all “paranormal” claims, taken together as a set. That is, they see physics as denying the reality of telepathy, ghosts, UFOs, etc., and see the great success (and status) of physics overall as clearly disproving such claims. Yes, they once mistakenly included meteorites in that paranormal set, but they’ve fixed that. Yet physicists don’t notice that even though many describe UFOs as “physics-defying”, they aren’t that at all; they only plausibly defy current human tech abilities. Yet the habit of treating all paranormal stuff as the same denied set leads physicists to continue to staunchly ridicule UFOs.

I can clearly feel my own reluctance to consider theories wherein the world is not as it appears, because we are being fooled by gods, simulation sysops, aliens, or a vast world elite conspiracy. Sometimes this is because those assumptions seem quite unlikely, but in other cases it is because I can see how much I’d have to rethink given such assumptions. I don’t want to be bothered; haven’t I already considered enough weird stuff for one person?

Life on Mars is treated as an “extraordinary” claim, even though the high rate of rock transfer between early Earth and early Mars makes it nearly as likely that life came from Mars to Earth as vice versa. This is plausibly because only life on Earth is the standard assumption used in many analyses, while life starting on Mars seems like a different conflicting assumption.

Across a wide range of contexts, our reluctance to consider contrarian claims is often less due to their lacking logical or empirical support, and more because accepting them would require reanalyzing a great many things that one had previously analyzed using non-contrarian alternatives.

In worlds of beliefs with strong central authorities, those authorities will tend to entrench a single standard set of assumptions, thus neglecting alternative assumptions via the processes outlined above. But in worlds of belief with many “schools of thought”, alternative assumptions will get more attention. It is a trope that “sophomores” tend to presume that most fields are split among different schools of thought, and are surprised to find that this is usually not true.

This entrenchment analysis makes me more sympathetic toward allowing and perhaps even encouraging different schools of thought in many fields. And as central funding sources are at risk of being taken over by a particular school, multiple independent sources of funding seem more likely to promote differing schools of thought.

The obvious big question here is: how can we best change our styles of thought, talk, and interaction to correct for the biases that entrenchment induces?


Hail S. Jay Olson

Over the years I’ve noticed that grad students tend to want to declare their literature search over way too early. If they don’t find something in the first few places they look, they figure it isn’t there. Alas, they implicitly assume that the world of research is better organized than it is; usually a lot more search is needed.

Seems I’ve just made this mistake myself. Having developed a grabby aliens concept and searched around a bit, I figured it must be original. But it turns out that in the last five years physicist S. Jay Olson has a whole sequence of seven related papers, most of which are published, and some of which got substantial media attention at the time. (We’ll change our paper to cite these soon.)

Olson saw that empirical study of aliens gets easier if you focus on the loud (not quiet) aliens, who expand fast and make visible changes, and also if you focus on simple models with only a few free parameters, to fit to the few key datums that we have. Olson variously called these aliens “aggressively expanding civilizations”, “expanding cosmological civilizations”, “extragalactic civilizations”, and “visible galaxy-spanning civilizations”. In this post, I’ll call them “expansionist”, intended to include both his and my versions.

Olson showed that if we assume that humanity’s current date is a plausible expansionist alien origin date, and if we assume a uniform distribution over our percentile rank among such origin dates, then we can estimate two things from data:

  1. from our current date, an overall appearance rate constant, regarding how frequently expansionist aliens appear, and
  2. from the fact that we do not see grabby controlled volumes in our sky, their expansion speed.

Olson only required one more input to estimate the full distribution of such aliens over space and time, and that is an “appearance rate” function f(t), to multiply by the appearance rate constant, to obtain the rate at which expansionist aliens appear at each time t. Olson tried several different approaches to this function, based on different assumptions about the star formation rate and the rate of local extinction events like supernovae. Different assumptions made only modest differences to his conclusions.

Our recent analysis of “grabby aliens”, done unaware of Olson’s work, is similar in many ways. We also assume visible long-expanding civilizations, we focus on a very simple model, in our case with three free parameters, and we fit two of them (expansion speed and appearance rate constant) to data in nearly the same way that Olson did.

The key point on which we differ is:

  1. My group uses a simple hard-steps-power-law for the expansionist alien appearance rate function, and estimates the power in that power law from the history of major evolutionary events on Earth.
  2. Using that same power law, we estimate humanity’s current date to be very early, at least if expansionist aliens do not arrive to set an early deadline. Others have estimated modest degrees of earliness, but they have ignored the hard-steps power law. With that included, we are crazy early unless both the power is implausibly low, and the minimum habitable star mass is implausibly large.
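The earliness claim above can be sketched numerically. If expansionist origins appear at a rate f(t) ~ t^n up to some horizon T, then the fraction of origin dates falling before time t is (t/T)^(n+1). In the toy code below, the horizon, the power, and the units are all illustrative assumptions, not the estimates from our paper:

```python
# Toy sketch of the hard-steps power law and our "earliness" percentile.
# Assume expansionist civilizations appear at rate f(t) ~ t**n on dates
# up to a horizon T; then the fraction appearing before time t is (t/T)**(n+1).

def percentile_rank(t, T, n):
    """Fraction of power-law-distributed origin dates that fall before time t."""
    return (t / T) ** (n + 1)

t_now = 13.8   # our date, in billions of years (rough)
T = 5000.0     # assumed horizon set by long-lived small stars (illustrative)
n = 6          # assumed power in the hard-steps law (illustrative)

rank = percentile_rank(t_now, T, n)
print(f"Percentile rank of our date: {rank:.2e}")  # tiny => we look "crazy early"
```

With a high power and a long habitable-date horizon, our percentile rank is astronomically small; making us look typical requires either a low power or a cutoff on the horizon, which is the role deadlines from expansionist aliens can play.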

So we seem to have something to add to Olson’s thoughtful foundations.

Looking over the coverage by others of Olson’s work, I notice that it all seems to completely ignore his empirical efforts! What they mainly care about seems to be that his having published on the idea of expansionist aliens licensed them to speculate on the theoretical plausibility of such aliens: How physically feasible is it to rapidly expand in space over millions of years? If physically feasible, is it socially feasible, and if so, would any civilization actually choose it?

That is, those who commented on Olson’s work all acted as if the only interesting topic was the theoretical plausibility of his postulates. They showed little interest in the idea that we could confront a simple aliens model with data, to estimate the actual aliens situation out there. They seem stuck assuming that this is a topic on which we essentially have no data, and thus can only speculate using our general priors and theories.

So I guess that should become our central focus now: to get people to see that we may actually have enough data now to get decent estimates on the basic aliens situation out there. And with a bit more work we might make much better estimates. This is not just a topic for theoretical speculation, where everyone gets to say “but have you considered this other scenario that I just made up, isn’t it sorta interesting?”

Here are some comments via email from S. Jay Olson:

It’s been about a week since I learned that Robin Hanson had, in a flash, seen all the basic postulates, crowd-sourced a research team, and smashed through his personal COVID infection to present a paper and multiple public talks on this cosmology. For me, operating from the outskirts of academia, it was a roller coaster ride just to figure out what was happening.

But, what I found most remarkable in the experience was this. Starting from two basic thoughts — 1) some fraction of aliens should be high-speed expansionistic, and 2) their home galaxy is probably not a fundamental barrier to expansion — so many conclusions appear inevitable: “They” are likely a cosmological distance from us. A major fraction of the universe is probably saturated by them already. Sufficiently high tech assumptions (high expansion speed) means they are likely invisible from our vantage point. If we can see an alien domain, it will likely cover a shockingly large angle in the sky. And the key datum for prediction is our cosmic time of arrival. It’s all there (and more), in both lines of research.

Beyond that, Robin has a knack for forcing the issue. If their “hard steps model” for the appearance rate of life is valid (giving f(t) ~ t^n), there aren’t too many ways to solve humanity’s earliness problem. Something would need to make the universe a very different place in the near cosmic future, as far as life is concerned. A phase transition resulting in the “end of the universe” would do it — bad news indeed. But the alternative is that we are, literally, the phase transition.


Response to Weyl

To my surprise, thrice in his recent 80,000 hours podcast interview with Robert Wiblin, Glen Weyl seems to point to me as representing a view that he dislikes. Yet, in all three cases, these disliked views aren’t remotely close to views that I hold.

Weyl: The Vickrey Auction, … problem is he had this very general solution, but which doesn’t really make any sense like in any practical case. And he pointed out that that was true. But everybody was so enamored of the fact that his was generally correct, that they didn’t try to find like versions of it that might actually make sense. They basically just said, “Oh, that’s correct in general,” and then either you were like Tyler and you’re like … just dismiss that whole thing and you’re like, “Ah, too abstract.” Or you were like, you know, Robin Hanson and you just said, “Let’s just do it! Let’s just do it!” You know? And like neither of those was really convincing.

The Vickrey auction was taught to me in grad school, but I’ve never been a big fan because it looked vulnerable to collusion (also a concern re Weyl’s quadratic voting proposals), and because I’d heard of problems in related lab experiments. I’ve long argued (e.g. here) for exploring new institution ideas, but via working our way up from smaller to larger scale trials, and then only after we’ve seen success at smaller scales. Theory models are often among the smallest possible trials. 

Weyl: What I definitely am against … is something which builds a politics that only wants to speak or only respects nerdy and mathematically inclined ways of approaching issues. I think that’s a huge mistake. … the rationalist community … has … obsessive focus on communicating primarily with and relating socially primarily to people who also agree that whatever set of practices they think defined rationality are the way to think about everything. And I think that, that is extremely dangerous … because I think A, it’s not actually true that most useful knowledge that we have comes from those methods. … And B, it’s fundamentally anti-democratic as an attitude … because if you think that the only people who have access to the truth are philosopher kings, it becomes hard to escape the conclusion that philosopher kings should rule. …

Weyl: So, Robin Hanson has this book, Elephant In The Brain, which has some interesting things in it, but I think ultimately is a long complaint that people aren’t interested in talking about politics in the way that I am interested in talking about politics. And that really annoys me. I would submit that, to someone that has that attitude, you should say, “Perhaps consider talking about politics in a different way. You might find that other people might find it easier to speak to you that way.” 

Weyl: There’s something called neo-reaction, … a politics that is built around the notion that basically there should be a small elite of people who own property and control power through that property. … Even though most people in this rationalist community would reject that kind of politics, I think there’s a natural tendency, if you have that set of social attitudes, to have your politics drift in that direction.

Our book, The Elephant in the Brain, has ten application chapters, only one of which is on politics, and that chapter compares key patterns of political behavior to two theories of why we are political: to change policy outcomes or to show loyalty to political allies. Neither theory is about being nerdy, mathematical, or “rational”, and most of the evidence we point to is not on styles of talking, nor do we recommend any style of talking.

Furthermore, every style of thinking or talking is compatible with the view that some people think much better than others, and also with the opposite view.  Nerdy or math styles are not different in this regard, so I see no reason to expect people with those styles of thinking to more favor “anti-democratic” views on thinking eliteness.

And of course, it remains possible that some people actually are much better at thinking than others. (See also two posts on my responses to other critics of econ style thinking.)

Wiblin: I guess in that case it seems like Futarchy, like Robin Hanson’s idea where people vote for what they want, but then bet on what the outcomes will be, might work quite well because you would avoid exploitation by having distributed voting power, but then you would have these superhuman minds would predict what the outcomes of different policies or different actions would be. Then they would be able to achieve whatever outcome was specified by a broad population. …

Weyl: I have issues with Futarchy, but I think what I really object to, it’s less even the worldview I’m talking about. I think really, the problem I have is that there is a rhetoric out there of trying to convince people that they’re insufficient and that everything should be the private property of a small number of people for this reason when in fact, if it was really the case that those few people were so important, and great, and powerful, they wouldn’t need to have all this rhetoric to convince other people of it. People would just see it, they would get it. 

Futarchy has nothing to do with the claim that everything should be the private property of a small number of people, nor have I ever made any such claim. Hopefully, this is just a case of a possible misreading of what Weyl said, and he didn’t intend to relate futarchy or myself to such views.

Added 3p: Weyl & I have been having a Twitter conversation on this, which you can find from here.


Have A Thing

I’m not into small talk; I prefer to talk to people about big ideas. I want to talk big ideas to people who are smart, knowledgeable, and passionate about big ideas, and where it seems that convincing them about something on a big idea has a decent chance of changing their behavior in important ways.

Because of this, I prefer to talk to people who “have a thing.” That is, who have some sort of abstract claim (or question) which they consider important and neglected, for which they often argue, and which intersects somehow with their life hopes/plans. When they argue, they are open to and will engage counter-arguments. They might push this thing by themselves, or as part of a group, but either way it matters to them, they represent it personally, and they have some reason to think that their personal efforts can make a difference to it.

People with a thing allow me to engage a big idea that matters to someone, via someone who has taken the time to learn a lot about it, and who is willing to answer many questions about it. Such a person creates the hope that I might change their actions by changing their mind, or that they might convince me to change my life hopes/plans. I may convince them that some variation is more promising, or that some other thing fits better with the reasons they give. Or I might know of a resource, such as a technique or a person, who could help them with their thing.

Yes, in part this is all because I’m a person with many things. So I can relate better to such people. And after I engage their thing, there’s a good chance that they will listen to and engage one of my things. Even so, having a thing is handy for many people who are different from me. It lets you immediately engage many people in conversation in a way so that they are likely to remember you, and be impressed by you if you are in fact impressive.

Yes, having a thing can be off-putting to the sort of people who like to keep everything mild and low-key, and make sure that their talk has little risk of convincing them to do something that might seem weird or passionate. But I consider this off-putting effect to be largely a gain, in sorting out the sort of people I’m less interested in.

Now having a thing won’t save you if you are a fool or an idiot. In fact, it might make that status more visible. But if you doubt you are either, consider having a thing.

Added 11p: Beware of two common failure modes for people with things: 1) not noticing how much others want to hear about your thing, 2) getting so attached to your thing that you don’t listen enough to criticism of it.

Note also that having things promotes an intellectual division of labor, which helps the world to better think through everything.

Added 11Jan: Beware a third failure mode: being more serious or preachy than your audience wants. You can be focused and interesting without making people feel judged.


Do I Offend?

The last eight months have seen four episodes where many people on Twitter called me a bad offensive person, often via rude profanity, sometimes calling for me to be fired or arrested. These four episodes were: sex inequality and redistribution, chances of a delayed harassment complaint, morality-induced overconfidence on historical counterfactuals, and implicit harassment in A Star Is Born. While these topics have occupied only a small fraction of my thought over these months, and a much smaller fraction over my career, they may have disproportionate effects on my reputation. So I’ve tried to pay close attention to the reasons people give. 

I think I see a consistent story. While in these cases I have not made moral, value, or political claims, when people read small parts of what I’ve claimed or asked, they say they can imagine someone writing those words for the purpose of promoting political views they dislike. And not just mild views that are just a bit on the other side of the political spectrum. No, they attribute to me the most extreme bad views imaginable, such as that I advocate rape, murder, slavery, and genocide. People say they are directly and emotionally traumatized by the offensive “creepy” feeling they get when they encounter someone with any prestige and audience seeming to publicly promote views with which they strongly disagree.

Some plausibly contributing factors here include my sometimes discussing sensitive topics, our increasing political polarization, the ease of making mobs and taking words out of context on Twitter, increasing ease of making new accusations similar to previous ones, and my terse and analytic writing style combined with my adding disclaimers re my allegiance to “correct” views. There’s also my following the standard poll practice of not telling those who answer polls the motives for those polls. And I’m a non-poor older white male associated with economics in general and GMU econ in particular; many see all these as indicators of bad political views. 

Digging a little deeper, trauma is plausibly increased by a poll format, which stokes fears that bad people will find out that they are not alone, and be encouraged to learn that many others share their views. I suspect this helps explain complaints that my poll population is not representative of my nation or planet.  

I also suspect bad faith. Long ago when I had two young kids, they would sometimes pick fights, for example on long car trips. One might start singing, to which the other would complain. We might agree that singing is too much for such a small space. Then the first might start to quietly hum, which we might decide is okay. Then the first might hum more loudly and triumphantly, while the second might writhe, cover their ears, and make a dramatic display of suffering.

Similarly, I suspect bad faith when some a) claim to experience “harassment” level suffering due to encountering political views with which they disagree, and yet are fine with high levels of sex, violence, and profanity in TV & movies, b) infer indirectly from my neutral analytical text that I promote the most extreme views imaginable, and c) do not notice that such claims are both a priori implausible and inconsistent with my large corpus of public writing; they either haven’t read much of it or purposely mischaracterize it. 

The idea of a large shared intellectual sphere wherein we can together analyze difficult topics holds a strong appeal to me. The main criteria for consideration in such a sphere should be the coherence and persuasiveness of specific relevant arguments. When evaluating each argument, there is usually little need to infer distantly related positions of those who offer arguments. Usually an argument either works or it doesn’t, regardless of who says it or why.

I try to live up to such ideals in how I write and talk. I hope that many who read and follow me share these ideals, and I appreciate their support. I’m thus not favorably inclined toward suggestions that I stop discussing sensitive topics, or that I adopt a much more elaborate disclaimer style, or that I stop asking my followers questions, to prevent others from being traumatized by hearing their answers, or to keep followers from finding out that others share their opinions.

Added 29Dec: I did four follow-up polls to probe tendencies to take offense, focusing on the Nazi case. Respondents said the fraction of tweeters who actually wish Nazis had won WWII is tiny; 63% said it is <0.1%, though 4% gave >10%. And 79% said that this Nazi fraction is <3% among those “who mention `Nazis’ neutrally in a tweet, without explicitly praising or criticizing them, and who explicitly claim otherwise”, though 10% said >15%. Also, 58% said that for a tweet to be considered “offensive” or “harassment”, it would need to suggest a chance >50% that its author actually wishes Nazis had won WWII. However, 10% gave a threshold of <3% and 19% gave one <15%.

Finally, 43% gave a <3% “chance the author of a Twitter poll which asks about chance world would have been better off had Nazis won WWII, actually wishes that Nazis had won WWII”. However 20% gave a chance >50%, and 37% gave a chance >15%.

An obvious conclusion here is that, even among those who respond to my twitter polls, a substantial fraction have set hair-triggers for offense. For example, it seems >20% say merely asking if the world would have been better off if Nazis had won justifies a high enough chance of a Nazi author to be offensive. Explicit denials may help, but if the offended are much more vocal than are others, a vocal choir of objection seems largely inevitable.

This makes me wonder again if the “silent majority” might benefit from juries or polls which show them that the vocal offended are a minority. Though that minority will likely also express offense re such juries or polls.

Added 28Jan: A recent burst of outrage on the Star is Born episode confirms this account to some extent.


My Poll, Explained

So many have continued to ask me the same questions about my recent twitter poll, that I thought I’d try to put all my answers in one place. This topic isn’t that fundamentally interesting, so most of you may want to skip this post.

Recently, Christine Blasey Ford publicly accused US Supreme Court nominee Brett Kavanaugh of a sexual assault. This accusation will have important political consequences, however it is resolved. Congress and the US public are now put in the position of having to evaluate the believability of this accusation, and thus must consider which clues might indicate if the accusation is correct or incorrect.

Immediately after the accusation, many said that the timing of the accusation seemed to them suspicious, occurring exactly when it would most benefit Democrats seeking to derail any nomination until after the election, when they may control the Senate. And it occurred to me that a Bayesian analysis might illuminate this issue. If T = the actual timing, A = accurate accusation, W = wrong accusation, then how much this timing consideration pushes us toward final beliefs is given by the likelihood ratio p(T|W)/p(T|A). A ratio above one pushes against believing the accusation, while a ratio below one pushes for it.
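As a sketch of the mechanics of this update, with made-up probabilities purely for illustration (not estimates for the actual case):

```python
# Hedged sketch of the likelihood-ratio update described above.
# The input probabilities are illustrative assumptions only.

def posterior_odds(prior_odds_A, p_T_given_A, p_T_given_W):
    """Odds(A:W | T) = Odds(A:W) * [P(T|A) / P(T|W)] by Bayes rule."""
    return prior_odds_A * (p_T_given_A / p_T_given_W)

prior = 1.0    # assumed even prior odds of accurate vs. wrong accusation
p_T_A = 0.05   # assumed chance of this exact timing given an accurate accusation
p_T_W = 0.20   # assumed chance of this exact timing given a wrong accusation

odds = posterior_odds(prior, p_T_A, p_T_W)
print(f"posterior odds accurate:wrong = {odds:.2f}")
```

Here p(T|W)/p(T|A) = 4 > 1, so the posterior odds on an accurate accusation fall below the prior odds; reversing the two likelihoods would push the odds up instead.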

The term P(T|A) seemed to me the most interesting term, and it occurred to me to ask what people thought about it via a Twitter poll. (If there was continued interest, I could ask another question about the other term.) Twitter polls are much cheaper and easier for me to do than other polls. I’ve done dozens of them so far, and rarely has anyone objected. Such polls only allow four options, and you don’t have many characters to explain your question. So I used those characters mainly to make clear a few key aspects of the accusation’s timing.

Many claimed that my wording was misleading because it didn’t include other relevant info that might support the accusation. Like who else the accuser is said to have told when, and what pressures she is said to have faced when to go public. They didn’t complain about my not including info that might lean the other way, such as low detail on the claimed event and a lack of supporting witnesses. But a short tweet just can’t include much relevant info; I barely had enough characters to explain key accusation timing facts.

It is certainly possible that my respondents suffered from cognitive biases, such as assuming too direct a path between accuser feelings and a final accusation. To answer my poll question well, they should have considered many possible complex paths by which an accuser says something to others, who then tell other people, some of whom then choose when to bring pressure back on that accuser to make a public accusation. But that’s just the nature of any poll; respondents may well not think carefully enough before answering.

For the purposes of a Twitter poll, I needed to divide the range from 0% to 100% into four bins.
I had high uncertainty about where poll answers would lie, and for the purpose of Bayes rule it is factors that matter most. So I chose three ranges spanning roughly a factor of 4 to 5 each, and a leftover bin encompassing an infinite factor. If anything, my choice was biased against answers in the infinite-factor bin.

I really didn’t know which way poll answers would go. If most answers were high fractions, that would tend to support the accusation, while if most answers were low fractions, that would tend to question the accusation. Many accused me of posting the poll in order to deny the accusation, but for that to work I would have needed a good guess on the poll answers. Which I didn’t have.

My personal estimate would be somewhere in the top two ranges, and that plausibly biased me to pick bins toward such estimates.  As two-thirds of my poll answers were in the lowest bin I offered, that suggests that I should have offered an even wider range of factors. Some claimed that I biased the results by not putting more bins above 20%. But that fraction is still below the usual four-bin target fraction of 25% per bin.

It is certainly plausible that my pool of poll respondents is not representative of the larger US or world population. And many called it irresponsible and unscientific to run an unrepresentative poll, especially if one doesn’t carefully show how wordings matter via A/B testing. But few complain about the thousands of other Twitter polls run every day, or about my dozens of others. And the obvious easy way to show that my pool or wordings matter is to show different answers with another poll where those vary. Yet almost no one even tried that.

Also, people don’t complain about others asking questions in simple public conversations, even though those can be seen as N=1 examples of unrepresentative polls without A/B testing on wordings. It is hard to see how asking thousands of people the same question via a Twitter poll is less informative than just asking one person that same question.

Many people said it is just rude to ask a poll question that insinuates that rape accusations might be wrong, especially when we’ve just seen someone going through all the pain of making one. They say that doing so is pro-rape and discourages the reporting of real rapes, and that this must have been my goal in making this poll. But consider an analogy with discussing gun control just after a shooting. Some say it is rude then to discuss anything but sympathy for victims, but others say this is exactly a good time to discuss gun control. I say that the moment when we must evaluate a specific rape accusation is exactly a good time to think about which clues might indicate whether it is an accurate or a wrong accusation.

Others say that it is reasonable to conclude that I’m against their side if I didn’t explicitly signal within my poll text that I’m on their side. That’s just the sort of signaling game equilibrium we are in. And so they are justified in denouncing me for being on the wrong side. But it seems a quite burdensome standard to hold on polls, which already have too few characters to allow an adequate explanation of a question, and it seems obvious that the vast majority of Twitter polls today are not in fact being held to this standard.

Added 24Sep: I thought the poll interesting enough to ask, relative to its costs to me, but I didn’t intend to give it much weight. It was all the negative comments that made it a bigger deal.

Note that, at least in my Twitter world, we see a big difference in attitudes between vocal folks who tweet and those who merely answer polls. That latter “silent majority” is more skeptical of the accusation.


Great Filter, 20 Years On

Twenty years ago today, I introduced the phrase “The Great Filter” in an essay on my personal website. Today Google says 300,000 web pages use this phrase, and 4.3% of those mention my name. This essay has 45 academic citations, and my related math paper has 17 cites.

These citations are a bit over 1% of my total citations, but this phrase accounts for 5% of my press coverage. This press is mostly dumb luck. I happened to coin a phrase on a topic of growing and wide interest, yet others more prestigious than I didn’t (as they often do) bother to replace it with another phrase that would trace back to them.

I have mixed feelings about writing the paper. Back then I was defying the usual academic rule to focus narrowly. I was right that it is possible to contribute to many more different areas than most academics do. But what I didn’t fully realize is that to academic economists non-econ publications don’t exist, and that publication is only the first step to academic influence. If you aren’t around in an area to keep publishing, giving talks, going to meetings, doing referee reports, etc., academics tend to correctly decide that you are politically powerless and thus you and your work can safely be ignored.

So I’m mostly ignored by the academics who’ve continued in this area – don’t get grants, students, or invitations to give talks, to comment on paper drafts, or to referee papers, grants, books, etc. The only time I’ve ever been invited to talk on the subject was a TEDx talk a few years ago. (And I’ve given over 350 talks in my career.) But the worst scenario of being ignored is that it is as if your paper never existed, and so you shouldn’t have bothered writing it. Thankfully I have avoided that outcome, as some of my insights have been taken to heart, both academically and socially. People now accept that finding independent alien life simpler than us would be bad news, that the very hard filter steps should be roughly equally spaced in our history, and that the great filter gives a reason to worry about humanity’s future prospects.


My Market Board Game

From roughly 1989 to 1992, I explored the concept of prediction markets (which I then called “idea futures”) in part via building and testing a board game. I thought I’d posted details on my game before, but searching I couldn’t find anything. So here is my board game.

The basic idea is simple: people bet on “who done it” while watching a murder mystery. So my game is an add-on to a murder mystery movie or play, or a game like How to Host a Murder. While watching the murder mystery, people stand around a board where they can reach in with their hands to directly and easily make bets on who done it. Players start with the same amount of money, and in the end whoever has the most money wins (or perhaps each player wins in proportion to their final holdings).

Together with Ron Fischer (now deceased) I tested this game a half-dozen times with groups of about a dozen. People understood it quickly and easily, and had fun playing. I looked into marketing the game, but was told that game firms do not listen to proposals by strangers, as they fear being sued later if they came out with a similar game. So I set the game aside.

All I really need to explain here is how mechanically to let people bet on who done it. First, you give all players 200 in cash, and from then on they have access to a “bank” where they can always make “change”:

Poker chips of various colors can represent various amounts, like 1, 5, 10, 25, or 100. In addition, you make similar-sized cards that read things like “Pays 100 if Andy is guilty.” There are different cards for different suspects in the murder mystery, each suspect with a different color card. The “bank” allows exchanges like trading two 5 chips for one 10 chip, or trading 100 in chips for a set of all the cards, one for each suspect.

Second, you make a “market board”, which is an array of slots, each of which can hold either chips or a card. If there were six suspects, an initial market board could look like this:

For this board, each column is about one of the six suspects, and each row is about one of these ten prices: 5, 10, 15, 20, 25, 30, 40, 50, 60, 80. Here is a blow-up of one slot in the array:

Every slot holds either the kind of card for that column, or it holds the amount of chips for that row. The one rule of trading is: for any slot, anyone can swap the right card for the right amount of chips, or can make the opposite swap, depending on what is in the slot at the moment. The swap must be immediate; you can’t put your hand over a slot to reserve it while you get your act together.
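The one trading rule can be sketched in code. The suspect names below are made-up (except Andy and Fred, which appear in the post), and the real game of course used physical chips, cards, and a board rather than a program.

```python
# Sketch of the market board: each slot holds either a card for its
# column's suspect or chips equal to its row's price.

SUSPECTS = ["Andy", "Beth", "Carl", "Dora", "Eve", "Fred"]
PRICES = [5, 10, 15, 20, 25, 30, 40, 50, 60, 80]

# Initial fill rule: a slot starts with chips if (number of suspects x
# row price) is under 100, and with that suspect's card otherwise.
board = {s: {p: "chips" if len(SUSPECTS) * p < 100 else "card"
             for p in PRICES} for s in SUSPECTS}

def swap(board, suspect, price, player_gives):
    """The one rule of trading: at any slot, anyone may swap the right
    card for the right amount of chips, or make the opposite swap,
    depending on what the slot holds at the moment."""
    in_slot = board[suspect][price]
    if in_slot == player_gives:
        raise ValueError("slot already holds what you are offering")
    board[suspect][price] = player_gives
    return in_slot

# A "buy": put 20 in chips into Andy's 20 slot, take out the card that
# pays 100 if Andy is guilty (that slot starts with a card, as 6*20 >= 100).
received = swap(board, "Andy", 20, player_gives="chips")
print(received)  # card
```

Buying at higher and higher rows, or selling at lower and lower rows, is what moves the visible “price” for each suspect.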

This could be the market board near the end of the game:

Here the players have settled on Pam as most likely to have done it, and Fred as least likely. At the end, players compute their final score by combining their cash in chips with 100 for each winning card; losing cards are worth nothing. And that’s the game!
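The end-of-game scoring is simple enough to sketch directly; the holdings below are invented for illustration, reusing Pam and Fred from the example above.

```python
def final_score(chips, cards, guilty):
    """Final score: cash in chips, plus 100 for each card naming the
    guilty suspect; all other cards are worth nothing."""
    return chips + 100 * sum(1 for card in cards if card == guilty)

# A player ends with 85 in chips, two Pam cards, and one Fred card;
# if Pam did it, the score is 85 + 2*100 = 285.
print(final_score(85, ["Pam", "Pam", "Fred"], guilty="Pam"))  # 285
```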

For the initial board, fill a row with chips when the number of suspects times the price for that row is less than 100, and fill that row with cards otherwise. Any number of suspects can work for the columns, and any ordered set of prices between 0 and 100 can work for the rows. I made my boards by taping together clear-color M512 boxes from Tap Plastics, and taping printed white paper on tops around the edge.

Added 30Aug: Here are a few observations about game play. 1) Many, perhaps most, players were so engaged by “day trading” in this market that they neglected to watch and think enough about the murder mystery. 2) You can allow players to trade directly with each other, but players show little interest in doing this. 3) Players found it more natural to buy than to sell. As a result, prices drifted upward, and often the sum of the buy prices for all the suspects was over 100. An electronic market maker could ensure that such arbitrage opportunities never arise, but in this mechanical version some players specialized in noticing and correcting this error.

Added 31Aug: A twitter poll picked a name for this game: Murder, She Bet.

Added 9Sep: Expert gamer Zvi Mowshowitz gives a detailed analysis of this game. He correctly notes that incentives for accuracy are lower in the endgame, though I didn’t notice substantial problems with endgame accuracy in the trials I ran.


Age of Em Paperback

Today is the official U.S. release date for the paperback version of my first book The Age of Em: Work, Love, and Life when Robots Rule the Earth. (U.K. version came out a month ago.) Here is the new preface:

I picked this book topic so it could draw me in, and I would finish. And that worked: I developed an obsession that lasted for years. But once I delivered the “final” version to my publisher on its assigned date, I found that my obsession continued. So I collected a long file of notes on possible additions. And when the time came that a paperback edition was possible, I grabbed my chance. As with the hardback edition, I had many ideas for changes that might make my dense semi-encyclopedia easier for readers to enjoy. But my core obsession again won out: to show that detailed analysis of future scenarios is possible, by showing just how many reasonable conclusions one can draw about this scenario.

Also, as this book did better than I had a right to expect, I wondered: will this be my best book ever? If so, why not make it the best it can be? The result is the book you now hold. It has over 42% more citations, and 18% more words, but it is only a bit easier to read. And now I must wonder: can my obsession stop now, pretty please?

Many are disappointed that I do not more directly declare if I love or hate the em world. But I fear that such a declaration gives an excuse to dismiss all this; critics could say I bias my analysis in order to get my desired value conclusions. I’ve given over 100 talks on this book, and never once has my audience failed to engage value issues. I remain confident that such issues will not be neglected, even if I remain quiet.

These are the only new sections in the paperback: Anthropomorphize, Motivation, Slavery, Foom, After Ems. (I previewed two of them here & here.)  I’ll make these two claims for my book:

  1. There’s at least a 5% chance that my analysis will usefully inform the real future, i.e., that something like brain emulations are actually the first kind of human-level machine intelligence, and my analysis is mostly right on what happens then. If it is worth having twenty books on the future, it is worth having a book with a good analysis of a 5% scenario.
  2. I know of no other analysis of a substantially-different-from-today future scenario that is remotely as thorough as Age of Em. I like to quip, “Age of Em is like science fiction, except there is no plot, no characters, and it all makes sense.” If you often enjoy science fiction but are frustrated that it rarely makes sense on closer examination, then you want more books like Age of Em. The success or not of Age of Em may influence how many future authors try to write such books.

Our Book’s New Ground

In today’s Wall Street Journal, Matthew Hutson, author of The 7 Laws of Magical Thinking: How Irrational Beliefs Keep Us Happy, Healthy, and Sane, reviews our new book The Elephant in the Brain. He starts and ends with obligatory but irrelevant references to Trump. Quotes from the rest:

The book builds on centuries of writing about self-deception. … I can’t say that the book covers new ground, but it is a smart synthesis and offers several original metaphors. People self-deceive about lots of things. We overestimate our ability to drive. We conveniently forget who started an argument. … Much of what we do, including our most generous behavior, the authors say, is not meant to be helpful. We are, like many other members of the animal kingdom, competitively altruistic—helpful in large part to earn status. … Casual conversations, for instance, often trade in random information. But the point is not to trade facts for facts; what you are actually doing, the book argues, is showing off so people can evaluate your intellectual versatility. …

The authors take particular interest in large-scale social issues and institutions, showing how systems of collective self-deception help explain the odd behavior we see in art, charity, education, medicine, religion and politics. Why do people vote? Not to strengthen the republic. …. Instead, we cheer for our team and participate as a signal of loyalty, hoping for the benefits of inclusion. In education, as many economists have argued, learning is ancillary to accreditation and status. … In many areas of medicine, they note, increased care does not improve outcomes. People offer it to broadcast helpfulness, or demand it to demonstrate how much support they have from others.

“The Elephant in the Brain” is refreshingly frank and penetrating, leaving no stone of presumed human virtue unturned. The authors do not even spare themselves. … It is accessibly erudite, deftly deploying essential technical concepts. … Still, the authors urge hope. … There are ways to leverage our hidden motives in the pursuit of our ideals. The authors offer a few suggestions. … Unfortunately, the book devotes only a few pages to such solutions. “The Elephant in the Brain” does not judge us for hiding selfish motives from ourselves. And to my mind, given that we will always have selfish motives, keeping them concealed might even provide a buffer against naked strife. (more)

All reasonable, except maybe for “can’t say that the book covers new ground.” Yes, scholars of self-deception like Hutson will find plausible both our general thesis and most of our claims about particular areas of life. And yes those specific claims have almost all been published before. Even so, I bet most policy experts will call our claims on their particular area “surprising” and even “extraordinary”, and judge that we have not offered sufficiently extraordinary evidence in support. I’ve heard education policy experts say this on Bryan Caplan’s new book, The Case Against Education. And I’ve heard medicine policy experts say this on our medicine claims, and political system experts say this on our politics claims.

In my view, the key problem is that, to experts in each area, no modest amount of evidence seems sufficient support for claims that sound to them so surprising and extraordinary. Our story isn’t the usual one that people tell, after all. It is only by seeing that substantial if not overwhelming evidence is available for similar claims covering a great many areas of life that each claim can become plausible enough that modest evidence can make these conclusions believable. That is, there’s an intellectual contribution to make by arguing together for a large set of related contrarian-to-experts claims. This is what I suggest is original about our book.

I expect that experts in each policy area X will be much more skeptical about our claims on X than about our claims on the other areas. You might explain this by saying that our arguments are misleading, and only experts can see the holes. But I instead suggest that policy experts in each X are biased because clients prefer them to assume the usual stories. Those who hire education policy experts expect them to talk about better learning the material, and so on. Such biases are weaker for those who study motives and self-deception in general.

Hutson has one specific criticism:

The case for medicine as a hidden act of selfishness may have some truth, but it also has holes. For example, the book does not address why medical spending is so much higher in the U.S. than elsewhere—do Americans care more than others about health care as a status symbol?

We do not offer our thesis as an explanation for all possible variations in these activities! We say that our favored motive is under-acknowledged, but we don’t claim that it is the only motive, nor that motive variations are the only way to explain behavioral variation. The world is far too big and complex for one simple story to explain it all.

Finally, I must point out one error:

“The Elephant in the Brain,” a book about unconscious motives. (The titular pachyderm refers not to the Republican Party but to a metaphor used in 2006 by the social psychologist Jonathan Haidt, in which reason is the rider on the elephant of emotion.)

Actually it is a reference to the common idea of “the elephant in the room”, a thing we can all easily see but refuse to admit is there. We say there’s a big one regarding how our brains work.
