
My 11 Bets at 10-1 Odds on 10M Covid Deaths by 2022

In February 2020, I made many bets on Covid-19, including 11 bets at ten-to-one odds on whether it would cause 10 million deaths worldwide by 2022, as estimated by WHO.

WHO has a Q&A page on Covid excess deaths that includes this section:

Why is excess mortality the preferred measure? … aggregate COVID-19 case and death numbers … being reported to WHO … under-estimate the number of lives lost due to the pandemic … In light of the challenges posed by using reported data on COVID-19 cases and deaths, excess mortality is considered a more objective and comparable measure that accounts for both the direct and indirect impacts of the pandemic.

This WHO page, updated daily, lists reported deaths. This WHO page estimated “The true death toll of COVID-19”, or world covid excess deaths, as of Dec. 31, 2020. I expect them to soon post a similar page with death estimates as of Dec. 31, 2021. But I doubt those estimates will differ much from those of The Economist, which as of Dec. 30, 2021 said:

The pandemic’s true death toll; Our daily estimate of excess deaths around the world … Although the official number of deaths caused by covid-19 is now 5.4m, our single best estimate is that the actual toll is 18.6m people. We find that there is a 95% chance that the true value lies between 11.6m and 21.6m additional deaths.


For many bets we agreed that if there were two estimates of the number instead of one, we’d go with their geometric mean. The geometric mean of 5.4 and 18.6 is 10.02.
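For concreteness, here is a minimal Python sketch of that calculation (the 5.4 and 18.6 figures, in millions, are The Economist’s numbers quoted above):

```python
import math

official = 5.4     # officially reported covid deaths, in millions
estimated = 18.6   # The Economist's central excess-deaths estimate, in millions

# Geometric mean of two numbers: the square root of their product.
geo_mean = math.sqrt(official * estimated)
print(round(geo_mean, 2))  # 10.02, just above the 10M threshold
```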

Here is the current status of my 11 bets, with a link to the bets and the amount I’m owed. (I’ll update this as things change.)

These claim to win, and say I should pay them:

No response to queries (both msg & email):

Still thinking:

  • A Twitter msg bet that I’m keeping private for now, $5000

Waiting for official WHO 2021 Excess Deaths page:

Paid to me:

Some say that it is rude of me to brag about winning. But I need to make this bet situation public in order to pressure bettors to make good on their promises.

Some say it is immoral to bet on death. But I didn’t cause these deaths, and my public bets helped convince many to take this problem more seriously, for which they’ve thanked me.

Added 12Jan: Many are talking as if the issue is direct vs. indirect deaths, but I’d be very surprised if more than a third of excess deaths are indirect. Most of them were caused directly by covid, but just not caught by official testing and diagnosis systems.


Minds Almost Meeting

Many travel to see exotic mountains, buildings, statues, or food. But me, I want to see different people. If it could be somehow arranged, I’d happily “travel” to dozens of different subcultures that live within 100 miles of me. But I wouldn’t just want to walk past them, I’d want to interact enough to get in their heads.

Working in diverse intellectual areas has helped. So far, these include engineering, physics, philosophy, computer science, statistics, economics, polisci, finance, futurism, psychology, and astrophysics. But there are so many other intellectual areas I’ve hardly touched, and far more non-intellectual heads of which I’ve seen so little.

Enter the remarkable Agnes Callard with whom I’ve just posted ten episodes of our new podcast “Minds Almost Meeting”:

Tagline: Agnes and Robin talk, try to connect, often fail, but sometimes don’t.

Summary: Imagine two smart curious friendly and basically truth-seeking people, but from very different intellectual traditions. Traditions with different tools, priorities, and ground rules. What would they discuss? Would they talk past each other? Make any progress? Would anyone want to hear them? Economist Robin Hanson and philosopher Agnes Callard decided to find out.

Topics: Paradox of Honesty, Plagiarism, Future Generations, Paternalism, Punishment, Pink and Purple, Aspiration, Prediction Markets, Hidden Motives, Distant Signals.

It’s not clear who will be entertained by our efforts, but I found the process fascinating, informative, and rewarding. Though our audio quality was low at times, it is still understandable.

Agnes is a University of Chicago professor of philosophy and a rising-star “public intellectual” who often publishes in places like The New Yorker. She and I are similar in both being oddball, hard-to-offend, selfish parents and academics. We both have religious upbringings, broad interests, and a taste for abstraction. But we differ by generation, gender, and especially in our intellectual backgrounds and orientations (me vs. her): STEM vs. humanities, futurist vs. classicist, explaining via past shapings vs. future aspirations, and relying more vs. less on large systems of thought.

Before talking to Agnes, I hadn’t realized just how shaped I’ve been by assimilating many large formal systems of thought, such as calculus, physics, optimization, algorithms, info theory, decision theory, game theory, economics, etc. Though the core of these systems can be simple, each has been connected to many diverse applications, and many larger analysis structures have been built on top of them.

Yes these systems, and their auxiliary structures and applications, are based on assumptions that can be wrong. But their big benefit is that shared efforts to use them have rooted out many (though hardly all) contradictions, inconsistencies, and incoherences. So my habit of trying when possible to match any new question to one of these systems is likely to produce, on average, a more coherent resulting analysis. I’m far more interested in applying existing systems to big neglected topics than in inventing new systems.

In contrast, though philosophers like Agnes who rely on few such structures beyond simple logic can expect their arguments to be accessible to wider audiences, they must also expect a great many incoherences in their analysis. Which is part of why they so often disagree, and build such long chains of back and forth argumentation. I agree with Tyler, who in his conversation with Agnes said these long chains suggest a problem. However, I do see the value of having some fraction of intellectuals taking this simple robust strategy, as a complement to more system-focused strategies.

Thank you Agnes Callard, for helping me to see a wider intellectual world, including different ways of thinking and topics I’ve neglected.


What I Hold Sacred

Someone recently told me that I stood out compared to other writers in never seeming to treat anything as sacred. Which seemed to them awkward, odd, and implausible, as much as do the opposite sort of writers, who seem to treat almost all topics and issues as sacred. More plausibly, most people do treat some minority of things as especially sacred, and if they don’t reveal that in their writing, they are probably hiding it from others, and maybe also from themselves.

This seems plausible enough that it pushes me to try to identify and admit what I hold sacred. When I search for ways to identify what people hold sacred, I find quite a lot of rather vague descriptions and associations. The most concrete signs I find are: associating it with rituals and symbols, treating it with awe and reverence, unwillingness to trade other things for it, and outrage at those who disrespect it.

The best candidate I can find is: truth-seeking. More specifically: truth-seeking among intellectuals on important topics. That is, the goal is for the world to learn more together on key abstract topics, and I want each person who contributes substantially to such projects to add the most that they can, given their constraints and the budgets they are willing to allocate to it. I don’t insist anyone devote themselves wholly to this, and I’m less concerned with each person always being perfectly honest than with us together figuring stuff out.

I admit that I do treat this with reverence, and I’m reluctant to trade it for other things. And I’d more often express outrage at others disrespecting it if I thought I’d get more support on such occasions. Yes, most everyone gives great lip service allegiance to this value. But most suggest that there are few tradeoffs between this and other values, and also that following a few simple rules of thumb (e.g., don’t lie, give confidence intervals) is sufficient; no need to dig deeper. In contrast, I think it takes long-sustained careful thought to really see what would most help for this goal, and I also see many big opportunities to sacrifice other things for this goal.

How can you better affirm this value? It’s simple, but hard: Continually ask yourself what are the most important topics, what are the most promising ways to advance them, and what are your comparative advantages re such efforts. Do not assume that answers to these questions are implicit in the status and rewards that others offer you for various activities. The world mostly doesn’t care much, and so if you do care more you can’t focus on pleasing the world.

So why do I seem reluctant to talk about this? I think because I feel vulnerable. When you admit what is most precious to you, others might threaten it in order to extort concessions from you. And it is hard to argue well for why any particular value should be the most sacred. You run out of arguments and must admit you’ve made a choice you can’t justify. I so admit.


Opinion Entrenchment

How do and should we form and change opinions? Logic tells us to avoid inconsistencies and incoherences. Language tells us to attend to how meaning is inferred from ambiguous language. Decision theory says to distinguish values from fact opinion, and says exactly how decisions should respond to these. Regarding fact opinion, Bayesian theory says to distinguish priors from likelihoods, and says exactly how fact opinion should respond to evidence.

Simple realism tells us to expect errors in actual opinions, relative to all of these standards. Computing theory says to expect larger errors on more complex topics, and opinions closer to easily computed heuristics. And many kinds of human and social sciences suggest that we see human beliefs as often like clothes, which in mild weather we use more to show our features to associates than to protect ourselves from the elements. Beliefs are especially useful for showing loyalty and morality.

There’s another powerful way to think about opinions that I’ve only recently appreciated: opinions get entrenched. In biology, natural selection picks genes that are adaptive, but adds error. These gene choices change as environments change, except that genes which are entangled with large complex and valued systems of genes change much less; they get entrenched.

We see entrenchment also all over our human systems. For example, at my university the faculty is divided into disciplines, the curricula into classes, and classes into assignments in ways that once made sense, but now mostly reflect inertia. Due to many interdependencies, it would be slow and expensive to change such choices, so they remain. Our legal system accumulates details that become precedents that many rely on, and which become hard to change. As our software systems accrue features, they get fragile and harder to change. And so on.

Beliefs also get entrenched. That is, we are often in the habit of building many analyses from the same standard sets of assumptions. And the more analyses that we have done using some set of assumptions, the more reluctant we are to give up that set. This attitude toward the set is not very sensitive to the evidential or logical support we see for each of its assumptions. In fact, we are often pretty certain that individual assumptions are wrong, but because they greatly simplify our analysis, we hope that they still enable a decent approximation from their set.

When we use such standard assumption sets, we usually haven’t thought much about the consequences of individually changing each assumption in the set. As long as we can see some plausible ways in which each assumption might change conclusions, we accept it as part of the set, and hold roughly the same reluctance to give it up as for all the other members.

For example, people often say “I just can’t believe Fred’s dead”, meaning not that the evidence of Fred’s death isn’t sufficient, but that it will take a lot of work to think through all the implications of this new fact. The existence of Fred had been a standard assumption in their analysis. A person tempted to have an affair is somewhat deterred from this because of their standard assumption that they were not the sort of person who has affairs; it would take a lot of work to think through their world under this new assumption. This similarly discourages people from considering that their spouses might be having affairs.

In academic theoretical analysis, each area tends to have standard assumptions, many of which are known to be wrong. But even so, there are strong pressures to continue using prior standard assumptions, to make one’s work comparable to that of others. The more different things that are seen to be explained or understood via an assumption set, the more credibility is assigned to each assumption in that set. Evidence directly undermining any one such assumption does little by itself to reduce use of the set.

In probability theory, the more different claims one adds to a bundle, the less likely is the conjunction of that bundle. However, the more analyses that one makes with an assumption set, the more entrenched it becomes. So by combining different assumption sets so that they all get credit for all of their analyses, one makes those sets more, not less, entrenched. Larger bundles get less probability but more entrenchment.
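A tiny numeric sketch of that contrast, with purely illustrative numbers:

```python
# Illustrative only: a bundle of n independent assumptions, each 90% likely.
# The joint probability of the whole bundle falls as it grows, even while
# (per the entrenchment story) each analysis built on the bundle makes it
# feel more credible.
p_each = 0.9
for n in [1, 3, 5, 10]:
    print(n, round(p_each ** n, 3))
# prints: 1 0.9, 3 0.729, 5 0.59, 10 0.349
```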

Note that fictional worlds that specify maximal detail are maximally large assumption sets, which thus maximally entrench.

Most people feel it is quite reasonable to disagree, and that claim is a standard assumption in most reasoning about reasoning. But a philosophy literature did arise wherein some questioned that assumption, in the context of a certain standard disagreement scenario. I was able to derive some strong results, but in a different and to my mind more relevant scenario. But the fact of my using a different scenario, and being from a different discipline, meant my results got ignored.

Our book Elephant in the Brain says that social scientists have tended to assume the wrong motives re many common behaviors. While our alternate motives are about as plausible and easy to work with as the usual motives, the huge prior investment in analysis based on the usual motives means that few are interested in exploring our alternate motives. There is not just theory analysis investment, but also investment in feeling that we are good people, a claim which our alternate assumptions undermine.

Even though most automation today has little to do with AI, and has long followed steady trends, with almost no effect on overall employment, the favored assumption set among talking elites recently remains this: new AI techniques are causing a huge trend-deviating revolution in job automation, soon to push a big fraction of workers out of jobs, and within a few decades may totally surpass humans at most all jobs. Once many elites are talking in terms of this assumption set, others also want to join the same conversation, and so adopt the same set. And once each person has done a lot of analysis using that assumption set, they are reluctant to consider alternative sets. Challenging any particular item in the assumption set does little to discourage use of the set.

The key assumption of my book Age of Em, that human level robots will be first achieved via brain emulations, not AI, has a similar plausibility to AI being first. But this assumption gets far less attention. Within my book, I picked a set of standard assumptions to support my analysis, and for an assumption that has an X% chance of being wrong, my book gave far less than X% coverage to that possibility. That is, I entrenched my standard assumptions within my book.

Physicists have long taken one of their standard assumptions to be denial of all “paranormal” claims, taken together as a set. That is, they see physics as denying the reality of telepathy, ghosts, UFOs, etc., and see the great success (and status) of physics overall as clearly disproving such claims. Yes, they once mistakenly included meteorites in that paranormal set, but they’ve fixed that. Yet physicists don’t notice that even though many describe UFOs as “physics-defying”, they aren’t that at all; they only plausibly defy current human tech abilities. Yet the habit of treating all paranormal stuff as the same denied set leads physicists to continue to staunchly ridicule UFOs.

I can clearly feel my own reluctance to consider theories wherein the world is not as it appears, because we are being fooled by gods, simulation sysops, aliens, or a vast world elite conspiracy. Sometimes this is because those assumptions seem quite unlikely, but in other cases it is because I can see how much I’d have to rethink given such assumptions. I don’t want to be bothered; haven’t I already considered enough weird stuff for one person?

Life on Mars is treated as an “extraordinary” claim, even though the high rate of rock transfer between early Earth and early Mars makes it nearly as likely that life came from Mars to Earth as vice versa. This is plausibly because life starting only on Earth is the standard assumption used in many analyses, while life starting on Mars seems like a different conflicting assumption.

Across a wide range of contexts, our reluctance to consider contrarian claims is often less due to their lacking logical or empirical support, and more because accepting them would require reanalyzing a great many things that one had previously analyzed using non-contrarian alternatives.

In worlds of beliefs with strong central authorities, those authorities will tend to entrench a single standard set of assumptions, thus neglecting alternative assumptions via the processes outlined above. But in worlds of belief with many “schools of thought”, alternative assumptions will get more attention. It is a trope that “sophomores” tend to presume that most fields are split among different schools of thought, and are surprised to find that this is usually not true.

This entrenchment analysis makes me more sympathetic toward allowing and perhaps even encouraging different schools of thought in many fields. And as central funding sources are at risk of being taken over by a particular school, multiple independent sources of funding seem more likely to promote differing schools of thought.

The obvious big question here is: how can we best change our styles of thought, talk, and interaction to correct for the biases that entrenchment induces?


Hail S. Jay Olson

Over the years I’ve noticed that grad students tend to want to declare their literature search over way too early. If they don’t find something in the first few places they look, they figure it isn’t there. Alas, they implicitly assume that the world of research is better organized than it is; usually a lot more search is needed.

Seems I’ve just made this mistake myself. Having developed a grabby aliens concept and searched around a bit I figured it must be original. But it turns out that in the last five years physicist S. Jay Olson has a whole sequence of seven related papers, most of which are published, and some which got substantial media attention at the time. (We’ll change our paper to cite these soon.)

Olson saw that empirical study of aliens gets easier if you focus on the loud (not quiet) aliens, who expand fast and make visible changes, and also if you focus on simple models with only a few free parameters, to fit to the few key datums that we have. Olson variously called these aliens “aggressively expanding civilizations”, “expanding cosmological civilizations”, “extragalactic civilizations”, and “visible galaxy-spanning civilizations”. In this post, I’ll call them “expansionist”, intended to include both his and my versions.

Olson showed that if we assume that humanity’s current date is a plausible expansionist alien origin date, and if we assume a uniform distribution over our percentile rank among such origin dates, then we can estimate two things from data:

  1. from our current date, an overall appearance rate constant, regarding how frequently expansionist aliens appear, and
  2. from the fact that we do not see grabby controlled volumes in our sky, their expansion speed.

Olson only required one more input to estimate the full distribution of such aliens over space and time, and that is an “appearance rate” function f(t), to multiply by the appearance rate constant, to obtain the rate at which expansionist aliens appear at each time t. Olson tried several different approaches to this function, based on different assumptions about the star formation rate and the rate of local extinction events like supernovae. These different assumptions make only modest differences to his conclusions.

Our recent analysis of “grabby aliens”, done unaware of Olson’s work, is similar in many ways. We also assume visible long-expanding civilizations, we focus on a very simple model, in our case with three free parameters, and we fit two of them (expansion speed and appearance rate constant) to data in nearly the same way that Olson did.

The key point on which we differ is:

  1. My group uses a simple hard-steps power law for the expansionist alien appearance rate function, and estimates the power in that power law from the history of major evolutionary events on Earth. (See the sketch after this list.)
  2. Using that same power law, we estimate humanity’s current date to be very early, at least if expansionist aliens do not arrive to set an early deadline. Others have estimated modest degrees of earliness, but they have ignored the hard-steps power law. With that included, we are crazy early unless both the power is implausibly low, and the minimum habitable star mass is implausibly large.
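Below is a minimal Python sketch of the shape of such an appearance-rate function; the power n and constant k are illustrative placeholders here, not the values estimated in our paper:

```python
# A hard-steps power law: appearance rate f(t) ~ t^n at cosmic time t.
def appearance_rate(t, n=6, k=1.0):
    """Relative rate at which expansionist civilizations appear at time t.

    n and k are illustrative placeholders, not fitted values."""
    return k * t ** n

# Such a steep power law concentrates origin dates late: doubling t
# multiplies the rate by 2**n (here 64), which is why, without a grabby
# deadline, our current date looks so early.
for t in [0.5, 1.0, 2.0]:
    print(t, appearance_rate(t))
```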

So we seem to have something to add to Olson’s thoughtful foundations.

Looking over the coverage by others of Olson’s work, I notice that it all seems to completely ignore his empirical efforts! What they mainly care about seems to be that his having published on the idea of expansionist aliens licensed them to speculate on the theoretical plausibility of such aliens: How physically feasible is it to rapidly expand in space over millions of years? If physically feasible, is it socially feasible, and if so, would any civilization actually choose it?

That is, those who commented on Olson’s work all acted as if the only interesting topic was the theoretical plausibility of his postulates. They showed little interest in the idea that we could confront a simple aliens model with data, to estimate the actual aliens situation out there. They seem stuck assuming that this is a topic on which we essentially have no data, and thus can only speculate using our general priors and theories.

So I guess that should become our central focus now: to get people to see that we may actually have enough data now to get decent estimates on the basic aliens situation out there. And with a bit more work we might make much better estimates. This is not just a topic for theoretical speculation, where everyone gets to say “but have you considered this other scenario that I just made up, isn’t it sorta interesting?”

Here are some comments via email from S. Jay Olson:

It’s been about a week since I learned that Robin Hanson had, in a flash, seen all the basic postulates, crowd-sourced a research team, and smashed through his personal COVID infection to present a paper and multiple public talks on this cosmology. For me, operating from the outskirts of academia, it was a roller coaster ride just to figure out what was happening.

But, what I found most remarkable in the experience was this. Starting from two basic thoughts — 1) some fraction of aliens should be high-speed expansionistic, and 2) their home galaxy is probably not a fundamental barrier to expansion — so many conclusions appear inevitable: “They” are likely a cosmological distance from us. A major fraction of the universe is probably saturated by them already. Sufficiently high tech assumptions (high expansion speed) means they are likely invisible from our vantage point. If we can see an alien domain, it will likely cover a shockingly large angle in the sky. And the key datum for prediction is our cosmic time of arrival. It’s all there (and more), in both lines of research.

Beyond that, Robin has a knack for forcing the issue. If their “hard steps model” for the appearance rate of life is valid (giving f(t) ~ t^n), there aren’t too many ways to solve humanity’s earliness problem. Something would need to make the universe a very different place in the near cosmic future, as far as life is concerned. A phase transition resulting in the “end of the universe” would do it — bad news indeed. But the alternative is that we are, literally, the phase transition.


Response to Weyl

To my surprise, thrice in his recent 80,000 Hours podcast interview with Robert Wiblin, Glen Weyl seems to point to me as representing a view that he dislikes. Yet, in all three cases, these disliked views aren’t remotely close to views that I hold.

Weyl: The Vickrey Auction, … problem is he had this very general solution, but which doesn’t really make any sense like in any practical case. And he pointed out that that was true. But everybody was so enamored of the fact that his was generally correct, that they didn’t try to find like versions of it that might actually make sense. They basically just said, “Oh, that’s correct in general,” and then either you were like Tyler and you’re like … just dismiss that whole thing and you’re like, “Ah, too abstract.” Or you were like, you know, Robin Hanson and you just said, “Let’s just do it! Let’s just do it!” You know? And like neither of those was really convincing.

The Vickrey auction was taught to me in grad school, but I’ve never been a big fan because it looked vulnerable to collusion (also a concern re Weyl’s quadratic voting proposals), and because I’d heard of problems in related lab experiments. I’ve long argued (e.g. here) for exploring new institution ideas, but via working our way up from smaller to larger scale trials, and then only after we’ve seen success at smaller scales. Theory models are often among the smallest possible trials. 

Weyl: What I definitely am against … is something which builds a politics that only wants to speak or only respects nerdy and mathematically inclined ways of approaching issues. I think that’s a huge mistake. … the rationalist community … has … obsessive focus on communicating primarily with and relating socially primarily to people who also agree that whatever set of practices they think defined rationality are the way to think about everything. And I think that, that is extremely dangerous … because I think A, it’s not actually true that most useful knowledge that we have comes from those methods. … And B, it’s fundamentally anti-democratic as an attitude … because if you think that the only people who have access to the truth are philosopher kings, it becomes hard to escape the conclusion that philosopher kings should rule. …

Weyl: So, Robin Hanson has this book, Elephant In The Brain, which has some interesting things in it, but I think ultimately is a long complaint that people aren’t interested in talking about politics in the way that I am interested in talking about politics. And that really annoys me. I would submit that, to someone that has that attitude, you should say, “Perhaps consider talking about politics in a different way. You might find that other people might find it easier to speak to you that way.” 

Weyl: There’s something called neo-reaction, … a politics that is built around the notion that basically there should be a small elite of people who own property and control power through that property. … Even though most people in this rationalist community would reject that kind of politics, I think there’s a natural tendency, if you have that set of social attitudes, to have your politics drift in that direction.

Our book, The Elephant in the Brain, has ten application chapters, only one of which is on politics, and that chapter compares key patterns of political behavior to two theories of why we are political: to change policy outcomes or to show loyalty to political allies. Neither theory is about being nerdy, mathematical, or “rational”, and most of the evidence we point to is not on styles of talking, nor do we recommend any style of talking.

Furthermore, every style of thinking or talking is compatible with the view that some people think much better than others, and also with the opposite view.  Nerdy or math styles are not different in this regard, so I see no reason to expect people with those styles of thinking to more favor “anti-democratic” views on thinking eliteness.

And of course, it remains possible that some people actually are much better at thinking than others. (See also two posts on my responses to other critics of econ style thinking.)

Wiblin: I guess in that case it seems like Futarchy, like Robin Hanson’s idea where people vote for what they want, but then bet on what the outcomes will be, might work quite well because you would avoid exploitation by having distributed voting power, but then you would have these superhuman minds would predict what the outcomes of different policies or different actions would be. Then they would be able to achieve whatever outcome was specified by a broad population. …

Weyl: I have issues with Futarchy, but I think what I really object to, it’s less even the worldview I’m talking about. I think really, the problem I have is that there is a rhetoric out there of trying to convince people that they’re insufficient and that everything should be the private property of a small number of people for this reason when in fact, if it was really the case that those few people were so important, and great, and powerful, they wouldn’t need to have all this rhetoric to convince other people of it. People would just see it, they would get it. 

Futarchy has nothing to do with the claim that everything should be the private property of a small number of people, nor have I ever made any such claim. Hopefully, this is just a case of a possible misreading of what Weyl said, and he didn’t intend to relate futarchy or myself to such views.

Added 3p: Weyl & I have been having a Twitter conversation on this, which you can find from here.


Have A Thing

I’m not into small talk; I prefer to talk to people about big ideas. I want to talk big ideas to people who are smart, knowledgeable, and passionate about big ideas, and where it seems that convincing them about something on a big idea has a decent chance of changing their behavior in important ways.

Because of this, I prefer to talk to people who “have a thing.” That is, who have some sort of abstract claim (or question) which they consider important and neglected, for which they often argue, and which intersects somehow with their life hopes/plans. When they argue, they are open to and will engage counter-arguments. They might push this thing by themselves, or as part of a group, but either way it matters to them, they represent it personally, and they have some reason to think that their personal efforts can make a difference to it.

People with a thing allow me to engage a big idea that matters to someone, via someone who has taken the time to learn a lot about it, and who is willing to answer many questions about it. Such a person creates the hope that I might change their actions by changing their mind, or that they might convince me to change my life hopes/plans. I may convince them that some variation is more promising, or that some other thing fits better with the reasons they give. Or I might know of a resource, such as a technique or a person, who could help them with their thing.

Yes, in part this is all because I’m a person with many things. So I can relate better to such people. And after I engage their thing, there’s a good chance that they will listen to and engage one of my things. Even so, having a thing is handy for many people who are different from me. It lets you immediately engage many people in conversation in a way so that they are likely to remember you, and be impressed by you if you are in fact impressive.

Yes, having a thing can be off-putting to the sort of people who like to keep everything mild and low-key, and make sure that their talk has little risk of convincing them to do something that might seem weird or passionate. But I consider this off-putting effect to be largely a gain, in sorting out the sort of people I’m less interested in.

Now having a thing won’t save you if you are a fool or an idiot. In fact, it might make that status more visible. But if you doubt you are either, consider having a thing.

Added 11p: Beware of two common failure modes for people with things: 1) not noticing how much others want to hear about your thing, 2) getting so attached to your thing that you don’t listen enough to criticism of it.

Note also that having things promotes an intellectual division of labor, which helps the world to better think through everything.

Added 11Jan: Beware a third failure mode: being more serious or preachy than your audience wants. You can be focused and interesting without making people feel judged.


Do I Offend?

The last eight months have seen four episodes where many people on Twitter called me a bad offensive person, often via rude profanity, sometimes calling for me to be fired or arrested. These four episodes were: sex inequality and redistribution, chances of a delayed harassment complaint, morality-induced overconfidence on historical counterfactuals, and implicit harassment in A Star Is Born. While these topics have occupied only a small fraction of my thought over these months, and a much smaller fraction over my career, they may have disproportionate effects on my reputation. So I’ve tried to pay close attention to the reasons people give. 

I think I see a consistent story. While in these cases I have not made moral, value, or political claims, when people read small parts of what I’ve claimed or asked, they say they can imagine someone writing those words for the purpose of promoting political views they dislike. And not just mild views that are just a bit on the other side of the political spectrum. No, they attribute to me the most extreme bad views imaginable, such as that I advocate rape, murder, slavery, and genocide. People say they are directly and emotionally traumatized by the offensive “creepy” feeling they get when they encounter someone with any prestige and audience seeming to publicly promote views with which they strongly disagree.

Some plausibly contributing factors here include my sometimes discussing sensitive topics, our increasing political polarization, the ease of making mobs and taking words out of context on Twitter, increasing ease of making new accusations similar to previous ones, and my terse and analytic writing style combined with my adding disclaimers re my allegiance to “correct” views. There’s also my following the standard poll practice of not telling those who answer polls the motives for those polls. And I’m a non-poor older white male associated with economics in general and GMU econ in particular; many see all these as indicators of bad political views. 

Digging a little deeper, trauma is plausibly increased by a poll format, which stokes fears that bad people will find out that they are not alone, and be encouraged to learn that many others share their views. I suspect this helps explain complaints that my poll population is not representative of my nation or planet.  

I also suspect bad faith. Long ago when I had two young kids, they would sometimes pick fights, for example on long car trips. One might start singing, to which the other would complain. We might agree that singing is too much for such a small space. Then the first might start to quietly hum, which we might decide is okay. Then the first might hum more loudly and triumphantly, while the second might writhe, cover their ears, and make a dramatic display of suffering.

Similarly, I suspect bad faith when some a) claim to experience “harassment” level suffering due to encountering political views with which they disagree, and yet are fine with high levels of sex, violence, and profanity in TV & movies, b) infer indirectly from my neutral analytical text that I promote the most extreme views imaginable, and c) do not notice that such claims are both a priori implausible and inconsistent with my large corpus of public writing; they either haven’t read much of it or purposely mischaracterize it. 

The idea of a large shared intellectual sphere wherein we can together analyze difficult topics holds a strong appeal to me. The main criteria for consideration in such a sphere should be the coherence and persuasiveness of specific relevant arguments. When evaluating each argument, there is usually little need to infer distantly related positions of those who offer arguments. Usually an argument either works or it doesn’t, regardless of who says it or why.

I try to live up to such ideals in how I write and talk. I hope that many who read and follow me share these ideals, and I appreciate their support. I’m thus not favorably inclined toward suggestions that I stop discussing sensitive topics, or that I adopt a much more elaborate disclaimer style, or that I stop asking my followers questions, to prevent others from being traumatized by hearing their answers, or to keep followers from finding out that others share their opinions.

Added 29Dec: I did 4 follow-up polls to probe tendencies to take offense, focusing on the Nazi case. Respondents said the fraction of tweeters who actually wish Nazis had won WWII is tiny; 63% said it is <0.1%, though 4% gave >10%. And 79% said that this Nazi fraction is <3% among those “who mention `Nazis’ neutrally in a tweet, without explicitly praising or criticizing them, and who explicitly claim otherwise”, though 10% said >15%. Also, 58% said that for a tweet to be considered “offensive” or “harassment”, it would need to suggest a chance >50% that its author actually wishes Nazis had won WWII. However, 10% gave a threshold of <3% and 19% gave one <15%.

Finally, 43% gave a <3% “chance the author of a Twitter poll which asks about chance world would have been better off had Nazis won WWII, actually wishes that Nazis had won WWII”. However 20% gave a chance >50%, and 37% gave a chance >15%.

An obvious conclusion here is that, even among those who respond to my twitter polls, a substantial fraction have set hair-triggers for offense. For example, it seems >20% say that merely asking if the world would have been better off had Nazis won justifies a high enough chance of a Nazi author to count as offensive. Explicit denials may help, but if the offended are much more vocal than are others, a vocal choir of objection seems largely inevitable.

This makes me wonder again if the “silent majority” might benefit from juries or polls which show them that the vocal offended are a minority. Though that minority will likely also express offense re such juries or polls.

Added 28Jan: A recent burst of outrage over the A Star Is Born episode confirms this account to some extent.


My Poll, Explained

So many have continued to ask me the same questions about my recent twitter poll that I thought I’d try to put all my answers in one place. This topic isn’t that fundamentally interesting, so most of you may want to skip this post.

Recently, Christine Blasey Ford publicly accused US Supreme Court nominee Brett Kavanaugh of a sexual assault. This accusation will have important political consequences, however it is resolved. Congress and the US public are now put in the position of having to evaluate the believability of this accusation, and thus must consider which clues might indicate if the accusation is correct or incorrect.

Immediately after the accusation, many said that the timing of the accusation seemed to them suspicious, occurring exactly when it would most benefit Democrats seeking to derail any nomination until after the election, when they may control the Senate. And it occurred to me that a Bayesian analysis might illuminate this issue. If T = the actual timing, A = accurate accusation, W = wrong accusation, then how much this timing consideration pushes us toward final beliefs is given by the likelihood ratio p(T|W)/p(T|A). A ratio above one pushes against believing the accusation, while a ratio below one pushes for it.
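As a minimal Python sketch of that odds-form update (all numbers here are made-up placeholders, not estimates about this case):

```python
# Odds form of Bayes' rule: posterior odds of W (wrong) vs. A (accurate),
# given timing T, equal the prior odds times the likelihood ratio p(T|W)/p(T|A).
prior_odds_W_vs_A = 1.0   # placeholder prior: W and A taken as equally likely
p_T_given_W = 0.5         # placeholder: chance of this timing if accusation wrong
p_T_given_A = 0.2         # placeholder: chance of this timing if accusation accurate

likelihood_ratio = p_T_given_W / p_T_given_A   # >1 pushes against the accusation
posterior_odds = prior_odds_W_vs_A * likelihood_ratio
posterior_p_W = posterior_odds / (1 + posterior_odds)
print(likelihood_ratio, round(posterior_p_W, 3))  # 2.5 0.714
```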

The term p(T|A) seemed to me the most interesting term, and it occurred to me to ask what people thought about it via a Twitter poll. (If there was continued interest, I could ask another question about the other term.) Twitter polls are much cheaper and easier for me to do than other polls. I’ve done dozens of them so far, and rarely has anyone objected. Such polls only allow four options, and you don’t have many characters to explain your question. So I used those characters mainly to make clear a few key aspects of the accusation’s timing:

Many claimed that my wording was misleading because it didn’t include other relevant info that might support the accusation. Like who else the accuser is said to have told when, and what pressures she is said to have faced when to go public. They didn’t complain about my not including info that might lean the other way, such as low detail on the claimed event and a lack of supporting witnesses. But a short tweet just can’t include much relevant info; I barely had enough characters to explain key accusation timing facts.

It is certainly possible that my respondents suffered from cognitive biases, such as assuming too direct a path between accuser feelings and a final accusation. To answer my poll question well, they should have considered many possible complex paths by which an accuser says something to others, who then tell other people, some of whom then choose when to bring pressure back on that accuser to make a public accusation. But that’s just the nature of any poll; respondents may well not think carefully enough before answering.

For the purposes of a Twitter poll, I needed to divide the range from 0% to 100% into four bins. I had high uncertainty about where poll answers would lie, and for the purpose of Bayes’ rule it is factors that matter most. So I chose three ranges of roughly a factor of 4 to 5, and a leftover bin encompassing an infinite factor. If anything, my choice was biased against answers in the infinite-factor bin.

I really didn’t know which way poll answers would go. If most answers were high fractions, that would tend to support the accusation, while if most answers were low fractions, that would tend to question the accusation. Many accused me of posting the poll in order to deny the accusation, but for that to work I would have needed a good guess on the poll answers. Which I didn’t have.

My personal estimate would be somewhere in the top two ranges, and that plausibly biased me to pick bins toward such estimates.  As two-thirds of my poll answers were in the lowest bin I offered, that suggests that I should have offered an even wider range of factors. Some claimed that I biased the results by not putting more bins above 20%. But that fraction is still below the usual four-bin target fraction of 25% per bin.

It is certainly plausible that my pool of poll respondents is not representative of the larger US or world population. And many called it irresponsible and unscientific to run an unrepresentative poll, especially if one doesn’t carefully show which wordings matter how via A/B testing. But few complain about the thousands of other Twitter polls run every day, or of my dozens of others. And the obvious easy way to show that my pool or wordings matter is to show different answers with another poll where those vary. Yet almost no one even tried that.

Also, people don’t complain about others asking questions in simple public conversations, even though those can be seen as N=1 examples of unrepresentative polls without A/B testing on wordings. It is hard to see how asking thousands of people the same question via a Twitter poll is less informative than just asking one person that same question.

Many people said it is just rude to ask a poll question that insinuates that rape accusations might be wrong, especially when we’ve just seen someone going through all the pain of making one. They say that doing so is pro-rape and discourages the reporting of real rapes, and that this must have been my goal in making this poll. But consider an analogy with discussing gun control just after a shooting. Some say this is rude then to discuss anything but sympathy for victims, but others say this is exactly a good time to discuss gun control. I say that when we must evaluate a specific rape accusation is exactly a good time to think about what clues might indicate in what direction on whether this is an accurate or wrong accusation.

Others say that it is reasonable to conclude that I’m against their side if I didn’t explicitly signal within my poll text  that I’m on their side. That’s just the sort of signaling game equilibrium we are in. And so they are justified in denouncing me for being on the wrong side. But it seems a quite burdensome standard to hold on polls, which already have too few characters to allow an adequate explanation of a question, and it seems obvious that the vast majority of Twitter polls today are not in fact being held to this standard.

Added 24Sep: I thought the poll interesting enough to ask, relative to its costs to me, but I didn’t intend to give it much weight. It was all the negative comments that made it a bigger deal.

Note that, at least in my Twitter world, we see a big difference in attitudes between vocal folks who tweet and those who merely answer polls. That latter “silent majority” is more skeptical of the accusation.


Great Filter, 20 Years On

Twenty years ago today, I introduced the phrase “The Great Filter” in an essay on my personal website. Today Google says 300,000 web pages use this phrase, and 4.3% of those mention my name. This essay has 45 academic citations, and my related math paper has 17 cites.

These citations are a bit over 1% of my total citations, but this phrase accounts for 5% of my press coverage. This press is mostly dumb luck. I happened to coin a phrase on a topic of growing and wide interest, yet others more prestigious than I didn’t (as they often do) bother to replace it with another phrase that would trace back to them.

I have mixed feelings about writing the paper. Back then I was defying the usual academic rule to focus narrowly. I was right that it is possible to contribute to many more different areas than most academics do. But what I didn’t fully realize is that to academic economists non-econ publications don’t exist, and that publication is only the first step to academic influence. If you aren’t around in an area to keep publishing, giving talks, going to meetings, doing referee reports, etc., academics tend to correctly decide that you are politically powerless and thus you and your work can safely be ignored.

So I’m mostly ignored by the academics who’ve continued in this area – don’t get grants, students, or invitations to give talks, to comment on paper drafts, or to referee papers, grants, books, etc. The only time I’ve ever been invited to talk on the subject was a TEDx talk a few years ago. (And I’ve given over 350 talks in my career.) But the worst scenario of being ignored is that it is as if your paper never existed, and so you shouldn’t have bothered writing it. Thankfully I have avoided that outcome, as some of my insights have been taken to heart, both academically and socially. People now accept that finding independent alien life simpler than us would be bad news, that the very hard filter steps should be roughly equally spaced in our history, and that the great filter gives a reason to worry about humanity’s future prospects.
