Tag Archives: Personal

Re An Accused, Tell The Truth

Agnes Callard says we should not fight her cancellation:

Within the mob there is no justice and no argument and no reasoning, no space for inquiry or investigation. The only good move is not to play. … If I am being canceled I want my friends … to stand by, remain silent, and do nothing. If you care about me, let them eat me alive. … The expectation that one’s friends exhibit the “courage” to speak up on one’s behalf, the inclination to see the cancellation as a test of the friendship, which suddenly requires proofs of loyalty — these are the first step on the road to the friend purge.

Here is how it goes: a few of the cancelee’s friends meet the expectation to speak up in support, but those who remain silent — which is most of them — become suspect. New, publicly aligned friends are acquired to take their place. The beleaguered cancelee now feels she sees who her “real friends” are, but in fact she has no friends anymore. All she has are allies. First she turned her friends, and perhaps even her family members, into allies; and then she acquired more allies to fill the ranks of the purged friends. The end result is a united front, but what I would call real friendship has gone missing in the bargain. I do not want any of that. I want friends who feel free to disagree with me both publicly and privately.

If I were accused of a crime, I wouldn’t want my friends to protest outside the courthouse, at least at first; I’d want to give the legal system a chance. But if my associates were called on to testify about me, I’d want them to comply, and to tell the truth as they saw it. Not to say whatever would seem to “support” me, but just to tell the truth.

Humans have only had legal systems for the last ten thousand years or so. For a million years before that, we had mob justice, which worked better than no justice, even if not as well as legal justice. (If you doubt this, note the absence of justice among non-human primates.) Today we still handle some kinds of accusations and punishments via mobs. I’d rather we handled them via law, but given that some accusations are handled by mobs, I’d still want to help mob justice work as well as possible. Mob justice is in fact possible, and legitimate.

Under mob justice, there is no central authority to subpoena witnesses. So people must instead volunteer their relevant testimony. But such testimony still functions as in legal trials to appropriately influence mob jury verdicts. Thus if I were accused under mob justice (as has in fact happened to me in the past), I’d want my associates to offer testimony relevant to that accusation. Not loyal ally support, but to just tell the relevant truth.

For example, many recent mob justice accusations have been of the form that someone’s statement is a “dog whistle”, purposely done to express nefarious beliefs or allegiances. Thus intent is relevant here, and intent is something on which close associates are often especially qualified to testify. The mob jury can thus reasonably want to hear associates’ take. Given what you know about this person’s views and styles, how plausible is it that their statement was in fact intended to express the alleged beliefs or connections?

We humans are often far more willing to say positive than negative things about associates. But this can work out okay, as we commonly infer negative things from the unwillingness to say positive things. For example, when asked for a recommendation re a previous worker, many employers are willing to express honest positive opinions, but will decline to say anything if their opinion is negative.

I have at times had private contact with people who actually hold views that, at least in a technical sense, might reasonably be labeled as racist or sexist. So if I had to answer the question of whether an expression of theirs might plausibly express such views, my honest answer would have to be yes. But if I had the option, I’d try to instead just say nothing about the subject. But for most of my associates, I’d happily say that such an interpretation is quite implausible, given what I know about them.

In this sort of context, Callard’s request for silence from her friends would hinder mob justice, and make it more likely to go awry against her. The silence of her friends (among whom I count myself) would likely, and reasonably, be taken by the mob jury as evidence against her. I get that she is willing to accept this cost, for the cause of preventing the friend purge process that she reasonably detests. But I will hold my friends to a higher standard: don’t just support me unconditionally, but instead tell relevant truths.

If you don’t know anything relevant to the accusation, then yes, stay silent. But if you have testimony relevant to the accusations against me, then speak up. Politely, calmly, and with appropriate qualifiers and doubts, but truthfully. Please, friends, enemies, and others: in any trial, at court or before a mob, just tell the relevant truth.


Losing My Religion

To a few of my associates, I gave the xmas present of a blog post on a topic they pick. Bryan Caplan just finally made his choice: the story of how I became an atheist.

My immediate family is very religious. My dad (now dead) was a part-time pastor for decades, my mom (still alive) wrote many Christian tween novels, one brother is now a pastor, and the other brother is the music director at what was my dad’s church. As a tween, I myself joined what my parents considered a Christian “cult”, and within a year my parents forbade me from associating with it.

In college I drifted slowly away, eventually to full atheism. (At a similar speed to most people’s biggest view changes.) But my change had little to do with disagreeing with church doctrines or with difficulties explaining evil. And I never resented nor confronted my parents for teaching me something with which I later came to disagree. This wasn’t about my relation to them either.

No, the main issue for me was that in college I became greatly persuaded by and deeply immersed in a physics view of the universe. It was not just one set of lenses through which one might look to gain insight. No, it purported to offer a complete (if not fully fleshed-out) description of the reality accessible to me. It offered me many detailed ways to test that claim, and it passed those tests as far as I could tell. So far as I could see then, and now, the world immediately around me *IS* in fact the world of photons, electrons, protons, and neutrons described by the physics I learned.

But that world just offers few openings for hidden powers to be listening to or influencing my thoughts and feelings, or changing how my life goes according to my sins and prayers. Sure, my family, coworkers, or governments might try to do those things. But for those, I at least see many traces of their existence around me. It is the idea of completely hidden powers doing such things that seems crazy to me. Not logically impossible, but quite implausible given our evidence.

Now I must admit that those who know physics better than most believe in the god of prayer at about the same rate as everyone else. So what else explains how physics influenced me, compared to them? It might be that I just know physics better than most of them. But modesty forces me to consider other possibilities.

Those of us who are different in the head tend to need some convincing of that fact. You see, we assume we are normal, and relevant evidence tends to be ambiguous. For example, most people I’ve seen doing their homework were doing it alone, in a library, on the bus, or in their bedroom. So I assumed most people were used to thinking by themselves. But I was wrong.

In seventh grade, my English teacher assigned me an unusual lesson plan: go to the library every day and just write. No particular topics, just on whatever I wanted. I loved it, and learned lots. My favorite class in high school was physics because it didn’t ask me to just accept things on faith; we could check claimed results in lab experiments.

In college as a physics major, I discovered that in the last two years we went over exactly the same topics as in the first two years, this time with more math. I instead wanted to really understand those topics. So I stopped doing the homework and instead spent the time playing with the equations. I’d ace the exams. I also began to browse libraries for interesting things, think about interesting questions that occurred to me, and work on my own self-invented projects.

I bailed from my grad program in philosophy of science when it seemed I’d found answers to the main questions I’d had there. And after two years of working full time at Lockheed I switched to thirty hours per week so I could spend the rest of my time studying things on my own. And I’ve since changed fields many times when it seemed I was learning less where I was than where I could switch to.

I often meet people who ask how to study a topic, or what school they should go to, and I say: aren’t you old enough to just go learn stuff by yourself? Most researchers are terrible at explaining why their projects offer the world the best progress bang for their effort buck, but I have no problem offering such explanations.

All of this I think suggests that I’m unusually willing to fully own all of my main opinions and research choices, instead of inheriting them from others. So perhaps that’s another explanation for my atheism. Most people accept the usual beliefs of others around them and assume they must have good reasons. I’m instead enough of a think-for-myself polymath that I have to see such reasons for myself, and know enough tools from enough fields to be able to follow most relevant arguments. And I just don’t see good reasons to believe in hidden powers influencing the thoughts, feelings, and life outcomes of most humans.

Merry Christmas, Bryan.


My 11 Bets at 10-1 Odds On 10M Covid deaths by 2022

In February 2020, I made many bets on Covid19, including 11 bets at ten-to-one odds on whether it would cause 10 million deaths worldwide by 2022, as estimated by WHO.

WHO has a Q&A page on Covid excess deaths that includes this section:

Why is excess mortality the preferred measure? … aggregate COVID-19 case and death numbers … being reported to WHO … under-estimate the number of lives lost due to the pandemic … In light of the challenges posed by using reported data on COVID-19 cases and deaths, excess mortality is considered a more objective and comparable measure that accounts for both the direct and indirect impacts of the pandemic.

This WHO page, updated daily, lists reported deaths. This WHO page estimated “The true death toll of COVID-19”, or world covid excess deaths, as of Dec. 31, 2020. I expect them to post a page like it soon with death estimates as of Dec. 31, 2021. But I doubt those estimates will differ much from The Economist, which as of Dec. 30, 2021 said:

The pandemic’s true death toll; Our daily estimate of excess deaths around the world … Although the official number of deaths caused by covid-19 is now 5.4m, our single best estimate is that the actual toll is 18.6m people. We find that there is a 95% chance that the true value lies between 11.6m and 21.6m additional deaths.


For many bets we agreed that if there were two number estimates instead of one, we’d go with a geometric mean of them. The geometric mean of 5.4 and 18.6 is 10.02.
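As a quick check of that arithmetic, here is the calculation, using the 5.4m reported and 18.6m excess-deaths figures quoted from The Economist above:

```python
import math

# Geometric mean of the two death-toll estimates (in millions):
# 5.4 = officially reported covid deaths, 18.6 = The Economist's
# central excess-deaths estimate, both as of late Dec. 2021.
reported, excess = 5.4, 18.6
geo_mean = math.sqrt(reported * excess)
print(round(geo_mean, 2))  # 10.02, just over the 10 million threshold
```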

Here is the current status of my 11 bets, with a link to the bets and the amount I’m owed. (I’ll update this as things change.)

These claim to have won, and say I should pay them:

No response since 31Dec:

  • A Twitter msg bet that I’m keeping private for now, $5000

Paid to me:

Some say that it is rude of me to brag about winning. But I need to make this bet situation public in order to pressure bettors to make good on their promises.

Some say it is immoral to bet on death. But I didn’t cause these deaths, and my public bets helped convince many to take this problem more seriously, for which they’ve thanked me.

Added 12Jan: Many are talking as if the issue is direct vs. indirect deaths, but I’d be very surprised if more than a third of excess deaths are indirect. Most of them were caused directly by covid, but just not caught by official testing and diagnosis systems.

Added 18Jan: Nature article:

Demographers, data scientists and public-health experts are striving to narrow the uncertainties for a global estimate of pandemic deaths. … Among these models, the World Health Organization (WHO) is still working on its first global estimate, but the Institute for Health Metrics and Evaluation in Seattle, Washington, offers daily updates of its own modelled results, as well as projections of how quickly the global toll might rise. And one of the highest-profile attempts to model a global estimate has come from the news media. The Economist magazine in London has used a machine-learning approach to produce an estimate of 12 million to 22 million excess deaths.

That IHME 95% confidence interval is 9 to 18 million deaths.

Added 26Jan: This Sept. 2021 PLOS paper says

[In] the United States … in 2020 … there were 375,235 excess deaths, with 83% attributable to direct, and 17% attributable to indirect effects of COVID-19.

Added 9May: WHO finally speaks on 2021 excess deaths:

 


Minds Almost Meeting

Many travel to see exotic mountains, buildings, statues, or food. But me, I want to see different people. If it could be somehow arranged, I’d happily “travel” to dozens of different subcultures that live within 100 miles of me. But I wouldn’t just want to walk past them, I’d want to interact enough to get in their heads.

Working in diverse intellectual areas has helped. So far, these include engineering, physics, philosophy, computer science, statistics, economics, polisci, finance, futurism, psychology, and astrophysics. But there are so many other intellectual areas I’ve hardly touched, and far more non-intellectual heads of which I’ve seen so little.

Enter the remarkable Agnes Callard with whom I’ve just posted ten episodes of our new podcast “Minds Almost Meeting”:

Tagline: Agnes and Robin talk, try to connect, often fail, but sometimes don’t.

Summary: Imagine two smart curious friendly and basically truth-seeking people, but from very different intellectual traditions. Traditions with different tools, priorities, and ground rules. What would they discuss? Would they talk past each other? Make any progress? Would anyone want to hear them? Economist Robin Hanson and philosopher Agnes Callard decided to find out.

Topics: Paradox of Honesty, Plagiarism, Future Generations, Paternalism, Punishment, Pink and Purple, Aspiration, Prediction Markets, Hidden Motives, Distant Signals.

It’s not clear who will be entertained by our efforts, but I found the process fascinating, informative, and rewarding. Though our audio quality was low at times, it is still understandable.

Agnes is a University of Chicago professor of philosophy and a rising-star “public intellectual” who often publishes in places like The New Yorker. She and I are similar in both being oddball, hard-to-offend, selfish parents and academics. We both have religious upbringings, broad interests, and a taste for abstraction. But we differ by generation, gender, and especially in our intellectual backgrounds and orientations (me vs. her): STEM vs. humanities, futurist vs. classicist, explaining via past shapings vs. future aspirations, and relying more vs. less on large systems of thought.

Before talking to Agnes, I hadn’t realized just how shaped I’ve been by assimilating many large formal systems of thought, such as calculus, physics, optimization, algorithms, info theory, decision theory, game theory, economics, etc. Though the core of these systems can be simple, each has been connected to many diverse applications, and many larger analysis structures have been built on top of them.

Yes, these systems, and their auxiliary structures and applications, are based on assumptions that can be wrong. But their big benefit is that shared efforts to use them have rooted out many (though hardly all) contradictions, inconsistencies, and incoherences. So my habit of trying when possible to match any new question to one of these systems is likely to, on average, produce more coherent resulting analyses. I’m far more interested in applying existing systems to big neglected topics than in inventing new systems.

In contrast, though philosophers like Agnes who rely on few such structures beyond simple logic can expect their arguments to be accessible to wider audiences, they must also expect a great many incoherences in their analysis. Which is part of why they so often disagree, and build such long chains of back and forth argumentation. I agree with Tyler, who in his conversation with Agnes said these long chains suggest a problem. However, I do see the value of having some fraction of intellectuals taking this simple robust strategy, as a complement to more system-focused strategies.

Thank you Agnes Callard, for helping me to see a wider intellectual world, including different ways of thinking and topics I’ve neglected.


What I Hold Sacred

Someone recently told me that I stood out compared to other writers in never seeming to treat anything as sacred. This seemed to them awkward, odd, and implausible, as much as the opposite extreme: writers who seem to treat most all topics and issues as sacred. More plausibly, most people do treat some minority of things as especially sacred, and if they don’t reveal that in their writing, they are probably hiding it from others, and maybe also from themselves.

This seems plausible enough that it pushes me to try to identify and admit what I hold sacred. When I search for ways to identify what people hold sacred, I find quite a lot of rather vague descriptions and associations. The most concrete signs I find are: associating it with rituals and symbols, treating it with awe and reverence, unwillingness to trade other things for it, and outrage at those who disrespect it.

The best candidate I can find is: truth-seeking. More specifically: truth-seeking among intellectuals on important topics. That is, the goal is for the world to learn more together on key abstract topics, and I want each person who contributes substantially to such projects to add the most that they can, given their constraints and the budgets they are willing to allocate to it. I don’t insist anyone devote themselves wholly to this, and I’m less concerned with each person always being perfectly honest than with us together figuring stuff out.

I admit that I do treat this with reverence, and I’m reluctant to trade it for other things. And I’d more often express outrage at others disrespecting it if I thought I’d get more support on such occasions. Yes, most everyone gives great lip service allegiance to this value. But most suggest that there are few tradeoffs between this and other values, and also that following a few simple rules of thumb (e.g., don’t lie, give confidence intervals) is sufficient; no need to dig deeper. In contrast, I think it takes long-sustained careful thought to really see what would most help with this goal, and I also see many big opportunities to sacrifice other things for it.

How can you better affirm this value? It’s simple, but hard: continually ask yourself what are the most important topics, what are the most promising ways to advance them, and what are your comparative advantages re such efforts. Do not assume that answers to these questions are implicit in the status and rewards that others offer you for various activities. The world mostly doesn’t care much, and so if you do care more you can’t focus on pleasing the world.

So why do I seem reluctant to talk about this? I think because I feel vulnerable. When you admit what is most precious to you, others might threaten it in order to extort concessions from you. And it is hard to argue well for why any particular value should be the most sacred. You run out of arguments and must admit you’ve made a choice you can’t justify. I so admit.


Opinion Entrenchment

How do and should we form and change opinions? Logic tells us to avoid inconsistencies and incoherences. Language tells us to attend to how meaning is inferred from ambiguous language. Decision theory says to distinguish values from fact opinion, and says exactly how decisions should respond to these. Regarding fact opinion, Bayesian theory says to distinguish priors from likelihoods, and says exactly how fact opinion should respond to evidence.

Simple realism tells us to expect errors in actual opinions, relative to all of these standards. Computing theory says to expect larger errors on more complex topics, and opinions closer to easily computed heuristics. And many kinds of human and social sciences suggest that we see human beliefs as often like clothes, which in mild weather we use more to show our features to associates than to protect ourselves from the elements. Beliefs are especially useful for showing loyalty and morality.

There’s another powerful way to think about opinions that I’ve only recently appreciated: opinions get entrenched. In biology, natural selection picks genes that are adaptive, but adds error. These gene choices change as environments change, except that genes which are entangled with large complex and valued systems of genes change much less; they get entrenched.

We see entrenchment also all over our human systems. For example, at my university the faculty is divided into disciplines, the curricula into classes, and classes into assignments in ways that once made sense, but now mostly reflect inertia. Due to many interdependencies, it would be slow and expensive to change such choices, so they remain. Our legal system accumulates details that become precedents that many rely on, and which become hard to change. As our software systems accrue features, they get fragile and harder to change. And so on.

Beliefs also get entrenched. That is, we are often in the habit of building many analyses from the same standard sets of assumptions. And the more analyses that we have done using some set of assumptions, the more reluctant we are to give up that set. This attitude toward the set is not very sensitive to the evidential or logical support we see for each of its assumptions. In fact, we are often pretty certain that individual assumptions are wrong, but because they greatly simplify our analysis, we hope that they still enable a decent approximation as a set.

When we use such standard assumption sets, we usually haven’t thought much about the consequences of individually changing each assumption in the set. As long as we can see some plausible ways in which each assumption might change conclusions, we accept it as part of the set, and hold roughly the same reluctance to give it up as for all the other members.

For example, people often say “I just can’t believe Fred’s dead”, meaning not that the evidence of Fred’s death isn’t sufficient, but that it will take a lot of work to think through all the implications of this new fact. The existence of Fred had been a standard assumption in their analysis. A person tempted to have an affair is somewhat deterred from this because of their standard assumption that they were not the sort of person who has affairs; it would take a lot of work to think through their world under this new assumption. This similarly discourages people from considering that their spouses might be having affairs.

In academic theoretical analysis, each area tends to have standard assumptions, many of which are known to be wrong. But even so, there are strong pressures to continue using prior standard assumptions, to make one’s work comparable to that of others. The more different things that are seen to be explained or understood via an assumption set, the more credibility is assigned to each assumption in that set. Evidence directly undermining any one such assumption does little by itself to reduce use of the set.

In probability theory, the more different claims one adds to a bundle, the less likely is the conjunction of that bundle. However, the more analyses that one makes with an assumption set, the more entrenched it becomes. So by combining different assumption sets so that they all get credit for all of their analyses, one makes those sets more, not less, entrenched. Larger bundles get less probability but more entrenchment.
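To make the probability half of that contrast concrete, here is a toy calculation; the 0.9 per-claim probability and the independence assumption are illustrative choices of mine, not anything from the argument above:

```python
# Toy model: n independent claims, each with probability 0.9 of being
# true. The probability of the whole bundle shrinks as the bundle grows.
p_each = 0.9  # illustrative per-claim probability (an assumption)

def conjunction_prob(n_claims: int, p: float = p_each) -> float:
    """Probability that all n independent claims hold together."""
    return p ** n_claims

for n in (1, 3, 10):
    print(n, round(conjunction_prob(n), 3))
# The joint probability falls with bundle size, even though (per the
# argument above) entrenchment of a much-used set rises with it.
```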

Note that fictional worlds that specify maximal detail are maximally large assumption sets, which thus maximally entrench.

Most people feel it is quite reasonable to disagree, and that claim is a standard assumption in most reasoning about reasoning. But a philosophy literature did arise wherein some questioned that assumption, in the context of a certain standard disagreement scenario. I was able to derive some strong results, but in a different and to my mind more relevant scenario. Yet the fact that I used a different scenario, and came from a different discipline, meant my results got ignored.

Our book Elephant in the Brain says that social scientists have tended to assume the wrong motives re many common behaviors. While our alternate motives are about as plausible and easy to work with as the usual motives, the huge prior investment in analysis based on the usual motives means that few are interested in exploring our alternate motives. There is not just theory analysis investment, but also investment in feeling that we are good people, a claim which our alternate assumptions undermine.

Even though most automation today has little to do with AI, and has long followed steady trends, with almost no effect on overall employment, the favored assumption set among talking elites recently remains this: new AI techniques are causing a huge trend-deviating revolution in job automation, soon to push a big fraction of workers out of jobs, and within a few decades may totally surpass humans at most all jobs. Once many elites are talking in terms of this assumption set, others also want to join the same conversation, and so adopt the same set. And once each person has done a lot of analysis using that assumption set, they are reluctant to consider alternative sets. Challenging any particular item in the assumption set does little to discourage use of the set.

The key assumption of my book Age of Em, that human level robots will be first achieved via brain emulations, not AI, has a similar plausibility to AI being first. But this assumption gets far less attention. Within my book, I picked a set of standard assumptions to support my analysis, and for an assumption that has an X% chance of being wrong, my book gave far less than X% coverage to that possibility. That is, I entrenched my standard assumptions within my book.

Physicists have long taken one of their standard assumptions to be denial of all “paranormal” claims, taken together as a set. That is, they see physics as denying the reality of telepathy, ghosts, UFOs, etc., and see the great success (and status) of physics overall as clearly disproving such claims. Yes, they once mistakenly included meteorites in that paranormal set, but they’ve fixed that. Yet physicists don’t notice that even though many describe UFOs as “physics-defying”, they aren’t that at all; they only plausibly defy current human tech abilities. Yet the habit of treating all paranormal stuff as the same denied set leads physicists to continue to staunchly ridicule UFOs.

I can clearly feel my own reluctance to consider theories wherein the world is not as it appears, because we are being fooled by gods, simulation sysops, aliens, or a vast world elite conspiracy. Sometimes this is because those assumptions seem quite unlikely, but in other cases it is because I can see how much I’d have to rethink given such assumptions. I don’t want to be bothered; haven’t I already considered enough weird stuff for one person?

Life on Mars is treated as an “extraordinary” claim, even though the high rate of rock transfer between early Earth and early Mars makes it nearly as likely that life came from Mars to Earth as vice versa. This is plausibly because only life on Earth is the standard assumption used in many analyses, while life starting on Mars seems like a different conflicting assumption.

Across a wide range of contexts, our reluctance to consider contrarian claims is often less due to their lacking logical or empirical support, and more because accepting them would require reanalyzing a great many things that one had previously analyzed using non-contrarian alternatives.

In worlds of beliefs with strong central authorities, those authorities will tend to entrench a single standard set of assumptions, thus neglecting alternative assumptions via the processes outlined above. But in worlds of belief with many “schools of thought”, alternative assumptions will get more attention. It is a trope that “sophomores” tend to presume that most fields are split among different schools of thought, and are surprised to find that this is usually not true.

This entrenchment analysis makes me more sympathetic toward allowing and perhaps even encouraging different schools of thought in many fields. And as central funding sources are at risk of being taken over by a particular school, multiple independent sources of funding seem more likely to promote differing schools of thought.

The obvious big question here is: how can we best change our styles of thought, talk, and interaction to correct for the biases that entrenchment induces?


Hail S. Jay Olson

Over the years I’ve noticed that grad students tend to want to declare their literature search over way too early. If they don’t find something in the first few places they look, they figure it isn’t there. Alas, they implicitly assume that the world of research is better organized than it is; usually a lot more search is needed.

Seems I’ve just made this mistake myself. Having developed a grabby aliens concept and searched around a bit, I figured it must be original. But it turns out that in the last five years physicist S. Jay Olson has a whole sequence of seven related papers, most of which are published, and some of which got substantial media attention at the time. (We’ll change our paper to cite these soon.)

Olson saw that empirical study of aliens gets easier if you focus on the loud (not quiet) aliens, who expand fast and make visible changes, and also if you focus on simple models with only a few free parameters, to fit to the few key datums that we have. Olson variously called these aliens “aggressively expanding civilizations”, “expanding cosmological civilizations”, “extragalactic civilizations”, and “visible galaxy-spanning civilizations”. In this post, I’ll call them “expansionist”, intended to include both his and my versions.

Olson showed that if we assume that humanity’s current date is a plausible expansionist alien origin date, and if we assume a uniform distribution over our percentile rank among such origin dates, then we can estimate two things from data:

  1. from our current date, an overall appearance rate constant, regarding how frequently expansionist aliens appear, and
  2. from the fact that we do not see grabby controlled volumes in our sky, their expansion speed.

Olson only required one more input to estimate the full distribution of such aliens over space and time, and that is an “appearance rate” function f(t), to multiply by the appearance rate constant, to obtain the rate at which expansionist aliens appear at each time t. Olson tried several different approaches to this function, based on different assumptions about the star formation rate and the rate of local extinction events like supernovae. Different assumptions made only modest differences to his conclusions.

Our recent analysis of “grabby aliens”, done unaware of Olson’s work, is similar in many ways. We also assume visible long-expanding civilizations, we focus on a very simple model, in our case with three free parameters, and we fit two of them (expansion speed and appearance rate constant) to data in nearly the same way that Olson did.

The key points on which we differ are:

  1. My group uses a simple hard-steps-power-law for the expansionist alien appearance rate function, and estimates the power in that power law from the history of major evolutionary events on Earth.
  2. Using that same power law, we estimate humanity’s current date to be very early, at least if expansionist aliens do not arrive to set an early deadline. Others have estimated modest degrees of earliness, but they have ignored the hard-steps power law. With that included, we are crazy early unless both the power is implausibly low, and the minimum habitable star mass is implausibly large.
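To make the earliness claim concrete, here is a minimal sketch (my own illustration, not the paper's code) of the hard-steps power-law arithmetic. The inputs are hypothetical stand-ins: a habitable window of T gigayears and a power n, with appearance rate f(t) ~ t^n.

```python
def earliness_percentile(t_now, t_window, n):
    """Fraction of power-law-distributed origin dates that fall before t_now.

    With appearance rate f(t) proportional to t**n on [0, t_window],
    the cumulative fraction of origin dates by t_now is
    (t_now / t_window)**(n + 1).
    """
    return (t_now / t_window) ** (n + 1)

# Illustrative numbers only (assumptions, not the paper's fitted values):
# 13.8 Gyr elapsed so far, a 5000 Gyr window set by long-lived small
# stars, and n = 6 hard steps.
p = earliness_percentile(13.8, 5000.0, 6)
print(f"percentile rank of our origin date: {p:.2e}")
```

Under such numbers our percentile rank comes out vanishingly small, which is the sense in which we are “crazy early”: avoiding that conclusion requires either a low power n or a large minimum habitable star mass (which shrinks the window).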

So we seem to have something to add to Olson’s thoughtful foundations.

Looking over the coverage by others of Olson’s work, I notice that it all seems to completely ignore his empirical efforts! What they mainly care about seems to be that his having published on the idea of expansionist aliens licensed them to speculate on the theoretical plausibility of such aliens: How physically feasible is it to expand rapidly in space over millions of years? If physically feasible, is it socially feasible, and if so, would any civilization actually choose it?

That is, those who commented on Olson’s work all acted as if the only interesting topic was the theoretical plausibility of his postulates. They showed little interest in the idea that we could confront a simple aliens model with data, to estimate the actual aliens situation out there. They seem stuck assuming that this is a topic on which we essentially have no data, and thus can only speculate using our general priors and theories.

So I guess that should become our central focus now: to get people to see that we may actually have enough data now to get decent estimates on the basic aliens situation out there. And with a bit more work we might make much better estimates. This is not just a topic for theoretical speculation, where everyone gets to say “but have you considered this other scenario that I just made up, isn’t it sorta interesting?”

Here are some comments via email from S. Jay Olson:

It’s been about a week since I learned that Robin Hanson had, in a flash, seen all the basic postulates, crowd-sourced a research team, and smashed through his personal COVID infection to present a paper and multiple public talks on this cosmology. For me, operating from the outskirts of academia, it was a roller coaster ride just to figure out what was happening.

But, what I found most remarkable in the experience was this. Starting from two basic thoughts — 1) some fraction of aliens should be high-speed expansionistic, and 2) their home galaxy is probably not a fundamental barrier to expansion — so many conclusions appear inevitable: “They” are likely a cosmological distance from us. A major fraction of the universe is probably saturated by them already. Sufficiently high tech assumptions (high expansion speed) means they are likely invisible from our vantage point. If we can see an alien domain, it will likely cover a shockingly large angle in the sky. And the key datum for prediction is our cosmic time of arrival. It’s all there (and more), in both lines of research.

Beyond that, Robin has a knack for forcing the issue. If their “hard steps model” for the appearance rate of life is valid (giving f(t) ~ t^n), there aren’t too many ways to solve humanity’s earliness problem. Something would need to make the universe a very different place in the near cosmic future, as far as life is concerned. A phase transition resulting in the “end of the universe” would do it — bad news indeed. But the alternative is that we are, literally, the phase transition.

Response to Weyl

To my surprise, thrice in his recent 80,000 Hours podcast interview with Robert Wiblin, Glen Weyl seems to point to me as representing views that he dislikes. Yet in all three cases, these disliked views aren’t remotely close to views that I hold.

Weyl: The Vickrey Auction, … problem is he had this very general solution, but which doesn’t really make any sense like in any practical case. And he pointed out that that was true. But everybody was so enamored of the fact that his was generally correct, that they didn’t try to find like versions of it that might actually make sense. They basically just said, “Oh, that’s correct in general,” and then either you were like Tyler and you’re like … just dismiss that whole thing and you’re like, “Ah, too abstract.” Or you were like, you know, Robin Hanson and you just said, “Let’s just do it! Let’s just do it!” You know? And like neither of those was really convincing.

The Vickrey auction was taught to me in grad school, but I’ve never been a big fan because it looked vulnerable to collusion (also a concern re Weyl’s quadratic voting proposals), and because I’d heard of problems in related lab experiments. I’ve long argued (e.g. here) for exploring new institution ideas, but via working our way up from smaller to larger scale trials, and then only after we’ve seen success at smaller scales. Theory models are often among the smallest possible trials. 

Weyl: What I definitely am against … is something which builds a politics that only wants to speak or only respects nerdy and mathematically inclined ways of approaching issues. I think that’s a huge mistake. … the rationalist community … has … obsessive focus on communicating primarily with and relating socially primarily to people who also agree that whatever set of practices they think defined rationality are the way to think about everything. And I think that, that is extremely dangerous … because I think A, it’s not actually true that most useful knowledge that we have comes from those methods. … And B, it’s fundamentally anti-democratic as an attitude … because if you think that the only people who have access to the truth are philosopher kings, it becomes hard to escape the conclusion that philosopher kings should rule. …

Weyl: So, Robin Hanson has this book, Elephant In The Brain, which has some interesting things in it, but I think ultimately is a long complaint that people aren’t interested in talking about politics in the way that I am interested in talking about politics. And that really annoys me. I would submit that, to someone that has that attitude, you should say, “Perhaps consider talking about politics in a different way. You might find that other people might find it easier to speak to you that way.” 

Weyl: There’s something called neo-reaction, … a politics that is built around the notion that basically there should be a small elite of people who own property and control power through that property. … Even though most people in this rationalist community would reject that kind of politics, I think there’s a natural tendency, if you have that set of social attitudes, to have your politics drift in that direction.

Our book, The Elephant in the Brain, has ten application chapters, only one of which is on politics, and that chapter compares key patterns of political behavior to two theories of why we are political: to change policy outcomes or to show loyalty to political allies. Neither theory is about being nerdy, mathematical, or “rational”, and most of the evidence we point to is not on styles of talking, nor do we recommend any style of talking.

Furthermore, every style of thinking or talking is compatible with the view that some people think much better than others, and also with the opposite view.  Nerdy or math styles are not different in this regard, so I see no reason to expect people with those styles of thinking to more favor “anti-democratic” views on thinking eliteness.

And of course, it remains possible that some people actually are much better at thinking than others. (See also two posts on my responses to other critics of econ style thinking.)

Wiblin: I guess in that case it seems like Futarchy, like Robin Hanson’s idea where people vote for what they want, but then bet on what the outcomes will be, might work quite well because you would avoid exploitation by having distributed voting power, but then you would have these superhuman minds would predict what the outcomes of different policies or different actions would be. Then they would be able to achieve whatever outcome was specified by a broad population. …

Weyl: I have issues with Futarchy, but I think what I really object to, it’s less even the worldview I’m talking about. I think really, the problem I have is that there is a rhetoric out there of trying to convince people that they’re insufficient and that everything should be the private property of a small number of people for this reason when in fact, if it was really the case that those few people were so important, and great, and powerful, they wouldn’t need to have all this rhetoric to convince other people of it. People would just see it, they would get it. 

Futarchy has nothing to do with the claim that everything should be the private property of a small number of people, nor have I ever made any such claim. Hopefully, this is just a case of a possible misreading of what Weyl said, and he didn’t intend to relate futarchy or myself to such views.

Added 3p: Weyl & I have been having a Twitter conversation on this, which you can find from here.

Have A Thing

I’m not into small talk; I prefer to talk to people about big ideas. I want to talk big ideas to people who are smart, knowledgeable, and passionate about big ideas, and where it seems that convincing them about something on a big idea has a decent chance of changing their behavior in important ways.

Because of this, I prefer to talk to people who “have a thing.” That is, who have some sort of abstract claim (or question) which they consider important and neglected, for which they often argue, and which intersects somehow with their life hopes/plans. When they argue, they are open to and will engage counter-arguments. They might push this thing by themselves, or as part of a group, but either way it matters to them, they represent it personally, and they have some reason to think that their personal efforts can make a difference to it.

People with a thing allow me to engage a big idea that matters to someone, via someone who has taken the time to learn a lot about it, and who is willing to answer many questions about it. Such a person creates the hope that I might change their actions by changing their mind, or that they might convince me to change my life hopes/plans. I may convince them that some variation is more promising, or that some other thing fits better with the reasons they give. Or I might know of a resource, such as a technique or a person, who could help them with their thing.

Yes, in part this is all because I’m a person with many things. So I can relate better to such people. And after I engage their thing, there’s a good chance that they will listen to and engage one of my things. Even so, having a thing is handy for many people who are different from me. It lets you immediately engage many people in conversation in a way so that they are likely to remember you, and be impressed by you if you are in fact impressive.

Yes, having a thing can be off-putting to the sort of people who like to keep everything mild and low-key, and make sure that their talk has little risk of convincing them to do something that might seem weird or passionate. But I consider this off-putting effect to be largely a gain, in sorting out the sort of people I’m less interested in.

Now having a thing won’t save you if you are a fool or an idiot. In fact, it might make that status more visible. But if you doubt you are either, consider having a thing.

Added 11p: Beware of two common failure modes for people with things: 1) not noticing how much others want to hear about your thing, 2) getting so attached to your thing that you don’t listen enough to criticism of it.

Note also that having things promotes an intellectual division of labor, which helps the world to better think through everything.

Added 11Jan: Beware a third failure mode: being more serious or preachy than your audience wants. You can be focused and interesting without making people feel judged.

Do I Offend?

The last eight months have seen four episodes where many people on Twitter called me a bad offensive person, often via rude profanity, sometimes calling for me to be fired or arrested. These four episodes were: sex inequality and redistribution, chances of a delayed harassment complaint, morality-induced overconfidence on historical counterfactuals, and implicit harassment in A Star Is Born. While these topics have occupied only a small fraction of my thought over these months, and a much smaller fraction over my career, they may have disproportionate effects on my reputation. So I’ve tried to pay close attention to the reasons people give. 

I think I see a consistent story. While in these cases I have not made moral, value, or political claims, when people read small parts of what I’ve claimed or asked, they say they can imagine someone writing those words for the purpose of promoting political views they dislike. And not just mild views just a bit on the other side of the political spectrum. No, they attribute to me the most extreme bad views imaginable, such as that I advocate rape, murder, slavery, and genocide. People say they are directly and emotionally traumatized by the offensive “creepy” feeling they get when they encounter someone with any prestige and audience seeming to publicly promote views with which they strongly disagree.

Some plausibly contributing factors here include my sometimes discussing sensitive topics, our increasing political polarization, the ease of making mobs and taking words out of context on Twitter, increasing ease of making new accusations similar to previous ones, and my terse and analytic writing style combined with my adding disclaimers re my allegiance to “correct” views. There’s also my following the standard poll practice of not telling those who answer polls the motives for those polls. And I’m a non-poor older white male associated with economics in general and GMU econ in particular; many see all these as indicators of bad political views. 

Digging a little deeper, trauma is plausibly increased by a poll format, which stokes fears that bad people will find out that they are not alone, and be encouraged to learn that many others share their views. I suspect this helps explain complaints that my poll population is not representative of my nation or planet.  

I also suspect bad faith. Long ago when I had two young kids, they would sometimes pick fights, for example on long car trips. One might start singing, to which the other would complain. We might agree that singing is too much for such a small space. Then the first might start to quietly hum, which we might decide is okay. Then the first might hum more loudly and triumphantly, while the second might writhe, cover their ears, and make a dramatic display of suffering.

Similarly, I suspect bad faith when some a) claim to experience “harassment” level suffering due to encountering political views with which they disagree, and yet are fine with high levels of sex, violence, and profanity in TV & movies, b) infer indirectly from my neutral analytical text that I promote the most extreme views imaginable, and c) do not notice that such claims are both a priori implausible and inconsistent with my large corpus of public writing; they either haven’t read much of it or purposely mischaracterize it. 

The idea of a large shared intellectual sphere wherein we can together analyze difficult topics holds a strong appeal to me. The main criteria for consideration in such a sphere should be the coherence and persuasiveness of specific relevant arguments. When evaluating each argument, there is usually little need to infer distantly related positions of those who offer arguments. Usually an argument either works or it doesn’t, regardless of who says it or why.

I try to live up to such ideals in how I write and talk. I hope that many who read and follow me share these ideals, and I appreciate their support. I’m thus not favorably inclined toward suggestions that I stop discussing sensitive topics, that I adopt a much more elaborate disclaimer style, or that I stop asking my followers questions, to prevent others from being traumatized by hearing their answers, or to keep followers from finding out that others share their opinions.

Added 29Dec: I did four follow-up polls to probe tendencies to take offense, focusing on the Nazi case. Respondents said the fraction of tweeters who actually wish Nazis had won WWII is tiny; 63% said it is <0.1%, though 4% gave >10%. And 79% said that this Nazi fraction is <3% among those “who mention ‘Nazis’ neutrally in a tweet, without explicitly praising or criticizing them, and who explicitly claim otherwise”, though 10% said >15%. Also, 58% said that for a tweet to be considered “offensive” or “harassment”, it would need to suggest a chance >50% that its author actually wishes Nazis had won WWII. However, 10% gave a threshold of <3% and 19% gave one <15%.

Finally, 43% gave a <3% “chance the author of a Twitter poll which asks about chance world would have been better off had Nazis won WWII, actually wishes that Nazis had won WWII”. However 20% gave a chance >50%, and 37% gave a chance >15%.

An obvious conclusion here is that, even among those who respond to my Twitter polls, a substantial fraction have set hair-triggers for offense. For example, it seems >20% say merely asking if the world would have been better off if Nazis had won justifies a high enough chance of a Nazi author to be offensive. Explicit denials may help, but if the offended are much more vocal than are others, a vocal choir of objection seems largely inevitable.

This makes me wonder again if the “silent majority” might benefit from juries or polls which show them that the vocal offended are a minority. Though that minority will likely also express offense re such juries or polls.

Added 28Jan: A recent burst of outrage on the A Star Is Born episode confirms this account to some extent.
