Monthly Archives: February 2021

Is Status-Seeking A Context-Neglecting Value?

The main evolutionary function of sex for humans is obviously procreation. Yet our deep values regarding sex don’t seem to pay much attention to the info we have about whether procreation is actually likely to happen in any given sex-related context. Consider our preferences regarding pornography, strip clubs, romance novels, and contraception. Oh, our sex preferences do attend to cues that robustly correlated with procreation success for our distant ancestors, such as the status of males and the youthfulness of females. But regarding kinds of context rare among our distant ancestors, our sex preferences seem drawn to the naive appearance of the possibility of successful sex, and neglect the more detailed context info that we have.

This sort of context-neglecting value also seems to show up with media. People seem to act as if the TV actors they watch regularly are actually their friends, and as if the sports stars they associate with will somehow raise their status. They feel they are raising their status by correcting strangers who are “wrong on the internet”. They also don’t seem to pay much attention to how the processing of their food might change its nutrition, as long as it doesn’t hurt the taste.

Back in 2010 I posted on a context-neglecting-values theory to explain the demographic transition, i.e., the puzzlingly low fertility that seems to happen as societies get rich. I suggested that women who find that they are rich presume that they are relatively rich, and thus have a shot at being “queens”, i.e., at mating with a high status man and producing high status kids. Or a shot at having their kids become kings or queens. This can justify delaying their own fertility to invest in status markers, or justify having fewer kids to let each kid gain more status markers. When entire societies get rich, each person neglects the fact that being absolutely rich doesn’t make you relatively rich. Plausibly among our distant ancestors, societies almost never got very rich for very long, and so this neglect wasn’t much of a problem back then.

Recently I realized that I should consider generalizations of this theory. What if, when societies get rich, we all feel like we have high relative status, and a decent chance to get even more, neglecting the fact that most everyone around us is also richer? In this case we’d be primed to take the sorts of actions that make sense for ambitious people with high relative status.

This might explain two big puzzles that I’ve long pondered. The first puzzle is our strong taste for variety in the last few centuries, which doesn’t seem to actually produce that much net value for us. Making new unusual choices can make sense for the high status, if they can use this as a way to show that they are leaders. That is, if they pick or do something different, and lots of people follow their example, they may prove to observers that they are a “thought” leader. And if we all see ourselves as strong leader candidates, we may all be attracted to such strategies.

The other big puzzle I’ve long pondered is our strong taste for paternalism, especially in the last few centuries, which seems to mostly hurt us on average. Instead of showing our high status by showing that others copy us when we do unusual things, we can also show our high status by our visible ability to stop others from doing unusual things. If people hear that we have such power and regularly use it, they have to conclude that we are “somebody.” And so ordinary people lend their support to paternalist policies in the hope that they will be personally credited for it. Much like people seem to think their status will be raised if they associate with celebrities who have never heard of them.

So my new suggestion in this post is that, because in a rich world we all greatly overestimate our relative status, we intuit that it makes sense to try to raise our status either by choosing variety and getting others to copy it, or by showing off our ability to stop others from choosing variety. These both actually make less sense for most of us as ways to gain status, because we aren’t actually high in relative status. But our intuitions don’t notice that.

Why would our preferences neglect context so? The idea is that they are coded in us at very deep levels, at places where our conscious thoughts just can’t change them. Such changes mostly require slower genetic and cultural selection processes.

Should welfare analysis focus on the context-neglecting preferences that we currently express, or on the ones that we would have if we took context more into account? That depends on whether you care more about the immediate surface feelings of people today, or about longer term outcomes and descendants.


Hail S. Jay Olson

Over the years I’ve noticed that grad students tend to want to declare their literature search over way too early. If they don’t find something in the first few places they look, they figure it isn’t there. Alas, they implicitly assume that the world of research is better organized than it is; usually a lot more search is needed.

Seems I’ve just made this mistake myself. Having developed a grabby aliens concept and searched around a bit, I figured it must be original. But it turns out that over the last five years physicist S. Jay Olson has produced a whole sequence of seven related papers, most of which are published, and some of which got substantial media attention at the time. (We’ll change our paper to cite these soon.)

Olson saw that empirical study of aliens gets easier if you focus on the loud (not quiet) aliens, who expand fast and make visible changes, and also if you focus on simple models with only a few free parameters, to fit to the few key datums that we have. Olson variously called these aliens “aggressively expanding civilizations”, “expanding cosmological civilizations”, “extragalactic civilizations”, and “visible galaxy-spanning civilizations”. In this post, I’ll call them “expansionist”, intended to include both his and my versions.

Olson showed that if we assume that humanity’s current date is a plausible expansionist alien origin date, and if we assume a uniform distribution over our percentile rank among such origin dates, then we can estimate two things from data:

  1. from our current date, an overall appearance rate constant, regarding how frequently expansionist aliens appear, and
  2. from the fact that we do not see grabby controlled volumes in our sky, their expansion speed.

Olson only required one more input to estimate the full distribution of such aliens over space and time, and that is an “appearance rate” function f(t), to multiply by the appearance rate constant, to obtain the rate at which expansionist aliens appear at each time t. Olson tried several different approaches to this function, based on different assumptions about the star formation rate and the rate of local extinction events like supernovae. Different assumptions made only modest differences to his conclusions.

Our recent analysis of “grabby aliens”, done unaware of Olson’s work, is similar in many ways. We also assume visible long-expanding civilizations, we focus on a very simple model, in our case with three free parameters, and we fit two of them (expansion speed and appearance rate constant) to data in nearly the same way that Olson did.

The key points on which we differ are:

  1. My group uses a simple hard-steps-power-law for the expansionist alien appearance rate function, and estimates the power in that power law from the history of major evolutionary events on Earth.
  2. Using that same power law, we estimate humanity’s current date to be very early, at least if expansionist aliens do not arrive to set an early deadline. Others have estimated modest degrees of earliness, but they have ignored the hard-steps power law. With that included, we are crazy early unless both the power is implausibly low, and the minimum habitable star mass is implausibly large.

So we seem to have something to add to Olson’s thoughtful foundations.

Looking over the coverage by others of Olson’s work, I notice that it all seems to completely ignore his empirical efforts! What they mainly care about seems to be that his having published on the idea of expansionist aliens licensed them to speculate on the theoretical plausibility of such aliens: How physically feasible is it to rapidly expand in space over millions of years? If physically feasible, is it socially feasible, and if so, would any civilization actually choose it?

That is, those who commented on Olson’s work all acted as if the only interesting topic was the theoretical plausibility of his postulates. They showed little interest in the idea that we could confront a simple aliens model with data, to estimate the actual aliens situation out there. They seem stuck assuming that this is a topic on which we essentially have no data, and thus can only speculate using our general priors and theories.

So I guess that should become our central focus now: to get people to see that we may actually have enough data now to get decent estimates on the basic aliens situation out there. And with a bit more work we might make much better estimates. This is not just a topic for theoretical speculation, where everyone gets to say “but have you considered this other scenario that I just made up, isn’t it sorta interesting?”

Here are some comments via email from S. Jay Olson:

It’s been about a week since I learned that Robin Hanson had, in a flash, seen all the basic postulates, crowd-sourced a research team, and smashed through his personal COVID infection to present a paper and multiple public talks on this cosmology. For me, operating from the outskirts of academia, it was a roller coaster ride just to figure out what was happening.

But, what I found most remarkable in the experience was this. Starting from two basic thoughts — 1) some fraction of aliens should be high-speed expansionistic, and 2) their home galaxy is probably not a fundamental barrier to expansion — so many conclusions appear inevitable: “They” are likely a cosmological distance from us. A major fraction of the universe is probably saturated by them already. Sufficiently high tech assumptions (high expansion speed) means they are likely invisible from our vantage point. If we can see an alien domain, it will likely cover a shockingly large angle in the sky. And the key datum for prediction is our cosmic time of arrival. It’s all there (and more), in both lines of research.

Beyond that, Robin has a knack for forcing the issue. If their “hard steps model” for the appearance rate of life is valid (giving f(t) ~ t^n), there aren’t too many ways to solve humanity’s earliness problem. Something would need to make the universe a very different place in the near cosmic future, as far as life is concerned. A phase transition resulting in the “end of the universe” would do it — bad news indeed. But the alternative is that we are, literally, the phase transition.


What Is At Stake?

In the traditional Christian worldview, God sets the overall path of human history, a history confined to one planet for a few thousand years. Individuals can choose to be on the side of good or evil, and maybe make a modest difference to local human experience, but they can’t change the largest story. That is firmly in God’s hands. Yet an ability to personally choose good or evil, or to make a difference to mere thousands of associates, seemed to be plenty enough to motivate most Christians to action.

In a standard narrative of elites today, the entire future of value in the universe sits in our current collective hands. If we make poor choices today, such as about global warming or AI, we may soon kill ourselves and prevent all future civilization, forever destroying all sources of value. Or we might set our descendants down a permanently perverse path, so that even if they never go extinct they also never realize most of the universe’s great potential. And elites today tend to lament that these far grander stakes don’t seem to motivate many to action.

Humans seem to have arrived very early in the history of the universe, a fact that seems best explained by a looming deadline: grabby/aggressive aliens will control all the universe volume within a billion years, and so we had to show up before that deadline if we were to show up at all.

So now we have strong evidence that all future value in the universe does not sit in our hands. What does sit in our collective hands is:
A) the experiences of our descendants for roughly (within a factor of ten around) the next billion years, before they meet aliens, and
B) our influence on the larger mix of alien cultures in the eras after many alien civilizations meet and influence each other.

Now a billion years is in fact a very long time, a duration during which we could have an enormous number of descendants. So even that first part is a big deal. Just not as big a deal as many have been saying lately.

On the longer timescale, the question is not “will there be creatures who find their lives worth living?” We can be pretty assured that the universe will be full of advanced complex creatures who choose to live. The question is instead more “How much will human-style attitudes and approaches influence the hundreds or more alien civilizations with which we may eventually come in contact?”

It is less about whether there will be any civilizations, and more about what sorts of civilizations they will be. Yes, we should try to not go extinct, and yes we should try to find better paths and lifestyles for our descendants. But we should also aspire, and to a similar degree, to become worthy of emulation, when compared to a sea of alien options.

Unless we can offer enough unique and valuable models for emulation, and actually persuade or force such emulation, it won’t really matter so much if we survive to meet aliens. From that point on, what matters is what difference we make to the mix: whether we influence the mix, and whether that mix is better off as a result of our influence.

Not an easy goal, and not one we are assured to achieve. But we have maybe a billion years to work on it. And at least we can relax a bit; not all future universe value depends on our actions now. Just an astronomical amount of it. The rest is in “God’s” hands.


How Long Will We Distance?

We often go out of our way, collectively, to accommodate small subsets of the population. For example, in parking spots and building entrances for those who use wheelchairs, in extra food sorting and labeling for those allergic to nuts or gluten, and in extra accommodations in language and labels for the non-binary-gendered.

But there are also population subsets that we do not go out of our way to accommodate. For example, we might have helped pay for the famous “boy in the bubble” to have a bubble, but we did not otherwise do much to accommodate him. (There are other subsets where I could actually get into “trouble” for even mentioning that we might consider accommodating them more. As they are besides the point of this post, I won’t mention them here.)

At the moment we are spending great amounts (too much I’d say) to accommodate the subset of the population who is vulnerable to infection by covid. For a while, that has nominally been a majority of the population, though their risks are far from equal. But over the next year, more people will get vaccinated, and more will get infected, and fewer people will be in the leftover group. And a big question will loom: how far will we go to continue to insist on “distancing” of various forms to protect everyone?

So far the standard story has been that people who’ve been vaccinated or infected must not be held to any more lenient standards; they must all “distance” just as strongly. Not only because other vulnerable folks remain, but because protections are not 100% effective. But as the average risk falls, will we get to a point where this standard changes?

To explore this question, I made this poll:

But actually, I think the question hinges more on the moral framing, i.e., the moral colors that will be associated with each side. For example, if the dominant moral story is that the non-vaccinated are anti-social science-deniers who don’t deserve accommodation, then we may switch at a high % still vulnerable.

But if the dominant moral story is instead that those who want to end distancing then are the same people who have always wanted to end distancing, then the previous moral disapproval of such advocates would make people reluctant to embrace their position. Similarly, if the story is that the more vulnerable tend to be the poor and people of color, who don’t have the political and economic clout to cut in line to get vaccines early, and who face larger infection risks due to their jobs, we may also be reluctant to relax distancing.

Another key issue is that at the future time we are seriously considering such a switch, we will have been heavily distancing for over a year. So distancing will have a lot of social inertia then, requiring a substantial degree of social energy and initiative to overcome.

Added 22Feb: I don’t think I was clear enough above that I estimate a low %, say ~3%, and thus a long time before a return to normal is allowed.


Who Wants Common Sense?

The mass media often says things that should seem unlikely, at least to a well-informed common sense. And in such cases, the usual outcome is that common sense is proved right. This seems so obvious to me that I don’t see the point in arguing it. But to illustrate the point, let me mention the book Expert Political Judgment, and recent claims that AI would take away most jobs, that masks and travel restrictions do not help in pandemics, and that hell on Earth will result if the other side wins the next election.

What I want to point out in this post is a noteworthy lack of clearly-available voices that express such well-informed common-sense-based media-skepticism.

Let us focus on the top 1% of the top 1% of people, in terms of their ability to understand and apply common sense. Such people would be reasonably smart, know the intro-textbook basics of many fields, and the basic history of their industry, region, and world for the last century or two. Oh, and they must be able to write tolerably clearly.

Out of 8 billion people in the world, there should be 800K people in this 1% of 1% class. Each of whom could in principle author a newsletter, blog, or podcast, etc., wherein they specialize in pointing out the worst ways that recent media reports conflict with common sense. In its first decade or two, such a newsletter could emphasize cases likely to resolve within a decade or two, in the sense that any reasonable attempt to score them for accuracy could credit a substantial fraction of what they’ve said on this timescale.

For example, if you made one comment per week for ten years, that’s 500 comments, and if just 40% of these could be scored within two decades, that’s 200 scoreable comments. And if you make ten comments per week, that’s 2000 to score. Which should be plenty enough to show that an author can see and apply common sense to correct media errors.

Imagine that the top 1% of media consumers could recognize and appreciate such a track record. So if an author took a decade or two to collect such a track record of cases pointing out media deviations from common sense, this 1% of consumers would be capable of browsing this track record to evaluate it, or trusting intermediaries who scored it for them. And they’d value such common sense corrections enough that they’d spend some time actually reading them.

So I’m postulating 80M media consumers who would want to read common sense media critiques, and 800K authors capable of writing such critiques, and of validating their track record within a decade or two. This seems a large enough market, in terms of supply and demand, that we should see at least 800 actual entrants, who regularly write commentary on media errors. That’s only one entrant per 100K customer/readers, and one per 1K potential authors.
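
For readers who want to check this back-of-envelope supply-and-demand arithmetic, here is a minimal sketch. Every input is one of the post’s illustrative assumptions (world population, the 1% cuts, the comment rates, the 40% scoreable fraction, one entrant per thousand candidates), not measured data:

```python
# Back-of-envelope check of the numbers above; all inputs are the post's
# illustrative assumptions, not measured data.
world_population = 8_000_000_000

# Top 1% of the top 1%: candidate common-sense authors.
candidate_authors = world_population * 0.01 * 0.01   # 800,000
# Top 1% of media consumers: readers able to appreciate a track record.
capable_readers = world_population * 0.01            # 80,000,000

# Track record: comments per week over ten years, of which ~40% can be
# scored for accuracy within two decades.
weeks = 10 * 52
scoreable_fraction = 0.40
print(round(1 * weeks * scoreable_fraction))    # ~200 scoreable comments at 1/week
print(round(10 * weeks * scoreable_fraction))   # ~2,000 scoreable comments at 10/week

# Expected entrants: one per 1,000 candidate authors.
entrants = candidate_authors / 1000                  # 800
print(capable_readers / entrants)                    # ~100,000 potential readers each
```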

Surely 80M customers eager to read such commentary could induce at least 800 writers to regularly write such things. Even if such authors did it as a hobby on the side, after their regular job, and got paid nothing directly for it. Maybe most of these 800K folks have better things to do with their time, but not all of them. The wisdom of at least one in a thousand of them may not be recognized by the labor market, or its realization may be blocked by individual personality quirks. Surely we all know this large a fraction of smart and wise but under-used folks.

Consider further that this class of 800K potential authors could each team with associates, to create more effective commentary. Associates could feed these authors summaries of media cases to consider, could polish their prose to become more concise and accessible to readers, and could organize the scoring of their track records. And once an author had validated his or her own track record, they might later specialize in rating other sources, either by endorsing their track records or directly including their commentary. Given all these possibilities, I’m confident that at least 800 writers could actually write such commentary, and have it be validated as accurate, if in fact there were 80M customers willing to read them.

Furthermore, 800 authors would allow a substantial degree of specialization, wherein each author focuses to some extent on particular regions, industries, topics, and media sources. I’d expect a lot of overlap, wherein authors end up commenting on the same media stories. But we don’t need all 80M customers to care mainly about the same world-media stories, ones that most of these 800 authors comment on. We just need these 80M customers to have wide enough interests so that 800 authors suffice to serve them.

The attentive reader has probably already deduced my point: As we don’t actually see 800 authors specializing in using common sense to correct common media errors, and proving their accuracy via track records, there must not actually be even 1% of media consumers interested in reading such corrections. And as I’m confident that at least 1% would be able to find and appreciate such corrections, if they were interested, I must put the main blame on their lack of interest.

I’m not sure we even see eight authors who specialize in this basic writing strategy of using common sense to correct media errors. So I’d say there may not even be 800K customers worldwide, 1% of 1% of readers, interested in reading such media corrections written by the top 1% of 1% common-sense authors, assuming that such writers are willing to write commentary if they can expect 100K readers each.

Now, I expect that many people will say that they’d like to read such commentary. But only as long as that comes with all the usual other things they get from their pundits. Such as wit, political affiliation, name recognition, and arguments they can repeat to associates to sound smart. They aren’t much willing to trade off those other desired pundit qualities for more common sense critical accuracy. Which of course really means that they don’t much care for common sense based media criticism.

Yes, media markets are often regulated. Professional licensing prevents most people from talking on some topics, and media regulation prevents many from getting paid for their commentary. Libel laws and other kinds of liability often punish honesty, as do cancel mobs. But on reflection I just can’t put the main blame on these things. There is in fact usually enough freedom of speech that media error correction could find an audience, if a large enough audience actually existed. (And yes, perhaps also if they stayed away from the most controversial of topics.)

Some hope that future innovations like AI-written commentary, or prediction markets on common media topics, could eventually provide such common-sense based criticism. But can it do so cheaply enough to overcome the low market demand problem? If even simple articulate humans can’t find such a market today, I don’t see why AI or prediction markets should expect to do much better later.

Finally, consider this: if there’s no market for the easiest cheapest way to correct many big errors all at once, why would there be markets for less-effective more-expensive ways to correct media errors?


More on Experts Vs. Elites

When a boss issues a new order, usually the main thing he or she is fighting with is the effects of his (or a prior boss’) previous orders. It can take time to undo their effects. And subordinates who fear that yet newer orders will come down before they can make enough changes might prefer to drag their feet, to see if these current orders will last.

Some responded to my last post on experts versus elites by saying how good it is that elites often overrule experts, as experts get it so wrong so often. As with early in this pandemic. But the experts are less of an autonomous force here, and more just the repository of previous elite instructions. If pandemic experts had it wrong before about masks or travel bans, that is mostly because elites previously pushed them to adopt such policies. For example, our continued ban on challenge trials is due to how med ethics experts have interpreted prior elite instructions. Experts won’t change their mind on this until elites tell them they are allowed to change their minds. In contexts where elites are typically so pushy, it can be hard to tell what experts would decide in their absence.

In economics, it usually feels pretty obvious what the elites want us to say. Not all economists do what they are told, but the major institutions and their elite leaders seem mostly willing to go along, and so what the public mostly hears is economists saying what elites want us to say. When elites change their minds, our major institutions also quickly change their minds.

Now I had been thinking this is all bad news for the new kinds of institutions I want to introduce, as I had been assuming they would be framed as new expert institutions. And yes all this suggests a distrust of formal expert mechanisms that can’t be easily overruled by elite opinion. But maybe I have been too hasty about how new institutions might be framed.

Consider the widespread hostility to “market manipulation”, such as seen in the recent Gamestop stock price episode. Or consider movies like Boiler Room, Glengarry Glen Ross, Wall Street, and Wolf of Wall Street. Typically, financial markets are chock full of “manipulation”, in the sense that most traders are trying to talk and spin to get others to agree with and follow their trades. Sometimes they succeed, and sometimes they fail, but that mostly doesn’t bother people. What bothers people most is when they see clearly low status low prestige people seeming to greatly influence prices, especially in ways that seem unlikely to last. (Elite manipulations tend to last.)

Consider also that elites only rarely complain about errors in speculative market prices, such as stock prices or currency prices. They mainly complain when they think they can find non-elite folks to blame for such prices. Together, these facts suggest to me that most elites may see speculative market prices as something that elites create. They know that there is a lot of money at stake in such markets, and that many big powerful rich elite players play heavily in such markets. So perhaps elites usually accept the verdict of such prices as a verdict of elites!

If this were true, then the prospects for improving our social consensus via improving speculative markets would be far higher than I’d ever hoped! If we could get thick markets trading on many more topics, then elites might well defer to those price estimates in their elite conversations, and push experts to also accept such estimates.

Of course, even if elites would accept a price estimate when it exists, this doesn’t mean most are eager for any particular such price to exist. Rivalrous elites constantly try to undermine each other, including via undermining the organs that rival elites use to express their opinions. But if the prices existed for a while, I predict elites would cave and defer to them, at least until they could kill them.

To signal to all that they are dominated by elites, I do think it important that a lot of money seem to be riding on these market prices. Mere prediction tournaments or polls of experts just will not do. Even real money markets with small stakes may not be taken seriously enough.

My proposal for Fire-the-CEO markets seems like it could work here. Though I’ve been waiting for 25 years now for someone to take up this idea.

Added 8Feb: I see now why my usual answer to “what should I read?”, namely “textbooks”, falls on deaf ears. People are looking for elites to read, not experts.


The Indirect-Check Sweet Spot

I have specialized somewhat in being a generalist intellectual. I know of two key strategies for pursuing this. The first one is pretty obvious, but still important: learn the basics of many different fields. The more fields you know, the more chances you will find to apply an insight in one field into another. So not only learn many fields, but keep looking for connections between them. That is, keep searching for ways to apply the insights in all the fields you know to all the other fields you know.

The second strategy is a bit less obvious. And that is to work hard to collect indirect tests and checks of everything you know. This doesn’t tend to happen naturally, because we mostly tend to learn only very direct tests of what we know.

Consider someone writing an oped. With experience, an oped writer will learn in great detail the emotional tones hit by each thing they might say. So they will learn to say things in ways that hit the right tones the right way at the right times. These are relatively direct tests, but not of the literal truth of each thing said. Instead these are tests of how people will react to things said.

Now consider someone writing code that is close to a user interface. In this sort of context, usually the only ways that the code can be wrong is to fail to give the proper appearances to users. If the system looks right to users, then for the most part it just is right, as there are few concepts of hidden mistakes or errors at this level.

In contrast, consider someone trying to create a computer simulation of a particular scientific model. This simulation could in fact be wrong, even though users don’t see any obvious mistakes. When you learn to write code like this, you have to learn to collect more ways to check your code, to look for errors. At least you do if you expect errors to eventually be discovered, and if it works out much better for you to find such errors early, yourself, rather than have them found by others, later.
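
To make this concrete, here is a toy example of my own (not from the post, and with a made-up tolerance): a tiny oscillator simulation whose plotted output would look fine to a user either way, plus an energy-conservation check, an indirect test that can catch an integration bug no user would ever see.

```python
# A minimal illustration of an "indirect check": a unit-mass harmonic
# oscillator whose trajectory looks plausible to a user regardless, but whose
# total energy should stay near its starting value of 0.5 if the integrator
# is sound. The 0.01 tolerance is an arbitrary choice for this sketch.
def simulate(steps=10_000, dt=0.01):
    x, v = 1.0, 0.0                 # initial position and velocity
    for _ in range(steps):
        # Semi-implicit (symplectic) Euler: update v first, then x with new v.
        v -= x * dt
        x += v * dt
    return x, v

def energy(x, v):
    return 0.5 * v * v + 0.5 * x * x   # kinetic + potential

x, v = simulate()
drift = abs(energy(x, v) - 0.5)
# A plot of x over time would "look right" either way; only this indirect
# check reveals whether the integrator is quietly gaining or losing energy.
assert drift < 0.01, f"energy drifted by {drift:.4f} -- integration bug?"
print(f"energy drift after 10,000 steps: {drift:.5f}")
```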

Similarly, if you want to have your best shot at being a productive generalist, you should be collecting as many ways as possible to check each hypothesis or claim you might come across against all of the other things you know. That is, checks of the form: if this sort of thing were true, then we should expect to see that sort of pattern.

You see, when you try to apply insights from some fields to other distantly related fields, most of the ideas you will come up with won’t be that easy to test or check directly. So if you are to have much of a chance of finding good applications, you’ll need to collect a big toolkit of ways to devise sanity checks that you can apply.

In contrast, most fields don’t really offer very strong incentives to collect indirect tests. Many fields clearly telegraph the conclusions you are supposed to reach, making it easy to check if your conclusions are among the desired ones. In many other fields, such as in writing fiction or sermons, one can test the quality of work relatively directly against how it seems to affect readers. They don’t care much there about any truth beyond creating the desired effects in readers.

But when you think about each new field you explore, it will be healthy if you fear the possibility that you will draw a tentative conclusion that will later turn out to look pretty wrong. This will push you to search for many different ways to check each hypothesis, to avoid such scenarios. You may well need to imagine that you will face different critical audiences than the people in those fields, as they may well not really care so much about such global consistency. But you need to, if you would learn to be a productive generalist.


Experts Versus Elites

Consider a typical firm or other small organization, run via a typical management hierarchy. At the bottom are specialists, who do very particular tasks. At the top are generalists, who supposedly consider it all in the context of a bigger picture. In the middle are people who specialize to some degree, but who also are supposed to consider somewhat bigger pictures.

On any particular issue, people at the bottom can usually claim the most expertise; they know their job best. And when someone at the top has to make a difficult decision, they usually prefer to justify it via reference to recommendations from below. They are just following the advice of their experts, they say. But of course they lie; people at the top often overrule subordinates. And while leaders often like to pretend that they select people for promotion on the basis of doing lower jobs well, that is also often a lie.

Our larger society has a similar structure. We have elites who are far more influential than most of us about what happens in our society. As we saw early in the pandemic, the elites are always visibly chattering among themselves about the topics of the day, and when they form a new opinion, the experts usually quickly cave to agree with them, and try to pretend they agreed all along.

As a book I recently reviewed explains in great detail, elites are selected primarily for their prestige and status, which has many contributions, including money, looks, fame, charm, wit, positions of power, etc. Elites like to pretend they were selected for being experts at something, and they like to pretend their opinions are just reflecting what experts have said (“we believe the science!”). But they often lie; elite opinion often overrules expert opinion, especially on topics with strong moral colors. And elites are selected far more for prestige than expertise.

When an academic wins a Nobel prize, they have achieved a pinnacle of expertise. At which point they often start to wax philosophic and write op-eds. They seem to be making a bid to become an elite. Because we all respect and want to associate with elites far more than with experts. Elites far less often lust after becoming experts, because we are often willing to treat elites as if they are experts. For example, when a journalist writes a popular book on science, they are often willing to field science questions when they give a talk on their book. And the rest of us are far more interested in hearing them talk on the subject than in hearing the scientists they write about.

Consider talks versus panels at conferences. A talk tends to be done in expert mode, wherein the speaker sticks to topics on which they have acquired expert knowledge. But then on panels, the same people talk freely on most any topic that comes up, even topics where they have little expertise. You might think that audiences would be less interested in hearing such inexpert speculation, but in fact they seem to eat it up. My interpretation: on panels, people pose as elites, and talk in elite mode. Like they might do at a cocktail party. And audiences eagerly gather around panelists, just like they would gather around prestigious folks arguing at a cocktail party about topics on which they have little expertise.

Consider news articles versus columnists. The news articles are written by news experts, in full expert mode. They are clearly more accurate on average than are columns. But column writers take on an elite mode, where they pontificate on all issues of the day, regardless of how much they know. And readers love that.

Consider boards of directors versus boards of advisors. Advisors are nominally experts, while directors are nominally elites. Directors are far more powerful, are lobbied far more strongly, and are paid a lot more too. Boards of advisors are usually not asked for advice; they are mainly there to add prestige to an organization. But prestige via their expertise, rather than their general eliteness.

Even inside academic worlds, we usually pretend to pick leaders like journal editors, funding program managers, department chairs, etc. based mainly on their expert credentials. But this too is a lie; raw prestige counts for a lot more than we like to admit.

Finally, consider that recently I went into clear expert mode to release a formal preprint on grabby aliens, which induced almost no (< 10) comments on this blog or Twitter, in contrast to far more comments when arguable elites discuss it in panelist/elite mode: Scott Aaronson (205), Scott Alexander (108), and Hacker News (110). People are far more interested in talking with elites in elite mode on most topics, than in talking with the clear relevant experts in expert mode.

All of which suggests that my efforts to replace choice via elite association with prediction markets and paying for results face even larger uphill battles than I’ve anticipated.

Added noon: This also helps explain why artists are said to “contribute to important conversations” by making documentaries, etc. that express “emotional truths.” They present themselves as qualifying as elites by virtue of their superior art abilities.

See also: More on Experts Vs. Elites


Counter-Signaling On Aliens

For a long time, people who wrote on U.F.O.s have faced extra hurdles. Compared to those who write on other topics, authors on this topic are scrutinized more carefully for credentials and conflicting interests. The evidence they present is scrutinized much more carefully for detail, consistency, and potential bias and contamination, and much less likely alternative explanations are considered sufficient to reject such evidence. And even when they meet these higher standards, such authors still find it hard to gain much media attention.

A week ago Harvard astrophysics department chair Avi Loeb published a book wherein he argues that the object “Oumuamua” that passed quickly through our solar system in 2017 was an artificial alien artifact. The book doesn’t actually go into much detail on data about the object, certainly not enough to allow readers to apply the scrutiny usually expected of U.F.O. claims.

And even though he says he’s nearly alone among astrophysicists in his view, Loeb doesn’t at all help readers to understand why those others believe differently from him. His story seems to be that they are all just chicken-shit. And his story about what the aliens are doing out there seems to be that they are mostly long dead.

If Loeb doesn’t talk much about the technical details and evidence, what does he talk about? Mostly his childhood, philosophy, other projects, bigshots he knows, etc. (Though he does also mention me.) And the media have overall been very kind to him, giving him lots of coverage and little criticism.

You might think that Loeb’s claim about this space object and common U.F.O. claims would seem to support each other. But in a few places (here and here), Loeb is very dismissive of ordinary U.F.O. evidence. He’s clearly trying to say that what he says is nothing like what they say.

All of which seems to me a pretty clear example of countersignaling. Just like you are often nice to acquaintances to distinguish them from strangers, but mean to friends to distinguish them from mere acquaintances, we often do the opposite of the usual signal to show we are special. Loeb doesn’t have to follow the usual rules that would apply to most folks offering data on aliens, because (as he repeatedly reminds us) he is a Harvard astrophysics department chair.

All of which may help you understand why people often don’t follow the usual epistemic rules. Because the usual rules are for little people, and you aren’t little, are you?


Humans Are Early

Imagine that advanced life like us is terribly rare in the universe. So damn rare that if we had not shown up, then our region of the universe would almost surely have forever remained dead, for eons and eons. In this case, we should still be able to predict when we humans showed up, which happens to be now, 13.8 billion years after the universe began. We can do this because we showed up on a planet near a star, and we know the rate at which our universe has made and will make stars, how long those stars will last, and which stars live far enough away from frequent sterilizing explosions to have at least a chance at birthing advanced life.

However, this chart (taken from our new paper) calculates the percentile rank of our current date within this larger distribution of possible arrival dates. And it finds that we are surprisingly early, unless you assume both that there are very few hard steps in the evolution of advanced life (the “power n”), and also that the cutoff in lifetime, above which planets simply cannot birth advanced life, is very low. That is, while most stars have much longer lives than such a cutoff, you must assume that none of those longer-lived stars have any chance whatsoever to birth advanced life. (The x-axis shown extends from Earth’s lifetime up to the max known star lifetime.)
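
To make the shape of this calculation concrete, here is a toy Monte Carlo sketch. It is not the paper’s actual model: the star formation history, the lifetime distribution, and the parameter values below are stand-in assumptions chosen only to illustrate how the hard-steps power n and the lifetime cutoff drive the earliness percentile.

```python
# Toy sketch of an "earliness percentile"; NOT the paper's model. Star
# formation history, lifetime distribution, and cutoffs are stand-in
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def arrival_percentile(n_steps, life_cutoff_gyr, now_gyr=13.8,
                       sfr_decay_gyr=20.0, samples=200_000):
    # Habitable-star formation dates, decaying exponentially from the big bang.
    t_form = rng.exponential(sfr_decay_gyr, samples)
    # Habitable lifetimes, up to the assumed cutoff above which advanced life
    # is taken to be impossible.
    life = rng.uniform(0.5, life_cutoff_gyr, samples)
    # Hard-steps law: chance of advanced life by planet age tau grows as tau^n,
    # so conditional on success within `life`, arrival age is life * U^(1/n).
    tau = life * rng.uniform(0.0, 1.0, samples) ** (1.0 / n_steps)
    dates = t_form + tau
    # Longer-lived planets are more likely to ever succeed, by a factor life^n.
    weights = life ** n_steps
    return 100.0 * np.average(dates < now_gyr, weights=weights)

for n in (1, 3, 6):
    pct = arrival_percentile(n_steps=n, life_cutoff_gyr=100.0)
    print(f"hard steps n={n}, cutoff 100 Gyr: our date ~{pct:.2f}th percentile")
```

Even in this crude version, raising the hard-steps power or the lifetime cutoff pushes most arrival dates far into the future, leaving a 13.8 Gyr arrival at a tiny percentile, which is the qualitative puzzle described above.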

In the paper (in figures 2,17), we also show how this percentile varies with three other parameters: the timescale on which star formation decays, the peak date for habitable star formation, and a “mass favoring power” which says by how much larger mass stars are favored in habitability. We find that these parameters mostly make only modest differences; the key puzzle of human earliness remains.

Yes, whether a planet gives rise to advanced life might depend on a great many other parameters not included in our calculations. But as we are only trying to estimate the date of arrival, not many other details, we only need to include factors that correlate greatly with arrival date.

Why have others not reported the puzzle previously? Because they neglected to include the key hard-steps power law effect in how chances vary with time. This effect is not at all controversial, though it often seems counter-intuitive to those who have not worked through its derivation (and who are unwilling to accept a well-established literature they have not worked out for themselves).

This key fact that humans look early is one that seems best explained by a grabby aliens model. If grabby aliens come and take all the volume, that sets a deadline for when we could arrive, if we were to have a chance of becoming grabby. We are not early relative to that deadline.
