Author Archives: Robin Hanson

Parsing Pictures of Mars Muck

On Thursday I came across this article, which discusses the peer-reviewed journal article, “Fungi on Mars? Evidence of Growth and Behavior From Sequential Images”. As its pictures seemed to me to suggest fungal life active now on Mars, I tweeted “big news!” Over the next few days it got some quite negative news coverage, mainly complaining that the first author (out of 11 authors) had no prestigious affiliation and expressed other contrarian opinions, and also that the journal charged fees to authors.

I took two small supportive bets and then several people offered me much larger bets, while no one at all offered to bet on my side. That is a big classic clue that you are likely wrong, and so I am for now backing down on my likelihood estimates on this. And thus not (yet) accepting more bets. But to promote social information aggregation, let me try to explain the situation as I now see it. I’ll then listen to your reactions before deciding how to revise my estimates.

First, our priors are that early Mars and early Earth were nearly equally likely as places for life to arise, with Mars being habitable sooner. The rates at which life would have been transferred between the two places look high, though sixty times higher from Mars to Earth than vice versa. Thus it seems nearly as likely that life started on Mars and then came to Earth, as that life started on Earth. And more likely than not, there was once some life on Mars.

Furthermore, studies that put today’s Earth life in Martian conditions find many that would survive and grow on Mars. So the only question is whether that sort of life ever arose on Mars, or was ever transferred from Earth to Mars. Yes, most of the Martian surface looks quite dead now, including most everything we’ve seen up close via landers and rovers. But then most of the surface of Antarctica looks dead too, yet we know it is not all dead. So the chance of life somewhere on Mars now is pretty high; the question is just how common might be the few special places in which Martian life survives.

This new paper offers no single “smoking gun”, but instead offers a collection of pictures that are together suggestive. Some of the authors have been slowly collecting this evidence over many years, and have presented some of it before. The evidence they point to is at the edge of detectability, as you should expect from the fact that the usual view is that we haven’t yet seen life on Mars.

Now if you search through enough images, you’ll find a few strange ones, like the famous “face on Mars”, or this one from Mars:

But when there’s just one weird image, with nothing else like it, we mostly should go with a random error theory, unless the image seems especially clear.

In the rest of this post I’ll go over three kinds of non-unique images, and for each compare a conventional explanation to the exotic explanation suggested by this new paper.


UFOs Show Govt. Competence As Either Surprisingly High Or Low

Sometimes I pride myself on taking an intellectual strategy of tackling neglected important questions. However, one indicator of a topic being neglected is that it seems low status; people who discuss it are ridiculed, and their intellectual ability doubted. Thus my strategy risks lowering my status.

To protect against this risk, I can set a policy of only tackling topics that seem to have a substantial synergy with my skills and prior topics. Which seems a valid policy, even if not entirely honest. For a long time this protected me against UFOs as aliens, one of the most ridiculed topics ever. But then I started to study loud very distant aliens, and the topic of alien UFOs became more relevant.

To limit the damage, I once tried to talk only about what UFOs would imply if they really were aliens, without crossing the line to discuss whether they actually are. But on reflection I can see that this topic is in fact neglected, important, and has synergies with my skills and other topics. So now I am shamed into trying to live up to my intellectual ideals, which if truth be told aren’t as strongly rooted in me as I’d like to pretend. Sigh. So here goes, let’s talk about explaining UFO encounters.

I see four major categories of explanation:

  • (A) Honest mistakes: This includes misunderstandings of familiar phenomena, delusions and mental illness, and natural phenomena that we now poorly understand.
  • (B) New Govt. Tech: Some current Earth government is testing new tech far more advanced than anything publicly admitted. Or is using it for limited secret purposes.
  • (C) Hoaxes & Lies: Some are going out of their way to fool observers into thinking they see weird stuff, or just straight lying to say they saw stuff they didn’t see.
  • (D) Aliens, Etc.: The tech seen is far more advanced than anything available to any current Earth government. So it is from a hidden more advanced society on Earth, aliens from elsewhere, time-travelers from the future, or something even weirder.

Now it seems pretty obvious that if we are rather inclusive in our definition of “UFO encounter” then (A) is the best explanation for most of them. The interesting question is how best to explain the few hardest to explain encounters. Here is a related Twitter poll I just did:

Notice that I made the mistake here of lumping foreign governments into option (D), instead of into option (B) as I do above. If I had done the poll right, my guess is that we’d see: (A) 57%, (B) ~23%, (C) 10%, (D) ~10%.

Over the last few months I’ve been doing a lot of reading and watching and thinking on this topic, and I do think I have a judgment to report, a judgment that should represent news to those inclined to copy my judgment. First, (A) or (B) seems to me much less likely than (C) or (D). Second, between (Ca) spontaneous decentralized hoaxes and lies, and (Cb) hoaxes and lies coordinated by a big central organization, (Cb) seems much more likely. And third, among (Da) aliens, (Db) secret societies, (Dc) time-travelers, and (Dd) something even weirder, (Da) seems more likely.

Thus I see the main choice as between (Cb) and (Da), which would together be supported by only ~10% of poll respondents, and between which I can’t decide. Thus I am making a relatively strong claim here, at least relative to poll opinions. Let me outline some of my reasons.

First, if you look at the details of the usual hardest cases, the ones to which UFO fans most often point, you will see that there are often a lot of pretty sober looking people who all say they saw the same pretty clear and dramatic things under pretty good observing conditions. And often what they say they saw is solid-looking objects with remarkable combinations of location, speed, and acceleration, with no attendant thrust or control surfaces of the sort we’d use if we were trying to achieve those combinations.

I know enough physics and tech to know that these claimed abilities are just far beyond anything Earth governments will have access to for a long time, at least if the past is any guide. Or anything that natural weather could make. And similar abilities have been seen for over a half century, so if governments were hiding these abilities they’d be hiding them for far longer than they usually hide techs.

I also know enough human nature to know that these are not close to the sort of things that honest sober sane people would claim to see, if they just somewhat misinterpreted something that they saw or heard. And most of the people reporting in these strongest cases do seem pretty sober and sane. Thus in these strongest cases, the story that all these people are merely mistaken or deluded just doesn’t work, at least for the sorts of things they say they saw in these hardest cases. Nor does the story work that this is advanced government tech that they will release to show everyone in at most a few decades. So I must reject cases (A) and (B), which leaves me only with cases (C) and (D).

[Added 6May: Note that I am making judgements here about particular cases that I’ve considered in some detail. I am not saying I always believe what anyone says they saw. For a comparison, I find the usual evidence presented re ghosts and fairies to be much less persuasive. ]

Yes, humans like to play practical jokes on one another, and sometimes they take those jokes to some pretty far extremes. Sometimes they even try to make the jokes last for years. And often they are inspired to copy the jokes of others. But to explain most of these hardest cases mainly in terms of practical jokes seems just a bridge too far. Really, thousands of disconnected people all around the world playing the same big scary jokes for decades, and then almost never breaking down and laughing and crowing about their jokes even decades later? In contrast, governments, especially their spy parts, have run some pretty big, well-funded, and long-lasting disinformation campaigns. So I have to favor (Cb) over (Ca) by a big margin.

Regarding (D), time-travel seems impossible without crazy extreme physics, and known secret societies on Earth have never reached within orders of magnitude of the scale and degree of secrecy that this would require. Yet spirits or creatures from other dimensions seems even more crazy. Aliens, in contrast, are predicted to exist by our best theories. It is just a matter of finding a plausible scenario wherein they’d be here now doing what we see them doing, and not doing other stuff we don’t see them doing. I’ve tried to work out such a scenario, and find one that is a bit tortured, but far more believable than secret societies or travel across time or between dimensions.

Note that both (Cb) and (Da) are hypotheses that I would have found a priori implausible. So the entire existence of the familiar pattern of UFO encounters was a priori implausible, and now that I see it I struggle to explain it. And as both of the most likely explanations are low status topics, i.e., aliens and a record-breaking-huge government conspiracy, you can see why most people would rather just avoid the topic.

This post is already too long, so I will stop here once I make one last point: (Cb) is a theory of remarkable government competence. Some governments, or a consortium of them, have managed to get thousands of people to either lie and say they saw stuff they didn’t, or paid for expensive enough tech to fool them. And yet this conspiracy has remained hidden for a great many decades, even from the top levels of their own governments.

In contrast, (Da) seems to require a scenario of remarkable incompetence, among the aliens themselves, among our governments, and even among the UFO activists. So which is more likely: surprisingly high government competence, or incompetence?


The Debunking of Debunking

In a new paper in Journal of Social Philosophy, Nicholas Smyth offers a “moral critique” of “psychological debunking”, by which he means “a speech‐act which expresses the proposition that a person’s beliefs, intentions, or utterances are caused by hidden and suspect psychological forces.” Here is his summary:

There are several reasons to worry about psychological debunking, which can easily counterbalance any positive reasons that may exist in its favor:

1. It is normally a form of humiliation, and we have a presumptive duty to avoid humiliating others.
2. It is all too easy to offer such stories without acquiring sufficient evidence for their truth,
3. We may aim at no worthy social or individual goals,
4. The speech‐act itself may be a highly inefficient means for achieving worthy goals, and
5. We may unwittingly produce bad consequences which strongly outweigh any good we do achieve, or which actually undermine our good aims entirely.

These problems … are mutually reinforcing. For example, debunking stories would not augment social tensions so rapidly if debunkers were more likely to provide real evidence for their causal hypotheses. Moreover, if we weren’t so caught up in social warfare, we’d be much less likely to ignore the need for evidence, or to ignore the need to make sure that the values which drive us are both worthy and achievable.

That is, people may actually have hidden motives, these might in fact explain their beliefs, and critics and audiences may have good reasons to consider that possibility. Even so, Smyth says that it is immoral to humiliate people without sufficient reason, and we in fact do tend to humiliate people for insufficient reasons when we explain their beliefs via hidden motives. Furthermore, we tend to lower our usual epistemic standards to do so.

This sure sounds to me like Smyth is offering a psychological debunking of psychological debunking! That is, his main argument against such debunking is via his explaining this common pattern, that we explain others’ beliefs in terms of hidden motives, by pointing to the hidden motives that people might have to offer such explanations.

Now Smyth explicitly says that he doesn’t mind general psychological debunking, only that offered against particular people:

I won’t criticize high‐level philosophical debunking arguments, because they are distinctly impersonal: they do not attribute bad or distasteful motives to particular persons, and they tend to be directed at philosophical positions. By contrast, the sort of psychological debunking I take issue with here is targeted at a particular person or persons.

So presumably Smyth doesn’t have an issue with our book The Elephant in the Brain: Hidden Motives in Everyday Life, as it also stays at the general level and doesn’t criticize particular people. And so he also thinks his debunking is okay, because it is general.

However, I don’t see how staying with generalities saves Smyth from his own arguments. Even if general psychological debunking humiliates large groups all at once, instead of individuals one at a time, it is still humiliation. Which, by his own arguments, he should still avoid: his reasons may be inadequate, he may lower his epistemic standards, better ways may exist to achieve his goals, and he may unwittingly produce bad consequences. Formally his arguments work just as well against general as against specific debunking.

I’d say that if you have a general policy of not appearing to pick fights, then you should try to avoid arguing by blaming your opponents’ motives if you can find other arguments sufficient to make your case. But that’s just an application of the policy of not visibly picking fights when you can avoid them. And many people clearly seem to be quite willing and eager to pick fights, and so don’t accept this general policy of avoiding fights.

If your policy were just to speak the most relevant truth at each point, to most inform rational audience members at that moment on a particular topic, then you probably should humiliate many people, because in fact hidden motives are quite common and relevant to many debates. But this speak-the-most-truth policy tends to lose you friends and associates over the longer run, which is why it is usually not such a great strategy.


Theories Of Unnatural Selection

In my career I’ve worked in an unusually large number of academic disciplines: physics, computer science, social science, psychology, engineering, and philosophy. But on a map of academic disciplines, where fields that cite each other often are put closer together, all my fields are clumped together on one side. The fields furthest away from my clump, on the opposite side, are biology, biochemistry, and medicine.

It seems to me that my fields tend to emphasize relatively general theory and abstraction, while the opposite fields tend to have far fewer useful abstractions, and instead have a lot more detail to master. People tend to get sorted into fields based in part on their ability and taste for abstractions, and the people I’ve met who do biochemistry and medicine tend to have amazing abilities to recall relevant details, but they also tend to be pretty bad at abstractions. For example they often struggle with simple cost-benefit analysis and statistical inference.

All of which is to say that biologists tend to be bad at abstraction. This tends to make them bad at thinking about the long-term future, where abstraction is crucial. For example, I recently reviewed The Zoologist’s Guide to the Galaxy, wherein a zoologist says that aliens we meet would be much like us, even though they’d be many millions of years more advanced than us, apparently assuming that our descendants will not noticeably change in the next few million years.

And in a new book The Next 500 Years, a geneticist recommends that we take the next few centuries to genetically engineer humans to live on other planets, apparently unaware that our descendants will most likely be artificial (like ems), who won’t need planets in particular except as a source of raw materials. These two books have been reviewed in prestigious venues, by prestigious biology reviewers who don’t mention these to-me obvious criticisms. Suggesting that our biological elites are all pretty bad at abstraction.

This is a problem because it seems to me we need biologists good at abstraction to help us think about the future. Let me explain.

Computers will be a big deal in the future, even more so than today. Computers will be embedded in and control most all of our systems. So to think well about the future, we need to think well about very large and advanced computer systems. And since computers allow our many systems to be better integrated, overall all our systems will be larger, more complex, more connected, and more smartly controlled. So to think about the future we need to think well about very large, smart, and complex integrated systems.

Economics will also remain very important in the future. These many systems will be mostly designed, built, and maintained by for-profit firms who sell access to them. These firms will compete to attract customers, investors, workers, managers, suppliers, and complementary products. They will also be taxed and regulated by complex governments. And the future economy will be much larger, making room for more and larger such firms, managing those larger more complex products. So to think well about the future we need to think well about a much larger more complex world of taxed and regulated firms competing to make and sell stuff.

We today have a huge legacy inheritance of designs and systems embedded in biology, systems that perform many essential functions, including supporting our bodies and minds. In the coming centuries, we will either transfer our minds to other more artificial substrates, or replace them entirely with new designs. At which point they won’t need biological bodies; artificial bodies will do fine. We will then either find ways to extract key biological machines and processes from existing biological systems, to use them flexibly as component processes where we wish, or we will replace those machines and processes with flexible artificial versions.

At that point, natural selection of the sort the Earth has seen for the last few billion years will basically come to an end. The universe that we reach by then will still be filled with a vast diversity of active and calculating objects competing to survive. But these objects will not be designed by inherited, randomly mutating DNA, and will not be self-sufficient in terms of manufacturing and energy acquisition. They will instead be highly cooperative and interdependent objects, made by competing firms who draw design elements from a wide range of sources, most of them compensated for their contributions.

But even though biology as we know it will then be over, biological theory, properly generalized, should remain quite relevant. Because there will still be vast and rapid competition and selection, and so we will still need ways to think about how that will play out. Thus we need theorists to draw from our best understandings of systems, computers, economics, and biology, to create better ways to think about how all this combines to create a brave new world of unnatural selection.

And while I’ve seen at least glimmerings of such advances from people who think about computers, and from people who think about economics, I’ve yet to see much of anything from people who think about biology. So that seems to me our biggest missing hole here. And thus my plea in this post: please biological theorists, help us think about this. And please people who are thinking about which kind of theory to study, consider learning some biology theory, to help us fill this gap.


Subtext Shows Status

When we talk, we say things that are explicit and direct, on the surface of the text, and we also say things that are hidden and indirect, expressed in more deniable ways via subtext. Imagine that there was a “flattext” type of talk (or writing) in which subtext was much harder to reliably express and read. Furthermore, imagine that it was easy to tell that a speaker (or writer) was using this type of talk. So that by talking in this way you were verifiably not saying as much subtext.

Yes, it seems very hard to go all the way to infinitely hard here, but flattext could have value without going to that extreme. Some have claimed that the artificial language Lojban is in some ways such a talk type.

So who would use flattext? A Twitter poll finds that respondents expect that on average they’d use flattext about half of the time, so they must expect many reasons to want to deny that they use subtext. Another such poll finds that they on average expect official talk to be required to be flattext. Except they are sharply divided between a ~40% that thinks it would be required >80% of the time, and another ~40% who thinks it would be required <20% of the time.

The obvious big application of flattext is people and organizations who are often accused of saying bad things via subtext. Such as people accused of illicit flirting, or of sexual harassment. Or people accused of “dog-whistling” disliked allegiances. Or firms accused of over-promising or under-warning to customers, employees, or investors.

As people are quite willing to accuse for-profit firms of bad subtext, I expect they’d be the most eager users. As would people like myself who are surrounded by hostile observers eager to identify particular texts as showing evil subtext. You might think that judges and officials speaking to the public in their official voice would prefer flattext, as it better matches their usual tone and style which implicitly claims that they are just speaking clearly and simply. But that might be a hypocrisy, and they may reject flattext so that they can continue to say subtext.

Personal servants and slaves of centuries ago were required to speak in a very limited and stylized manner which greatly limited subtext. They could suffer big bad consequences for ever being accused of a tone of voice or manner that signaled anything less than full respect and deference to their masters.

Putting this all together, it seems that the ability to regularly and openly use subtext is a sign of status and privilege. We “put down” for-profit firms in our society by discouraging their use of subtext, and mobs do similarly when they hound enemies using hair-trigger standards ready to accuse them of bad subtext. And once low status people and organizations are cowed into avoiding subtext, then others can complain that they lack humanity, as they don’t show a sense of humor, which is more clear evidence that they are evil.

So I predict that if flattext were actually available, it would be mainly used by low status people and organizations to protect themselves from accusations of illicit subtext. As our enforcement of anti-subtext rules is very selective. Very risk averse government agencies might use it, but not high status politicians.


Motive/Emotive Blindspot

In this short post what I try to say is unusually imprecise, relative to what I usually try to say. Yet it seems important enough to try to say it anyway.

I’ve noticed a big hole in my understanding, which I think is shared by most economists, and perhaps also most social scientists: details about motives and emotions are especially hard to predict. Consider:

  1. Most of us find it hard to predict how we, or our associates, will feel in particular situations.
  2. We care greatly about how we & associates feel, yet we usually only influence feelings in rather indirect ways.
  3. Even when we have an inkling about how we feel now, we are usually pretty reluctant to tell details about that.
  4. Organizations find it hard to motivate, and to predict the motives of, employees and associates.
  5. Marketers find it hard to motivate, and to predict the motives of, customers.
  6. Movie makers find it very hard to predict which movies people will like.
  7. It is hard for authors, even good ones, to imagine how characters would feel in various situations.
  8. It is hard for even good actors to believably portray what characters feel in situations.
  9. We poorly understand the declining motive power of religion & ideology, or which ones motivate what.
  10. We poorly understand the declining emotional power of rituals, or which ones induce which emotions.

We seem to be built to find it hard to see and predict both our and others’ motives and emotions. Oh we can, from a distance, see some average tendencies well enough to predict a great many overall social tendencies. But when we get to details, up close, our vision fails us.

In many common situations, the motive/emotive variance that we find it hard to predict isn’t much correlated across people or time, and so doesn’t much get in the way of aggregate predictions. But in other common situations, that puzzling variance can be quite correlated.


Shoulda-Listened Futures

Over the decades I have written many times on how prediction markets might help the intellectual world. But usually my pitch has been to those who want to get better actionable info out of intellectuals, or to help the world make better intellectual progress in the long run. Problem is, such customers seem pretty scarce. So in this post I want to outline an idea that is a bit closer to a business proposal, in that I can better identify concrete customers who might pay for it.

For every successful intellectual there are (at least) hundreds of failures. People who started out along a path, but then were not sufficiently rewarded or encouraged, and so then either quit or persisted in relative obscurity. And a great many of these (maybe even a majority) think that the world done them wrong, that their intellectual contributions were underrated. And no doubt many of them are right. Such malcontents are my intended customers.

These “world shoulda listened to me” customers might pay to have some of their works evaluated by posterity. For example, for every $1 saved now that gains a 3% real rate of return, $19 in real assets are available in a century to pay historians for evaluations. At a 6% rate of return (or 3% for 2 centuries), that’s $339. Furthermore, if future historians needed only to randomly evaluate 1% of the works assigned them, then if malcontents paid $10 per work to be maybe evaluated, historians could spend $20K (or $339K) per work they evaluate. Considering all the added knowledge and tools to which future historians may have access, that seems enough to do a substantial evaluation, especially if they evaluate several related works at the same time.
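The arithmetic above can be sketched in a few lines. This is my own illustrative helper, not part of the proposal; the function name and parameters are mine, but the numbers reproduce the post's examples ($10 fee, 1% evaluation chance, 3% or 6% real returns over a century).

```python
def budget_per_evaluation(fee, real_rate, years, eval_fraction):
    """Future value of a fee paid now, concentrated on the randomly
    sampled fraction of works that actually get evaluated."""
    future_value = fee * (1 + real_rate) ** years
    return future_value / eval_fraction

# $10 per work, 3% real return, 100 years, 1% of works evaluated:
# about $19K available per evaluated work.
print(budget_per_evaluation(10, 0.03, 100, 0.01))

# Same fee at a 6% real return: about $339K per evaluated work.
print(budget_per_evaluation(10, 0.06, 100, 0.01))
```

The leverage comes from two multipliers stacking: compound returns over the century, and the 100x concentration from evaluating only a 1% sample.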

Given a substantial chance (1% will do) that a work might be evaluated by historians in a century or two, we could then create (conditional) prediction markets now estimating those future evaluations. So a customer might pay their $20 now, and get an immediate prediction market estimate of that future evaluation for their work. That $20 might pay $10 for the (chance of a) future evaluation and another $10 to establish and subsidize a prediction market over the coming centuries until resolution.

Finally, if customers thought the market estimates regarding their works looked too low, then they could of course try to bet to raise those estimates. Skeptics would no doubt lie in wait to bet against them, and on average this tendency of authors to bet to support their works would probably subsidize these markets, and so lower the fees that the system needs to charge.

Of course even with big budgets for evaluations, if we want future historians to make reliable enough formal estimates that we can bet on in advance, then we will need to give them a well-defined-enough task to accomplish. And we need to define this task in a way that discourages future historians from expressing their gratitude to all these people who funded their work by giving them all an A+.

I suggest we have future historians estimate each work’s ideal attention: how much attention each particular work should have been given during some time period. So we should pick some measure of attention, a measure that we can calculate for works when they are submitted, and track over time. This measure should weigh whether a dissertation was approved, whether and where a paper was published, how many cites it got, etc. If we add up all the initial attention for submitted works, then we can assign historians the task of (counterfactually) reallocating this total attention across all the submitted works. So to give more attention to some, they’d have to take away attention from others.
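The zero-sum constraint above can be made concrete with a toy sketch. Everything here is hypothetical illustration (the names, the toy numbers, and the idea of rescaling raw historian scores are all mine): however historians score the works, their allocations are rescaled so the total equals the total attention actually given, forcing any boost to one work to come out of the others.

```python
def reallocate_attention(actual_attention, historian_scores):
    """Rescale historians' raw scores so the reallocated attention
    sums to the same total as the attention actually received."""
    total = sum(actual_attention.values())
    score_sum = sum(historian_scores.values())
    return {work: total * score / score_sum
            for work, score in historian_scores.items()}

# Toy example: paper_c was neglected but historians score it highly.
actual = {"paper_a": 90.0, "paper_b": 9.0, "paper_c": 1.0}
scores = {"paper_a": 2.0, "paper_b": 1.0, "paper_c": 7.0}
ideal = reallocate_attention(actual, scores)

# Totals match, so paper_c's gain is paid for by paper_a and paper_b.
print(ideal)
```

This is what rules out the "everyone gets an A+" failure mode, and it also bounds the value of the bet assets tied to each work's estimate.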

Okay, so now they can’t give every work an A+. (And we ensure that bet assets have bounded values.) But our job isn’t done. We also need to give them a principle to follow when allocating attention among all these prior works. What objective would they be trying to accomplish via this reallocation of attention?

I suggest that the objective just be intellectual progress, toward the world having access to more accurate and useful beliefs. A set of works should have gotten more attention if in that case the world would have been more likely to have more quickly come to appreciate valuable truths. And this task is probably easier if we ask future historians to use their future values in this task, instead of asking them to try to judge according to our values today.

These evaluation tasks probably get easier if historians randomly pick related sets of works to evaluate together, instead of independently picking each work to evaluate. And this system can probably offer scaled fees, wherein the chance that your work gets evaluated rises linearly with the price you paid for that chance. There are probably a lot more details to work out, but I expect I’ve already said enough for most people to decide roughly how much they like this idea.

Once there were many works in this system, and many prediction markets estimating their shoulda-been attention, then we could look to see if market speculators see any overall biases in today’s intellectual worlds. That is, topics, methods, disciplines, genders, etc. to which speculators estimate that the world today is giving too little attention. That could be pretty dramatic and damning evidence of bias, by someone, evidence to which we’d all be wise to attend.

One obvious test of this approach would be to assign historians today the task of reallocating attention among papers published a century or two ago. Perhaps assign multiple independent groups, and see how correlated are their evaluations, and how that correlation varies across topic areas. Perhaps repeating in a decade or two, to see how much evaluations drift over time.

Showing these correlations to potential customers might convince them that there’s a good enough chance that such a system will later correctly vindicate their neglected contributions. And these tests may show good scopes to use, for related works and time periods to evaluate together, and how narrow or broad should be the expertise of the evaluators.

This whole shoulda-listened-futures approach could of course also be applied to many other kinds of works, not just intellectual works. You’d just have to establish your standards for how future historians are to allocate shoulda attention, and trust them to actually follow those standards. Doing tests on works from centuries ago here could also help to show if this is a viable approach for these kinds of works.

Added 7am 28Apr: On average more assets will be available to pay for future evaluations if the fees paid are invested in risky assets. So instead of promising a particular percentage chance of evaluation, it may make more sense to specify how fees will be invested, set the (real) amount to be spent on each evaluation, and then promise that the chance of evaluation for each work will be set by the investment return relative to the initial fee paid. Yes, that induces more evaluations in states of the world where investments do better, but customers are already accepting a big chance that their work will never be directly evaluated.
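The mechanism in this addendum can be sketched in a few lines. This is only an illustration with hypothetical numbers (the fee, growth factor, and evaluation cost are all made up, not from the post): the chance a work gets evaluated equals its invested fee’s final value divided by the fixed real cost of one evaluation, capped at certainty.

```python
# Sketch of the proposed mechanism, with hypothetical numbers:
# fees are invested in risky assets, each evaluation costs a fixed
# real amount, and a work's evaluation chance is set by how much
# its invested fee has grown.

def evaluation_chance(fee_paid, growth_factor, evaluation_cost):
    """Chance a work is evaluated: the invested fee's final value
    relative to the fixed real cost per evaluation, capped at 1."""
    return min(1.0, fee_paid * growth_factor / evaluation_cost)

# A $100 fee, grown 8x in real terms over decades, against a
# $10,000 real cost per historian evaluation:
p = evaluation_chance(fee_paid=100, growth_factor=8.0, evaluation_cost=10_000)
# Better investment returns raise p, so more evaluations happen in
# states of the world where the invested fees did better.
```

Note how this sketch makes the addendum's tradeoff concrete: the number of evaluations is no longer fixed in advance, but floats with realized investment returns.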


Schulze-Makuch & Bains on The Great Filter

In their 2016 journal article “The Cosmic Zoo: The (Near) Inevitability of the Evolution of Complex, Macroscopic Life”, Dirk Schulze-Makuch and William Bains write:

An important question is … whether there exists what Robin Hanson calls “The Great Filter” somewhere between the formation of planets and the rise of technological civilizations. …

Our argument … is that the evolution of complex life [from simple life] is likely … [because] functions found in complex organisms have evolved multiple times, an argument we will elaborate in the bulk of this paper … [and] life started as a simple organism, close to [a] “wall” of minimum complexity … With time, the most complex life is therefore likely to become more complex. … If the Great Filter is at the origin of life, we live in a relatively empty universe, but if the origin of life is common, we live in a Cosmic Zoo where such complex life is abundant.

Here they seem to say that the great filter must lie at the origin of life, and seem unclear on whether it could also lie in our future.

In the introduction to their longer 2017 book, The Cosmic Zoo: Complex Life on Many Worlds, Schulze-Makuch and Bains write:

We see no examples of intelligent, radio-transmitting, spaceship-making life in the sky. So there must be what Robin Hanson calls ‘The Great Filter’ between the existence of planets and the occurrence of a technological civilisation. That filter could, in principle, be any of the many steps that have led to modern humanity over roughly the last 4 billion years. So which of those major steps or transitions are highly likely and which are unlikely? …

if the origin of life is common and habitable rocky planets are abundant then life is common, and we live in a Cosmic Zoo. … Our hypothesis is that all major transitions or key innovations of life toward higher complexity will be achieved by a sufficient large biosphere in a semi-stable habitat given enough time. There are only two transitions of which we have little insight and much speculation—the origin of life itself, and the origin (or survival) of technological intelligence. Either one of these could explain the Fermi Paradox – why we have not discovered (yet) any sign of technologically advanced life in the Universe.

So now they add that (part of) the filter could lie at the origin of human-level language & tech. In the conclusion of their book they say:

There is strong evidence that most of the key innovations that we discussed in… this book follow the Many Paths model. … There are, however, two prominent exceptions to our assessment. The first exception is the origin of life itself. … The second exception … is the rise of technologically advanced life itself. …The third and least attractive option is that the Great Filter still lies ahead of us. Maybe technologically advanced species arise often, but are then almost immediately snuffed out.

So now they make clear that (part of) the filter could also lie in humanity’s future. (Though they don’t make it clear to me if they accept that we know the great filter is huge and must lie somewhere; the only question is where it lies.)

In the conclusion of their paper, Schulze-Makuch and Bains say:

We find that, with the exception of the origin of life and the origin of technological intelligence, we can favour the Critical Path [= fixed time delay] model or the Many Paths [= independent origins] model in most cases. The origin of oxygenesis, may be a Many Paths process, and we favour that interpretation, but may also be Random Walk [= long expected time] events.

So now they seem to also add the ability to use oxygen as a candidate filter step. And earlier in the paper they also say:

We postulate that the evolution of a genome in which the default expression status was “off” was the key, and unique, transition that allowed eukaryotes to evolve the complex systems that they show today, not the evolution of any of those control systems per se. Whether the evolution of a “default off” logic was a uniquely unlikely, Random Walk event or a probable, Many Paths, event is unclear at this point.

(They also discuss this in their book.) Which adds one more candidate: the origin of the eukaryote “default off” gene logic.

In their detailed analyses, Schulze-Makuch and Bains look at two key indicators: whether a step was plausibly essential for the eventual rise of advanced tech, and whether we can find multiple independent origins of that step in Earth’s fossil record. These seem to me to both be excellent criteria, and Schulze-Makuch and Bains seem to expertly apply them in their detailed discussion. They are a great read and I recommend them.

My complaint is with Schulze-Makuch and Bains’ titles, abstracts, and other summaries, which seem to arbitrarily drop many viable options. By their analysis criteria, Schulze-Makuch and Bains find five plausible candidates for great filter steps along our timeline: (1) life origin ~3.7Gya, (2) oxygen processing ~3.1Gya, (3) eukaryote default-off genetic control ~1.8Gya, (4) human-level language/tech ~0.01Gya, and (5) future obstacles to our becoming grabby. With five plausible hard steps, it seems unreasonable to claim that “if the origin of life is common, we live in a Cosmic Zoo where such complex life is abundant”.

Schulze-Makuch and Bains seem to justify dropping some of these options because they don’t “favour” them. But I can find no explicit arguments or analysis in their article or book for why these are less viable candidates. Yes, a step being essential and only having been seen once in our history only suggests, but hardly assures, that this is a hard step. Maybe other independent origins happened, but have not yet been seen in our fossil record. Or maybe this did only happen once, but that was just random luck and it could easily have happened a bit later. But these caveats are just as true of all of Schulze-Makuch and Bains’ candidate steps.

I thus conclude that we know of four plausible and concrete candidates for great filter steps before our current state. Now I’m not entirely comfortable with postulating a step very recently, given the consistent trend in increasing brain sizes over the last half billion years. But Schulze-Makuch and Bains do offer plausible arguments for why this might in fact have been an unlikely step. So I accept that they have found four plausible hard great filter steps in our past.

The total number of hard steps in the great filter sets the power in our power law model for the origin of grabby aliens. This number includes not only the hard filter steps that we’ve found in the fossil record of Earth until now, but also any future steps that we may yet encounter, any steps on Earth that we haven’t yet noticed in our fossil record, and any steps that may have occurred on a prior “Eden” which seeded Earth via panspermia. Six steps isn’t a crazy middle estimate, given all these considerations.
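The power-law claim above can be checked with a small simulation (illustrative numbers only, not estimates from the grabby aliens model): if each of n hard steps takes an exponentially distributed time whose expected duration dwarfs the available window t, then the chance that all n steps complete within t scales roughly as t to the power n, so doubling the window multiplies success odds by about 2^n.

```python
import random

def p_all_steps_done(n_steps, window, mean_step_time, trials=200_000):
    """Estimate the chance that n sequential hard steps, each with an
    exponentially distributed duration, all finish within the window."""
    hits = 0
    for _ in range(trials):
        total = sum(random.expovariate(1.0 / mean_step_time)
                    for _ in range(n_steps))
        if total <= window:
            hits += 1
    return hits / trials

# With each step's expected time (100) far exceeding the window,
# doubling the window from 10 to 20 should multiply the success
# chance by roughly 2**2 = 4 for two hard steps.
random.seed(0)
p1 = p_all_steps_done(n_steps=2, window=10, mean_step_time=100)
p2 = p_all_steps_done(n_steps=2, window=20, mean_step_time=100)
# p2 / p1 lands near 4, up to a modest finite-window correction.
```

This t^n scaling is why the count of hard steps acts as the power in the model, and why adding candidate steps (on Earth, in our future, or on a prior Eden) raises that power.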


Explaining Regulation

During this pandemic, elites have greatly enjoyed getting to feel important by weighing in on big pandemic policy questions, such as masks, lockdowns, travel restrictions, vaccine tests, vaccine distribution, etc. Each elite can feel self-righteous in their concern for others, and morally outraged when the world doesn’t follow their recommendations. Don’t people know that this is too important for XYZ to get in the way of the world believing that they are right? Unconsciously, they seek to signal that they are in fact elites, by the facts that they agree with elites, that other elites listen to them, and that the world does what elites say.

Imagine that these key pandemic policy choices had been made instead by private actors. Such as vaccine makers testing, pricing, and distributing as they wished, airlines limiting travel as they wished, and legal liability via tracking discouraging overly risky behavior. Government could have influenced these choices indirectly via subsidies and taxes, but the key specific choices would still have been made privately.

In this scenario, talking head elites would have been a lot more frustrated, as they’d have to direct their advice to these private actors, who are much less visibly eager than public officials to slavishly follow elite advice. So elites could less clearly show that they are elites by the fact that the world promptly and respectfully obeys their commands.

When these private actors made choices that later seemed like mistakes in retrospect, then elites who resented their neglect would make passionate calls to change legal standards in order to rain down retribution and punishment upon these private actors, to “hold them to account.” Even though they were not at fault according to prior legal standards. However, when private decisions seemed right in retrospect, there’d be few passionate calls to rain down extra rewards on them. As we’ve seen recently in the “opioid crisis”, or earlier with subprime loans, cigarettes, and nuclear power.

In contrast, when government authorities do exactly what elites tell them, and yet in retrospect those decisions look mistaken, there are few calls to hold to account these authorities, or the elites and media who goaded them on. We then hear all about how uncertainty is a real thing, and even good decisions can look bad in retrospect. Given these sort of “heads I win, tails we flip again” standards, it is no surprise that private actors would often rather that key decisions be made by government officials. Even if those decisions will be made worse, private actors can avoid frequent retribution for in-hindsight mistakes.

In principle, elites could argue at higher levels of abstraction, not about specific mask or travel rules, but about how best to structure the general institutions and systems of information and incentives in which various choices are made. Then elites could respond to a crisis by reevaluating and refining these more abstract systems. But, alas, most elites don’t know enough to argue at this level. Some people with doctorates in economics or computer science are up to this task, but in our world we use a great many weak indicators to decide who counts as “elites”, and the vast majority of those who qualify simply don’t know how to think about abstract institution design questions. But masks, etc. they think they understand.

Yes, there are many other topics which require great expertise, such as for example designing nuclear reactors. In many such cases, elites realize that they don’t know enough to offer judgments on details, and so don’t express opinions at detail levels. When something goes wrong, they instead may just say “more must be done”, even though they almost never say “less must be done” after a long period without things going wrong. Or they may respond to a problem by saying “government-authorized authorities must oversee more of these details”, though again they hardly ever suggest overseeing fewer details in other situations.

So the problem with regulation is more fundamentally that elites focus on reacting to concrete failures, instead of looking for missed opportunities, and that “do more” and “oversee more” are the only institutional responses they understand to the concrete problems they see as needing expertise. Nor do they understand much about how to design better institutions, other than to respond in these ways to more particular observed problems.

And that’s my simple theory of most regulation. Elites love to pontificate on the problems of the day, and want whatever consensus they produce to be quickly enacted by authorities. As government officials are far more prompt and subservient in such responses, elites prefer government authorities to have strong regulatory powers. Elites enforce this preference via asymmetric pressures on private actors, punishing failure but not rewarding success, yet doing neither for public actors and their elite supporters.

Elon Musk is in for a world of pain if any of his many quite risky ventures ever stumbles, as elites are mad at him for ignoring their advice that none of his ventures ever had a chance. Zuckerberg is already being credibly threatened with punishment for supposed missteps by Facebook, even though it isn’t at all clear what they did wrong, and with no gratitude shown for all the social value they’ve contributed thus far.

All this gives me mixed feelings when I see smart people offer good advice in elite discussions on concrete topics like masks, vaccines, etc. Yes, given that this is how decisions are going to be made, it is better to make good than bad choices. But I wish such advisors more often and visibly said that this isn’t how such decisions should be made. We should instead design good general institutions we can trust to deal with each crisis without needing constant elite micromanagement.


Try-Two Contest Board

Imagine that a restaurant wants to ask its associates (cooks, servers, etc.) what are the best two menu items to put on its menu as specials on a particular night. They have a large set of possible menu items to consider, the measure of success is menu item sales revenue, and they want a mechanism that is both fun and easy. (Which rules out conditional prediction markets, at least for now.)

Here’s an idea. Start with a contest board like this, on a wall near associates:

Continue reading "Try-Two Contest Board" »
