Monthly Archives: April 2022

The Meaning of Life

Humans act all the time, which implies that they have preferences, i.e., persistent internal structures that determine which choices they make in which situations. But humans aren’t usually very good at explaining their preferences; they find it hard to give a consistent abstract account that explains their choices. They can act, but can’t say what they want.

One of the things people sometimes say is that they make their choices to gain “meaning”. But they say many different conflicting things about what things actually give “meaning”, different not only between people but even within the same person. That is, people seem quite confused about the “meaning of life”.

If humans are at root pretty similar, then having any one person learn the meaning of their life would seem to be quite informative to everyone else about the meaning of their lives. And a substantial fraction of the many billions of humans who have ever lived have in fact tried to learn about the meaning of their lives. Furthermore, some of these people have claimed to have succeeded in discovering this meaning.

Yet no one seems to have persuaded a substantial fraction of humanity of their view on this. Presented solutions to this key question seem either overly vague or insufficiently supported by evidence from human behavior or words. What can we conclude from this key fact? Let us consider some possible explanations.

One possibility is that there is just no such thing. Human actions are induced by a complex mess of structures that is not reasonably summarized by any abstract coherent shared concept of “meaning”. When people have a feeling of having found “meaning”, that isn’t the result of their matching their lives to such a coherent pre-existing concept, but instead due to yet another complex mess of social and mental processes. We feel “meaning” when that seems to be useful to our minds, but there is no there there. We haven’t found it because it doesn’t exist.

A second possibility is that people have in fact discovered simple abstract expressible truths about the meaning of our lives. But these truths are mostly ugly, and thus not ones they are eager to own and tell to others. And when they do tell others, their audiences mostly do not want to hear, and instead prefer to embrace the mistaken claims of those who do not actually know, but instead wishfully offer more aspirational accounts.

And a third possibility is, what? My mind goes blank here. How could there be a simple abstract truth about what gives us meaning, one that explains our preferences, and yet either no one among the billions who have looked has ever found it, or those who do find it somehow can’t communicate it to others, even though to others this discovery would be quite unobjectionable and pleasing?


Dealism

We economists, and also other social scientists and policy specialists, are often criticized as follows:

You recommend some policies over others, and thus make ethical choices. Yet your analyses are ethically naive and impoverished, including only a tiny fraction of the relevant considerations known to professional ethicists. Stop it, learn more on ethics, or admit you make only preliminary rough guesses.

My response is “dealism”:

The world is full of competent and useful advisors (doctors, lawyers, therapists, gardeners, realtors, hairstylists, etc.) similarly ignorant on ethics. Yes, much advice says “given options O, choose X to achieve purpose P”, but when they don’t specify purpose P the usual default is not P = “act the most ethically”, but instead P = “get what you want”.

Economists’ policy recommendations are usually designed to help relatively large groups make better social “deals”, via identifying their “Pareto frontier” (within option subspaces). This frontier is the set of options where some can get more of what they want only via others getting less. We infer what people want via the “revealed preferences” of models that fit their prior choices.
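As a toy illustration of that frontier idea (the option names and utility numbers below are made up, and the “utilities” merely stand in for revealed-preference models fit to prior choices), here is a minimal sketch of flagging which options sit on the Pareto frontier:

```python
# Minimal sketch: flag the Pareto frontier among a few policy options, given
# hypothetical utilities inferred from each group's revealed preferences.
# All option names and numbers are made up for illustration.

options = {
    "status_quo": {"A": 3, "B": 3},
    "tariff_cut": {"A": 5, "B": 2},
    "tax_swap":   {"A": 4, "B": 4},
    "subsidy":    {"A": 2, "B": 5},
}

def dominated(x, y):
    """True if option y gives every group at least as much as x, and some group strictly more."""
    return all(y[g] >= x[g] for g in x) and any(y[g] > x[g] for g in x)

frontier = [name for name, u in options.items()
            if not any(dominated(u, v) for other, v in options.items() if other != name)]

print(frontier)  # ['tariff_cut', 'tax_swap', 'subsidy'] -- status_quo is dominated by tax_swap
```

On that frontier, moving between options helps one group only by hurting another; recommending a move from a dominated option onto the frontier is the kind of “deal” advice described above.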

As people can be expected to seek out advice that they expect to help them get what they want, by branding ourselves in this way we economists can induce more people to seek our advice. We can reasonably want to fill this role. Doing so does not commit us to taking on all possible clients, nor to making any ethical claims whatsoever.

Yes, if people are hypocritical, and pretend to want morality more than they do, they may prefer advisors who similarly pretend. In which case we economists can also pretend that our clients want that, to help preserve their pretensions. But we wouldn’t need to know more about ethics than our clients do, and beneath that veneer of morality, clients likely prefer our advice to be targeted mostly at getting them what they want.

Yes, there are many ways one might argue that this economists’ practice is ethically good. But I make no such arguments here.

Yes, there are other possible ways to help people. Helping them identify deals is not the only way, and often not the best way, to help or advise people.

Most people want in part to be moral, and they think that what they and others want is relevant to what acts are moral. It is just that these two concepts are not identical. If in fact what people want is only and wholly to be ethical, then the difference between being ethical and getting what you want collapses. But even so, this econ approach remains useful, and in this case our advice now also becomes ethical.

The same arguments apply if we replace “be ethical” with “do what you have good reasons to do”. If there is a difference, then others should seek our advice more when it targets what they want, rather than what they have reasons to do.

What if the process of hearing our advice, or following it, can change what people want? (The advice might include a sermon, and doing something can change how you feel about it.) In this case, people will most seek out our advice when those changes in wants match their meta-wants regarding such changes. And those meta-wants are revealed in part via how they choose advisors.

For example, when people choose advisors retrospectively, based on who seems to have been pleased with the advice that they were given, that reveals a preference for changes in wants that make them pleased after the fact. In that case, you’d want to give the advice that resulted in a combination of outcomes and want changes that made them pleased later. In this case they wouldn’t mind changes to their wants, as long as those resulted in their being more pleased.

In contrast, when people choose advisors prospectively, based on how pleased they are now with the outcomes that they expect to result from your advice, then you would only want to offer advice which clients expect to change their wants if such clients expect to be pleased by such changes. So you’d want to offer advice that seemed to promote the want changes that they aspire to, but prevent the want changes that they fear or despise.

And that’s it. Many presume that policy discussions are about morality. But as a policy advisor, you can reasonably take the stance that your advice is not about morality, and that economic analysis is well-suited to the advice role that you have chosen.


Hidden Motives In Law

In our book The Elephant in the Brain: Hidden Motives in Everyday Life, Kevin Simler and I first review the reasons to expect humans to often have hidden motives, and then we describe our main hidden motives in each of ten areas of life. In each area, we start with the usual claimed motive, identify puzzles that don’t fit well with that story, and then describe another plausible motive that fits better.

We hoped to inspire others to apply our method to more areas of life, but we have so far largely failed there. So it’s past time for me to take up that task. And as law & economics is the class I teach most often, that’s a natural first place to start. So what are our motives regarding our official systems for dispute resolution?

Saying the word “justice” doesn’t help much; what does that mean? But the field of law and economics has a standard answer that looks reasonable: economic efficiency. Which in law translates to encouraging cost-benefit-optimal levels of commitment, reliance, care, and activity. And the substantial success of law and economics scholarship suggests that this is in fact an important motive in law. Furthermore, as most everyone can get behind it, this is plausibly our most overt motive regarding law. But we also see many puzzles in law not well explained by this approach. Which suggests to me three other motives.

Back in the forager era, before formal law, disputes were resolved by mobs. That is, the local band talked informally about accusations of norm violations, came to a consensus about what to do, and then implemented that themselves. As this mob justice system has many known failure modes, we probably added law as a partial replacement in order to cut such failures. Thus a plausible secondary motive in law is to try to minimize the common failings of mob justice, and to insulate the legal system from mob influence.

The main failure of mob justice is plausibly a rush to judgment; each person in a gossip network has local incentives to accept the stance of whoever first reports an accusation to them. And the most interested parties are far more likely than average to be the source of the first report someone hears. In response, law seeks to make legal decision makers independent and disconnected from the disputants and their gossip network, and to make such decision makers listen to all the evidence before making their decision. The rule against hearsay evidence is also plausibly meant to limit the influence of gossip on trials.

Leaders of the legal system often express concerns about its perceived legitimacy, and this makes sense as a third motive of the legal system. And as the most common threat to such legitimacy is widespread criticism of particular legal decisions, many features of law can be understood as ways to avoid such criticism. For example, criticism is likely cut via having legal personnel, venues, and demeanors be maximally prestigious and deferential to legal authorities.

Also, the more complex are legal language and arguments, the harder it becomes for mobs to question them. The longer the delay before final legal decisions, the less passion will remain to challenge them. Finally, the more expensive is the legal process, the fewer rulings there will be to question. Our most official legal systems differ from all our other less official dispute resolutions systems in all of these ways. They are slower, more expensive, less understandable, and more prestigious.

The last hidden motive that I think I see is that each legal jurisdiction wants to look good to outsiders. So most every jurisdiction has laws against widely disapproved behaviors, such as adultery, prostitution, or drinking alcohol on the street, even though such laws are often quite weakly enforced. Most set high standards of proof and adopt the usual rules constraining what evidence can be presented at trial, even though there’s little evidence that these rules help on net.

Most jurisdictions pretend to enforce all laws equally on everyone, but actually give police differential priorities; some locations, suspects, and victims count a lot more than others. It would be quite feasible, and probably a lot more efficient, to use a bounty hunting system to enforce laws, and most locals are well aware of these varying priorities. But that would require admitting such differential priorities to outsiders, via explicit differences in the bounties paid. So most jurisdictions prefer government employees, who can be more hypocritical.

Similarly, our usual form of criminal punishment, nice jail, is less efficient than all the other forms, including mean jail, exile, corporal punishment, and fines. Holding constant how averse a convict is to suffering each punishment, nice jail costs the most. Alas, the world has fallen into an equilibrium where any jurisdiction that allows any punishment other than nice jail is declared to be cruel and unjust. Even giving the convict the choice between such punishments is called unjust. So the strong desire to avoid such accusations pushes most jurisdictions into using the least efficient form of punishment.

In sum, I see four big motives in law: encouraging commitment and care, avoiding failings of mob justice, preserving system legitimacy via avoiding clear decisions, and hindering distant observers from accusing a jurisdiction of injustice, even if most locals are not fooled.

One can of course postulate many more possible motives, including diverting revenue and status to legal authorities, preserving and increasing existing inequalities, giving civil authorities more arbitrary powers, and empowering busybodies to meddle in the lives of others. But it isn’t clear to me that these add much more explanatory power, given the above motives.


Will Design Escape Selection?

In the past, many people and orgs have had plans and designs, many of which made noticeable differences to the details of history. But regarding most of history, our best explanations of overall trends have been in terms of competition and selection, including between organisms, species, cultures, nations, empires, towns, firms, and political factions.

However, when it comes to the future, especially hopeful futures, people tend to think more in terms of design than selection. For example, H.G. Wells was willing to rely on selection to predict a future dystopia in The Time Machine, but his utopia in Things to Come was the result of conscious planning replacing prior destructive competition. Hopeful futurists have long painted pictures of shiny designed techs, planned cities, and wise cooperative institutions of charity and governance.

Today, competition and selection continue on in many forms, including political competition for the control of governance institutions. But instead of seeing governance, law, and regulation as driven largely by competition between units of governance (e.g., parties, cities, or nations), many now prefer to see them in design terms: good people coordinating to choose how we want to live together, and to limit competition in many ways. They see competition between units of governance as largely passé, and getting more so as we establish stronger global communities and governance.

My future analysis efforts have relied mostly on competition and selection. Such as in Age of Em, post-em AI, Burning the Cosmic Commons, and Grabby Aliens. And in my predictions of long views and abstract values. Their competitive elements, and what that competition produces, are often described by others as dystopian. And the most common long-term futurist vision I come across these days is of a “singleton” artificial general intelligence (A.G.I.) for whom competition and selection become irrelevant. In that vision (of which I am skeptical), there is only one A.G.I., which has no internal conflicts, grows in power and wisdom via internal reflection and redesign, and then becomes all powerful and immortal, changing the universe to match its value vision.

Many recent historical trends (e.g., slavery, democracy, religion, fertility, leisure, war, travel, art, promiscuity) can be explained in terms of rising wealth inducing a reversion to forager values and attitudes. And I see these design-oriented attitudes toward governance and the future as part of this pro-forager trend. Foragers didn’t overtly compete with each other, but instead made important decisions by consensus, and largely by appeal to community-wide altruistic goals. The farming world forced humans to more embrace competition, and become more like our pre-human ancestors, but we were never that comfortable with it.

The designs that foragers created, however, were too small to reveal the key obstacle to this vision of civilization-wide collective design to overrule competition: rot (see 1 2 3 4). Not only is it quite hard in practice to coordinate to overturn the natural outcomes of competition and selection, but the sorts of complex structures that we are tempted to use to achieve that purpose also consistently rot and decay with time. If humanity succeeds in creating world governance strong enough to manage competition, those governance structures are likely to prevent interstellar colonization, as that strongly threatens their ability to prevent competition. And such structures would slowly rot over time, eventually dragging civilization down with them.

If competition and selection manage to continue, our descendants may become grabby aliens, and join the other gods at the end of time. In that case one of the biggest unanswered questions is: what will be the key units of future selection? How will those units manage to coordinate, to the extent that they do, while still avoiding the rotting of their coordination mechanisms? And how can we now best promote the rise of the best versions of such competing units?


Intellectual Prestige Futures

As there’s been an uptick of interest in prediction markets lately, in the next few posts I will give updated versions of some of my favorite prediction market project proposals. I don’t own these ideas, and I’d be happy for anyone to pursue any of them, with or without my help. And as my first reason to consider prediction markets was to reform academia, let’s start with that.

Back in 2014, I restated my prior proposals that research patrons subsidize markets, either on relatively specific results likely to be clearly resolved, such as the mass of the electron neutrino, or on simple abstract statements to be judged by a distant future consensus, conditional on such a consensus existing. Combinatorial markets connecting abstract questions to more specific ones could transfer these subsidies to the latter topics.

However, I fear that this concept tries too hard to achieve what academics and their customers say they want, intellectual progress, relative to what they really want more, namely affiliation with credentialed impressiveness. This other priority better explains the usual behaviors of academics and their main customers, namely students, journalists, and patrons. (For example, it was a bad sign when few journals showed interest in using prediction market estimates of which of their submissions were likely to replicate.) So while I still think the above proposal could work, if patrons cared enough, let me now offer a design better oriented to what everyone cares more about.

I’d say what academics and their customers want more is a way to say which academics are “good”. Today, we mostly use recent indicators of endorsement by other academics, such as publications, institutional affiliations, research funding, speaking invitations, etc. But we claim, usually sincerely, to be seeking indicators of long term useful intellectual impact. That is, we want to associate with the intellectuals about whom we have high and trustworthy shared estimates of the difference that their work will make in the long run toward valuable intellectual progress.

A simple way to do this would be to create markets in assets on individuals, where each asset pays as a function of a retrospective evaluation of that individual, an evaluation made in the distant future via detailed historical analysis. By subsidizing market makers who trade in such assets, we could today have trustworthy estimates to use when deciding which individuals among us we should consider for institutional affiliations, funding, speaking invitations, etc. (It should also be easy to trade assets that merge many individuals with particular features, such as Ph.D.s from a particular school.)

Once we had a shared perception that these are in fact our best available estimates, academics would prefer them over less reliable estimates such as publications, funding, etc. As the value of an individual’s work is probably non-linear in their rank, it might make sense to have people trade assets which pay as a related non-linear function of their rank. This could properly favor someone with a low median rank but high variance in that rank over someone else with a higher median but lower variance.
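To see why a convex payoff can favor the riskier candidate, here is a minimal numeric sketch; the payoff function and the two rank distributions are purely illustrative assumptions:

```python
# Sketch: with a payoff convex in final rank, a risky candidate with a lower
# median rank can have a higher expected asset value than a safer candidate.
# The payoff function and rank distributions are illustrative assumptions.

def payoff(percentile):
    # Convex in rank: top-percentile work is assumed to be worth vastly more.
    return percentile ** 4

# Possible future percentile ranks (0 = worst, 1 = best), with probabilities.
safe  = [(0.70, 0.5), (0.80, 0.5)]   # higher median rank, low variance
risky = [(0.30, 0.5), (0.99, 0.5)]   # lower median rank, high variance

def expected_payoff(dist):
    return sum(prob * payoff(rank) for rank, prob in dist)

print(expected_payoff(safe))   # ~0.32
print(expected_payoff(risky))  # ~0.48 -- the risky candidate's asset is worth more
```

Under a linear payoff the safer candidate would win (expected rank 0.75 vs. 0.645), so the convexity is doing all the work in this example.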

Why wait to evaluate? Yes, distant future evaluators would know our world less well. But they would know much better which lines of thought ended up being fruitful in a long run, and they’d have more advanced tech to help them study intellectual connections and lineages. Furthermore, compound interest would give us access to a lot more of their time. For example, at the 7% post-inflation average return of the S&P500 1871-2021, one dollar becomes one million dollars in 204 years. (At least if the taxman stays aside.)
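To check the arithmetic on that 204-year figure, at a steady 7% return:

$$
1.07^{\,t} = 10^{6} \;\;\Rightarrow\;\; t = \frac{\ln 10^{6}}{\ln 1.07} \approx \frac{13.82}{0.0677} \approx 204 \text{ years}.
$$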

Furthermore, such distant evaluations might only be done on a random fraction, say one percent, of individuals, with market estimates being conditional on such a future evaluation being made. And as it is likely cheaper to evaluate people who worked on related topics, it would make sense to randomly pick large sets of related individuals to evaluate together.

Okay, but having ample resources to support evaluations by future historians isn’t enough; we also need to get clear on the evaluation criteria they are to apply. First, we might just ask them to sort a sample of intellectuals relative to each other, instead of trying to judge their overall quality on some absolute scale. Second, we might ask them to focus on an individual’s contributions to helping the world figure out what is true on important topics; being influential but pushing in the wrong directions might count against them. Third, to correct for problems caused by scholars who play organizational politics, I’d rather ask future historians to rate how influential an individual should have been, if others had been a bit more fair in choosing to whom to listen.

The proposal I’ve sketched so far is relatively simple, but I fear it looks too stark, forcing academics to admit more than they’d like that the main thing they care about is their relative ranking. Thus we might prefer to pay a mild complexity cost to focus instead on having future historians rate particular works by intellectuals, such as their journal articles or books. We could ask future historians to rate such works in such a way that the total value of each intellectual was reasonably approximated by the sum of the values of each of their works.

Under this system, intellectuals could more comfortably focus on arguing about the total future impact of each work. Derivatives could be created to predict the total value of all the works by an individual, to use when choosing between individuals. But everyone could claim that is just a side issue, not their main focus.

To pursue this project concept, a good first step would be to fund teams of historians to try to rank the works of intellectuals from several centuries ago. Compare the results of different historian teams assigned to the same task, and have teams seek evaluation methods that can be both reliable and also get at the key questions of actual (or counterfactual) impact on the progress that matters. Then figure out which kinds of historians are best suited to applying such methods, and which funding methods best induce them to do such work in a cost-effective manner.

With such methods in hand, we could with more confidence set up markets to forecast the impact of particular current intellectuals and their works. We’d probably want to start with particular academic fields, and then use success there to persuade other fields to follow their example. This seems easier the higher the prestige of the initial academic fields, and the more open they all are to using new methods.


Hello Alien Polls

Define a “hello” alien civilization as one that might, in the next million years, identify humans as intelligent & civilized, travel to Earth, & say “hello” by making their presence & advanced abilities known to us. I just asked 15 Twitter poll questions on such aliens, each of which got 200-300 responses. 

Respondents mostly agreed on a high chance of such aliens having internal status hierarchies (78%), being artificial (68%), trying to talk to us (64%), having morals (64%), and being descended from land animals (60%). Respondents mostly agreed on a low chance of them being green (27%), once having had a nuke war (34%), and having internal conflicts (34%). They mostly agreed on a middle estimate (46%) of how many morals we’d share with them.

Respondents were split into two groups with strongly opposing views regarding whether such aliens could talk in our language, or whether they’d feel materially threatened by our descendants. Respondents seem basically confused, with nearly even choices among the four options, regarding whether hello aliens come from two genders, have identifiable agents, want to impress and lead us, or are led by a single government.

Here are the main ways I disagree: Any aliens arriving here now on Earth must be very old; recent origin would be an incredible timing coincidence. As we don’t see them elsewhere in the sky, they have somehow prevented themselves from greatly changing nearby galaxies. This suggests they are green, and have a world government to enforce green rules.

Which suggests their reason for visiting: to get us to go along with their green rules. And a way to do that is to look very impressive but not talk to us, as talking would likely reveal things about them we’d hate.


The Accuracy of Authorities

“WHO treads a difficult line, & tends to be quite conservative in its recommendations to avoid putting out info that later proves to be incorrect. ‘You can’t be backtracking’ … because ‘then you lose complete credibility’.” (More)

There is something important to learn from this example. The best estimates of a maximally accurate source would be very frequently updated and follow a random walk, which implies a large amount of backtracking. And authoritative sources like WHO are often said to be our most accurate sources. Even so, such sources do not tend to act this way. They instead update their estimates rarely, and are especially reluctant to issue estimates that seem to backtrack. Why?
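Before getting to that “why”, note that the accuracy-implies-backtracking claim is easy to check with a minimal simulation sketch (the binary truth, signal accuracy, and number of signals below are all illustrative assumptions): a Bayesian estimator updating on each noisy signal produces an estimate that reverses direction often, even as it converges on the truth.

```python
# Sketch: a maximally accurate estimator that updates on every noisy signal
# produces a posterior probability that frequently reverses direction
# ("backtracks"), even while converging on the truth. All parameters are
# illustrative assumptions.
import random

random.seed(0)
truth = True          # the unknown binary fact
p = 0.5               # prior probability that the fact is true
accuracy = 0.6        # each signal matches the truth 60% of the time

reversals, prev_move = 0, 0
for _ in range(100):
    signal = truth if random.random() < accuracy else not truth
    like_true  = accuracy if signal else 1 - accuracy     # P(signal | fact true)
    like_false = 1 - accuracy if signal else accuracy     # P(signal | fact false)
    new_p = p * like_true / (p * like_true + (1 - p) * like_false)
    move = 1 if new_p > p else -1
    if prev_move and move != prev_move:
        reversals += 1    # the estimate backtracked relative to its last move
    prev_move, p = move, new_p

print(round(p, 3), reversals)  # p ends near 1 (the truth), after a few dozen reversals
```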

First, authoritative sources serve as a coordination point for the behavior of others, and it is easier to coordinate when estimates change less often. Second, authoritative sources need to signal that they have power; they influence others far more than others influence them. Both of these pressures push them toward making infrequent changes. Ideally only one change, from “we don’t know”, to “here is the answer”. But if so, why do they feel pressures to issue estimates more often than this?

First, sometimes there are big decisions that need to be made, and then authorities are called upon to issue estimates in time to help with those decisions. For example, WHO was often called upon to issue estimates to help with a rapidly changing covid epidemic.

Second, sometimes a big source of relevant info appears, and it seems obvious to all that it must be taken into account. For example, no matter how confident we were to win a battle, we should expect to get news about how that battle actually went, and update accordingly. In this case, the authority is more pressed to update its estimate, but also more forgiven for changing its estimate. So during covid, authorities were expected to update on changing case and death counts, and that didn’t count so much as “backtracking”.

Third, sometimes rivals compete for authority. And then sources might be compared regarding their accuracy track record. This would push them toward the frequently updated random walk scenario, which can degrade the appearance of authority for all such competitors. (The other two pressures to update more often may also degrade authority; e.g., WHO’s authority seems to have degraded during covid.)

Due to the first of these pressures, the need to inform decisions, authoritative sources prefer that dependent decisions be made infrequently and opaquely. Such as by central inflexible organizations, who decide by opaque political processes. E.g., masking, distancing, and vaccine policies for covid. There can thus form a natural alliance between central powers and authoritative sources.

Due to the second of these pressures, authoritative sources prefer a strong consensus on what are the big sources of info that force them to update. This pushes for making very simple, stable, and clear distinctions between “scientific” info sources, on which one must update, and “unscientific” sources, on which it is considered inappropriate for authorities to update. Those latter sources must be declared not just less informative, but un-informative, and slandered in enough ways so that few who aspire to authority are tempted to rely on them.

Due to the third of these pressures, authoritative sources will work hard to prevent challengers competing on track record accuracy. Authorities will issue vague estimates that are hard to compare, prevent the collection of data that would support comparisons, and accuse challengers of crimes (e.g., moral positions) to make them seem ineligible for authority. And other kinds of powers, who prefer a single authority source they can defer to in order to avoid responsibility for their decisions, will help to suppress such competitors.

This story seems to explain why ordinary people take backtracking as a sign of inaccuracy. They have a hidden motive to follow authorities, but give accuracy as their excuse for following such sources. This forces them to see backtracking as a general sign of inaccuracy.

This all seems to be bad news for efforts to gain credibility, funding, and legal permission for alternative estimate sources, such as those based on prediction markets or forecasting competitions. This helps explain why individual org managers are reluctant to support such alternate sources, and why larger polities create barriers to them, such as via censorship, professional licensing, and financial regulation.

This all points to another risk of our increasingly integrated world community of elites. They may form central sources of authoritative estimates, which coordinate with other authorities to suppress alternate sources. Previously, world wide competition made it easier to defy and challenge such estimate authorities.

Added: As pointed out by @TheZvi, a 4th pressure on authorities to update more often is to stay consistent with other authorities. This encourages authorities to coordinate to update together at the same time, by talking first behind the scenes.

Added 11Apr: See many comments on this over at Marginal Revolution.


AI Language Progress

Brains first evolved to do concrete mental tasks, like chasing prey. Then language evolved, to let brains think together, such as on how to chase prey together. Words are how we share thoughts.

So we think a bit, say some words, they think a bit, they say some words, and so on. Each time we hear some words we update our mental model of their thoughts, which also updates us about the larger world. Then we think some more, drawing more conclusions about the world, and seek words that, when said, help them to draw similar conclusions. Along the way, mostly as a matter of habit, we judge each other’s ability to think and talk. Sometimes we explicitly ask questions, or assign small tasks, which we expect to be especially diagnostic of relevant abilities in some area.

The degree to which such small-task performance is diagnostic of abilities at the more fundamental human task of thinking together varies a lot. It depends, in part, on how much people are rewarded merely for passing those tests, and how much time and effort they can focus on learning to pass tests. We teachers are quite familiar with such “teaching to the test”, and it is often a big problem. There are many topics that we don’t teach much because we see that we just don’t have good small test tasks. And arguably schools actually fail most of the time; they pretend to teach many things but mostly just rank students on general abilities to learn to pass tests, and on inclinations to do what they are told. Abilities which can predict job performance.

Which brings us to the topic of recent progress in machine learning. Google just announced its PaLM system, which fit 540 billion parameters to a “high-quality corpus of 780 billion tokens that represent a wide range of natural language use cases”, in order to predict from past words the next words appropriate for a wide range of small language tasks. Its performance is impressive; it does well compared to humans on a wide range of such tasks. And yet it still basically “babbles”; it seems not remotely up to the task of thinking together with a human. If you talked with it for a long time, you might well find ways that it could help you. But still, it wouldn’t think with you.

Maybe this problem will be solved by just adding more parameters and data. But I doubt it. I expect that a bigger problem is that such systems have been training at these small language tasks, instead of at the more fundamental task of thinking together. Yes, most of the language data on which they are built is from conversations where humans were thinking together. So they can learn well to say the next small thing in such a conversation. But they seem to be failing to infer the deeper structures that support shared thinking among humans.

It might help to assign such a system the task of “useful inner monologue”. That is, it would start talking to itself, and keep talking indefinitely, continually updating its representations from the data of its internal monologue. The trick would be to generate these monologues and do this update so that the resulting system got better at doing other useful tasks. (I don’t know how to arrange this.) While versions of this approach have been tried before, the fact that this isn’t the usual approach suggests that it doesn’t now produce gains as fast, at least for doing these small language tasks. Even so, if those are misleading metrics, this approach might help more to get real progress at artificial thinking.

I will sit up and take notice when the main improvements to systems with impressive broad language abilities come from such inner monologues, or from thinking together on other useful tasks. That will look more like systems that have learned how to think. And when such abilities work across a wide scope of topics, that will look to me more like the proverbial “artificial general intelligence”. But I still don’t expect to see that for a long time. We see progress, but the road ahead is still quite long.


Great Filter With Set-Backs, Dead-Ends

A biological cell becomes cancerous if a certain set of rare mutations all happen in that same cell before its organism dies. This is quite unlikely to happen in any one cell, but a large organism has enough cells to create a substantial chance of cancer appearing somewhere in it before it dies. If the chances of mutations are independent across time, then the durations between the timing of mutations should be roughly equal, and the chance of cancer in an organism rises as a power law in time, with the power equal to the number of required mutations, usually around six.
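In symbols (a simplified sketch, assuming the $n$ required mutations arrive independently in a given cell at small rates $\mu_i$ per unit time):

$$
p_{\text{cell}}(t) \;=\; \prod_{i=1}^{n}\left(1 - e^{-\mu_i t}\right) \;\approx\; \Big(\prod_{i=1}^{n}\mu_i\Big)\, t^{\,n} \qquad \text{for } \mu_i t \ll 1,
$$

so with $N$ cells the chance of cancer appearing somewhere by time $t$ is roughly $N\, p_{\text{cell}}(t)$, which rises as the power $t^{\,n}$.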

A similar process may describe how an advanced civilization like ours arises from a once lifeless planet. Life may need to advance through a number of “hard step” transitions, each of which has a very low chance per unit time of happening. Like evolving photosynthesis or sexual reproduction. But even if the chance of advanced life appearing on any one planet before it becomes uninhabitable is quite low, there can be enough planets in the universe to make the chance of life appearing somewhere high.

As with cancer, we can predict that on a planet lucky enough to birth advanced life, the time durations between its step transitions should be roughly equal, and the overall chance of success should rise with time as the power of the number of steps. Looking at the history of life on Earth, many observers have estimated that we went through roughly six (range ~3-12) hard steps.
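Here is a minimal simulation sketch of that standard hard-steps prediction; the step difficulties and planet lifetime below are illustrative assumptions, chosen only so that success is rare but not too rare to sample:

```python
# Minimal sketch of the standard hard-steps model. Each step's completion time
# is exponential, with expected durations (3, 10, 30) far exceeding the planet
# lifetime of 1 unit -- all numbers are illustrative. Conditional on all steps
# finishing in time, the realized step durations are roughly equal on average,
# regardless of how hard each step was.
import random

random.seed(0)
step_means = [3.0, 10.0, 30.0]   # very different step difficulties
lifetime = 1.0
successes = 0
sums = [0.0] * len(step_means)

for _ in range(2_000_000):
    durations = [random.expovariate(1.0 / m) for m in step_means]
    if sum(durations) < lifetime:     # advanced life appears before the deadline
        successes += 1
        for i, d in enumerate(durations):
            sums[i] += d

print(successes, [round(s / successes, 2) for s in sums])
# On the rare lucky planets, the average step durations come out close to each
# other (each roughly a quarter of the available window), despite the 10x
# spread in underlying difficulty.
```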

In our grabby aliens analysis, we say that a power of this magnitude suggests that Earth life has arrived very early in the history of the universe, compared to when it would arrive if the universe would wait empty for it to arrive. Which suggests that grabby aliens are out there, have now filled roughly half the universe, and will soon fill all of it, creating a deadline soon that explains why we are so early. And this power lets us estimate how soon we would meet them: in roughly a billion years.

According to this simple model, the short durations of the periods associated with the first appearance of life, and with the last half billion years of complex life, suggest that at most one hard step was associated with each of these periods. (The steady progress over the last half billion years also suggests this, though our paper describes a “multi-step” process by which the equivalent of many hard steps might be associated with somewhat steady progress.)

In an excellent new paper in the Proceedings of the Royal Society, “Catastrophe risk can accelerate unlikely evolutionary transitions”, Andrew Snyder-Beattie and Michael Bonsall extend this standard model to include set-backs and dead-ends.

Here, we generalize the [standard] model and explore this hypothesis by including catastrophes that can ‘undo’ an evolutionary transition. Introducing catastrophes or evolutionary dead ends can create situations in which critical steps occur rapidly or in clusters, suggesting that past estimates of the number of critical steps could be underestimated. (more)

Their analysis looks solid to me. They consider scenarios where, relative to the transition rate at which a hard step would be achieved, there is a higher rate of a planet “undoing” its last hard step, or of that planet instead switching to a stable “stuck” state from which no further transitions are possible. In this case, advanced life is achieved mainly in scenarios where the hard steps that are vulnerable to these problems are achieved in a shorter time than it takes for them to be undone or to get stuck.

As a result, the hard steps which are vulnerable to these set-back or dead-end problems tend to happen together much faster than would other sorts of hard steps. So if life on early Earth was especially fragile amid especially frequent large asteroid impacts, many hard steps might have been achieved then in a short period. And if in the last half billion years advanced life has been especially fragile and vulnerable to astronomical disasters, there might have been more hard steps within that period as well.

Their paper only looks at the durations between steps, and doesn’t ask whether these model modifications change the overall power law formula for the chance of success as a function of time. But my math intuition tells me that the power law dependence will likely remain, with the power now equal to the number of all these steps, including the ones that happen fast. Thus as these scenarios introduce more hard steps into Earth history, the overall power law dependence of our grabby aliens model should remain but become associated with a higher power. Maybe more like twelve instead of six.

With a higher power, we will meet grabby aliens sooner, and each such civilization will control fewer (but still many) galaxies. Many graphs showing how our predictions vary with this power parameter can be found in our grabby aliens paper.
