Tag Archives: Future

Earth: A Status Report

In a universe that is (so far) almost entirely dead, we find ourselves to be on a rare planet full not only of life, but now also of human-level intelligent self-aware creatures. This makes our planet roughly a once-per-million-galaxy rarity, and if we ever get grabby we can expect to meet other grabby aliens in roughly a billion years.

We see that our world, our minds, and our preferences have been shaped by at least four billion years of natural selection. And we see evolution going especially fast lately, as we humans pioneer many powerful new innovations. Our latest big thing: larger scale organizations, which have induced our current brief dreamtime, wherein we are unusually rich.

For preferences, evolution has given us humans a mix of (a) some robust general preferences, like wanting to be respected and rich, (b) some less robust but deeply embedded preferences, like preferring certain human body shapes, and (c) some less robust but culturally plastic preferences, such as which particular things each culture finds more impressive.

My main reaction to all this is to feel grateful to be a living intelligent creature, who is compatible enough with his world to often get what he wants. Especially to be living in such a rich era. I accept that I and my descendants will long continue to compete (in part by cooperating of course), and that as the world changes evolution will continue to change my descendants, including as needed their values.

Many see this situation quite differently from me, however. For example, “anti-natalists” see life as a terrible crime, as the badness of our pains outweighs the goodness of our pleasures, resulting in net negative value lives. They thus want life on Earth to go extinct. Maybe, they say, it would be okay to only create really-rich better-emotionally-adjusted creatures. But not the humans we have now.

Many kinds of “conservatives” are proud to note that their ancestors changed in order to win prior evolutionary competitions. But they are generally opposed to future such changes. They want only limited changes to our tech, culture, lives, and values; bigger changes seem like abominations to them.

Many “socialists” are furious that some of us are richer and more influential than others. Furious enough to burn down everything if we don’t switch soon to more egalitarian systems of distribution and control. The fact that our existing social systems won difficult prior contests does not carry much weight with them. They insist on big radical changes now, and disavow any failures associated with prior attempts made under their banner. None of that was “real” socialism, you see.

Due to continued global competition, local adoption of anti-natalist, conservative, or socialist agendas seems insufficient to ensure these as global outcomes. Now most fans of these things don’t care much about long term outcomes. But some do. Some of those hope that global social pressures, via global social norms, may be sufficient. And others suggest using stronger global governance.

In fact, our scales of governance, and level of global governance, have been increasing over centuries. Furthermore, over the last half century we have created a world community of elites, wherein global social norms and pressures have strong power.

However, competition at the largest scales has so far been our only robust solution to system rot and suicide, problems that may well apply to systems of global governance or norms. Furthermore, centralized rulers may be reluctant to allow civilization to expand to distant places which they would find it harder to control.

This post resulted from Agnes Callard asking me to comment on Scott Alexander’s essay Meditations On Moloch, wherein he takes similarly stark positions on these grand issues. Alexander is irate that the world is not adopting various utopian solutions to common problems, such as ending corporate welfare, shrinking militaries, and adopting common hospital medical record systems. He seems to blame all of that, and pretty much anything else that has ever gone wrong, on something he personalizes into a monster “Moloch.” And while Alexander isn’t very clear on what exactly that is, my best read is that it is the general phenomenon of competition (at least the bad sort); that at least seems central to most of the examples he gives.

Furthermore, Alexander fears that, in the long run, competition will force our descendants to give up absolutely everything that they value, just to exist. Now he has no empirical or theoretical proof that this will happen; his post is instead mostly a long passionate primal scream expressing his terror at this possibility.

(Yes, he and I are aware that cooperation and competition systems are often nested within each other. The issue here is about the largest outer-most active system.)

Alexander’s solution is:

Elua. He is the god of flowers and free love and all soft and fragile things. Of art and science and philosophy and love. Of niceness, community, and civilization. He is a god of humans. … Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.

By which Alexander means: start with a tiny weak AI, induce it to “foom” (sudden growth from tiny to huge), resulting in a single “super-intelligent” AI who rules our galaxy with an iron fist, but wrapped in the velvet glove of being “friendly” = “aligned”. By definition, such a creature makes the best possible utopia for us all. Sure, Alexander has no idea how to reliably induce a foom or to create an aligned-through-foom AI, but there are some people pondering these questions (who are generally not very optimistic).

My response: yes of course if we could easily and reliably create a god to manage a utopia where nothing ever goes wrong, maybe we should do so. But I see enormous risks in trying to induce a single AI to grow crazy fast and then conquer everything, and also in trying to control that thing later via pre-foom design. I also fear many other risks of a single global system, including rot, suicide, and preventing expansion.

Yes, we might take this chance if we were quite sure that in the long term all other alternatives result in near zero value, while this remained the only scenario that could result in substantial value. But that just doesn’t seem remotely like our actual situation to me.

Because: competition just isn’t as bad as Alexander fears. And it certainly shouldn’t be blamed for everything that has ever gone wrong. More like: it should be credited for everything that has ever gone right among life and humans.

First, we don’t have good reasons to expect competition, compared to an AI god, to lead more reliably to the extinction either of life or of creatures who value their experiences. Yes, you can fear those outcomes, but I can as easily fear your AI god.

Second, competition has so far reigned over four billion years of Earth life, and at least a half billion years of Earth brains, and on average those seem to have been brain lives worth living. As have been the hundred billion human brain lives so far. So empirically, so far, given pretty long time periods, competition has just not remotely destroyed all value.

Now I suspect that Alexander might respond here thus:

The way that evolution has so far managed to let competing creatures typically achieve their values is by having those values change over time as their worlds change. But I want descendants to continue to achieve their values without having to change those values across generations.

However, I’ve predicted that, relatively soon on evolutionary timescales, given further competition, our descendants will come to just directly and abstractly value reproduction. And then after that, no descendant need ever change their values. But I think even that situation isn’t good enough for Alexander; he wants our (his?) current human values to be the ones that continue and never change.

Now taken very concretely, this seems to require that our descendants never change their tastes in music, movies, or clothes. But I think Alexander has in mind only keeping values the same at some intermediate level of abstraction. Above the level of specific music styles, but below the level of just wanting to reproduce. However, not only has Alexander not been very clear regarding which exact value abstraction level he cares about, I’m not clear on why the rest of us should agree with him about this level, or care as much as he does about it.

For example, what if most of our descendants get so used to communicating via text that they drop talking via sound, and thus also get less interested in music? Oh they like artistic expressions using other mediums, such as text, but music becomes much more of a niche taste, mainly of interest to that fraction of our descendants who still attend a lot to sound.

This doesn’t seem like such a terrible future to me. Certainly not so terrible that we should risk everything to prevent it by trying to appoint an AI god. But if this scenario does actually seem that terrible to you, I guess maybe you should join Alexander’s camp. Unless all changes seem terrible to you, in which case you might join the conservative camp. Or maybe all life seems terrible to you, in which case you might join the anti-natalists.

Me, I accept the likelihood and good-enough-ness of modest “value drift” due to future competition. I’m not saying I have no preferences whatsoever about my descendants’ values. But relative to the plausible range I envision, I don’t feel greatly at risk. And definitely not so much at risk as to make desperate gambles that could go very wrong.

You might ask: if I don’t think making an AI god is the best way to get out of bad equilibria, what do I suggest instead? I’ll give the usual answer: innovation. For most problems, people have thought of plausible candidate solutions. What is usually needed is for people to test those solutions in smaller scale trials. With smaller successes, it gets easier to entice people to coordinate to adopt them.

And how do you get people to try smaller versions? Dare them, inspire them, lead them, whatever works; this isn’t something I’m good at. In the long run, such trials tend to happen anyway, by accident, even when no one is inspired to do them on purpose. But the goal is to speed up that future, via smaller trials of promising innovation concepts.

Added 5Jan: While I was presuming that Alexander had intended substantial content to his claims about Moloch, many are saying no, he really just meant to say “bad equilibria are bad”. Which is just a mood well-expressed, but doesn’t remotely support the AI god strategy.


What Will Be The Fifth Meta-Innovation?

We owe pretty much everything that we are and have to innovation. That is, to our ancestors’ efforts (intentional or not) to improve their behaviors. But the rate of innovation has not been remotely constant over time. And we can credit increases in the rate of innovation to: meta-innovation. That is, to innovation in the processes by which we try new things, and distribute better versions to wider practice.

On the largest scales, innovation is quite smooth, being mostly made of many small-grain relatively-independent lumps, which is why the rate of overall innovation usually looks pretty steady. The rare bigger lumps only move the overall curve by small amounts; you have to focus in on much smaller scales to see individual innovations making much of a difference. Which is why I’m pretty skeptical about scenarios based on expecting very lumpy innovations in any particular future tech.

However, overall meta-innovation seems to be very lumpy. Through almost all history, innovation has happened at pretty steady rates, implying negligible net meta-innovation at most times. But we have so far seen (at least) four particular events when a huge quantity of meta-innovation dropped all at once. Each such event was so short that it was probably caused by one final key meta-innovation, though that final step may have awaited several other required precursor steps.

First natural selection arose, increasing the rate of innovation from basically zero to a positive rate. For example, over the last half billion years, max brain size on Earth has doubled roughly every 30 million years. Then proto-humans introduced culture, which allowed their economy (tracked by population) to double roughly every quarter million years. (Maybe other meta-innovations arose between life and culture; data is sparse.) Then ten thousand years ago, farming culture allowed the economy (tracked by population) to double roughly every thousand years. Then a few hundred years ago, industrial culture allowed the economy (no longer tracked by population) to double every fifteen years.

So these four meta-innovation lumps caused roughly these four factors of innovation growth rate change, in order: infinity, 120, 240, 60. Each era of steady growth between these changes encompassed roughly seven to twenty doublings, and each of these transitions took substantially less than a previous doubling time. Thus while a random grain of innovation so far has almost surely been part of a rather small lump of innovation, a random grain of meta-innovation so far has almost surely been part of one of these four huge lumps of meta-innovation.
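To make the arithmetic behind these factors explicit, here is a minimal sketch that just recomputes them from the doubling times quoted above (the era labels, and treating the first transition as an infinite factor, are my own framing):

```python
# Rough doubling times quoted above for each era, in years.
doubling_times = [
    ("life (max brain size)", 30e6),   # doubled roughly every 30 million years
    ("forager culture",       250e3),  # population doubled roughly every quarter million years
    ("farming",               1e3),    # population doubled roughly every thousand years
    ("industry",              15.0),   # economy doubles roughly every fifteen years
]

# The first lump took the innovation rate from zero to positive: an infinite factor.
print("lump 1 (natural selection): factor = infinity")

# Each later lump's factor = previous era's doubling time / new era's doubling time.
for i, ((old_name, old_dt), (new_name, new_dt)) in enumerate(
        zip(doubling_times, doubling_times[1:]), start=2):
    print(f"lump {i} ({old_name} -> {new_name}): factor ~ {old_dt / new_dt:.0f}")
```

This prints factors of roughly 120, 250, and 67, close to the rounded 120, 240, and 60 above.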

What caused these four huge lumps? Oddly, we understand the oldest lumps best, and recent lumps worse. But all four seem to be due to better ways to diffuse, as opposed to create, innovations. Lump 1 was clearly the introduction of natural selection, where biological reproduction spreads innovations. Lump 2 seems somewhat clearly cultural evolution, wherein we learned well enough how to copy the better observed behaviors of others. Lump 3 seems plausibly, though hardly surely, due to a rise in population density and location stability inducing a change from a disconnected to a fully-connected network of long-distance travel, trade, and conquest. And while the cause of lump 4 seems the least certain, my bet is the rise of “science” in Europe, i.e., long distance networks of experts sharing techniques via math and Latin, enhanced by fashion tastes and noble aspirations.

Innovation continues today, but at a pretty steady rate, suggesting that there has been little net meta-innovation recently. Even so, our long-term history suggests a dramatic prediction: we will see at least one more huge lump, within roughly another ten doublings, or ~150 years, after which the economy will double in roughly a few weeks to a few months. And if the cause of the next lump is like the last four, it will be due to some new faster way to diffuse and spread innovations.
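As a rough check on this projection, here is a sketch under the stated assumptions that eras last roughly seven to twenty doublings and that transitions multiply the growth rate by a factor of sixty or more:

```python
current_doubling_years = 15.0   # current economic doubling time

# If the current era, like prior eras, lasts roughly 7 to 20 doublings:
print(f"era length: {7 * current_doubling_years:.0f} to {20 * current_doubling_years:.0f} years")
# -> roughly 105 to 300 years, consistent with the "~150 years" guess above

# If the next lump multiplies the growth rate by factors like the prior ones:
for factor in (60, 120, 240):
    weeks = current_doubling_years * 52 / factor
    print(f"factor of {factor}: post-lump doubling time ~ {weeks:.0f} weeks")
# -> roughly 3 to 13 weeks, i.e. "a few weeks to a few months"
```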

Having seen a lot of innovation diffusion up close, I’m quite confident that we are now nowhere near fundamental limits on innovation diffusion rates. That is, we could do a lot better. Another factor of sixty doesn’t seem crazy. Even so, it boggles the mind to try to imagine what such a new meta-innovation might be. Some new kind of language? Direct brain state transfer? Better econ incentives for diffusion? New forms of social organization?

I just don’t know. But the point of this post is: we have good reason to think such a thing is coming. And so it is worth looking out for. Within the next few centuries, a single key change will appear, and then within a decade overall econ growth will increase by a factor of sixty or more. Plausibly this will be due to a better way to diffuse innovations. And while the last step enabling this would be singular, it may require several precursors that appear at different times over the prior period.

My book Age of Em describes another possible process by which econ growth could suddenly speed up, to doubling in weeks or months. I still think this is plausible, but my main doubt is that the much faster growth I predicted there was not due to better ways to diffuse innovations, making that scenario a substantial deviation from prior trends. But maybe I’m wrong there.

Anyway, I’m writing here to say that I’m just not sure. Let’s keep an open mind, and keep on the lookout for some radical new way to better diffuse innovation.

Added 6a: Note that many things that look like plausible big meta-innovations did not actually seem to change the growth rate at the time. This includes sex, language, writing, and electronic computing and communication. Plausibly these are important enabling factors, but not sufficient on their own.


Bizarre Accusations

Imagine that you planned a long hike through a remote area, and suggested that it might help to have an experienced hunter-gatherer along as a guide. Should listeners presume that you intend to imprison and enslave such guides to serve you? Or is it more plausible that you propose to hire such people as guides?

To me, hiring seems the obvious interpretation. But, to accuse me of advancing a racist slavery agenda, Audra Mitchell and Aadita Chaudhury make the opposite interpretation in their 2020 International Relations article “Worlding beyond ‘the’ ‘end’ of ‘the world’: white apocalyptic visions and BIPOC futurisms”.

In a chapter “Catastrophe, Social Collapse, and Human Extinction” in the 2008 book Global Catastrophic Risks I suggested that we might protect against human extinction by populating underground refuges with people skilled at surviving in a world without civilization:

A very small human population would mostly have to retrace the growth path of our human ancestors; one hundred people cannot support an industrial society today, and perhaps not even a farming society. They might have to start with hunting and gathering, until they could reach a scale where simple farming was feasible. And only when their farming population was large and dense enough could they consider returning to industry.

So it might make sense to stock a refuge with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right. Perhaps such people could be rotated periodically from a well protected region where they practiced simple lifestyles, so they could keep their skills fresh. And perhaps we should test our refuge concepts, isolating real people near them for long periods to see how well particular sorts of refuges actually perform at returning their inhabitants to a simple sustainable lifestyle.

On this basis, Mitchell and Chaudhury call me a “white futurist” and “American settler economist” seeking to preserve existing Euro-centric power structures:

Indeed, many contributors to ‘end of the world’ discourses offer strategies for the reconstruction and ‘improvement’ of existing power structures after a global catastrophe. For example, American settler economist Robin Hanson calculates that if 100 humans survived a global catastrophic disaster that killed all others, they could eventually move back through the ‘stages’ of ‘human’ development, returning to the ‘hunter-gatherer stage’ within 20,000 years and then ‘progressing’ from there to a condition equivalent to contemporary society (defined in Euro-centric terms). …

some white futurists express concerns about the ‘de-volution’ of ‘humanity’ from its perceived pinnacle in Euro-centric societies. For example, American settler economist Hanson describes the emergence of ‘humanity’ in terms of four ‘progressions’

And solely on the basis of my book chapter quote above, Mitchell and Chaudhury bizarrely claim that I “quite literally” suggest imprisoning and enslaving people of color “to enable the future re-generation of whiteness”:

To achieve such ideal futures, many writers in the ‘end of the world’ genre treat [black, indigenous, people of color] as instruments or objects of sacrifice. In a stunning display of white possessive logic, Hanson suggests that, in the face of global crisis, it

‘might make sense to stock a refuge with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course, such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right.

In this imaginary, Hanson quite literally suggests the (re-/continuing)imprisonment, (re-/continuing)enslavement and biopolitical (re-/continuing) instrumentalization of living BIPOC in order to enable the future re-generation of whiteness. This echoes the dystopian nightmare world described in …

And this in an academic journal article that supposedly passed peer review! (I was not one of the “peers” consulted.)

To be very clear, I proposed to hire skilled foragers and subsistence farmers to serve in such roles, compensating them as needed to gain their consent. I didn’t much care about their race, nor about the race of the world that would result from their repopulating the world. And presumably someone with substantial racial motivations would in fact care more about that last part; how exactly does repopulating the world with people of color promote “whiteness”?


MacAskill on Value Lock-In

Will MacAskill has a new book out today, What We Owe The Future, most of which I agree with, even if that doesn’t exactly break new ground. Yes, the future might be very big, and that matters a lot, so we should be willing to do a lot to prevent extinction, collapse, or stagnation. I hope his book induces more careful future analysis, such as I tried in Age of Em. (FYI, MacAskill suggested that book’s title to me.) I also endorse his call for more policy and institutional experimentation. But, as is common in book reviews, I now focus on where I disagree.

Aside from the future being important, MacAskill’s main concern in his book is “value lock-in”, by which he means a future point in time when the values that control actions stop changing. But he actually mixes up two very different processes by which this result might arise. First, an immortal power with stable values might “take over the world”, and prevent deviations from its dictates. Second, in a stable universe decentralized competition between evolving entities might pick out some most “fit” values to be most common.

MacAskill’s most dramatic predictions are about this first “take over” process. He claims that the next century or so is the most important time in all of human history:

We hold the entire future in our hands. … By choosing wisely, we can be pivotal in putting humanity on the right course. … The values that humanity adopts in the next few centuries might shape the entire trajectory of the future. … Whether the future is governed by values that are authoritarian or egalitarian, benevolent or sadistic, exploratory or rigid, might well be determined by what happens this century.

His reason: we will soon create AGI, or ems, who, being immortal, have forever stable values. Some org will likely use AGI to “take over the world”, and freeze in their values forever:

Advanced artificial intelligence could enable those in power to lock in their values indefinitely. … Since [AGI] software can be copied with high fidelity, an AGI can survive changes in the hardware instantiating it. AGI agents are potentially immortal. These two features of AGI – potentially rapid technological progress and in-principle immortality – combine to make value lock-in a real possibility. …

Using AGI, there are a number of ways that people could extend their values much farther into the future than ever before. First, people may be able to create AGI agents with goals closely aligned with their own which would act on their behalf. … [Second,] the goals of an AGI could be hard-coded: someone could carefully specify what future they want to see and ensure that the AGI aims to achieve it. … Third, people could potentially “upload”. …

International organizations or private actors may be able to leverage AGI to attain a level of power not seen since the days of the East India Company, which in effect ruled large areas of India. …

A single set of values could emerge. … The ruling ideology could in principle persist as long as civilization does. AGI systems could replicate themselves as many times as they wanted, just as easily as we can replicate software today. They would be immortal, freed from the biological process of aging, able to create back-ups of themselves and copy themselves onto new machines. … And there would no longer be competing value systems that could dislodge the status quo. …

Bostrom’s book Superintelligence. The scenario most closely associated with that book is one in which a single AI agent … quickly developing abilities far greater than the abilities of all of humanity combined. … It would therefore be incentivized to take over the world. … Recent work has looked at a broader range of scenarios. The move from subhuman intelligence to superintelligence need not be ultrafast or discontinuous to pose a risk. And it need not be a single AI that takes over; it could be many. …

Values could become even more persistent in the future if a single value system were to become globally dominant. If so, then the absence of conflict and competition would remove one reason for change in values over time. Conquest is the most dramatic pathway … and it may well be the most likely.

Now mere immortality seems far from sufficient to create either value stability or a takeover. On takeover: not only is a decentralized world of competing immortals easy to imagine, but in fact until recently individual bacteria, who very much compete, were thought to be immortal.

On values, immortality also seems far from sufficient to induce stable values. Human organizations like firms, clubs, cities, and nations seem to be roughly immortal, and yet their values often greatly change. Individual humans change their values over their lifetimes. Computer software is immortal, and yet its values often change, and it consistently rots. Yes, as I mentioned in my last post, some imagine that AGIs have a special value modularity that can ensure value stability. But we have many good reasons to doubt that scenario.

Thus MacAskill must be positing that a power who somehow manages to maintain stable values takes over and imposes its will everywhere forever. Yet the only scenario he points to that seems remotely up to this task is Bostrom’s foom scenario. MacAskill claims that other scenarios are also relevant, but doesn’t even try to show how they could produce this result. For reasons I’ve given many times before, I’m skeptical of foom-like scenarios.

Furthermore, let me note that even if one power came to dominate Earth’s civilization for a very long time, it would still have to face competition from other grabby aliens in roughly a billion years. If so, forever just isn’t at issue here.

While MacAskill doesn’t endorse any regulations to deal with this stable-AGI-takes-over scenario, he does endorse regulations to deal with the other path to value stability: evolution. He wants civilization to create enough of a central power that it could stop change for a while, and also limit competition between values.

The theory of cultural evolution explains why many moral changes are contingent. … the predominant culture tends to entrench itself. … results in a world increasingly dominated by cultures with traits that encourage and enable entrenchment and thus persistence. …

If we don’t design our institutions to govern this transition well – preserving a plurality of values and the possibility of desirable moral progress. …

A second way for a culture to become more powerful is immigration [into it]. … A third way in which a cultural trait can gain influence is if it gives one group greater ability to survive or thrive in a novel environment. … A final way in which one culture can outcompete another is via population growth. … If the world converged on a single value system, there would be much less pressure on those values to change over time.

We should try to ensure that we have made as much moral progress as possible before any point of lock-in. … As an ideal, we could aim for what we could call the long reflection: a stable state of the world in which we are safe from calamity and can reflect on and debate the nature of the good life, working out what the most flourishing society would be. … It would therefore be worth spending many centuries to ensure that we’ve really figured things out before taking irreversible actions like locking in values or spreading across the stars. …

We would need to keep our options open as much as possible … a reason to prevent smaller-scale lock-ins … would favor political experimentation – increasing cultural and political diversity, if possible. …

That one society has greater fertility than another or exhibits faster economic growth does not imply that society is morally superior. In contrast, the most important mechanisms for improving our moral views are reason, reflection, and empathy, and the persuasion of others based on those mechanisms. … Certain forms of free speech would therefore be crucial to enable better ideas to spread. …

International norms or laws preventing any single country from becoming too populous, just as anti-trust regulation prevents any single company from dominating a market. … The lock-in paradox. We need to lock-in some institutions and ideas in order to prevent a more thorough-going lock-in of values. … If we wish to avoid the lock-in of bad moral views, an entirely laissez-faire approach would not be possible; over time, the forces of cultural evolution would dictate how the future goes, and the ideologies that lead to the greatest military power and that try to eliminate their competition would suppress all others.

I’ve recently described my doubts that expert deliberation has been a large force in value change so far. So I’m skeptical that it will be a large force in the future. And the central powers (or global mobs) sufficient to promote a long reflection, or to limit nations competing, seem to risk creating value stability via the central dominance path discussed above. MacAskill doesn’t even consider this kind of risk from his favored regulations.

While competition may produce a value convergence in the long run, my guess is that convergence will happen a lot faster if we empower central orgs or mobs to regulate competition. I think that a great many folks prefer that latter scenario because they believe we know what are the best values, and fear that those values would not win an evolutionary competition. So they want to lock in current values via regs to limit competition and value change.

To his credit, MacAskill is less confident that currently popular values are in fact the best values. And his favored solution of more deliberation probably wouldn’t hurt. I just don’t think he realizes just how dangerous are central powers able to regulate to promote deliberation and limit competition. And he seems way too confident about the chance of anything like foom soon.


Beware Upward Reference Classes

Sometimes when I see associates getting attention, I wonder, “do they really deserve more attention than me?” I less often look at those who get less attention than me, and ask whether I deserve more. Because they just don’t show up in my field of view as often; attention makes you more noticeable.

If I were to formalize my doubts, I might ask, “Among tenured econ professors, how much does luck and org politics influence who gets more funding, prestige, and attention?” And I might find many reasons to answer “lots”, and so suggest that such things be handed out more equally or randomly. Among tenured econ professors, that is. And if an economist with a lower degree, or a professor from another discipline, asked why they aren’t included in my comparison and suggested redistribution, I might answer, “Oh I’m only talking about econ researchers here.”

Someone with a college econ degree might well ask if those with higher credentials like M.S., Ph.D., or a professor position really deserve the extra money, influence, and attention that they get. And if someone with only a high school degree were to ask why they aren’t included in this comparison, the econ degree person might say “oh, I’m only talking about economists here”, presuming that you can’t be considered an economist if you have no econ degree of any sort.

The pattern here is: “envy up, scorn down”. When considering fairness, we tend to define our comparison group upward, as everyone who has nearly as many qualifications as we do or more, and then we ask skeptically if those in this group with more qualifications really deserve the extra gains associated with their extra qualifications. But we tend to look downward with scorn, assuming that our qualifications are essential, and thus should be baked into the definition of our reference class. That is, we prefer upward envy reference classes to justify our envying those above us, while rejecting others envying us from below.

Life on Earth has steadily increased in its abilities over time, allowing life to spread into more places and niches. We have good reasons to think that this trend may long continue, eventually allowing our descendants to spread through the universe, until they meet up with other advanced life, resulting in a universe dense with advanced life.

However, many have suggested that this view of the universe makes us today seem suspiciously early among what they see as the relevant comparison group. And thus they suggest we need a Bayesian update toward this view of the universe being less likely. But what exactly is a good comparison group? For example, if you said “We’d be very early among all creatures with access to quantum computers”, I think we’d all get that this is not so puzzling, as the first quantum computers only appeared a few years ago.

We would also appear very early among all creatures who could knowingly ask the question “How many creatures will ever appear with feature X”, if the concept X applies to us but has only been recently introduced. We’d also be pretty early among all creatures who can express any question in language, if language was only invented in the last million years. It isn’t much better to talk about all creatures with self-awareness, if you say only primates and a few other animals count as having that, and they’ve only been around for a few million more years.

Thus in general in a universe where abilities improve over time, creatures that consider upward defined reference classes will tend to find themselves early. Often very early, if they insist that their class members have some very recently acquired abilities. But once you see this tendency to pick upward reference classes, the answers you get to such questions need no longer suggest updates against the hypothesis of long increasing abilities.

Furthermore, in any universe that will eventually fill up, creatures who find themselves well before that point in time can estimate that they are very early relative to even very neutral reference classes.
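Here is a minimal toy illustration of both points, with all numbers assumed purely for illustration: let class membership grow slowly until the universe fills up, and compare reference classes defined by abilities that appeared at different times. Today’s observer is early in every class, and far earlier in classes defined by recently acquired abilities.

```python
import math

GROWTH_RATE = 0.001     # assumed: class membership grows ~0.1% per year
YEARS_TO_FILL = 1_000   # assumed: growth stops ("universe fills up") this far in the future

def members(t0: float, t1: float) -> float:
    """Total class members appearing between years t0 and t1 (now = 0)."""
    return (math.exp(GROWTH_RATE * t1) - math.exp(GROWTH_RATE * t0)) / GROWTH_RATE

# Reference classes defined by how long ago their qualifying ability appeared.
for ability_age in (1_000_000, 1_000, 100, 5):
    frac_before_now = members(-ability_age, 0) / members(-ability_age, YEARS_TO_FILL)
    print(f"ability appeared {ability_age:>9} years ago: "
          f"{frac_before_now:.2%} of the class exists by today")
```

With these assumed numbers, an observer in a broad ancient class sits about a third of the way through it, while an observer in a class defined by a five-year-old ability has well over 99% of the class still ahead of them.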

It seems to me that something similar is going on when people claim that this coming century will be uniquely important, the most important one ever, as computers are the most powerful tech we have ever seen, and as the next century is plausibly when we will make most of the big choices re how to use computers.  If we generally make the most important choices about each new tech soon after finding it, and if increasingly powerful new techs keep appearing, then this sort of situation should be common, not unique, in history.

So this next century will only be the most important one (in this way) if computers are the last tech to appear that is more powerful than prior techs. But if we expect that even more important techs will continue to be found, then we shouldn’t expect this one to be the most important tech ever. No, I can’t describe these more important yet-to-be-found future techs. But I do believe they exist.


Space Econ HowTo

In Age of Em, I tried to show how far one could get using standard econ analysis to predict the social consequences of a particular envisioned future tech. The answer: a lot further than futurists usually go. Thus we could do a lot more useful futurism.

My approach to futurism should work more generally, and I’ve hoped to inspire others to emulate it. And space is an obvious application. We understand space tech pretty well, and people have been speculating about it for quite a long time. So I’m disappointed to not yet see better social analysis of space futures.

In this post I will therefore try to outline the kind of work that I think should be done, and that seems quite feasible. Oh I’m not going to actually do most of that work here, just outline it. This is just one blog post, after all. (Though I’m open to teaming with others on such a project.)

Here is the basic approach:

  1. Describe how a space society generally differs from others using economics-adjacent concepts. E.g., “Space econ is more X-like”.
  2. For each X, describe in general how X-like economies differ from others, using both historical patterns and basic econ theory.
  3. Merge the implications of X-analyses from the different X into a single composite picture of space.

Here are some candidate Xs, i.e., ways that space econs tend to differ from other econs. Note that we don’t need these various Xs to be logically independent of one another. But the more dependencies, the more work we will have to do in step 3 to sort those out.

First, space is further away than is most stuff. Which makes activity there less dense. So we first want to ask: how does economic and social activity tend to differ as it becomes further away from, and less dense than, the rest of the economy? E.g., in terms of distance, travel and communication cost and time, and having a different mix of resources, risks, and products? If lower density induces less local product and service variety, then how do less varied economies differ?

Space also seems different in being a harsher environment. On Earth today, some places are more like the Edens where humans first evolved, and so are less harsh for humans, while other places are more harsh. Such as high in mountains, on or under the sea, or in extreme latitudes. How does econ activity tend to differ in harsher environments? Harsh environments tend to be correlated with less natural biological activity; how does econ activity vary with that?

Space differs also in its basic attractions, relative to other places. One of those attractions is raw inputs, such as energy, atoms, and volume. Another attraction is that space contains more novelty, which attracts scientific and other adventurers. A third attraction is that space has often been a focal place to stage demonstrations of power and ability. Such as in the famous Cold War space race.

A fourth attraction is that growth in space seems to open up more potential for further growth in similar directions. In contrast perhaps to, for example, colonizing tops of mountains when there are only a limited number of such mountains available. How does the potential for further growth of a similar sort influence activity in an area? A fifth attraction is that doing things in space seems a complement to our large legacy of fiction set in space. For each of these attractions, we can ask: in general how does activity driven by such attractions differ from other activity?

Regarding “how does activity differ?”, here are some features Y that one might ask about. How capital intensive is activity? How automated? How long are supply chains? What disasters hit how hard with what frequency? What are typical mixes of genders, ages, and education levels? In what size firms, with how many layers of management, is commercial activity done? How long do firms last, and how fast do they grow? How many different kinds of jobs are there, and how long are job tenures? How much commitment do firms demand from employees and how easy is it to move to a competing firm in a similar role? How easy is it to move where you live or shop?

In these kinds of societies does growth tend to happen slowly, continuously, in an uncoordinated manner? Or are there instead big gains to actors coordinating to all grow together in a big lump at related places and times? If so, who usually coordinates such lumps, and how do they get paid for it?

These are just a few examples of a long list of questions that economists and other social scientists often ask about different kinds of social activity. I’m not suggesting that one try hard to address how Y differs regarding X-like areas, for every possible combination of X and Y. I’m instead suggesting that one be opportunistic, searching in that big space for easy wins. For where we have empirical data, or simple theory, that gives tentative answers. As I did in Age of Em.
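To make the suggested bookkeeping concrete, here is a small sketch of the X-by-Y grid such a project might maintain; the factor names, question names, and the one filled-in cell are illustrative assumptions, not conclusions from this post:

```python
# X factors (ways space econs differ) crossed with Y questions (features of activity).
x_factors = ["distant / low density", "harsh environment", "raw-input attraction",
             "novelty attraction", "demonstration attraction", "open-ended growth",
             "fiction complementarity"]
y_questions = ["capital intensity", "automation", "supply chain length",
               "disaster exposure", "firm size and structure", "job variety and tenure",
               "mobility of residence and shopping", "growth lumpiness"]

# Start with an empty grid; fill a cell only when data or simple theory gives a tentative answer.
grid = {x: {y: None for y in y_questions} for x in x_factors}

# Hypothetical example entry, just to show the intended form of a filled cell:
grid["harsh environment"]["capital intensity"] = (
    "tentatively higher; compare to mining, polar, and offshore activity on Earth")

filled = [(x, y, v) for x, row in grid.items() for y, v in row.items() if v]
print(f"{len(filled)} of {len(x_factors) * len(y_questions)} cells filled so far")
```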

While the above can help us guess how a space economy will differ, we might also want to guess how fast it will grow. So we’d like a past time series and perhaps supporting theory to help predict how fast travel and other costs will fall, and how fast activity expands with falling costs.


Why Not Wait On AI Risk?

Years ago when the AI risk conversation was just starting, I was a relative skeptic, but I was part of the conversation. Since then, the conversation has become much larger, but I seem no longer part of it; it seems years since others in this convo engaged me on it.

Clearly most who write on this do not sit close to my views, though I may sit closer to most who’ve considered getting into this topic, but instead found better things to do. (Far more resources are available to support advocates than skeptics.) So yes, I may be missing something that they all get. Furthermore, I’ve admittedly only read a small fraction of the huge amount since written in this area. Even so, I feel I should periodically try again to explain my reasoning, and ask others to please help show me what I’m missing.

The future AI scenario that treats “AI” most like prior wide tech categories (e.g., “energy” or “transport”) goes as follows. AI systems are available from many competing suppliers at similar prices, and their similar abilities increase gradually over time. Abilities don’t increase faster than customers can usefully apply them. Problems are mostly dealt with as they appear, instead of anticipated far in advance. Such systems slowly displace humans on specific tasks, and are on average roughly as task specialized as humans are now. AI firms distinguish themselves via the different tasks their systems do.

The places and groups who adopt such systems first are those flexible and rich enough to afford them, and having other complementary capital. Those who invest in AI capital on average gain from their investments. Those who invested in displaced capital may lose, though over the last two decades more-automated jobs have seen no average effect on wages or on the number of workers. AI today is only a rather minor contribution to our economy (<5%), and it has quite a long way to go before it can make a large contribution. We today have only vague ideas of what AIs that made a much larger contribution would look like.

Today most of the ways that humans help and harm each other are via our relations. Such as: customer-supplier, employer-employee, citizen-politician, defendant-plaintiff, friend-friend, parent-child, lover-lover, victim-criminal-police-prosecutor-judge, army-army, slave-owner, and competitors. So as AIs replace humans in these roles, the main ways that AIs help and hurt humans are likely to also be via these roles.

Our usual story is that such hurt is limited by competition. For example, each army is limited by all the other armies that might oppose it. And your employer and landlord are limited in exploiting you by your option to switch to other employers and landlords. So unless AI makes such competition much less effective at limiting harms, it is hard to see how AI makes role-mediated harms worse. Sure smart AIs might be smarter than humans, but they will have other AI competitors and humans will have AI advisors. Humans don’t seem much worse off in the last few centuries due to firms and governments who are far more intelligent than individual humans taking over many roles.

AI risk folks are especially concerned with losing control over AIs. But consider, for example, an AI hired by a taxi firm to do its scheduling. If such an AI stopped scheduling passengers to be picked up where they waited and delivered to where they wanted to go, the firm would notice quickly, and could then fire and replace this AI. But what if an AI who ran such a firm became unresponsive to its investors? Or what if an AI who ran an army became unresponsive to its overseeing government? In both cases, while such investors or governments might be able to cut off some outside supplies of resources, the AI might do substantial damage before such cutoffs bled it dry.

However, our world today is well acquainted with the prospect of “coups” wherein firm or army management becomes unresponsive to its relevant owners. Not only do our usual methods usually seem sufficient to the task, we don’t see much of an externality re these problems. You try to keep your firm under control, and I try to keep mine, but I’m not especially threatened by your losing control of yours. We care a bit more about others losing control of their cars, planes, or nuclear power plants, as those might hurt bystanders. But we care much less once such others show us sufficient liability, and liability insurance, to cover our losses in these cases.

I don’t see why I should be much more worried about your losing control of your firm, or army, to an AI than to a human or group of humans. And liability insurance also seems a sufficient answer to your possibly losing control of an AI driving your car or plane. Furthermore, I don’t see why it’s worth putting much effort into planning how to control AIs far in advance of seeing much detail about how AIs actually do concrete tasks where loss of control matters. Knowing such detail has usually been the key to controlling past systems, and money invested now, instead of spent on analysis now, gives us far more money to spend on analysis later.

All of the above has been based on assuming that AI will be similar to past techs in how it diffuses and advances. Some say that AI might be different, just because, hey, anything might be different. Others, like my ex-co-blogger Eliezer Yudkowsky, and Nick Bostrom in his book Superintelligence, say more about why they expect advances at the scope of AGI to be far more lumpy than we’ve seen for most techs.

Yudkowsky paints a “foom” picture of a world full of familiar weak stupid slowly improving computers, until suddenly and unexpectedly a single super-smart un-controlled AGI with very powerful general abilities appears and is able to decisively overwhelm all other powers on Earth. Alternatively, he claims (quite implausibly I think) that all AGIs naturally coordinate to merge into a single system to defeat competition-based checks and balances.

These folks seem to envision a few key discrete breakthrough insights that allow the first team that finds them to suddenly catapult their AI into abilities far beyond all other then-current systems. These would be big breakthroughs relative to the broad category of “mental tasks”, and thus even bigger than if we found big breakthroughs relative to the less broad tech categories of “energy”, “transport”, or “shelter”. Yes of course change is often lumpy if we look at small tech scopes, but lumpy local changes aggregate into smoother change over wider scopes.

As I’ve previously explained at length, that seems to me to postulate a quite unusual lumpiness relative to the history we’ve seen for innovation in general, and more particularly for tools, computers, AI, and even machine learning. And this seems to postulate much more of a lumpy conceptual essence to “betterness” than I find plausible. Recent machine learning systems today seem relatively close to each other in their abilities, are gradually improving, and none seem remotely inclined to mount a coup.

I don’t mind groups with small relative budgets exploring scenarios with proportionally small chances, but I lament such a large fraction of those willing to take the long term future seriously using this as their default AI scenario. And while I get why people like Yudkowsky focus on scenarios in which they fervently believe, I am honestly puzzled why so many AI risk experts seem to repudiate his extreme scenarios, and yet still see AI risk as a terribly important project to pursue right now. If AI isn’t unusually lumpy, then why are early efforts at AI control design especially valuable?

So far I’ve mentioned two widely expressed AI concerns. First, AIs may hurt human workers by displacing them, and second, AIs may start coups wherein they wrest control of some resources from their owners. A third widely expressed concern is that the world today may be stable, and contain value, only due to somewhat random and fragile configurations of culture, habits, beliefs, attitudes, institutions, values, etc. If so, our world may break if this stuff drifts out of a safe and stable range for such configurations. AI might be or facilitate such a change, and by helping to accelerate change, AI might accelerate the rate of configuration drift.

Similar concerns have often been expressed about allowing too many foreigners to immigrate into a society, or allowing the next youthful generation too much freedom to question and change inherited traditions. Or allowing many other specific transformative techs, like genetic engineering, fusion energy, social media, or space. Or other big social changes, like gay marriage.

Many have deep and reasonable fears regarding big long-term changes. And some seek to design AI so that it won’t allow excessive change. But this issue seems to me much more about change in general than about AI in particular. People focused on these concerns should be looking to stop or greatly limit and slow change in general, and not focus so much on AI. Big change can also happen without AI.

So what am I missing? Why would AI advances be so vastly more lumpy than prior tech advances as to justify very early control efforts? Or if not, why are AI risk efforts a priority now?


Will Design Escape Selection?

In the past, many people and orgs have had plans and designs, many of which made noticeable differences to the details of history. But regarding most of history, our best explanations of overall trends have been in terms of competition and selection, including between organisms, species, cultures, nations, empires, towns, firms, and political factions.

However, when it comes to the future, especially hopeful futures, people tend to think more in terms of design than selection. For example, H.G. Wells was willing to rely on selection to predict a future dystopia in The Time Machine, but his utopia in Things to Come was the result of conscious planning replacing prior destructive competition. Hopeful futurists have long painted pictures of shiny designed techs, planned cities, and wise cooperative institutions of charity and governance.

Today, competition and selection continue on in many forms, including political competition for the control of governance institutions. But instead of seeing governance, law, and regulation as driven largely by competition between units of governance (e.g., parties, cities, or nations), many now prefer to see them in design terms: good people coordinating to choose how we want to live together, and to limit competition in many ways. They see competition between units of governance as largely passé, and getting more so as we establish stronger global communities and governance.

My future analysis efforts have relied mostly on competition and selection. Such as in Age of Em, post-em AI, Burning the Cosmic Commons, and Grabby Aliens. And in my predictions of long views and abstract values. Their competitive elements, and what that competition produces, are often described by others as dystopian. And the most common long-term futurist vision I come across these days is of a “singleton” artificial general intelligence (A.G.I.) for whom competition and selection become irrelevant. In that vision (of which I am skeptical), there is only one A.G.I., which has no internal conflicts, grows in power and wisdom via internal reflection and redesign, and then becomes all powerful and immortal, changing the universe to match its value vision.

Many recent historical trends (e.g., slavery, democracy, religion, fertility, leisure, war, travel, art, promiscuity) can be explained in terms of rising wealth inducing a reversion to forager values and attitudes. And I see these design-oriented attitudes toward governance and the future as part of this pro-forager trend. Foragers didn’t overtly compete with each other, but instead made important decisions by consensus, and largely by appeal to community-wide altruistic goals. The farming world forced humans to more embrace competition, and become more like our pre-human ancestors, but we were never that comfortable with it.

The designs that foragers created, however, were too small to reveal the key obstacle to this vision of civilization-wide collective design to over-rule competition: rot (see 1 2 3 4). Not only is it quite hard in practice to coordinate to overturn the natural outcomes of competition and selection, the sorts of complex structures that we are tempted to use to achieve that purpose consistently rot, and decay with time. If humanity succeeds in creating world governance strong enough to manage competition, those governance structures are likely to prevent interstellar colonization, as that strongly threatens their ability to prevent competition. And such structures would slowly rot over time, eventually dragging civilization down with them.

If competition and selection manage to continue, our descendants may become grabby aliens, and join the other gods at the end of time. In that case one of the biggest unanswered questions is: what will be the key units of future selection? How will those units manage to coordinate, to the extent that they do, while still avoiding the rotting of their coordination mechanisms? And how can we now best promote the rise of the best versions of such competing units?


Unblinding Our Admin Futures

Our job as futurists is to forecast the future. Not exactly of course, but at least to cut the uncertainty. And one of the simplest ways to do that is to take relatively stable and robust past long term trends and project them into the future. Especially if those trends still have a long way that they could continue before they hit fundamental limits. For example, futurists have tried to apply this method to increasing incomes, leisure, variety, density, non-violence, automation, and ease of communication and transport.

It seems to me that one especially promising candidate for this method is also plausibly the fundamental cause of the industrial revolution: bureaucracy. For centuries we humans have been slowly learning how to manage larger more complex networks and organizations, via more formal roles, rules, and processes. (That is, we have more “admin”.) As a result, our orgs have been getting larger and have wider scope, governments have been doing more, and government functions have moved up to larger scale units (cities to states to nations, etc.).

For example, a twitter poll just found respondents saying 10-1 that the org they know best has been getting more, as opposed to less, bureaucratic over the last decade. And our laws have been getting consistently more complex.

If formal roles, rules, and processes increase over the next century as much as they have over the last century, that should make our future quite different from today. But how exactly? Yes, we’ll use computers more in admin, but that still leaves a lot unsaid. You might think science fiction would be all over this, describing our more admin future in great detail. Yet in fact, science fiction rarely describes much bureaucracy.

In fact, neither does fantasy, the other genre closest to science fiction. Actually, most stories avoid org complexity. For example, most movies and TV shows focus on leisure, instead of work. And when bureaucracy is included, it is usually as a soul-crushing or arbitrary-obstacle villain. It seems that we’d rather look away than acknowledge bureaucracy as a key source of our wealth and value, a pillar and engine of our civilization.

To try to see past this admin blindspot, let us try to find an area of life that today has relatively few formal rules and procedures, and then imagine adding a lot more of them there. This doesn’t necessarily mean that this area of life becomes more restricted and limited compared to today. But it does mean that whatever processes and restrictions there are become more formal and complex.

Public conversation comes to my mind as a potential example here. The rise of social media has created a whole lot more of it, and over the last few years many (including me) have been criticized for saying things the wrong way in public. The claim is often made that it is not the content of what they said that was the problem, it was the way that they said it. So many people say that we accept many complex rules of public conversation that are often being violated.

Thus I’m inclined to imagine a future where we have a lot more formal rules and processes regarding public conversations. These might not be seen as a limit on free speech, in that they only limit how you can say things, not what you can say. These rules might be complex enough to push us to pay for specialist advisors who help us navigate the new rules. Perhaps automation will make such advisors cheaper. And people of that era might prefer the relatively neutral and fair application of these complex rules to the more opportunistic and partisan ways that informal norms were enforced back in the day.

Now I’m not very confident that this is an area of life where we will get a lot more bureaucracy. But I am confident that there will be many such areas, and that we are so far greatly failing to imagine our more bureaucratic future. So please, I encourage you all to help us imagine what our more admin future may look like.

Added 11a: I’m about to attend an event whose dress code is “resort casual”. Whatever that means. I can imagine such dress rules getting a lot more explicit and complex.


To Innovate, Unify or Fragment?

In the world around us, innovation seems to increase with the size of an integrated region of activity. For example, human and computer languages with more users acquire more words and tools at a faster rate. Tech ecosystems, such as those collected around Microsoft, Apple, or Google operating systems, innovate faster when they have more participating suppliers and users. And there is more innovation per capita in larger cities, firms, and economies. (All else equal, of course.)

We have decent theories to explain all this: larger communities try more things, and each trial has more previous things to combine and build on. The obvious implication is that innovation will increase as our world gets larger, more integrated, and adopts more widely-shared standards and tech ecosystems. More unification will induce more innovation.
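As a minimal simulation sketch of this simple theory (the functional form and all numbers are assumed purely for illustration: each member’s trial succeeds with a chance that rises with its own community’s stock of prior innovations):

```python
import random

def total_innovations(pool_sizes, steps=100, base=0.01, boost=0.0005, seed=0):
    """Sum of innovations across separate pools, where each member's trial
    succeeds more often when its own pool's stock of prior innovations is larger."""
    rng = random.Random(seed)
    stocks = [0] * len(pool_sizes)
    for _ in range(steps):
        for i, size in enumerate(pool_sizes):
            p = min(1.0, base + boost * stocks[i])  # success chance this step
            stocks[i] += sum(rng.random() < p for _ in range(size))
    return sum(stocks)

# Same total population, unified vs. split into two non-sharing communities.
print("one unified community of 100:", total_innovations([100]))
print("two separate communities of 50:", total_innovations([50, 50]))
```

In this toy model the unified community ends up with several times more total innovations than the two fragmented communities combined, which is the intuition that the rest of this post then questions for the biggest, trap-prone kinds of innovation.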

Simple theory also predicts that species evolve faster when they have larger populations. And this seems to have applied across human history. But if this were generally true across species, then we should expect most biological innovation to happen in the largest species, which would live in the largest most integrated environmental niches. Like big common ocean areas. And most other species to have descended from these big ones.

But in fact, more biological innovation happens where the species are the smallest, which happens where mobility is higher and environments are more fragmented and changing. For example, over the last half billion years, we’ve seen a lot more innovation on land than in the sea, more on the coasts than on the interiors of land or sea, and more near rivers than far from them. All more mobile and fragmented places. How can that be?

Maybe big things tend to be older, and old things rot. Maybe the simple theory mentioned above focuses on many small innovations, but doesn’t apply as well to the few biggest innovations, that require coordinating many supporting innovations. Or maybe phenomena like sexual selection, as illustrated by the peacock’s tail, show how conformity and related collective traps can bedevil species, as well as larger more unified tech ecosystems. It seems to require selection between species to overcome such traps; individual species can’t fix them on their own.

If so, why hasn’t the human species fallen into such traps yet? Maybe the current fertility decline is evidence of such a trap, or maybe such problems just take a long time to arise. Humans fragmenting into competing cultures may have saved us for a while. Individual cultures do seem to have often fallen into such traps. Relatively isolated empires consistently rise and then fall. So maybe cultural competition is mostly what has saved us from cultures falling into traps.

While one might guess that collective traps are a rare problem for species and cultures, the consistent collapse of human empires and our huge dataset on bio innovation suggest that such problems are in fact quite common. So common that we really need larger scale competition, such as between cultures or species, to weed them out. To innovate, the key to growth, we need to fragment, not unify.

Which seems a big red loud warning sign about our current trend toward an integrated world culture, prey to integrated world collective traps, such as via world mobs. They might take some time to reveal themselves, but then be quite hard to eradicate. This seems to me the most likely future great filter step that we face.

Added 10Jan: There are papers on how to design a population structure to maximize the rate of biological evolution.
