In a universe that is (so far) almost entirely dead, we find ourselves to be on a rare planet full not only of life, but now also of human-level intelligent self-aware creatures. This makes our planet roughly a once-per-million-galaxy rarity, and if we ever get grabby we can expect to meet other grabby aliens in roughly a billion years.
We see that our world, our minds, and our preferences have been shaped by at least four billion years of natural selection. And we see that evolution has been going especially fast lately, as we humans pioneer many powerful new innovations. Our latest big thing: larger-scale organizations, which have induced our current brief dreamtime, wherein we are unusually rich.
For preferences, evolution has given us humans a mix of (a) some robust general preferences, like wanting to be respected and rich, (b) some less robust but deeply embedded preferences, like preferring certain human body shapes, and (c) some less robust and culturally plastic preferences, such as which particular things each culture finds more impressive.
My main reaction to all this is to feel grateful to be a living intelligent creature, who is compatible enough with his world to often get what he wants. Especially to be living in such a rich era. I accept that I and my descendants will long continue to compete (in part by cooperating of course), and that as the world changes evolution will continue to change my descendants, including as needed their values.
Many see this situation quite differently from me, however. For example, “anti-natalists” see life as a terrible crime, as the badness of our pains outweighs the goodness of our pleasures, resulting in net negative value lives. They thus want life on Earth to go extinct. Maybe, they say, it would be okay to create only really-rich, better-emotionally-adjusted creatures. But not the humans we have now.
Many kinds of “conservatives” are proud to note that their ancestors changed in order to win prior evolutionary competitions. But they are generally opposed to future such changes. They want only limited changes to our tech, culture, lives, and values; bigger changes seem like abominations to them.
Many “socialists” are furious that some of us are richer and more influential than others. Furious enough to burn down everything if we don’t switch soon to more egalitarian systems of distribution and control. The fact that our existing social systems won difficult prior contests does not carry much weight with them. They insist on big radical changes now, and disavow any failures associated with prior attempts made under their banner. None of that was “real” socialism, you see.
Due to continued global competition, local adoption of anti-natalist, conservative, or socialist agendas seems insufficient to ensure these as global outcomes. Now most fans of these things don’t care much about long term outcomes. But some do. Some of those hope that global social pressures, via global social norms, may be sufficient. And others suggest using stronger global governance.
In fact, our scales of governance, and level of global governance, have been increasing over centuries. Furthermore, over the last half century we have created a world community of elites, wherein global social norms and pressures have strong power.
However, competition at the largest scales has so far been our only robust solution to system rot and suicide, problems that may well apply to systems of global governance or norms. Furthermore, centralized rulers may be reluctant to allow civilization to expand to distant places which they would find it harder to control.
This post resulted from Agnes Callard asking me to comment on Scott Alexander’s essay Meditations On Moloch, wherein he takes similarly stark positions on these grand issues. Alexander is irate that the world is not adopting various utopian solutions to common problems, such as ending corporate welfare, shrinking militaries, and adopting common hospital medical record systems. He seems to blame all of that, and pretty much anything else that has ever gone wrong, on something he personalizes into a monster “Moloch.” And while Alexander isn’t very clear on what exactly that is, my best read is that it is the general phenomenon of competition (at least the bad sort); that at least seems central to most of the examples he gives.
Furthermore, Alexander fears that, in the long run, competition will force our descendants to give up absolutely everything that they value, just to exist. Now he has no empirical or theoretical proof that this will happen; his post is instead mostly a long passionate primal scream expressing his terror at this possibility.
(Yes, he and I are aware that cooperation and competition systems are often nested within each other. The issue here is about the largest outer-most active system.)
Alexander’s solution is:
Elua. He is the god of flowers and free love and all soft and fragile things. Of art and science and philosophy and love. Of niceness, community, and civilization. He is a god of humans. … Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.
By which Alexander means: start with a tiny weak AI, induce it to “foom” (sudden growth from tiny to huge), resulting in a single “super-intelligent” AI who rules our galaxy with an iron fist, but wrapped in the velvet glove of being “friendly” = “aligned”. By definition, such a creature makes the best possible utopia for us all. Sure, Alexander has no idea how to reliably induce a foom or to create an aligned-through-foom AI, but there are some people pondering these questions (who are generally not very optimistic).
My response: yes of course if we could easily and reliably create a god to manage a utopia where nothing ever goes wrong, maybe we should do so. But I see enormous risks in trying to induce a single AI to grow crazy fast and then conquer everything, and also in trying to control that thing later via pre-foom design. I also fear many other risks of a single global system, including rot, suicide, and preventing expansion.
Yes, we might take this chance if we were quite sure that in the long term all other alternatives result in near zero value, while this remained the only scenario that could result in substantial value. But that just doesn’t seem remotely like our actual situation to me.
Because: competition just isn’t as bad as Alexander fears. And it certainly shouldn’t be blamed for everything that has ever gone wrong. More like: it should be credited for everything that has ever gone right among life and humans.
First, we don’t have good reasons to expect competition, compared to an AI god, to lead more reliably to the extinction either of life or of creatures who value their experiences. Yes, you can fear those outcomes, but I can as easily fear your AI god.
Second, competition has so far reigned over four billion years of Earth life, and at least a half billion years of Earth brains, and on average those seem to have been brain lives worth living. As have been the hundred billion human brain lives so far. So empirically, so far, given pretty long time periods, competition has just not remotely destroyed all value.
Now I suspect that Alexander might respond here thus:
The way that evolution has so far managed to let competing creatures typically achieve their values is by having those values change over time as their worlds change. But I want descendants to continue to achieve their values without having to change those values across generations.
However, I’ve predicted that, relatively soon on evolutionary timescales, given further competition, our descendants will come to just directly and abstractly value reproduction. And after that, no descendant need ever change their values. But I think even that situation isn’t good enough for Alexander; he wants our (his?) current human values to be the ones that continue and never change.
Now taken very concretely, this seems to require that our descendants never change their tastes in music, movies, or clothes. But I think Alexander has in mind only keeping values the same at some intermediate level of abstraction. Above the level of specific music styles, but below the level of just wanting to reproduce. However, not only has Alexander not been very clear regarding which exact value abstraction level he cares about, but I’m also not clear on why the rest of us should agree with him about this level, or care as much as he does about it.
For example, what if most of our descendants get so used to communicating via text that they drop talking via sound, and thus also become less interested in music? Oh, they like artistic expression in other mediums, such as text, but music becomes much more of a niche taste, mainly of interest to that fraction of our descendants who still attend a lot to sound.
This doesn’t seem like such a terrible future to me. Certainly not so terrible that we should risk everything to prevent it by trying to appoint an AI god. But if this scenario does actually seem that terrible to you, I guess maybe you should join Alexander’s camp. Unless all changes seem terrible to you, in which case you might join the conservative camp. Or maybe all life seems terrible to you, in which case you might join the anti-natalists.
Me, I accept the likelihood and good-enough-ness of modest “value drift” due to future competition. I’m not saying I have no preferences whatsoever about my descendants’ values. But relative to the plausible range I envision, I don’t feel greatly at risk. And definitely not so much at risk as to make desperate gambles that could go very wrong.
You might ask: if I don’t think making an AI god is the best way to get out of bad equilibria, what do I suggest instead? I’ll give the usual answer: innovation. For most problems, people have thought of plausible candidate solutions. What is usually needed is for people to test those solutions in smaller-scale trials. With such smaller successes in hand, it gets easier to entice people to coordinate to adopt them.
And how do you get people to try smaller versions? Dare them, inspire them, lead them, whatever works; this isn’t something I’m good at. In the long run, such trials tend to happen anyway, by accident, even when no one is inspired to do them on purpose. But the goal is to speed up that future, via smaller trials of promising innovation concepts.
Added 5Jan: While I had presumed that Alexander intended substantial content in his claims about Moloch, many are saying no, he really just meant to say “bad equilibria are bad”. Which is a mood well-expressed, but doesn’t remotely support the AI god strategy.