We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have much further to go than we have come.
The universe seems almost entirely dead out there. There’s a chance it will eventually be densely filled with life, and that our descendants may help to make that happen. Some worry about the quality of that life filling the universe, and yes there are issues there. But I worry mostly about the difference between life and death. Our descendants may kill themselves or stop growing, and fail to fill the universe with life. Any life.
To fill the universe with life requires that we grow far more than our previous leap factor of ten million. More like three to ten factors that big still to go. (See Added below.) So think of all the obstacles we’ve overcome so far, obstacles that appeared when we reached new scales of size and levels of ability. If we were lucky to make it this far, we’ll have to be much more lucky to make it all the way.
Of course few individuals today focus on filling the universe with life. Most attend to their individual needs. And as we’ve been getting rich over the last few centuries, our needs have changed. Many cite Maslow’s Hierarchy of Needs.
While few offer much concrete evidence for this, most seem to accept it or one of its many variations. Once our basic needs are met, our attention switches to “higher” needs. Wealth really does change humans. (I see this in part as our returning to forager values with increasing wealth.)
It is easy to assume that what is good for you is good overall. If you are an artist, you may assume the world is better when people consume more art. If you are a scientist, you may assume the world is better if it gives more attention and funding to science. Similarly, it is easy to assume that the world gets better if more of us get more of what we want, and thus move higher into Maslow’s Hierarchy.
But I worry: as we attend more to higher needs, we may grow and innovate less regarding lower needs. Can the universe really get filled by creatures focused mainly on self-actualization? Why should they risk or tolerate disruptions from innovations that advance low needs if they don’t care much for that stuff? And many today see their higher needs as conflicting with greater capacities to satisfy low needs. For example, many see gains in physical capacity as bringing less nature, weaker indigenous cultures, larger and more soul-crushing organizations, more dehumanizing capitalism, etc. Rich nations today do seem to have weaker growth in raw physical capacities because of such issues.
Yes, it is possible that even rich societies focused on high needs will consistently grow their capacities to satisfy low needs, and that will eventually lead to a universe densely filled with life. But still I worry about all those unknown obstacles yet to be seen as our descendants try to grow through another three to ten factors as large as humanity’s leap. At some of those obstacles, will a focus on high needs lead them to turn away from the grand growth path? To a comfortable “sustainable” stability without all that disruptive innovation? How much harder would it then become to restart growth later?
Pretty much all the growth that we have seen so far has been in a context where humans, and their ancestors, were focused mainly on low needs. Our current turn toward high needs is quite new, and thus relatively unproven. Yes, we have continued to grow, but more slowly. That seems worth at least a bit of worry.
Added 28Oct: Assume humanity’s leap factor is 10^7. Three of those is 10^21. As there are about 10^24 stars in the observable universe, that much growth could come from filling one in a thousand of those stars with as many rich humans as Earth now has. Ten such leaps is 10^70, and there are now about 10^10 humans on Earth. As there are about 10^80 atoms in the observable universe, that much growth could come from finding a way to implement one human-like creature per atom.
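Spelling out that arithmetic as a quick check (using the post’s own rough counts of 10^24 stars, 10^80 atoms, and 10^10 humans today):

```latex
% Three leaps: (10^{7})^{3} = 10^{21}.
% Fill one in a thousand stars, each with Earth's ~10^{10} people:
\frac{(10^{24}/10^{3})\,\text{stars} \times 10^{10}\,\text{people per star}}
     {10^{10}\,\text{people today}} = 10^{21}

% Ten leaps: (10^{7})^{10} = 10^{70}.
% One being per atom among ~10^{80} atoms:
\frac{10^{80}\,\text{beings}}{10^{10}\,\text{people today}} = 10^{70}
```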
I have become much more sympathetic to Robin's position on this recently. The two main arguments that convinced me:
Some claim that the lives of most poor people / animals / etc are obviously not worth living, with their evidence being that they would prefer to spend an hour unconscious rather than spend that hour as (say) an average wild animal. But this kind of argument seems far too flimsy to support such claims.
For example, assuming I will live a long life, I would be happy to skip out, say, a randomly selected dull plane journey from some point in my future. But if told that I was about to experience a plane journey and then die afterward, and given the choice to just die immediately instead, I would MUCH rather experience those extra few hours, even spent sitting in an uncomfortable plane seat with my ears popping. And I expect that some ecstatically happy pampered transhuman would say, truthfully, that they would rather spend an hour unconscious than experience an hour of my life - yet I think my life is worth living, even the bad parts. And my assessment of an experience seems to depend very heavily on my prior expectations, and also on what my peers are doing - I'm much more willing to put up with something if I know I'm not alone in having to do so.
And so on. Also, the concept of a happiness set point seems to be quite well established. Upon examination this argument seems to me about as strong as folk economic claims about the 'true value' of a good, that if I wouldn't buy something then surely anyone who does is being ripped off.
Some other people claim that, even if the modal experience of a wild animal or poor person is positive utility, severe pain is so strongly negative that even a small amount of it can easily outweigh a life that's otherwise composed of long periods of low, but positive, utility experiences. But I think this puts far too little emphasis on experience duration. I would MUCH rather experience a few seconds of severe pain than a day of, say, a dull stomach ache.
Really the thing that frightens me most about severe pain is when it's accompanied by long-term effects like disfigurement. But if you're experiencing severe pain in the process of being killed, this doesn't apply - in fact this is probably the best time to experience disfigurement.
Yes this is ideological and in some sense political, so what?
Tastes aren't claims. Treating them as claims - which is what ideologies do - results in falsehood. (See Why do what you "ought"?—A habit theory of explicit morality - http://juridicalcoherence.b... )
The core issue seems to come down to different models of how ethics scales with pain vs pleasure, over many beings or over much time, etc.
Robin and Eliezer do something else. They assert (Robin) or try to establish (Eliezer) a particular correspondence. They aren't scaling any extant ethics. They are aesthetically enamored with the model and adopt the corresponding ethics, not as an ethics (which you can't simply adopt) but as an ideology.
Is it off topic? If this were a pure subjective exercise, yes. But it's a subjective exercise implicitly pretending to objectivity.