24 Comments

I feel like this whole discussion is a bit confused, or at least ambiguous. I mean, are you suggesting there is some kind of normative fact you can discover about how you should extrapolate your values to very different futures?

Sure, the kinds of considerations you list are ones that I happen to find persuasive when I engage in reflection about my values. However, it doesn't seem like there is anything even slightly inconsistent about saying:

"Yes, my values are a totally contingent feature of my biology and the particular time and place I live but so what. Those are my values so I'll just apply them to very different futures w/o modification even though I know if I lived there I'd have different values"

Unless you think there are objective (mind-independent) moral facts, there isn't any truth to be discerned here...only a choice to be made.

It's no different than deciding whether you (all things considered) want to eat a piece of cake. Maybe thinking about the fact that in the same situation you'd advise a friend not to eat it moves you to decide you don't really want to eat it. But maybe it just doesn't, and neither response is any more correct than the other, since there is no external standard against which your judgements are being measured.


> "...Which pushes me to see my core more as trying to win the natural selection game."

Consider scenario A: your descendants evolve (or are engineered) to become small, non-sapient, mouse-like rodents, but these rodents are highly successful and manage to spread around the galaxy aboard the spaceships of an alien race. The aliens like the rodents and will never let them go extinct for millions of years. You have no sapient descendants.

Now consider scenario B: your descendants are sapient and intelligent, but far smaller in number and biomass, and the aliens restrict your descendants to Earth, though we may suppose the descendants still don't go extinct for millions of years.

Which would you pick, A or B? The rodents in A are "winning the natural selection game" much better than the people in B. But I think you would prefer B over A. This demonstrates that you do not simply want to reproduce as much as possible, but that you wish to reproduce certain *phenotypes* that you prefer over others, such as sapience.

If you grant that, it now becomes a question not simply of reproducing as much as possible, but of which phenotypes you prefer among your descendants. Is it merely sapience, or would you like your descendants to be good and high-value people in other ways as well, such as being honest or kind?


We start with a disorganized set of beliefs and values, granted to us by nature and random circumstance, many of them irrational, unfounded, and self-contradictory. Then we try to organize these beliefs and values so that:

1. The beliefs and values are justified by more fundamental or basic beliefs and values, in a directed acyclic graph.

2. The beliefs and values are not in contradiction with themselves. In every hypothetical or actual scenario we consider, our organized beliefs and values will yield a single consistent judgment.

3. The most fundamental beliefs and values - the roots of the directed acyclic graph - seem "self-evident," and are irreducible to simpler beliefs or values.

In order to organize our beliefs and values this way, we'll have to discard and revise a lot of them. The theoretical endpoint of this process - which no human can actually reach - is the beliefs and values that we "ought to" hold, contingent on the beliefs and values that we initially held. These are the beliefs and values that we would persuade ourselves to hold if we thought about the issues for long enough.
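To make that structure concrete, here is a minimal, purely illustrative Python sketch (not from the original comment) of beliefs and values as a directed acyclic graph: each node may be justified by more fundamental nodes, the "self-evident" roots are the nodes with no further justification, and circular justification is flagged. It sketches only requirements 1 and 3 above; the class name `BeliefGraph` and the example beliefs are invented for illustration.

```python
from collections import defaultdict

class BeliefGraph:
    """Beliefs/values as nodes; edges point to the more fundamental beliefs that justify them."""

    def __init__(self):
        self.justified_by = defaultdict(set)  # belief -> set of more fundamental beliefs

    def add_justification(self, belief, basis):
        self.justified_by[belief].add(basis)
        self.justified_by.setdefault(basis, set())  # ensure the basis appears as a node

    def roots(self):
        # Requirement 3: the "self-evident" beliefs are those justified by nothing more fundamental.
        return {b for b, bases in self.justified_by.items() if not bases}

    def is_acyclic(self):
        # Requirement 1: justification must bottom out somewhere, i.e. no circular chains.
        visiting, done = set(), set()

        def visit(node):
            if node in done:
                return True
            if node in visiting:
                return False  # circular justification detected
            visiting.add(node)
            ok = all(visit(basis) for basis in self.justified_by[node])
            visiting.discard(node)
            done.add(node)
            return ok

        return all(visit(node) for node in list(self.justified_by))

# Example: one derived value resting on one "self-evident" root.
g = BeliefGraph()
g.add_justification("avoid causing suffering", "suffering is bad")
print(g.roots())       # {'suffering is bad'}
print(g.is_acyclic())  # True
```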

You mention "predicting" what we would value in different situations by statistical curve-fitting. The problem with such an approach is that it's not persuasive. If I predict that in a certain situation I could be induced to insanity, this does not persuade me to adopt this insanity in the present. What *does* persuade me is noticing that some of my beliefs are in contradiction with each other, or noticing that some of them are insufficiently justified.


The standard answer in moral philosophy is "reflective equilibrium", i.e. using your best judgment about how to revise plausible-seeming principles and/or intuitions about cases to bring them into mutual coherence.

I don't find it very appealing to defer to "causal origins": even granting that our values are shaped by natural selection, why would that give us any reason to value "winning the natural selection game", given that that does not well match the actual *content* of our values? I'm much more inclined to ignore origins completely and instead try to identify principles about what is (most plausibly) worth valuing. These judgments necessarily stem from our actual perspectives, but can go beyond them in various ways. For example, thinking about the badness of suffering in our own lives, and those we care about, can lead us to value the avoidance of suffering no matter who (or what) is experiencing it. And similarly, I think, for reflections about the good things in life.


Agree on the second option, but I think the level of selection you treat as fundamental makes a big difference - i.e. is the 'game' maximising your genetics / memetics / 'something else' that captures the knowledge in both of those (plus the knowledge elsewhere in the institutions that allow me to exist)?

I would probably go with the latter and say that my governing principle is the growth of knowledge and complexity in the universe. Sounds a bit pompous, though.


Isn't this the opposite of the takeaway from your AI risk post, which seems to argue that it's overly parochial to be concerned about humans and our current values being replaced by potentially random and arbitrary AI values? I find it hard to see how you can self-consistently arrive at, "It's the right conclusion to prefer your own culture's/genome's/life history's values over all others within the scope of values humans might hold, but also the wrong conclusion to be opposed to a world where all past and present human values are overwritten by entities that do things to which we assign no value whatsoever."

To be clear, I think your conclusion in this post is mostly the right one.


Value is impersonal; it is The Good. There is also the good-for-me, the good-for-my-genetic-line, the good-for-my-culture, etc. My culture tends to inculcate precepts about how we had best act to maximize the good-for-my-culture; if my culture interacts little with other cultures, this will in practice be negligibly different from The Good. Natural selection inclines me to try to maximize the good-for-my-genetic-line (but natural selection is a crude workman--I often behave differently); nobody thinks this is The Good.
